Errors
======
**Contents** 1. [The Error Handling Problem](#the_error_handling_problem)
2. [The D Error Handling Solution](#the_d_error_handling_solution)
> I came, I coded, I crashed. *Julius C'ster*
All programs have to deal with errors. Errors are unexpected conditions that are not part of the normal operation of a program. Examples of common errors are:
* Out of memory.
* Out of disk space.
* Invalid file name.
* Attempting to write to a read-only file.
* Attempting to read a non-existent file.
* Requesting a system service that is not supported.
The Error Handling Problem
--------------------------
The traditional C way of detecting and reporting errors is not, in fact, traditional; it is ad-hoc and varies from function to function, including:
* Returning a NULL pointer.
* Returning a 0 value.
* Returning a non-zero error code.
* Requiring errno to be checked.
* Requiring that a function be called to check if the previous function failed.
To deal with these possible errors, tedious error handling code must be added to each function call. If an error happens, code must be written to recover from it, and the error must be reported to the user in some user-friendly fashion. If an error cannot be handled locally, it must be explicitly propagated back to the caller. The long list of errno values needs to be converted into appropriate text to be displayed. Adding all the code to do this can consume a large part of the time spent coding a project - and still, if a new errno value is added to the runtime system, the old code cannot properly display a meaningful error message.
Good error handling code tends to clutter up what otherwise would be a neat and clean looking implementation.
Even worse, good error handling code is itself error prone, tends to be the least tested (and therefore buggy) part of the project, and is frequently simply omitted. The end result is likely a "blue screen of death" when the program fails to deal with some unanticipated error.
Quick and dirty programs are not worth writing tedious error handling code for, and so such utilities tend to be like using a table saw with no blade guards.
What's needed is an error handling philosophy and methodology such that:
* It is standardized - consistent usage makes it more useful.
* The result is reasonable even if the programmer fails to check for errors.
* Old code can be reused with new code without having to modify the old code to be compatible with new error types.
* No errors get inadvertently ignored.
* ‘Quick and dirty’ utilities can be written that still correctly handle errors.
* It is easy to make the error handling source code look good.
The D Error Handling Solution
-----------------------------
Let's first make some observations and assumptions about errors:
* Errors are not part of the normal flow of a program. Errors are exceptional, unusual, and unexpected.
* Because errors are unusual, execution of error handling code is not performance critical.
* The normal flow of program logic is performance critical.
* All errors must be dealt with in some way, either by code explicitly written to handle them, or by some system default handling.
* The code that detects an error knows more about the error than the code that must recover from the error.
The solution is to use exception handling to report errors. All errors are objects derived from abstract [`class Error`](https://dlang.org/phobos/object.html#Error). `Error` has a pure virtual function called `toString()` which produces a `string` with a human-readable description of the error.
If code detects an error like "out of memory," then an `Error` is thrown with a message saying "Out of memory". The function call stack is unwound, looking for a handler for the Error. [Finally blocks](statement#TryStatement) are executed as the stack is unwound. If an error handler is found, execution resumes there. If not, the default Error handler is run, which displays the message and terminates the program.
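As a minimal sketch of this flow (the function and file names here are hypothetical, and `Exception` is used as the catchable, recoverable branch of the `Throwable` hierarchy):
```
import std.stdio;

void readConfig(string path)
{
    // Report the problem where it is detected, with a human-readable message.
    throw new Exception("Attempting to read a non-existent file: " ~ path);
}

void main()
{
    try
    {
        readConfig("settings.conf");
    }
    catch (Exception e)
    {
        // The message was composed by the detecting code, not here.
        writeln("recovered: ", e.msg);
    }
    // Without the catch handler, the default handler would print the
    // message and terminate the program.
}
```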
How does this meet our criteria?
* **It is standardized - consistent usage makes it more useful.** This is the D way, and is used consistently in the D runtime library and examples.
* **The result is reasonable even if the programmer fails to check for errors.** If no catch handlers are present for the errors, the program gracefully exits through the default error handler with an appropriate message.
* **Old code can be reused with new code without having to modify the old code to be compatible with new error types.** Old code can decide to catch all errors, or only specific ones, propagating the rest upwards. In any case, there is no more need to correlate error numbers with messages; the correct message is always supplied.
* **No errors get inadvertently ignored.** Error exceptions get handled one way or another. There is nothing like a NULL pointer return indicating an error, followed by trying to use that NULL pointer.
* **'Quick and dirty' utilities can be written that still correctly handle errors.** Quick and dirty code need not write any error handling code at all, and need not check for errors. The errors will be caught, an appropriate message displayed, and the program gracefully shut down, all by default.
* **It is easy to make the error handling source code look good.** The try/catch/finally statements look a lot nicer than endless `if (error) goto errorhandler;` statements.

How does this meet our assumptions about errors?

* **Errors are not part of the normal flow of a program. Errors are exceptional, unusual, and unexpected.** D exception handling fits right in with that.
* **Because errors are unusual, execution of error handling code is not performance critical.** Exception handling stack unwinding is a relatively slow process.
* **The normal flow of program logic is performance critical.** Since the normal flow code does not have to check every function call for error returns, it can realistically be faster to use exception handling for the errors.
* **All errors must be dealt with in some way, either by code explicitly written to handle them, or by some system default handling.** If there's no handler for a particular error, it is handled by the runtime library default handler. If an error is ignored, it is because the programmer specifically added code to ignore it, which presumably means it was intentional.
* **The code that detects an error knows more about the error than the code that must recover from the error.** There is no more need to translate error codes into human-readable strings; the correct string is generated by the error detection code, not the error recovery code. This also leads to consistent error messages for the same error between applications.

Using exceptions to handle errors leads to another issue: how to write exception safe programs. [Here's how](https://dlang.org/exception-safe.html).
rt.sections
===========
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Walter Bright, Sean Kelly, Martin Nowak
Source
[rt/sections.d](https://github.com/dlang/druntime/blob/master/src/rt/sections.d)
Attributes
==========
**Contents** 1. [Linkage Attribute](#linkage)
2. [`align` Attribute](#align)
3. [`deprecated` Attribute](#deprecated)
4. [Visibility Attribute](#visibility_attributes)
5. [`const` Attribute](#const)
6. [`immutable` Attribute](#immutable)
7. [`inout` Attribute](#inout)
8. [`shared` Attribute](#shared)
9. [`__gshared` Attribute](#gshared)
10. [`@disable` Attribute](#disable)
11. [`@safe`, `@trusted`, and `@system` Attribute](#safe)
12. [`@nogc` Attribute](#nogc)
13. [`@property` Attribute](#property)
14. [`nothrow` Attribute](#nothrow)
15. [`pure` Attribute](#pure)
16. [`ref` Attribute](#ref)
17. [`return` Attribute](#return)
18. [`override` Attribute](#override)
19. [`static` Attribute](#static)
20. [`auto` Attribute](#auto)
21. [`scope` Attribute](#scope)
22. [`abstract` Attribute](#abstract)
23. [User-Defined Attributes](#uda)
```
AttributeSpecifier:
Attribute :
Attribute DeclarationBlock
Attribute:
LinkageAttribute
AlignAttribute
DeprecatedAttribute
VisibilityAttribute
Pragma
static
extern
abstract
final
override
synchronized
auto
scope
const
immutable
inout
shared
__gshared
AtAttribute
FunctionAttributeKwd
ref
return
FunctionAttributeKwd:
nothrow
pure
AtAttribute:
@ disable
@ nogc
@ live
Property
@ safe
@ system
@ trusted
UserDefinedAttribute
Property:
@ property
DeclarationBlock:
DeclDef
{ DeclDefsopt }
```
Attributes are a way to modify one or more declarations. The general forms are:
```
attribute declaration; // affects the declaration
attribute: // affects all declarations until the end of
// the current scope
declaration;
declaration;
...
attribute { // affects all declarations in the block
declaration;
declaration;
...
}
```
Linkage Attribute
-----------------
```
LinkageAttribute:
extern ( LinkageType )
extern ( C++, QualifiedIdentifier )
extern ( C++, NamespaceList )
LinkageType:
C
C++
D
Windows
System
Objective-C
NamespaceList:
ConditionalExpression
ConditionalExpression,
ConditionalExpression, NamespaceList
```
D provides an easy way to call C functions and operating system API functions, as compatibility with both is essential. The *LinkageType* is case sensitive, and is meant to be extensible by the implementation (the linkage types are not keywords). `C` and `D` must be supplied; the others are whatever makes sense for the implementation. `C++` offers limited compatibility with C++. `Objective-C` offers compatibility with Objective-C; see the [Interfacing to Objective-C](objc_interface) documentation for more information. `System` is the same as `Windows` on Windows platforms, and `C` on other platforms. **Implementation Note:** for Win32 platforms, `Windows` should exist.
C function calling conventions are specified by:
```
extern (C):
int foo(); // call foo() with C conventions
```
Note that `extern(C)` can be provided for all types of declarations, including `struct` or `class`, even though there is no corresponding match on the `C` side. In that case, the attribute is ignored. This behavior applies for nested functions and nested variables as well. However, for `static` member methods and `static` nested functions, adding `extern(C)` will change the calling convention, but not the mangling. D conventions are:
```
extern (D):
```
Windows API conventions are:
```
extern (Windows):
void *VirtualAlloc(
void *lpAddress,
uint dwSize,
uint flAllocationType,
uint flProtect
);
```
The Windows convention is distinct from the C convention only on Win32 platforms, where it is equivalent to the [stdcall](https://en.wikipedia.org/wiki/X86_calling_conventions) convention.
Note that a lone `extern` declaration is used as a [storage class](declaration#extern).
### C++ Namespaces
The linkage form `extern (C++,` *QualifiedIdentifier*`)` creates C++ declarations that reside in C++ namespaces. The *QualifiedIdentifier* specifies the namespaces.
```
extern (C++, N) { void foo(); }
```
refers to the C++ declaration:
```
namespace N { void foo(); }
```
and can be referred to with or without qualification:
```
foo();
N.foo();
```
Namespaces create a new named scope that is imported into its enclosing scope.
```
extern (C++, N) { void foo(); void bar(); }
extern (C++, M) { void foo(); }
bar(); // ok
foo(); // error - N.foo() or M.foo() ?
M.foo(); // ok
```
Multiple identifiers in the *QualifiedIdentifier* create nested namespaces:
```
extern (C++, N.M) { extern (C++) { extern (C++, R) { void foo(); } } }
N.M.R.foo();
```
refers to the C++ declaration:
```
namespace N { namespace M { namespace R { void foo(); } } }
```
`align` Attribute
------------------
```
AlignAttribute:
align
align ( AssignExpression )
```
Specifies the alignment of:
1. variables
2. struct fields
3. union fields
4. class fields
5. struct, union, and class types
`align` by itself sets the alignment to the default, which matches the default member alignment of the companion C compiler.
```
struct S
{
align:
byte a; // placed at offset 0
int b; // placed at offset 4
long c; // placed at offset 8
}
static assert(S.alignof == 8);
static assert(S.sizeof == 16);
```
*AssignExpression* specifies the alignment which matches the behavior of the companion C compiler when non-default alignments are used. It must be a positive power of 2.
A value of 1 means that no alignment is done; fields are packed together.
```
struct S
{
align (1):
byte a; // placed at offset 0
int b; // placed at offset 1
long c; // placed at offset 5
}
static assert(S.alignof == 1);
static assert(S.sizeof == 13);
```
The natural alignment of an aggregate is the maximum alignment of its fields. It can be overridden by setting the alignment outside of the aggregate.
```
align (2) struct S
{
align (1):
byte a; // placed at offset 0
int b; // placed at offset 1
long c; // placed at offset 5
}
static assert(S.alignof == 2);
static assert(S.sizeof == 14);
```
Setting the alignment of a field aligns it to that power of 2, regardless of the size of the field.
```
struct S
{
byte a; // placed at offset 0
align (4) byte b; // placed at offset 4
align (16) short c; // placed at offset 16
}
static assert(S.alignof == 16);
static assert(S.sizeof == 32);
```
Do not align references or pointers that were allocated using [*NewExpression*](expression#NewExpression) on boundaries that are not a multiple of `size_t`. The garbage collector assumes that pointers and references to GC allocated objects will be on `size_t` byte boundaries.
**Undefined Behavior:** Any pointers or references to GC allocated objects that are not aligned on `size_t` byte boundaries.

The *AlignAttribute* is reset to the default when entering a function scope or a non-anonymous struct, union, or class, and restored when exiting that scope. It is not inherited from a base class.
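As an illustrative sketch of this rule (a layout to avoid, not code to imitate):
```
class C { }

struct Dangerous
{
align (1):
    byte pad;
    C obj; // reference to a GC object no longer on a size_t boundary:
           // undefined behavior if the GC scans it
}

struct Fine
{
    byte pad;
    C obj; // default alignment keeps the reference properly aligned
}
```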
`deprecated` Attribute
-----------------------
```
DeprecatedAttribute:
deprecated
deprecated ( AssignExpression )
```
It is often necessary to deprecate a feature in a library, yet retain it for backwards compatibility. Such declarations can be marked as `deprecated`, which means that the compiler can be instructed to produce an error if any code refers to deprecated declarations:
```
deprecated
{
void oldFoo();
}
oldFoo(); // Deprecated: function test.oldFoo is deprecated
```
Optionally a string literal or manifest constant can be used to provide additional information in the deprecation message.
```
deprecated("Don't use bar") void oldBar();
oldBar(); // Deprecated: function test.oldBar is deprecated - Don't use bar
```
Calling CTFE-able functions or using manifest constants is also possible.
```
import std.format;
enum Message = format("%s and all its members are obsolete", Foobar.stringof);
deprecated(Message) class Foobar {}
auto f = new Foobar(); // Deprecated: class test.Foobar is deprecated - Foobar
// and all its members are obsolete
deprecated(format("%s is also obsolete", "This class")) class BarFoo {}
auto bf = new BarFoo(); // Deprecated: class test.BarFoo is deprecated - This
// class is also obsolete
```
**Implementation Note:** The compiler should have a switch specifying if `deprecated` should be ignored, cause a warning, or cause an error during compilation.
Visibility Attribute
--------------------
```
VisibilityAttribute:
private
package
package ( QualifiedIdentifier )
protected
public
export
```
Visibility is an attribute that is one of `private`, `package`, `protected`, `public`, or `export`. They may be referred to as protection attributes in documents predating [DIP22](http://wiki.dlang.org/DIP22).
Symbols with `private` visibility can only be accessed from within the same module. Private member functions are implicitly final and cannot be overridden.
`package` extends private so that package members can be accessed from code in other modules that are in the same package. If no identifier is provided, this applies to the innermost package only, or defaults to `private` if a module is not nested in a package.
`package` may have an optional parameter in the form of a dot-separated identifier list which is resolved as the qualified package name. The package must be either the module's parent package or one of its ancestors. If this optional parameter is present, the symbol will be visible in the specified package and all of its descendants.
`protected` only applies inside classes (and templates, as they can be mixed in) and means that a symbol can only be seen by members of the same module, or by a derived class. When accessing a protected instance member through a derived class member function, that member can only be accessed on an object instance that is implicitly convertible to the type of `this`. `protected` module members are illegal.
`public` means that any code within the executable can see the member. It is the default visibility attribute.
`export` means that any code outside the executable can access the member. `export` is analogous to exporting definitions from a DLL.
Visibility participates in [symbol name lookup](module#name_lookup).
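For instance, a minimal two-module sketch (the module names are illustrative, and the two files are shown in one listing):
```
module mylib;

int visible = 1;          // public is the default
private int secret = 42;  // accessible only within module mylib
private void helper() { } // implicitly final

// --- separate file ---
module app;
import mylib;

void main()
{
    auto a = visible;    // ok: public
    // auto b = secret;  // error: mylib.secret is private
}
```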
`const` Attribute
------------------
The `const` attribute changes the type of the declared symbol from `T` to `const(T)`, where `T` is the type specified (or inferred) for the introduced symbol in the absence of `const`.
```
const int foo = 7;
static assert(is(typeof(foo) == const(int)));
const
{
double bar = foo + 6;
}
static assert(is(typeof(bar) == const(double)));
class C
{
const void foo();
const
{
void bar();
}
void baz() const;
}
pragma(msg, typeof(C.foo)); // const void()
pragma(msg, typeof(C.bar)); // const void()
pragma(msg, typeof(C.baz)); // const void()
static assert(is(typeof(C.foo) == typeof(C.bar)) &&
is(typeof(C.bar) == typeof(C.baz)));
```
`immutable` Attribute
----------------------
The `immutable` attribute modifies the type from `T` to `immutable(T)`, the same way as `const` does.
`inout` Attribute
------------------
The `inout` attribute modifies the type from `T` to `inout(T)`, the same way as `const` does.
`shared` Attribute
-------------------
The `shared` attribute modifies the type from `T` to `shared(T)`, the same way as `const` does.
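A minimal sketch of these type transformations (note that `inout` is only meaningful in function signatures, where it transfers the qualifier of the argument to the result):
```
immutable int a = 1;
shared int b;

static assert(is(typeof(a) == immutable(int)));
static assert(is(typeof(b) == shared(int)));

// inout: the result carries the mutability of the argument
inout(int)* middle(inout(int)[] arr) { return &arr[$ / 2]; }
```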
`__gshared` Attribute
----------------------
By default, non-immutable global declarations reside in thread local storage. When a global variable is marked with the `__gshared` attribute, its value is shared across all threads.
```
int foo; // Each thread has its own exclusive copy of foo.
__gshared int bar; // bar is shared by all threads.
```
`__gshared` may also be applied to member variables and local variables. In these cases, `__gshared` is equivalent to `static`, except that the variable is shared by all threads rather than being thread local.
```
class Foo
{
__gshared int bar;
}
int foo()
{
__gshared int bar = 0;
return bar++; // Not thread safe.
}
```
Unlike the `shared` attribute, `__gshared` provides no safe-guards against data races or other multi-threaded synchronization issues. It is the responsibility of the programmer to ensure that access to variables marked `__gshared` is synchronized correctly.
`__gshared` is disallowed in safe mode.
`@disable` Attribute
---------------------
A reference to a declaration marked with the `@disable` attribute causes a compile time error. This can be used to explicitly disallow certain operations or overloads at compile time rather than relying on generating a runtime error.
```
@disable void foo() { }
```
```
void main() { foo(); /* error, foo is disabled */ }
```
[Disabling struct no-arg constructor](struct#Struct-Constructor) disallows default construction of the struct.
[Disabling struct postblit](struct#StructPostblit) makes the struct not copyable.
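A sketch of both uses (the struct names are illustrative):
```
struct NonDefault
{
    int x;
    @disable this();        // no default construction
    this(int n) { x = n; }
}

struct NonCopyable
{
    @disable this(this);    // no postblit, so no copies
}

void main()
{
    // NonDefault a;        // error: default construction is disabled
    auto b = NonDefault(1); // ok, explicit constructor
    NonCopyable c;
    // auto d = c;          // error: copying is disabled
}
```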
`@safe`, `@trusted`, and `@system` Attribute
---------------------------------------------
See [Function Safety](https://dlang.org/function.html#function-safety).
`@nogc` Attribute
------------------
See [No-GC Functions](https://dlang.org/function.html#nogc-functions).
`@property` Attribute
----------------------
See [Property Functions](function#property-functions).
`nothrow` Attribute
--------------------
See [Nothrow Functions](function#nothrow-functions).
`pure` Attribute
-----------------
See [Pure Functions](function#pure-functions).
`ref` Attribute
----------------
See [Ref Functions](function#ref-functions).
`return` Attribute
-------------------
See [Return Ref Parameters](function#return-ref-parameters).
`override` Attribute
---------------------
The `override` attribute applies to virtual functions. It means that the function must override a function with the same name and parameters in a base class. The override attribute is useful for catching errors when a base class's member function gets its parameters changed, and all derived classes need to have their overriding functions updated.
```
class Foo
{
int bar();
int abc(int x);
}
class Foo2 : Foo
{
override
{
int bar(char c); // error, no bar(char) in Foo
int abc(int x); // ok
}
}
```
`static` Attribute
-------------------
The `static` attribute applies to functions and data. It means that the declaration does not apply to a particular instance of an object, but to the type of the object. In other words, it means there is no `this` reference. `static` is ignored when applied to other declarations.
```
class Foo
{
static int bar() { return 6; }
int foobar() { return 7; }
}
...
Foo f = new Foo;
Foo.bar(); // produces 6
Foo.foobar(); // error, no instance of Foo
f.bar(); // produces 6;
f.foobar(); // produces 7;
```
Static functions are never virtual.
Static data has one instance per thread, not one per object.
Static does not have the additional C meaning of being local to a file. Use the `private` attribute in D to achieve that. For example:
```
module foo;
int x = 3; // x is global
private int y = 4; // y is local to module foo
```
`auto` Attribute
-----------------
The `auto` attribute is used when there are no other attributes and type inference is desired.
```
auto i = 6.8; // declare i as a double
```
For functions, the `auto` attribute means return type inference. See [Auto Functions](https://dlang.org/function.html#auto-functions).
`scope` Attribute
------------------
The `scope` attribute is used for local variables and for class declarations. For class declarations, the `scope` attribute creates a *scope* class. For local declarations, `scope` implements the RAII (Resource Acquisition Is Initialization) protocol. This means that the destructor for an object is automatically called when the reference to it goes out of scope. The destructor is called even if the scope is exited via a thrown exception, thus `scope` is used to guarantee cleanup.
If there is more than one `scope` variable going out of scope at the same point, then the destructors are called in the reverse order that the variables were constructed.
`scope` cannot be applied to globals, statics, data members, or ref or out parameters. Arrays of `scope`s are not allowed, and `scope` function return values are not allowed. Assignment to a `scope`, other than initialization, is not allowed. **Rationale:** These restrictions may get relaxed in the future if a compelling reason appears.
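A minimal sketch of the RAII guarantee for a local class reference:
```
import std.stdio;

class Resource
{
    this()  { writeln("acquired"); }
    ~this() { writeln("released"); }
}

void main()
{
    scope r = new Resource(); // destructor runs when r goes out of scope,
                              // even if the scope is exited by a throw
    writeln("using resource");
}   // prints: acquired, using resource, released
```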
`abstract` Attribute
---------------------
An abstract member function must be overridden by a derived class. Only virtual member functions may be declared abstract; non-virtual member functions and free-standing functions cannot be declared abstract.
A class becomes abstract if any of its virtual member functions are declared abstract or if they are defined within an abstract attribute. Note that an abstract class may also contain non-virtual member functions.
Classes defined within an abstract attribute or with abstract member functions cannot be instantiated directly. They can only be instantiated as a base class of another, non-abstract, class.
Member functions declared as abstract can still have function bodies. This is so that even though they must be overridden, they can still provide ‘base class functionality’, e.g. through `super.foo()` in a derived class. Note that the class is still abstract and cannot be instantiated directly.
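A brief sketch of an abstract method that still provides base functionality:
```
abstract class Base
{
    // a body is allowed even though overriding is mandatory
    abstract void foo() { import std.stdio : writeln; writeln("base work"); }
}

class Derived : Base
{
    override void foo()
    {
        super.foo(); // reuse the base class functionality
    }
}

void main()
{
    // auto b = new Base;  // error: Base is abstract
    auto d = new Derived;
    d.foo();               // prints: base work
}
```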
User-Defined Attributes
-----------------------
```
UserDefinedAttribute:
@ ( ArgumentList )
@ Identifier
@ Identifier ( ArgumentListopt )
@ TemplateInstance
@ TemplateInstance ( ArgumentListopt )
```
User-Defined Attributes (UDA) are compile-time expressions that can be attached to a declaration. These attributes can then be queried, extracted, and manipulated at compile time. There is no runtime component to them.
A user-defined attribute looks like:
```
@(3) int a;
```
```
@("string", 7) int b;
enum Foo;
@Foo int c;
struct Bar
{
int x;
}
@Bar(3) int d;
```
If there are multiple UDAs in scope for a declaration, they are concatenated:
```
@(1)
{
@(2) int a; // has UDAs (1, 2)
@("string") int b; // has UDAs (1, "string")
}
```
UDAs can be extracted into an expression tuple using `__traits`:
```
@('c') string s;
pragma(msg, __traits(getAttributes, s)); // prints tuple('c')
```
If there are no user-defined attributes for the symbol, an empty tuple is returned. The expression tuple can be turned into a manipulatable tuple:
```
enum EEE = 7;
@("hello") struct SSS { }
@(3) { @(4) @EEE @SSS int foo; }
alias TP = __traits(getAttributes, foo);
pragma(msg, TP); // prints tuple(3, 4, 7, (SSS))
pragma(msg, TP[2]); // prints 7
```
Of course the tuple types can be used to declare things:
```
TP[3] a; // a is declared as an SSS
```
The attribute of the type name is not the same as the attribute of the variable:
```
pragma(msg, __traits(getAttributes, typeof(a))); // prints tuple("hello")
```
Of course, the real value of UDAs is to be able to create user-defined types with specific values. Having attribute values of basic types does not scale. The attribute tuples can be manipulated like any other tuple, and can be passed as the argument list to a template.
Whether the attributes are values or types is up to the user, and whether later attributes accumulate or override earlier ones is also up to how the user interprets them.
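For example, a sketch of a user-defined attribute type extracted with the Phobos helper [`std.traits.getUDAs`](https://dlang.org/phobos/std_traits.html#getUDAs) (the `Serialize` type and field names are illustrative):
```
import std.traits : getUDAs;

struct Serialize { string name; }

@Serialize("id") int key;

// extract the first Serialize attribute attached to the symbol
enum attr = getUDAs!(key, Serialize)[0];
static assert(attr.name == "id");
```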
UDAs cannot be attached to template parameters.
Templates
=========
**Contents** 1. [Explicit Template Instantiation](#explicit_tmp_instantiation)
1. [Practical Example](#copy_example)
2. [Instantiation Scope](#instantiation_scope)
3. [Argument Deduction](#argument_deduction)
4. [Template Type Parameters](#template_type_parameters)
1. [Specialization](#parameters_specialization)
5. [Template This Parameters](#template_this_parameter)
6. [Template Value Parameters](#template_value_parameter)
7. [Template Alias Parameters](#aliasparameters)
1. [Typed alias parameters](#typed_alias_op)
2. [Specialization](#alias_parameter_specialization)
8. [Template Sequence Parameters](#variadic-templates)
1. [Specialization](#variadic_template_specialization)
9. [Template Parameter Default Values](#template_parameter_def_values)
10. [Eponymous Templates](#implicit_template_properties)
11. [Template Constructors](#template_ctors)
12. [Aggregate Templates](#aggregate_templates)
13. [Function Templates](#function-templates)
14. [Enum & Variable Templates](#variable-template)
15. [Alias Templates](#alias-template)
1. [Function Templates with Auto Ref Parameters](#auto-ref-parameters)
16. [Nested Templates](#nested-templates)
1. [Implicit Nesting](#implicit-nesting)
2. [Context Limitation](#nested_template_limitation)
17. [Recursive Templates](#recursive_templates)
18. [Template Constraints](#template_constraints)
19. [Limitations](#limitations)
Templates are D's approach to generic programming. Templates are defined with a *TemplateDeclaration*:
```
TemplateDeclaration:
template Identifier TemplateParameters Constraintopt { DeclDefsopt }
TemplateParameters:
( TemplateParameterListopt )
TemplateParameterList:
TemplateParameter
TemplateParameter ,
TemplateParameter , TemplateParameterList
TemplateParameter:
TemplateTypeParameter
TemplateValueParameter
TemplateAliasParameter
TemplateSequenceParameter
TemplateThisParameter
```
The body of the *TemplateDeclaration* must be syntactically correct even if never instantiated. Semantic analysis is not done until instantiated. A template forms its own scope, and the template body can contain classes, structs, types, enums, variables, functions, and other templates.
Template parameters can be types, values, symbols, or sequences. Types can be any type. Value parameters must be of an integral type, floating point type, or string type and specializations for them must resolve to an integral constant, floating point constant, null, or a string literal. Symbols can be any non-local symbol. Sequences can contain zero or more types, values or symbols.
Template parameter specializations constrain the values or types the *TemplateParameter* can accept.
Template parameter defaults are the value or type to use for the *TemplateParameter* in case one is not supplied.
Explicit Template Instantiation
-------------------------------
Templates are explicitly instantiated with:
```
TemplateInstance:
Identifier TemplateArguments
TemplateArguments:
! ( TemplateArgumentListopt )
! TemplateSingleArgument
TemplateArgumentList:
TemplateArgument
TemplateArgument ,
TemplateArgument , TemplateArgumentList
TemplateArgument:
Type
AssignExpression
Symbol
Symbol:
SymbolTail
. SymbolTail
SymbolTail:
Identifier
Identifier . SymbolTail
TemplateInstance
TemplateInstance . SymbolTail
TemplateSingleArgument:
Identifier
FundamentalType
CharacterLiteral
StringLiteral
IntegerLiteral
FloatLiteral
true
false
null
this
SpecialKeyword
```
Once instantiated, the declarations inside the template, called the template members, are in the scope of the *TemplateInstance*:
```
template TFoo(T) { alias Ptr = T*; }
...
TFoo!(int).Ptr x; // declare x to be of type int*
```
If the [*TemplateArgument*](#TemplateArgument) is one token long, the parentheses can be omitted:
```
TFoo!int.Ptr x; // same as TFoo!(int).Ptr x;
```
A template instantiation can be aliased:
```
template TFoo(T) { alias Ptr = T*; }
alias foo = TFoo!(int);
foo.Ptr x; // declare x to be of type int*
```
Multiple instantiations of a *TemplateDeclaration* with the same *TemplateArgumentList* will all refer to the same instantiation. For example:
```
template TFoo(T) { T f; }
alias a = TFoo!(int);
alias b = TFoo!(int);
...
a.f = 3;
assert(b.f == 3); // a and b refer to the same instance of TFoo
```
This is true even if the *TemplateInstance*s are done in different modules.
Even if template arguments are implicitly converted to the same template parameter type, they still refer to the same instance. This example uses a [`struct` template](#aggregate_templates):
```
struct TFoo(int x) { }
// Different template parameters create different struct types
static assert(!is(TFoo!(3) == TFoo!(2)));
// 3 and 2+1 are both 3 of type int - same TFoo instance
static assert(is(TFoo!(3) == TFoo!(2 + 1)));
// 3u is implicitly converted to 3 to match int parameter,
// and refers to exactly the same instance as TFoo!(3)
static assert(is(TFoo!(3) == TFoo!(3u)));
```
If multiple templates with the same *Identifier* are declared, they are distinct if they have a different number of arguments or are differently specialized.
### Practical Example
A simple generic copy template would be:
```
template TCopy(T)
{
void copy(out T to, T from)
{
to = from;
}
}
```
To use this template, it must first be instantiated with a specific type:
```
int i;
TCopy!(int).copy(i, 3);
```
See also [function templates](#function-templates).
Instantiation Scope
-------------------
*TemplateInstance*s are always instantiated in the scope of where the *TemplateDeclaration* is declared, with the addition of the template parameters being declared as aliases for their deduced types.
For example:
*module a*
```
template TFoo(T) { void bar() { func(); } }
```
*module b*
```
import a;
void func() { }
alias f = TFoo!(int); // error: func not defined in module a
```
and:
*module a*
```
template TFoo(T) { void bar() { func(1); } }
void func(double d) { }
```
*module b*
```
import a;
void func(int i) { }
alias f = TFoo!(int);
...
f.bar(); // will call a.func(double)
```
*TemplateParameter* specializations and default values are evaluated in the scope of the *TemplateDeclaration*.
Argument Deduction
------------------
The types of template parameters are deduced for a particular template instantiation by comparing the template argument with the corresponding template parameter.
For each template parameter, the following rules are applied in order until a type is deduced for each parameter:
1. If there is no type specialization for the parameter, the type of the parameter is set to the template argument.
2. If the type specialization is dependent on a type parameter, the type of that parameter is set to be the corresponding part of the type argument.
3. If after all the type arguments are examined, there are any type parameters left with no type assigned, they are assigned types corresponding to the template argument in the same position in the *TemplateArgumentList*.
4. If applying the above rules does not result in exactly one type for each template parameter, then it is an error.
For example:
```
template TFoo(T) { }
alias foo1 = TFoo!(int); // (1) T is deduced to be int
alias foo2 = TFoo!(char*); // (1) T is deduced to be char*
template TBar(T : T*) { }
alias bar = TBar!(char*); // (2) T is deduced to be char
template TAbc(D, U : D[]) { }
alias abc1 = TAbc!(int, int[]); // (2) D is deduced to be int, U is int[]
alias abc2 = TAbc!(char, int[]); // (4) error, D is both char and int
template TDef(D : E*, E) { }
alias def = TDef!(int*, int); // (1) E is int
// (3) D is int*
```
Deduction from a specialization can provide values for more than one parameter:
```
template Foo(T: T[U], U)
{
...
}
Foo!(int[long]) // instantiates Foo with T set to int, U set to long
```
When considering matches, a class is considered to be a match for any super classes or interfaces:
```
class A { }
class B : A { }
template TFoo(T : A) { }
alias foo = TFoo!(B); // (3) T is B
template TBar(T : U*, U : A) { }
alias bar = TBar!(B*, B); // (2) T is B*
// (3) U is B
```
Template Type Parameters
------------------------
```
TemplateTypeParameter:
Identifier
Identifier TemplateTypeParameterSpecialization
Identifier TemplateTypeParameterDefault
Identifier TemplateTypeParameterSpecialization TemplateTypeParameterDefault
TemplateTypeParameterSpecialization:
: Type
TemplateTypeParameterDefault:
= Type
```
### Specialization
Templates may be specialized for particular types of arguments by following the template parameter identifier with a : and the specialized type. For example:
```
template TFoo(T) { ... } // #1
template TFoo(T : T[]) { ... } // #2
template TFoo(T : char) { ... } // #3
template TFoo(T, U, V) { ... } // #4
alias foo1 = TFoo!(int); // instantiates #1
alias foo2 = TFoo!(double[]); // instantiates #2 with T being double
alias foo3 = TFoo!(char); // instantiates #3
alias fooe = TFoo!(char, int); // error, number of arguments mismatch
alias foo4 = TFoo!(char, int, int); // instantiates #4
```
The template selected for instantiation is the most specialized one that fits the types of the *TemplateArgumentList*. If the result is ambiguous, it is an error.
Template This Parameters
------------------------
```
TemplateThisParameter:
this TemplateTypeParameter
```
*TemplateThisParameter*s are used in member function templates to pick up the type of the *this* reference. They will also infer the mutability of the `this` reference. For example, if `this` is `const`, then the function is marked `const`.
```
import std.stdio;
struct S
{
const void foo(this T)(int i)
{
writeln(typeid(T));
}
}
void main()
{
const(S) s;
(&s).foo(1);
S s2;
s2.foo(2);
immutable(S) s3;
s3.foo(3);
}
```
Prints:
```
const(S)
S
immutable(S)
```
This is especially useful when used with inheritance. For example, consider the implementation of a final base method which returns a derived class type. Typically this would return a base type, but that would prohibit calling or accessing derived properties of the type:
```
interface Addable(T)
{
final auto add(T t)
{
return this;
}
}
class List(T) : Addable!T
{
List remove(T t)
{
return this;
}
}
void main()
{
auto list = new List!int;
list.add(1).remove(1); // error: no 'remove' method for Addable!int
}
```
Here the method `add` returns the base type, which doesn't implement the `remove` method. The `template this` parameter can be used for this purpose:
```
interface Addable(T)
{
final R add(this R)(T t)
{
return cast(R)this; // cast is necessary, but safe
}
}
class List(T) : Addable!T
{
List remove(T t)
{
return this;
}
}
void main()
{
auto list = new List!int;
list.add(1).remove(1); // ok
}
```
Template Value Parameters
-------------------------
```
TemplateValueParameter:
BasicType Declarator
BasicType Declarator TemplateValueParameterSpecialization
BasicType Declarator TemplateValueParameterDefault
BasicType Declarator TemplateValueParameterSpecialization TemplateValueParameterDefault
TemplateValueParameterSpecialization:
: ConditionalExpression
TemplateValueParameterDefault:
= AssignExpression
= SpecialKeyword
```
Template value parameter types can be any type which can be statically initialized at compile time. Template value arguments can be integer values, floating point values, nulls, string values, array literals of template value arguments, associative array literals of template value arguments, or struct literals of template value arguments.
```
template foo(string s)
{
string bar() { return s ~ " betty"; }
}
void main()
{
writefln("%s", foo!("hello").bar()); // prints: hello betty
}
```
This example of template foo has a value parameter that is specialized for 10:
```
template foo(U : int, int v : 10)
{
U x = v;
}
void main()
{
assert(foo!(int, 10).x == 10);
}
```
Template Alias Parameters
-------------------------
```
TemplateAliasParameter:
alias Identifier TemplateAliasParameterSpecializationopt TemplateAliasParameterDefaultopt
alias BasicType Declarator TemplateAliasParameterSpecializationopt TemplateAliasParameterDefaultopt
TemplateAliasParameterSpecialization:
: Type
: ConditionalExpression
TemplateAliasParameterDefault:
= Type
= ConditionalExpression
```
Alias parameters enable templates to be parameterized with symbol names or values computed at compile-time. Almost any kind of D symbol can be used, including user-defined type names, global names, local names, module names, template names, and template instance names.
**Symbol examples:**
* User-defined type names
```
class Foo
{
static int x;
}
template Bar(alias a)
{
alias sym = a.x;
}
void main()
{
alias bar = Bar!(Foo);
bar.sym = 3; // sets Foo.x to 3
}
```
* Global names
```
shared int x;
template Foo(alias var)
{
auto ptr = &var;
}
void main()
{
alias bar = Foo!(x);
*bar.ptr = 3; // set x to 3
static shared int y;
alias abc = Foo!(y);
*abc.ptr = 3; // set y to 3
}
```
* Local names
```
template Foo(alias var)
{
void inc() { var++; }
}
void main()
{
int v = 4;
alias foo = Foo!v;
foo.inc();
assert(v == 5);
}
```
See also [Implicit Template Nesting](#implicit-nesting).
* Module names
```
import std.conv;
template Foo(alias a)
{
alias sym = a.text;
}
void main()
{
alias bar = Foo!(std.conv);
bar.sym(3); // calls std.conv.text(3)
}
```
* Template names
```
shared int x;
template Foo(alias var)
{
auto ptr = &var;
}
template Bar(alias Tem)
{
alias instance = Tem!(x);
}
void main()
{
alias bar = Bar!(Foo);
*bar.instance.ptr = 3; // sets x to 3
}
```
* Template instance names
```
shared int x;
template Foo(alias var)
{
auto ptr = &var;
}
template Bar(alias sym)
{
alias p = sym.ptr;
}
void main()
{
alias foo = Foo!(x);
alias bar = Bar!(foo);
*bar.p = 3; // sets x to 3
}
```
**Value examples:**
* Literals
```
template Foo(alias x, alias y)
{
static int i = x;
static string s = y;
}
void main()
{
alias foo = Foo!(3, "bar");
writeln(foo.i, foo.s); // prints 3bar
}
```
* Compile-time values
```
template Foo(alias x)
{
static int i = x;
}
void main()
{
// compile-time argument evaluation
enum two = 1 + 1;
alias foo = Foo!(5 * two);
assert(foo.i == 10);
static assert(foo.stringof == "Foo!10");
// compile-time function evaluation
int get10() { return 10; }
alias bar = Foo!(get10());
// bar is the same template instance as foo
assert(&bar.i is &foo.i);
}
```
* Lambdas
```
template Foo(alias fun)
{
enum val = fun(2);
}
alias foo = Foo!((int x) => x * x);
static assert(foo.val == 4);
```
### Typed alias parameters
Alias parameters can also be typed. These parameters will accept symbols of that type:
```
template Foo(alias int x) { }
int x;
float f;
Foo!x; // ok
Foo!f; // fails to instantiate
```
### Specialization
Alias parameters can accept both literals and user-defined type symbols, but they are less specialized than the matches to type parameters and value parameters:
```
template Foo(T) { ... } // #1
template Foo(int n) { ... } // #2
template Foo(alias sym) { ... } // #3
struct S {}
int var;
alias foo1 = Foo!(S); // instantiates #1
alias foo2 = Foo!(1); // instantiates #2
alias foo3a = Foo!([1,2]); // instantiates #3
alias foo3b = Foo!(var); // instantiates #3
```
```
template Bar(alias A) { ... } // #4
template Bar(T : U!V, alias U, V...) { ... } // #5
class C(T) {}
alias bar = Bar!(C!int); // instantiates #5
```
Template Sequence Parameters
----------------------------
```
TemplateSequenceParameter:
Identifier ...
```
If the last template parameter in the *TemplateParameterList* is declared as a *TemplateSequenceParameter*, it is a match with any trailing template arguments. Such a sequence of arguments can be defined using the template [`std.meta.AliasSeq`](https://dlang.org/phobos/std_meta.html#AliasSeq) and will thus henceforth be referred to by that name for clarity. An *AliasSeq* is not itself a type, value, or symbol. It is a compile-time sequence of any mix of types, values or symbols.
An *AliasSeq* whose elements consist entirely of types is called a type sequence or *TypeSeq*. An *AliasSeq* whose elements consist entirely of values is called a value sequence or *ValueSeq*.
An *AliasSeq* can be used as an argument list to instantiate another template, or as the list of parameters for a function.
```
template print(args...)
{
void print()
{
writeln("args are ", args); // args is a ValueSeq
}
}
template write(Args...)
{
void write(Args args) // Args is a TypeSeq
// args is a ValueSeq
{
writeln("args are ", args);
}
}
void main()
{
print!(1,'a',6.8).print(); // prints: args are 1a6.8
write!(int, char, double).write(1, 'a', 6.8); // prints: args are 1a6.8
}
```
The number of elements in an *AliasSeq* can be retrieved with the `.length` property. The *n*th element can be retrieved by indexing the *AliasSeq* with [*n*], and sub-sequences are denoted by the slicing syntax.
*AliasSeq*s are static compile-time entities; there is no way to dynamically change, add, or remove elements either at compile time or run time.
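A minimal sketch of these compile-time operations, using [`std.meta.AliasSeq`](https://dlang.org/phobos/std_meta.html#AliasSeq):
```
import std.meta : AliasSeq;

alias Seq = AliasSeq!(int, char, double);

static assert(Seq.length == 3);     // .length property
static assert(is(Seq[1] == char));  // indexing

alias Tail = Seq[1 .. $];           // slicing yields a sub-sequence
static assert(is(Tail[0] == char) && is(Tail[1] == double));
```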
Type sequences can be deduced from the trailing parameters of an implicitly instantiated function template:
```
template print(T, Args...)
{
void print(T first, Args args)
{
writeln(first);
static if (args.length) // if more arguments
print(args); // recurse for remaining arguments
}
}
void main()
{
print(1, 'a', 6.8);
}
```
prints:
```
1
a
6.8
```
Type sequences can also be deduced from the type of a delegate or function parameter list passed as a function argument:
```
import std.stdio;
/* Partially applies a delegate by tying its first argument to a particular value.
* R = return type
* T = first argument type
* Args = TypeSeq of remaining argument types
*/
R delegate(Args) partial(R, T, Args...)(R delegate(T, Args) dg, T first)
{
// return a closure
return (Args args) => dg(first, args);
}
void main()
{
int plus(int x, int y, int z)
{
return x + y + z;
}
auto plus_two = partial(&plus, 2);
writefln("%d", plus_two(6, 8)); // prints 16
}
```
See also: [`std.functional.partial`](https://dlang.org/phobos/std_functional.html#partial).
### Specialization
If both a template with a sequence parameter and a template without a sequence parameter exactly match a template instantiation, the template without a *TemplateSequenceParameter* is selected.
```
template Foo(T) { pragma(msg, "1"); } // #1
template Foo(int n) { pragma(msg, "2"); } // #2
template Foo(alias sym) { pragma(msg, "3"); } // #3
template Foo(Args...) { pragma(msg, "4"); } // #4
import std.stdio;
// Any sole template argument will never match to #4
alias foo1 = Foo!(int); // instantiates #1
alias foo2 = Foo!(3); // instantiates #2
alias foo3 = Foo!(std); // instantiates #3
alias foo4 = Foo!(int, 3, std); // instantiates #4
```
Template Parameter Default Values
---------------------------------
Trailing template parameters can be given default values:
```
template Foo(T, U = int) { ... }
Foo!(uint,long); // instantiate Foo with T as uint, and U as long
Foo!(uint); // instantiate Foo with T as uint, and U as int
template Foo(T, U = T*) { ... }
Foo!(uint); // instantiate Foo with T as uint, and U as uint*
```
Eponymous Templates
-------------------
If a template contains members whose name is the same as the template identifier then these members are assumed to be referred to in a template instantiation:
```
template foo(T)
{
T foo; // declare variable foo of type T
}
void main()
{
foo!(int) = 6; // instead of foo!(int).foo
}
```
Eponymous function members can be overloaded, and can use types beyond the template parameters:
```
template foo(S, T)
{
// each member contains all the template parameters
void foo(S s, T t) {}
void foo(S s, T t, string) {}
}
void main()
{
foo(1, 2, "test"); // foo!(int, int).foo(1, 2, "test")
foo(1, 2); // foo!(int, int).foo(1, 2)
}
```
When the template parameters must be deduced, the eponymous members can't rely on a [`static if`](version#StaticIfCondition) condition, since the deduction relies on how the members are used:
```
template foo(T)
{
static if (is(T)) // T is not yet known...
void foo(T t) {} // T is deduced from the member usage
}
void main()
{
foo(0); // Error: cannot deduce function from argument types
foo!int(0); // Ok since no deduction necessary
}
```
Template Constructors
---------------------
```
ConstructorTemplate:
this TemplateParameters Parameters MemberFunctionAttributesopt Constraintopt :
this TemplateParameters Parameters MemberFunctionAttributesopt Constraintopt FunctionBody
```
Templates can be used to form constructors for classes and structs.
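A brief sketch (the struct and its behavior are illustrative); the constructor's type parameter is deduced from the argument:
```
struct Box
{
    string repr;

    this(T)(T value) // templated constructor
    {
        import std.conv : to;
        repr = value.to!string;
    }
}

void main()
{
    auto a = Box(42);   // instantiates this!(int)
    auto b = Box("hi"); // instantiates this!(string)
    assert(a.repr == "42" && b.repr == "hi");
}
```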
Aggregate Templates
-------------------
```
ClassTemplateDeclaration:
class Identifier TemplateParameters ;
class Identifier TemplateParameters Constraintopt BaseClassListopt AggregateBody
class Identifier TemplateParameters BaseClassListopt Constraintopt AggregateBody
InterfaceTemplateDeclaration:
interface Identifier TemplateParameters ;
interface Identifier TemplateParameters Constraintopt BaseInterfaceListopt AggregateBody
interface Identifier TemplateParameters BaseInterfaceList Constraint AggregateBody
StructTemplateDeclaration:
struct Identifier TemplateParameters ;
struct Identifier TemplateParameters Constraintopt AggregateBody
UnionTemplateDeclaration:
union Identifier TemplateParameters ;
union Identifier TemplateParameters Constraintopt AggregateBody
```
If a template declares exactly one member, and that member is a class with the same name as the template (see [Eponymous Templates](#implicit_template_properties)):
```
template Bar(T)
{
class Bar
{
T member;
}
}
```
then the semantic equivalent, called a *ClassTemplateDeclaration* can be written as:
```
class Bar(T)
{
T member;
}
```
See also: [Template This Parameters](#template_this_parameter).
Analogously to class templates, structs, unions, and interfaces can be transformed into templates by supplying a template parameter list.
Function Templates
------------------
If a template declares exactly one member, and that member is a function with the same name as the template, it is a function template declaration. Alternatively, a function template declaration is a function declaration with a [*TemplateParameterList*](#TemplateParameterList) immediately preceding the [*Parameters*](function#Parameters).
A function template to compute the square of type *T* is:
```
T Square(T)(T t)
{
return t * t;
}
```
It is lowered to:
```
template Square(T)
{
T Square(T t)
{
return t * t;
}
}
```
Function templates can be explicitly instantiated with a !(*TemplateArgumentList*):
```
writefln("The square of %s is %s", 3, Square!(int)(3));
```
or implicitly, where the *TemplateArgumentList* is deduced from the types of the function arguments:
```
writefln("The square of %s is %s", 3, Square(3)); // T is deduced to be int
```
If there are fewer arguments supplied in the *TemplateArgumentList* than parameters in the *TemplateParameterList*, the arguments fulfill parameters from left to right, and the rest of the parameters are then deduced from the function arguments.
The process of deducing template type parameters from function arguments is called Implicit Function Template Instantiation (IFTI).
Function template type parameters that are to be implicitly deduced may not have specializations:
```
void Foo(T : T*)(T t) { ... }
int x,y;
Foo!(int*)(x); // ok, T is not deduced from function argument
Foo(&y); // error, T has specialization
```
Template arguments not implicitly deduced can have default values:
```
void Foo(T, U=T*)(T t) { U p; ... }
int x;
Foo(x); // T is int, U is int*
```
When template type parameters are matched against literal expressions in the function arguments, deduction may consider narrowing conversions of those literals.
```
void foo(T)(T v) { pragma(msg, "in foo, T = ", T); }
void bar(T)(T v, T[] a) { pragma(msg, "in bar, T = ", T); }
foo(1);
// an integer literal type is analyzed as int by default
// then T is deduced to int
short[] arr;
bar(1, arr);
// arr is short[], and the integer literal 1 is
// implicitly convertible to short.
// then T will be deduced to short.
bar(1, [2.0, 3.0]);
// the array literal is analyzed as double[],
// and the integer literal 1 is implicitly convertible to double.
// then T will be deduced to double.
```
The deduced type parameter for dynamic array and pointer arguments has an unqualified head:
```
void foo(T)(T arg) { pragma(msg, T); }
int[] marr;
const(int[]) carr;
immutable(int[]) iarr;
foo(marr); // T == int[]
foo(carr); // T == const(int)[]
foo(iarr); // T == immutable(int)[]
int* mptr;
const(int*) cptr;
immutable(int*) iptr;
foo(mptr); // T == int*
foo(cptr); // T == const(int)*
foo(iptr); // T == immutable(int)*
```
Type parameter deduction is not influenced by the order of function arguments.
Function templates can have their return types deduced based on the [*ReturnStatement*](statement#ReturnStatement)s in the function, just as with normal functions. See [Auto Functions](https://dlang.org/function.html#auto-functions).
```
auto Square(T)(T t)
{
return t * t;
}
```
Variadic Function Templates can have parameters with default values. These parameters are always set to their default value in case of IFTI.
```
size_t fun(T...)(T t, string file = __FILE__)
{
import std.stdio;
writeln(file, " ", t);
return T.length;
}
assert(fun(1, "foo") == 2); // uses IFTI
assert(fun!int(1, "foo") == 1); // no IFTI
```
Enum & Variable Templates
-------------------------
Like aggregates and functions, manifest constant and variable declarations can have template parameters, provided there is an [*Initializer*](declaration#Initializer):
```
enum string constant(TL...) = TL.stringof;
ubyte[T.sizeof] storage(T) = 0;
auto array(alias a) = a;
```
These declarations are transformed into templates:
```
template constant(TL...)
{
enum string constant = TL.stringof;
}
template storage(T)
{
ubyte[T.sizeof] storage = 0;
}
template array(alias a)
{
auto array = a;
}
```
Alias Templates
---------------
[*AliasDeclaration*](declaration#AliasDeclaration) can also have optional template parameters:
```
alias Sequence(TL...) = TL;
```
It is lowered to:
```
template Sequence(TL...)
{
alias Sequence = TL;
}
```
### Function Templates with Auto Ref Parameters
An auto ref function template parameter becomes a ref parameter if its corresponding argument is an lvalue, otherwise it becomes a value parameter:
```
int foo(Args...)(auto ref Args args)
{
int result;
foreach (i, v; args)
{
if (v == 10)
assert(__traits(isRef, args[i]));
else
assert(!__traits(isRef, args[i]));
result += v;
}
return result;
}
void main()
{
int y = 10;
int r;
r = foo(8); // returns 8
r = foo(y); // returns 10
r = foo(3, 4, y); // returns 17
r = foo(4, 5, y); // returns 19
r = foo(y, 6, y); // returns 26
}
```
Auto ref parameters can be combined with auto ref return attributes:
```
auto ref min(T, U)(auto ref T lhs, auto ref U rhs)
{
return lhs > rhs ? rhs : lhs;
}
void main()
{
int x = 7, y = 8;
int i;
i = min(4, 3); // returns 3
i = min(x, y); // returns 7
min(x, y) = 10; // sets x to 10
static assert(!__traits(compiles, min(3, y) = 10));
static assert(!__traits(compiles, min(y, 3) = 10));
}
```
Nested Templates
----------------
If a template is declared in an aggregate or function-local scope, the instantiated functions will implicitly capture the context of the enclosing scope.
```
class C
{
int num;
this(int n) { num = n; }
template Foo()
{
// 'foo' can access 'this' reference of class C object.
void foo(int n) { this.num = n; }
}
}
void main()
{
auto c = new C(1);
assert(c.num == 1);
c.Foo!().foo(5);
assert(c.num == 5);
template Bar()
{
// 'bar' can access local variable of 'main' function.
void bar(int n) { c.num = n; }
}
Bar!().bar(10);
assert(c.num == 10);
}
```
Above, `Foo!().foo` will work just the same as a member function of class `C`, and `Bar!().bar` will work just the same as a nested function within function `main()`.
### Implicit Nesting
If a template has a [template alias parameter](#aliasparameters), and is instantiated with a local symbol, the instantiated function will implicitly become nested in order to access runtime data of the given local symbol.
```
template Foo(alias sym)
{
void foo() { sym = 10; }
}
class C
{
int num;
this(int n) { num = n; }
void main()
{
assert(this.num == 1);
alias fooX = Foo!(C.num).foo;
// fooX will become member function implicitly, so &fooX
// returns a delegate object.
static assert(is(typeof(&fooX) == delegate));
fooX(); // called by using valid 'this' reference.
assert(this.num == 10); // OK
}
}
void main()
{
new C(1).main();
int num;
alias fooX = Foo!num.foo;
// fooX will become nested function implicitly, so &fooX
// returns a delegate object.
static assert(is(typeof(&fooX) == delegate));
fooX();
assert(num == 10); // OK
}
```
Not only functions, but also instantiated class and struct types can become nested via implicitly captured context.
```
class C
{
int num;
this(int n) { num = n; }
class N(T)
{
// instantiated class N!T can become nested in C
T foo() { return num * 2; }
}
}
void main()
{
auto c = new C(10);
auto n = c.new N!int();
assert(n.foo() == 20);
}
```
See also: [Nested Class Instantiation](class#nested-explicit).
```
void main()
{
int num = 10;
struct S(T)
{
// instantiated struct S!T can become nested in main()
T foo() { return num * 2; }
}
S!int s;
assert(s.foo() == 20);
}
```
A templated `struct` can become a nested `struct` if it is instantiated with a local symbol passed as an aliased argument:
```
struct A(alias F)
{
int fun(int i) { return F(i); }
}
A!F makeA(alias F)() { return A!F(); }
void main()
{
int x = 40;
int fun(int i) { return x + i; }
A!fun a = makeA!fun();
assert(a.fun(2) == 42);
}
```
### Context Limitation
Currently, nested templates can capture at most one context. As a typical example, a non-static template member function cannot take a local symbol as a template alias parameter.
```
class C
{
int num;
void foo(alias sym)() { num = sym * 2; }
}
void main()
{
auto c = new C();
int var = 10;
c.foo!var(); // NG, foo!var requires two contexts, 'this' and 'main()'
}
```
But if one context is indirectly accessible from the other context, it is allowed.
```
int sum(alias x, alias y)() { return x + y; }
void main()
{
int a = 10;
void nested()
{
int b = 20;
assert(sum!(a, b)() == 30);
}
nested();
}
```
The two local variables `a` and `b` are in different contexts, but the outer context is indirectly accessible from the inner context, so the nested template instance `sum!(a, b)` will capture only the inner context.
Recursive Templates
-------------------
Template features can be combined to produce some interesting effects, such as compile time evaluation of non-trivial functions. For example, a factorial template can be written:
```
template factorial(int n)
{
static if (n == 1)
enum factorial = 1;
else
enum factorial = n * factorial!(n - 1);
}
static assert(factorial!(4) == 24);
```
For more information and a CTFE (Compile-time Function Execution) factorial alternative, see: [Template Recursion](https://dlang.org/articles/templates-revisited.html#template-recursion).
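As a sketch of that CTFE alternative, an ordinary function can be evaluated at compile time when its result is needed in a compile-time context:
```
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static assert(factorial(4) == 24); // forces compile-time evaluation
```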
Template Constraints
--------------------
```
Constraint:
if ( Expression )
```
*Constraint*s are used to impose additional constraints on matching arguments to a template beyond what is possible in the [*TemplateParameterList*](#TemplateParameterList). The *Expression* is computed at compile time and returns a result that is converted to a boolean value. If that value is true, then the template is matched, otherwise the template is not matched.
For example, the following function template only matches with odd values of `N`:
```
void foo(int N)()
if (N & 1)
{
...
}
...
foo!(3)(); // OK, matches
foo!(4)(); // Error, no match
```
Template constraints can also be used with aggregate types (structs, classes, unions). Constraints are commonly combined with the library module [`std.traits`](https://dlang.org/phobos/std_traits.html):
```
import std.traits;
struct Bar(T)
if (isIntegral!T)
{
...
}
...
Bar!int x; // OK, int is an integral type
Bar!double y; // Error, double does not satisfy constraint
```
Limitations
-----------
Templates cannot be used to add non-static fields or virtual functions to classes or interfaces. For example:
```
class Foo
{
template TBar(T)
{
T xx; // becomes a static field of Foo
int func(T) { ... } // non-virtual
static T yy; // Ok
static int func(T t, int y) { ... } // Ok
}
}
```
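As a self-contained illustration of the consequence (hypothetical example code), the instantiated field has a single static copy per instantiation, shared by every object of the class:
```
class Foo
{
    template TBar(T)
    {
        T xx; // static: one copy per TBar!T, not per Foo object
    }
}
void main()
{
    auto a = new Foo;
    auto b = new Foo;
    Foo.TBar!int.xx = 7;          // accessed through the type
    assert(Foo.TBar!int.xx == 7); // a and b share this storage
}
```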
d rt.cover rt.cover
========
Implementation of code coverage analyzer.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Walter Bright, Sean Kelly
Source
[rt/cover.d](https://github.com/dlang/druntime/blob/master/src/rt/cover.d)
void **dmd\_coverSourcePath**(string pathname);
Set path to where source files are located.
Parameters:
| | |
| --- | --- |
| string `pathname` | The new path name. |
void **dmd\_coverDestPath**(string pathname);
Set path to where listing files are to be written.
Parameters:
| | |
| --- | --- |
| string `pathname` | The new path name. |
void **dmd\_coverSetMerge**(bool flag);
Set merge mode.
Parameters:
| | |
| --- | --- |
| bool `flag` | true means new data is summed with existing data in the listing file; false means a new listing file is always created. |
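These hooks are typically called from a module constructor so they take effect before any listings are written. A minimal sketch, assuming the usual `extern (C)` declarations (the functions are not reachable through a public import path):
```
extern (C) void dmd_coverSourcePath(string pathname);
extern (C) void dmd_coverDestPath(string pathname);
extern (C) void dmd_coverSetMerge(bool flag);
shared static this()
{
    dmd_coverSourcePath("src");   // where the .d sources live
    dmd_coverDestPath("reports"); // where the .lst listings go
    dmd_coverSetMerge(true);      // accumulate counts across runs
}
```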
void **\_d\_cover\_register2**(string filename, size\_t[] valid, uint[] data, ubyte minPercent);
The coverage callback.
Parameters:
| | |
| --- | --- |
| string `filename` | The name of the coverage file. |
| size\_t[] `valid` | ??? |
| uint[] `data` | ??? |
| ubyte `minPercent` | minimal coverage of the module |
d rt.cmath2 rt.cmath2
=========
Runtime support for complex arithmetic code generation (for Posix).
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Walter Bright, Sean Kelly
void **\_Cmul**();
Multiply two complex floating point numbers, x and y.
Input
| | |
| --- | --- |
| x.re | ST3 |
| x.im | ST2 |
| y.re | ST1 |
| y.im | ST0 |
Output
| | |
| --- | --- |
| ST1 | real part |
| ST0 | imaginary part |
void **\_Cdiv**();
Divide two complex floating point numbers, x / y.
Input
| | |
| --- | --- |
| x.re | ST3 |
| x.im | ST2 |
| y.re | ST1 |
| y.im | ST0 |
Output
| | |
| --- | --- |
| ST1 | real part |
| ST0 | imaginary part |
void **\_Ccmp**();
Compare two complex floating point numbers, x and y.
Input
| | |
| --- | --- |
| x.re | ST3 |
| x.im | ST2 |
| y.re | ST1 |
| y.im | ST0 |
Output
The 8087 stack is cleared and the flags are set.
d std.functional std.functional
==============
Functions that manipulate other functions.
This module provides functions for compile time function composition. These functions are helpful when constructing predicates for the algorithms in [`std.algorithm`](std_algorithm) or [`std.range`](std_range).
| Function Name | Description |
| --- | --- |
| [`adjoin`](#adjoin) | Joins a couple of functions into one that executes the original functions independently and returns a tuple with all the results. |
| [`compose`](#compose), [`pipe`](#pipe) | Join a couple of functions into one that executes the original functions one after the other, using one function's result for the next function's argument. |
| [`forward`](#forward) | Forwards function arguments while saving ref-ness. |
| [`lessThan`](#lessThan), [`greaterThan`](#greaterThan), [`equalTo`](#equalTo) | Ready-made predicate functions to compare two values. |
| [`memoize`](#memoize) | Creates a function that caches its result for fast re-evaluation. |
| [`not`](#not) | Creates a function that negates another. |
| [`partial`](#partial) | Creates a function that binds the first argument of a given function to a given value. |
| [`curry`](#curry) | Converts a multi-argument function into a series of single-argument functions. f(x, y) == curry(f)(x)(y) |
| [`reverseArgs`](#reverseArgs) | Predicate that reverses the order of its arguments. |
| [`toDelegate`](#toDelegate) | Converts a callable to a delegate. |
| [`unaryFun`](#unaryFun), [`binaryFun`](#binaryFun) | Create a unary or binary function from a string. Most often used when defining algorithms on ranges. |
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Andrei Alexandrescu](http://erdani.org)
Source
[std/functional.d](https://github.com/dlang/phobos/blob/master/std/functional.d)
template **unaryFun**(alias fun, string parmName = "a")
Transforms a `string` representing an expression into a unary function. The `string` must either use symbol name `a` as the parameter or provide the symbol via the `parmName` argument.
Parameters:
| | |
| --- | --- |
| fun | a `string` or a callable |
| parmName | the name of the parameter if `fun` is a string. Defaults to `"a"`. |
Returns:
If `fun` is a `string`, a new single-parameter function; if `fun` is not a `string`, an alias to `fun`.
Examples:
```
// Strings are compiled into functions:
alias isEven = unaryFun!("(a & 1) == 0");
assert(isEven(2) && !isEven(1));
```
template **binaryFun**(alias fun, string parm1Name = "a", string parm2Name = "b")
Transforms a `string` representing an expression into a binary function. The `string` must either use symbol names `a` and `b` as the parameters or provide the symbols via the `parm1Name` and `parm2Name` arguments.
Parameters:
| | |
| --- | --- |
| fun | a `string` or a callable |
| parm1Name | the name of the first parameter if `fun` is a string. Defaults to `"a"`. |
| parm2Name | the name of the second parameter if `fun` is a string. Defaults to `"b"`. |
Returns:
If `fun` is not a string, `binaryFun` aliases itself away to `fun`.
Examples:
```
alias less = binaryFun!("a < b");
assert(less(1, 2) && !less(2, 1));
alias greater = binaryFun!("a > b");
assert(!greater("1", "2") && greater("2", "1"));
```
alias **lessThan** = safeOp!"<".safeOp(T0, T1)(auto ref T0 a, auto ref T1 b);
Predicate that returns a < b. Correctly compares signed and unsigned integers, e.g. -1 < 2U.
Examples:
```
assert(lessThan(2, 3));
assert(lessThan(2U, 3U));
assert(lessThan(2, 3.0));
assert(lessThan(-2, 3U));
assert(lessThan(2, 3U));
assert(!lessThan(3U, -2));
assert(!lessThan(3U, 2));
assert(!lessThan(0, 0));
assert(!lessThan(0U, 0));
assert(!lessThan(0, 0U));
```
alias **greaterThan** = safeOp!">".safeOp(T0, T1)(auto ref T0 a, auto ref T1 b);
Predicate that returns a > b. Correctly compares signed and unsigned integers, e.g. 2U > -1.
Examples:
```
assert(!greaterThan(2, 3));
assert(!greaterThan(2U, 3U));
assert(!greaterThan(2, 3.0));
assert(!greaterThan(-2, 3U));
assert(!greaterThan(2, 3U));
assert(greaterThan(3U, -2));
assert(greaterThan(3U, 2));
assert(!greaterThan(0, 0));
assert(!greaterThan(0U, 0));
assert(!greaterThan(0, 0U));
```
alias **equalTo** = safeOp!"==".safeOp(T0, T1)(auto ref T0 a, auto ref T1 b);
Predicate that returns a == b. Correctly compares signed and unsigned integers, e.g. !(-1 == ~0U).
Examples:
```
assert(equalTo(0U, 0));
assert(equalTo(0, 0U));
assert(!equalTo(-1, ~0U));
```
template **reverseArgs**(alias pred)
N-ary predicate that reverses the order of arguments, e.g., given `pred(a, b, c)`, returns `pred(c, b, a)`.
Parameters:
| | |
| --- | --- |
| pred | A callable |
Returns:
A function which calls `pred` after reversing the given parameters
Examples:
```
alias gt = reverseArgs!(binaryFun!("a < b"));
assert(gt(2, 1) && !gt(1, 1));
```
Examples:
```
int x = 42;
bool xyz(int a, int b) { return a * x < b / x; }
auto foo = &xyz;
foo(4, 5);
alias zyx = reverseArgs!(foo);
writeln(zyx(5, 4)); // foo(4, 5)
```
Examples:
```
int abc(int a, int b, int c) { return a * b + c; }
alias cba = reverseArgs!abc;
writeln(abc(91, 17, 32)); // cba(32, 17, 91)
```
Examples:
```
int a(int a) { return a * 2; }
alias _a = reverseArgs!a;
writeln(a(2)); // _a(2)
```
Examples:
```
int b() { return 4; }
alias _b = reverseArgs!b;
writeln(b()); // _b()
```
template **not**(alias pred)
Negates predicate `pred`.
Parameters:
| | |
| --- | --- |
| pred | A string or a callable |
Returns:
A function which calls `pred` and returns the logical negation of its return value.
Examples:
```
import std.algorithm.searching : find;
import std.functional;
import std.uni : isWhite;
string a = " Hello, world!";
writeln(find!(not!isWhite)(a)); // "Hello, world!"
```
template **partial**(alias fun, alias arg)
[Partially applies](http://en.wikipedia.org/wiki/Partial_application) fun by tying its first argument to arg.
Parameters:
| | |
| --- | --- |
| fun | A callable |
| arg | The first argument to apply to `fun` |
Returns:
A new function which calls `fun` with `arg` plus the passed parameters.
Examples:
```
int fun(int a, int b) { return a + b; }
alias fun5 = partial!(fun, 5);
writeln(fun5(6)); // 11
// Note that in most cases you'd use an alias instead of a value
// assignment. Using an alias allows you to partially evaluate template
// functions without committing to a particular type of the function.
```
auto **curry**(alias F)()
Constraints: if (isCallable!F && Parameters!F.length);
auto **curry**(T)(T t)
Constraints: if (isCallable!T && Parameters!T.length);
Takes a function of (potentially) many arguments, and returns a function taking one argument that in turn returns a callable taking the rest. f(x, y) == curry(f)(x)(y)
Parameters:
| | |
| --- | --- |
| F | a function taking at least one argument |
| T `t` | a callable object whose opCall takes at least 1 object |
Returns:
A single parameter callable object
Examples:
```
int f(int x, int y, int z)
{
return x + y + z;
}
auto cf = curry!f;
auto cf1 = cf(1);
auto cf2 = cf(2);
writeln(cf1(2)(3)); // f(1, 2, 3)
writeln(cf2(2)(3)); // f(2, 2, 3)
```
Examples:
```
//works with callable structs too
struct S
{
int w;
int opCall(int x, int y, int z)
{
return w + x + y + z;
}
}
S s;
s.w = 5;
auto cs = curry(s);
auto cs1 = cs(1);
auto cs2 = cs(2);
writeln(cs1(2)(3)); // s(1, 2, 3)
writeln(cs1(2)(3)); // (1 + 2 + 3 + 5)
writeln(cs2(2)(3)); // s(2, 2, 3)
```
template **adjoin**(F...) if (F.length == 1)
template **adjoin**(F...) if (F.length > 1)
Takes multiple functions and adjoins them together.
Parameters:
| | |
| --- | --- |
| F | the call-able(s) to adjoin |
Returns:
A new function which returns a [`std.typecons.Tuple`](std_typecons#Tuple). Each of the elements of the tuple will be the return values of `F`.
Note
In the special case where only a single function is provided (`F.length == 1`), adjoin simply aliases to the single passed function (`F[0]`).
Examples:
```
import std.functional, std.typecons : Tuple;
static bool f1(int a) { return a != 0; }
static int f2(int a) { return a / 2; }
auto x = adjoin!(f1, f2)(5);
assert(is(typeof(x) == Tuple!(bool, int)));
assert(x[0] == true && x[1] == 2);
```
template **compose**(fun...) if (fun.length > 0)
Composes passed-in functions `fun[0], fun[1], ...`.
Parameters:
| | |
| --- | --- |
| fun | the call-able(s) or `string`(s) to compose into one function |
Returns:
A new function `f(x)` that in turn returns `fun[0](fun[1](...(x)))...`.
See Also:
[`pipe`](#pipe)
Examples:
```
import std.algorithm.comparison : equal;
import std.algorithm.iteration : map;
import std.array : split;
import std.conv : to;
// First split a string in whitespace-separated tokens and then
// convert each token into an integer
assert(compose!(map!(to!(int)), split)("1 2 3").equal([1, 2, 3]));
```
template **pipe**(fun...)
Pipes functions in sequence. Offers the same functionality as `compose`, but with functions specified in reverse order. This may lead to more readable code in some situations because the order of execution matches lexical order.
Parameters:
| | |
| --- | --- |
| fun | the call-able(s) or `string`(s) to compose into one function |
Returns:
A new function `f(x)` that in turn returns `fun[$-1](...fun[1](fun[0](x)))...`.
Example
```
// Read an entire text file, split the resulting string in
// whitespace-separated tokens, and then convert each token into an
// integer
int[] a = pipe!(readText, split, map!(to!(int)))("file.txt");
```
See Also:
[`compose`](#compose)
Examples:
```
import std.conv : to;
string foo(int a) { return to!(string)(a); }
int bar(string a) { return to!(int)(a) + 1; }
double baz(int a) { return a + 0.5; }
writeln(compose!(baz, bar, foo)(1)); // 2.5
writeln(pipe!(foo, bar, baz)(1)); // 2.5
writeln(compose!(baz, `to!(int)(a) + 1`, foo)(1)); // 2.5
writeln(compose!(baz, bar)("1"[])); // 2.5
writeln(compose!(baz, bar)("1")); // 2.5
writeln(compose!(`a + 0.5`, `to!(int)(a) + 1`, foo)(1)); // 2.5
```
ReturnType!fun **memoize**(alias fun)(Parameters!fun args);
ReturnType!fun **memoize**(alias fun, uint maxSize)(Parameters!fun args);
[Memoizes](https://en.wikipedia.org/wiki/Memoization) a function so as to avoid repeated computation. The memoization structure is a hash table keyed by a tuple of the function's arguments. There is a speed gain if the function is repeatedly called with the same arguments and is more expensive than a hash table lookup. For more information on memoization, refer to [this book chapter](http://docs.google.com/viewer?url=http%3A%2F%2Fhop.perl.plover.com%2Fbook%2Fpdf%2F03CachingAndMemoization.pdf).
Example
```
double transmogrify(int a, string b)
{
... expensive computation ...
}
alias fastTransmogrify = memoize!transmogrify;
unittest
{
auto slow = transmogrify(2, "hello");
auto fast = fastTransmogrify(2, "hello");
assert(slow == fast);
}
```
Parameters:
| | |
| --- | --- |
| fun | the callable to memoize |
| maxSize | The maximum size of the GC buffer to hold the return values |
Returns:
A new function which calls `fun` and caches its return values.
Note
Technically the memoized function should be pure because `memoize` assumes it will always return the same result for a given tuple of arguments. However, `memoize` does not enforce that because sometimes it is useful to memoize an impure function, too.
Examples:
To memoize a recursive function, simply insert the memoized call in lieu of the plain recursive call. For example, to transform the exponential-time Fibonacci implementation into a linear-time computation:
```
ulong fib(ulong n) @safe nothrow
{
return n < 2 ? n : memoize!fib(n - 2) + memoize!fib(n - 1);
}
writeln(fib(10)); // 55
```
Examples:
To improve the speed of the factorial function,
```
ulong fact(ulong n) @safe
{
return n < 2 ? 1 : n * memoize!fact(n - 1);
}
writeln(fact(10)); // 3628800
```
Examples:
This memoizes all values of `fact` up to the largest argument. To only cache the final result, move `memoize` outside the function as shown below.
```
ulong factImpl(ulong n) @safe
{
return n < 2 ? 1 : n * factImpl(n - 1);
}
alias fact = memoize!factImpl;
writeln(fact(10)); // 3628800
```
Examples:
When the `maxSize` parameter is specified, `memoize` will use a fixed-size hash table to limit the number of cached entries.
```
ulong fact(ulong n)
{
// Memoize no more than 8 values
return n < 2 ? 1 : n * memoize!(fact, 8)(n - 1);
}
writeln(fact(8)); // 40320
// using more entries than maxSize will overwrite existing entries
writeln(fact(10)); // 3628800
```
auto **toDelegate**(F)(auto ref F fp)
Constraints: if (isCallable!F);
Convert a callable to a delegate with the same parameter list and return type, avoiding heap allocations and use of auxiliary storage.
Parameters:
| | |
| --- | --- |
| F `fp` | a function pointer or an aggregate type with `opCall` defined. |
Returns:
A delegate with the context pointer pointing to nothing.
Example
```
void doStuff() {
writeln("Hello, world.");
}
void runDelegate(void delegate() myDelegate) {
myDelegate();
}
auto delegateToPass = toDelegate(&doStuff);
runDelegate(delegateToPass); // Calls doStuff, prints "Hello, world."
```
Bugs:
* Does not work with `@safe` functions.
* Ignores C-style / D-style variadic arguments.
Examples:
```
static int inc(ref uint num) {
num++;
return 8675309;
}
uint myNum = 0;
auto incMyNumDel = toDelegate(&inc);
auto returnVal = incMyNumDel(myNum);
writeln(myNum); // 1
```
template **forward**(args...)
Forwards function arguments while keeping `out`, `ref`, and `lazy` on the parameters.
Parameters:
| | |
| --- | --- |
| args | a parameter list or an [`std.meta.AliasSeq`](std_meta#AliasSeq). |
Returns:
An `AliasSeq` of `args` with `out`, `ref`, and `lazy` saved.
Examples:
```
class C
{
static int foo(int n) { return 1; }
static int foo(ref int n) { return 2; }
}
// with forward
int bar()(auto ref int x) { return C.foo(forward!x); }
// without forward
int baz()(auto ref int x) { return C.foo(x); }
int i;
writeln(bar(1)); // 1
writeln(bar(i)); // 2
writeln(baz(1)); // 2
writeln(baz(i)); // 2
```
Examples:
```
void foo(int n, ref string s) { s = null; foreach (i; 0 .. n) s ~= "Hello"; }
// forwards all arguments which are bound to parameter tuple
void bar(Args...)(auto ref Args args) { return foo(forward!args); }
// forwards all arguments with swapping order
void baz(Args...)(auto ref Args args) { return foo(forward!args[$/2..$], forward!args[0..$/2]); }
string s;
bar(1, s);
writeln(s); // "Hello"
baz(s, 2);
writeln(s); // "HelloHello"
```
Examples:
```
struct X {
int i;
this(this)
{
++i;
}
}
struct Y
{
private X x_;
this()(auto ref X x)
{
x_ = forward!x;
}
}
struct Z
{
private const X x_;
this()(auto ref X x)
{
x_ = forward!x;
}
this()(auto const ref X x)
{
x_ = forward!x;
}
}
X x;
const X cx;
auto constX = (){ const X x; return x; };
static assert(__traits(compiles, { Y y = x; }));
static assert(__traits(compiles, { Y y = X(); }));
static assert(!__traits(compiles, { Y y = cx; }));
static assert(!__traits(compiles, { Y y = constX(); }));
static assert(__traits(compiles, { Z z = x; }));
static assert(__traits(compiles, { Z z = X(); }));
static assert(__traits(compiles, { Z z = cx; }));
static assert(__traits(compiles, { Z z = constX(); }));
Y y1 = x;
// ref lvalue, copy
writeln(y1.x_.i); // 1
Y y2 = X();
// rvalue, move
writeln(y2.x_.i); // 0
Z z1 = x;
// ref lvalue, copy
writeln(z1.x_.i); // 1
Z z2 = X();
// rvalue, move
writeln(z2.x_.i); // 0
Z z3 = cx;
// ref const lvalue, copy
writeln(z3.x_.i); // 1
Z z4 = constX();
// const rvalue, copy
writeln(z4.x_.i); // 1
```
d Inline Assembler Inline Assembler
================
**Contents** 1. [Asm statement](#asmstatements)
2. [Asm instruction](#asminstruction)
3. [Labels](#labels)
4. [align *IntegerExpression*](#align)
5. [even](#even)
6. [naked](#naked)
7. [db, ds, di, dl, df, dd, de](#raw_data)
8. [Opcodes](#opcodes)
1. [Special Cases](#special_cases)
9. [Operands](#operands)
1. [Operand Types](#operand_types)
2. [Struct/Union/Class Member Offsets](#agregate_member_offsets)
3. [Stack Variables](#stack_variables)
4. [Special Symbols](#special_symbols)
10. [Opcodes Supported](#supported_opcodes)
1. [Pentium 4 (Prescott) Opcodes Supported](#P4_opcode_support)
2. [AMD Opcodes Supported](#amd_opcode_support)
3. [SIMD](#simd)
4. [Other](#other)
D, being a systems programming language, provides an inline assembler. The inline assembler is standardized for D implementations across the same CPU family, for example, the Intel Pentium inline assembler for a Win32 D compiler will be syntax compatible with the inline assembler for Linux running on an Intel Pentium.
Implementations of D on different architectures, however, are free to innovate upon the memory model, function call/return conventions, argument passing conventions, etc.
This document describes the `x86` and `x86_64` implementations of the inline assembler. The inline assembler platform support that a compiler provides is indicated by the `D_InlineAsm_X86` and `D_InlineAsm_X86_64` version identifiers, respectively.
Asm statement
-------------
```
AsmStatement:
asm FunctionAttributesopt { AsmInstructionListopt }
AsmInstructionList:
AsmInstruction ;
AsmInstruction ; AsmInstructionList
```
Assembler instructions must be located inside an `asm` block. Like functions, `asm` statements must be annotated with adequate function attributes to be compatible with the caller. An `asm` statement's attributes must be explicitly specified; they are not inferred.
```
void func1() pure nothrow @safe @nogc
{
asm pure nothrow @trusted @nogc
{}
}
void func2() @safe @nogc
{
asm @nogc // Error: asm statement is assumed to be @system - mark it with '@trusted' if it is not
{}
}
```
Asm instruction
---------------
```
AsmInstruction:
Identifier : AsmInstruction
align IntegerExpression
even
naked
db Operands
ds Operands
di Operands
dl Operands
df Operands
dd Operands
de Operands
db StringLiteral
ds StringLiteral
di StringLiteral
dl StringLiteral
dw StringLiteral
dq StringLiteral
Opcode
Opcode Operands
Operands:
Operand
Operand , Operands
```
Labels
------
Assembler instructions can be labeled just like other statements. They can be the target of goto statements. For example:
```
void *pc;
asm
{
call L1 ;
L1: ;
pop EBX ;
mov pc[EBP],EBX ; // pc now points to code at L1
}
```
align *IntegerExpression*
-------------------------
```
IntegerExpression:
IntegerLiteral
Identifier
```
Causes the assembler to emit NOP instructions to align the next assembler instruction on an *IntegerExpression* boundary. *IntegerExpression* must evaluate at compile time to an integer that is a power of 2.
Aligning the start of a loop body can sometimes have a dramatic effect on the execution speed.
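For example, a sketch of aligning a loop head (hypothetical code, assuming `D_InlineAsm_X86`):
```
void f()
{
    version (D_InlineAsm_X86)
    asm
    {
        mov ECX, 100 ;
        align 4      ; // pad with NOPs so L1 starts on a 4-byte boundary
    L1:              ;
        dec ECX      ;
        jnz L1       ;
    }
}
```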
even
----
Causes the assembler to emit NOP instructions to align the next assembler instruction on an even boundary.
naked
-----
Causes the compiler to not generate the function prolog and epilog sequences. These become the responsibility of the inline assembly programmer; `naked` is normally used when the entire function is to be written in assembler.
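For instance, a sketch of a function written entirely in assembler (hypothetical code, assuming the Posix `x86_64` `extern (C)` convention, where the first two integer arguments arrive in EDI and ESI; this is an illustration, not portable code):
```
version (D_InlineAsm_X86_64)
extern (C) int sum2(int a, int b)
{
    asm
    {
        naked        ;
        mov EAX, EDI ; // first argument (System V ABI)
        add EAX, ESI ; // second argument
        ret          ; // no prolog/epilog was generated
    }
}
```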
db, ds, di, dl, df, dd, de
--------------------------
These pseudo ops are for inserting raw data directly into the code. `db` is for bytes, `ds` is for 16 bit words, `di` is for 32 bit words, `dl` is for 64 bit words, `df` is for 32 bit floats, `dd` is for 64 bit doubles, and `de` is for 80 bit extended reals. Each can have multiple operands. If an operand is a string literal, it is as if there were *length* operands, where *length* is the number of characters in the string. One character is used per operand. For example:
```
asm
{
db 5,6,0x83; // insert bytes 0x05, 0x06, and 0x83 into code
ds 0x1234; // insert bytes 0x34, 0x12
di 0x1234; // insert bytes 0x34, 0x12, 0x00, 0x00
dl 0x1234; // insert bytes 0x34, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
df 1.234; // insert float 1.234
dd 1.234; // insert double 1.234
de 1.234; // insert real 1.234
db "abc"; // insert bytes 0x61, 0x62, and 0x63
ds "abc"; // insert bytes 0x61, 0x00, 0x62, 0x00, 0x63, 0x00
}
```
Opcodes
-------
A list of supported opcodes is at the end.
The following registers are supported. Register names are always in upper case.
```
Register:
AL AH AX EAX
BL BH BX EBX
CL CH CX ECX
DL DH DX EDX
BP EBP
SP ESP
DI EDI
SI ESI
ES CS SS DS GS FS
CR0 CR2 CR3 CR4
DR0 DR1 DR2 DR3 DR6 DR7
TR3 TR4 TR5 TR6 TR7
ST
ST(0) ST(1) ST(2) ST(3) ST(4) ST(5) ST(6) ST(7)
MM0 MM1 MM2 MM3 MM4 MM5 MM6 MM7
XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7
```
`x86_64` adds these additional registers.
```
Register64:
RAX RBX RCX RDX
BPL RBP
SPL RSP
DIL RDI
SIL RSI
R8B R8W R8D R8
R9B R9W R9D R9
R10B R10W R10D R10
R11B R11W R11D R11
R12B R12W R12D R12
R13B R13W R13D R13
R14B R14W R14D R14
R15B R15W R15D R15
XMM8 XMM9 XMM10 XMM11 XMM12 XMM13 XMM14 XMM15
YMM0 YMM1 YMM2 YMM3 YMM4 YMM5 YMM6 YMM7
YMM8 YMM9 YMM10 YMM11 YMM12 YMM13 YMM14 YMM15
```
### Special Cases
`lock`, `rep`, `repe`, `repne`, `repnz`, `repz`
These prefix instructions do not appear in the same statement as the instructions they prefix; they appear in their own statement. For example:
```
asm
{
rep ;
movsb ;
}
```
`pause` This opcode is not supported by the assembler; instead use
```
asm
{
rep ;
nop ;
}
```
which produces the same result. `floating point ops` Use the two-operand form of the instruction format:
```
fdiv ST(1); // wrong
fmul ST; // wrong
fdiv ST,ST(1); // right
fmul ST,ST(0); // right
```
Operands
--------
```
Operand:
AsmExp
AsmExp:
AsmLogOrExp
AsmLogOrExp ? AsmExp : AsmExp
AsmLogOrExp:
AsmLogAndExp
AsmLogOrExp || AsmLogAndExp
AsmLogAndExp:
AsmOrExp
AsmLogAndExp && AsmOrExp
AsmOrExp:
AsmXorExp
AsmOrExp | AsmXorExp
AsmXorExp:
AsmAndExp
AsmXorExp ^ AsmAndExp
AsmAndExp:
AsmEqualExp
AsmAndExp & AsmEqualExp
AsmEqualExp:
AsmRelExp
AsmEqualExp == AsmRelExp
AsmEqualExp != AsmRelExp
AsmRelExp:
AsmShiftExp
AsmRelExp < AsmShiftExp
AsmRelExp <= AsmShiftExp
AsmRelExp > AsmShiftExp
AsmRelExp >= AsmShiftExp
AsmShiftExp:
AsmAddExp
AsmShiftExp << AsmAddExp
AsmShiftExp >> AsmAddExp
AsmShiftExp >>> AsmAddExp
AsmAddExp:
AsmMulExp
AsmAddExp + AsmMulExp
AsmAddExp - AsmMulExp
AsmMulExp:
AsmBrExp
AsmMulExp * AsmBrExp
AsmMulExp / AsmBrExp
AsmMulExp % AsmBrExp
AsmBrExp:
AsmUnaExp
AsmBrExp [ AsmExp ]
AsmUnaExp:
AsmTypePrefix AsmExp
offsetof AsmExp
seg AsmExp
+ AsmUnaExp
- AsmUnaExp
! AsmUnaExp
~ AsmUnaExp
AsmPrimaryExp
AsmPrimaryExp:
IntegerLiteral
FloatLiteral
__LOCAL_SIZE
$
Register
Register : AsmExp
Register64
Register64 : AsmExp
DotIdentifier
this
DotIdentifier:
Identifier
Identifier . DotIdentifier
FundamentalType . Identifier
```
The operand syntax more or less follows the Intel CPU documentation conventions. In particular, the convention is that for two operand instructions the source is the right operand and the destination is the left operand. The syntax differs from that of Intel's in order to be compatible with the D language tokenizer and to simplify parsing.
The `seg` prefix means load the segment number that the symbol is in. This is not relevant for flat model code; instead, do a move from the relevant segment register.
A dotted expression is evaluated at compile time and must then either yield a constant or denote a higher-level variable that fits in the target register or variable.
### Operand Types
```
AsmTypePrefix:
near ptr
far ptr
word ptr
dword ptr
qword ptr
FundamentalType ptr
```
In cases where the operand size is ambiguous, as in:
```
add [EAX],3 ;
```
it can be disambiguated by using an [*AsmTypePrefix*](#AsmTypePrefix):
```
add byte ptr [EAX],3 ;
add int ptr [EAX],7 ;
```
`far ptr` is not relevant for flat model code.
### Struct/Union/Class Member Offsets
To access members of an aggregate, given that a pointer to the aggregate is in a register, use the `.offsetof` property of the qualified name of the member:
```
struct Foo { int a,b,c; }
int bar(Foo *f)
{
asm
{
mov EBX,f ;
mov EAX,Foo.b.offsetof[EBX] ;
}
}
void main()
{
Foo f = Foo(0, 2, 0);
assert(bar(&f) == 2);
}
```
Alternatively, inside the scope of an aggregate, only the member name is needed:
```
struct Foo // or class
{
int a,b,c;
int bar()
{
asm
{
mov EBX, this ;
mov EAX, b[EBX] ;
}
}
}
void main()
{
Foo f = Foo(0, 2, 0);
assert(f.bar() == 2);
}
```
### Stack Variables
Stack variables (variables local to a function and allocated on the stack) are accessed via the name of the variable indexed by EBP:
```
int foo(int x)
{
asm
{
mov EAX,x[EBP] ; // loads value of parameter x into EAX
mov EAX,x ; // does the same thing
}
}
```
If the [EBP] is omitted, it is assumed for local variables. If `naked` is used, this no longer holds.
### Special Symbols
$ Represents the program counter of the start of the next instruction. So,
```
jmp $ ;
```
branches to the instruction following the jmp instruction. The $ can only appear as the target of a jmp or call instruction. `__LOCAL_SIZE` This gets replaced by the number of local bytes in the local stack frame. It is most handy when `naked` is invoked and a custom stack frame is programmed.
Opcodes Supported
-----------------
Opcodes
| *aaa* | *aad* | *aam* | *aas* | *adc* |
| *add* | *addpd* | *addps* | *addsd* | *addss* |
| *and* | *andnpd* | *andnps* | *andpd* | *andps* |
| *arpl* | *bound* | *bsf* | *bsr* | *bswap* |
| *bt* | *btc* | *btr* | *bts* | *call* |
| *cbw* | *cdq* | *clc* | *cld* | *clflush* |
| *cli* | *clts* | *cmc* | *cmova* | *cmovae* |
| *cmovb* | *cmovbe* | *cmovc* | *cmove* | *cmovg* |
| *cmovge* | *cmovl* | *cmovle* | *cmovna* | *cmovnae* |
| *cmovnb* | *cmovnbe* | *cmovnc* | *cmovne* | *cmovng* |
| *cmovnge* | *cmovnl* | *cmovnle* | *cmovno* | *cmovnp* |
| *cmovns* | *cmovnz* | *cmovo* | *cmovp* | *cmovpe* |
| *cmovpo* | *cmovs* | *cmovz* | *cmp* | *cmppd* |
| *cmpps* | *cmps* | *cmpsb* | *cmpsd* | *cmpss* |
| *cmpsw* | *cmpxchg* | *cmpxchg8b* | *cmpxchg16b* | |
| *comisd* | *comiss* | | | |
| *cpuid* | *cvtdq2pd* | *cvtdq2ps* | *cvtpd2dq* | *cvtpd2pi* |
| *cvtpd2ps* | *cvtpi2pd* | *cvtpi2ps* | *cvtps2dq* | *cvtps2pd* |
| *cvtps2pi* | *cvtsd2si* | *cvtsd2ss* | *cvtsi2sd* | *cvtsi2ss* |
| *cvtss2sd* | *cvtss2si* | *cvttpd2dq* | *cvttpd2pi* | *cvttps2dq* |
| *cvttps2pi* | *cvttsd2si* | *cvttss2si* | *cwd* | *cwde* |
| *da* | *daa* | *das* | *db* | *dd* |
| *de* | *dec* | *df* | *di* | *div* |
| *divpd* | *divps* | *divsd* | *divss* | *dl* |
| *dq* | *ds* | *dt* | *dw* | *emms* |
| *enter* | *f2xm1* | *fabs* | *fadd* | *faddp* |
| *fbld* | *fbstp* | *fchs* | *fclex* | *fcmovb* |
| *fcmovbe* | *fcmove* | *fcmovnb* | *fcmovnbe* | *fcmovne* |
| *fcmovnu* | *fcmovu* | *fcom* | *fcomi* | *fcomip* |
| *fcomp* | *fcompp* | *fcos* | *fdecstp* | *fdisi* |
| *fdiv* | *fdivp* | *fdivr* | *fdivrp* | *feni* |
| *ffree* | *fiadd* | *ficom* | *ficomp* | *fidiv* |
| *fidivr* | *fild* | *fimul* | *fincstp* | *finit* |
| *fist* | *fistp* | *fisub* | *fisubr* | *fld* |
| *fld1* | *fldcw* | *fldenv* | *fldl2e* | *fldl2t* |
| *fldlg2* | *fldln2* | *fldpi* | *fldz* | *fmul* |
| *fmulp* | *fnclex* | *fndisi* | *fneni* | *fninit* |
| *fnop* | *fnsave* | *fnstcw* | *fnstenv* | *fnstsw* |
| *fpatan* | *fprem* | *fprem1* | *fptan* | *frndint* |
| *frstor* | *fsave* | *fscale* | *fsetpm* | *fsin* |
| *fsincos* | *fsqrt* | *fst* | *fstcw* | *fstenv* |
| *fstp* | *fstsw* | *fsub* | *fsubp* | *fsubr* |
| *fsubrp* | *ftst* | *fucom* | *fucomi* | *fucomip* |
| *fucomp* | *fucompp* | *fwait* | *fxam* | *fxch* |
| *fxrstor* | *fxsave* | *fxtract* | *fyl2x* | *fyl2xp1* |
| *hlt* | *idiv* | *imul* | *in* | *inc* |
| *ins* | *insb* | *insd* | *insw* | *int* |
| *into* | *invd* | *invlpg* | *iret* | *iretd* |
| *iretq* | *ja* | *jae* | *jb* | *jbe* |
| *jc* | *jcxz* | *je* | *jecxz* | *jg* |
| *jge* | *jl* | *jle* | *jmp* | *jna* |
| *jnae* | *jnb* | *jnbe* | *jnc* | *jne* |
| *jng* | *jnge* | *jnl* | *jnle* | *jno* |
| *jnp* | *jns* | *jnz* | *jo* | *jp* |
| *jpe* | *jpo* | *js* | *jz* | *lahf* |
| *lar* | *ldmxcsr* | *lds* | *lea* | *leave* |
| *les* | *lfence* | *lfs* | *lgdt* | *lgs* |
| *lidt* | *lldt* | *lmsw* | *lock* | *lods* |
| *lodsb* | *lodsd* | *lodsw* | *loop* | *loope* |
| *loopne* | *loopnz* | *loopz* | *lsl* | *lss* |
| *ltr* | *maskmovdqu* | *maskmovq* | *maxpd* | *maxps* |
| *maxsd* | *maxss* | *mfence* | *minpd* | *minps* |
| *minsd* | *minss* | *mov* | *movapd* | *movaps* |
| *movd* | *movdq2q* | *movdqa* | *movdqu* | *movhlps* |
| *movhpd* | *movhps* | *movlhps* | *movlpd* | *movlps* |
| *movmskpd* | *movmskps* | *movntdq* | *movnti* | *movntpd* |
| *movntps* | *movntq* | *movq* | *movq2dq* | *movs* |
| *movsb* | *movsd* | *movss* | *movsw* | *movsx* |
| *movupd* | *movups* | *movzx* | *mul* | *mulpd* |
| *mulps* | *mulsd* | *mulss* | *neg* | *nop* |
| *not* | *or* | *orpd* | *orps* | *out* |
| *outs* | *outsb* | *outsd* | *outsw* | *packssdw* |
| *packsswb* | *packuswb* | *paddb* | *paddd* | *paddq* |
| *paddsb* | *paddsw* | *paddusb* | *paddusw* | *paddw* |
| *pand* | *pandn* | *pavgb* | *pavgw* | *pcmpeqb* |
| *pcmpeqd* | *pcmpeqw* | *pcmpgtb* | *pcmpgtd* | *pcmpgtw* |
| *pextrw* | *pinsrw* | *pmaddwd* | *pmaxsw* | *pmaxub* |
| *pminsw* | *pminub* | *pmovmskb* | *pmulhuw* | *pmulhw* |
| *pmullw* | *pmuludq* | *pop* | *popa* | *popad* |
| *popf* | *popfd* | *por* | *prefetchnta* | *prefetcht0* |
| *prefetcht1* | *prefetcht2* | *psadbw* | *pshufd* | *pshufhw* |
| *pshuflw* | *pshufw* | *pslld* | *pslldq* | *psllq* |
| *psllw* | *psrad* | *psraw* | *psrld* | *psrldq* |
| *psrlq* | *psrlw* | *psubb* | *psubd* | *psubq* |
| *psubsb* | *psubsw* | *psubusb* | *psubusw* | *psubw* |
| *punpckhbw* | *punpckhdq* | *punpckhqdq* | *punpckhwd* | *punpcklbw* |
| *punpckldq* | *punpcklqdq* | *punpcklwd* | *push* | *pusha* |
| *pushad* | *pushf* | *pushfd* | *pxor* | *rcl* |
| *rcpps* | *rcpss* | *rcr* | *rdmsr* | *rdpmc* |
| *rdtsc* | *rep* | *repe* | *repne* | *repnz* |
| *repz* | *ret* | *retf* | *rol* | *ror* |
| *rsm* | *rsqrtps* | *rsqrtss* | *sahf* | *sal* |
| *sar* | *sbb* | *scas* | *scasb* | *scasd* |
| *scasw* | *seta* | *setae* | *setb* | *setbe* |
| *setc* | *sete* | *setg* | *setge* | *setl* |
| *setle* | *setna* | *setnae* | *setnb* | *setnbe* |
| *setnc* | *setne* | *setng* | *setnge* | *setnl* |
| *setnle* | *setno* | *setnp* | *setns* | *setnz* |
| *seto* | *setp* | *setpe* | *setpo* | *sets* |
| *setz* | *sfence* | *sgdt* | *shl* | *shld* |
| *shr* | *shrd* | *shufpd* | *shufps* | *sidt* |
| *sldt* | *smsw* | *sqrtpd* | *sqrtps* | *sqrtsd* |
| *sqrtss* | *stc* | *std* | *sti* | *stmxcsr* |
| *stos* | *stosb* | *stosd* | *stosw* | *str* |
| *sub* | *subpd* | *subps* | *subsd* | *subss* |
| *syscall* | *sysenter* | *sysexit* | *sysret* | *test* |
| *ucomisd* | *ucomiss* | *ud2* | *unpckhpd* | *unpckhps* |
| *unpcklpd* | *unpcklps* | *verr* | *verw* | *wait* |
| *wbinvd* | *wrmsr* | *xadd* | *xchg* | *xlat* |
| *xlatb* | *xor* | *xorpd* | *xorps* | |
### Pentium 4 (Prescott) Opcodes Supported
Pentium 4 Opcodes
| *addsubpd* | *addsubps* | *fisttp* | *haddpd* | *haddps* |
| *hsubpd* | *hsubps* | *lddqu* | *monitor* | *movddup* |
| *movshdup* | *movsldup* | *mwait* | | |
### AMD Opcodes Supported
AMD Opcodes
| *pavgusb* | *pf2id* | *pfacc* | *pfadd* | *pfcmpeq* |
| *pfcmpge* | *pfcmpgt* | *pfmax* | *pfmin* | *pfmul* |
| *pfnacc* | *pfpnacc* | *pfrcp* | *pfrcpit1* | *pfrcpit2* |
| *pfrsqit1* | *pfrsqrt* | *pfsub* | *pfsubr* | *pi2fd* |
| *pmulhrw* | *pswapd* | | | |
### SIMD
SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2 and AVX are supported.
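A small sketch exercising SSE registers and opcodes (hypothetical routine, assuming `D_InlineAsm_X86`):
```
void addFour(float* a, float* b)
{
    version (D_InlineAsm_X86)
    asm
    {
        mov EAX, a         ;
        mov ECX, b         ;
        movups XMM0, [EAX] ;
        movups XMM1, [ECX] ;
        addps XMM0, XMM1   ;
        movups [EAX], XMM0 ; // a[0 .. 4] += b[0 .. 4]
    }
}
```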
d rt.sections_android rt.sections\_android
====================
Written in the D programming language. This module provides bionic-specific support for sections.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Martin Nowak
Source
[rt/sections\_android.d](https://github.com/dlang/druntime/blob/master/src/rt/sections_android.d)
d core.gc.config core.gc.config
==============
Contains the garbage collector configuration.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
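The configuration is usually supplied at program startup, either with the `--DRT-gcopt` command-line switch or by embedding an `rt_options` array in the program. A minimal sketch (option names as documented for druntime; `profile:1` enables GC profiling):
```
// Bake GC options into the executable; druntime reads rt_options
// during startup, before any D code runs.
extern (C) __gshared string[] rt_options = [ "gcopt=profile:1" ];
void main()
{
    auto a = new int[](1000); // allocations are now profiled; a summary
                              // is printed when the program exits
}
```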
d std.algorithm.iteration std.algorithm.iteration
=======================
This is a submodule of [`std.algorithm`](std_algorithm). It contains generic iteration algorithms.
Cheat Sheet
| Function Name | Description |
| --- | --- |
| [`cache`](#cache) | Eagerly evaluates and caches another range's `front`. |
| [`cacheBidirectional`](#cacheBidirectional) | As above, but also provides `back` and `popBack`. |
| [`chunkBy`](#chunkBy) | `chunkBy!((a,b) => a[1] == b[1])([[1, 1], [1, 2], [2, 2], [2, 1]])` returns a range containing 3 subranges: the first with just `[1, 1]`; the second with the elements `[1, 2]` and `[2, 2]`; and the third with just `[2, 1]`. |
| [`cumulativeFold`](#cumulativeFold) | `cumulativeFold!((a, b) => a + b)([1, 2, 3, 4])` returns a lazily-evaluated range containing the successive reduced values `1`, `3`, `6`, `10`. |
| [`each`](#each) | `each!writeln([1, 2, 3])` eagerly prints the numbers `1`, `2` and `3` on their own lines. |
| [`filter`](#filter) | `filter!(a => a > 0)([1, -1, 2, 0, -3])` iterates over elements `1` and `2`. |
| [`filterBidirectional`](#filterBidirectional) | Similar to `filter`, but also provides `back` and `popBack` at a small increase in cost. |
| [`fold`](#fold) | `fold!((a, b) => a + b)([1, 2, 3, 4])` returns `10`. |
| [`group`](#group) | `group([5, 2, 2, 3, 3])` returns a range containing the tuples `tuple(5, 1)`, `tuple(2, 2)`, and `tuple(3, 2)`. |
| [`joiner`](#joiner) | `joiner(["hello", "world!"], "; ")` returns a range that iterates over the characters `"hello; world!"`. No new string is created - the existing inputs are iterated. |
| [`map`](#map) | `map!(a => a * 2)([1, 2, 3])` lazily returns a range with the numbers `2`, `4`, `6`. |
| [`mean`](#mean) | Colloquially known as the average, `mean([1, 2, 3])` returns `2`. |
| [`permutations`](#permutations) | Lazily computes all permutations using Heap's algorithm. |
| [`reduce`](#reduce) | `reduce!((a, b) => a + b)([1, 2, 3, 4])` returns `10`. This is the old implementation of `fold`. |
| [`splitter`](#splitter) | Lazily splits a range by a separator. |
| [`substitute`](#substitute) | `[1, 2].substitute(1, 0.1)` returns `[0.1, 2]`. |
| [`sum`](#sum) | Same as `fold`, but specialized for accurate summation. |
| [`uniq`](#uniq) | Iterates over the unique elements in a range, which is assumed sorted. |
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Andrei Alexandrescu](http://erdani.com)
Source
[std/algorithm/iteration.d](https://github.com/dlang/phobos/blob/master/std/algorithm/iteration.d)
auto **cache**(Range)(Range range)
Constraints: if (isInputRange!Range);
auto **cacheBidirectional**(Range)(Range range)
Constraints: if (isBidirectionalRange!Range);
`cache` eagerly evaluates [front](std_range_primitives#front) of `range` on each construction or call to [popFront](std_range_primitives#popFront), to store the result in a cache. The result is then directly returned when [front](std_range_primitives#front) is called, rather than re-evaluated.
This can be a useful function to place in a chain, after functions that have expensive evaluation, as a lazy alternative to [`std.array.array`](std_array#array). In particular, it can be placed after a call to [`map`](#map), or before a call to [`std.range.filter`](std_range#filter) or [`std.range.tee`](std_range#tee).
`cache` may provide [bidirectional range](std_range_primitives#isBidirectionalRange) iteration if needed, but since this comes at an increased cost, it must be explicitly requested via the call to `cacheBidirectional`. Furthermore, a bidirectional cache will evaluate the "center" element twice, when there is only one element left in the range.
`cache` does not provide random access primitives, as `cache` would be unable to cache the random accesses. If `Range` provides slicing primitives, then `cache` will provide the same slicing primitives, but `hasSlicing!Cache` will not yield true (as the [`std.range.primitives.hasSlicing`](std_range_primitives#hasSlicing) trait also checks for random access).
Parameters:
| | |
| --- | --- |
| Range `range` | an [input range](std_range_primitives#isInputRange) |
Returns:
An [input range](std_range_primitives#isInputRange) with the cached values of range
Examples:
```
import std.algorithm.comparison : equal;
import std.range, std.stdio;
import std.typecons : tuple;
ulong counter = 0;
double fun(int x)
{
++counter;
// http://en.wikipedia.org/wiki/Quartic_function
return ( (x + 4.0) * (x + 1.0) * (x - 1.0) * (x - 3.0) ) / 14.0 + 0.5;
}
// Without cache, with array (greedy)
auto result1 = iota(-4, 5).map!(a =>tuple(a, fun(a)))()
.filter!(a => a[1] < 0)()
.map!(a => a[0])()
.array();
// the values of x that have a negative y are:
assert(equal(result1, [-3, -2, 2]));
// Check how many times fun was evaluated.
// As many times as the number of items in both source and result.
writeln(counter); // iota(-4, 5).length + result1.length
counter = 0;
// Without array, with cache (lazy)
auto result2 = iota(-4, 5).map!(a =>tuple(a, fun(a)))()
.cache()
.filter!(a => a[1] < 0)()
.map!(a => a[0])();
// the values of x that have a negative y are:
assert(equal(result2, [-3, -2, 2]));
// Check how many times fun was evaluated.
// Only as many times as the number of items in source.
writeln(counter); // iota(-4, 5).length
```
Examples:
Tip: `cache` is eager when evaluating elements. If calling front on the underlying range has a side effect, it will be observable before calling front on the actual cached range. Furthermore, care should be taken composing `cache` with [`std.range.take`](std_range#take). By placing `take` before `cache`, then `cache` will be "aware" of when the range ends, and correctly stop caching elements when needed. If calling front has no side effect though, placing `take` after `cache` may yield a faster range. Either way, the resulting ranges will be equivalent, but maybe not at the same cost or side effects.
```
import std.algorithm.comparison : equal;
import std.range;
int i = 0;
auto r = iota(0, 4).tee!((a){i = a;}, No.pipeOnPop);
auto r1 = r.take(3).cache();
auto r2 = r.cache().take(3);
assert(equal(r1, [0, 1, 2]));
assert(i == 2); //The last "seen" element was 2. The data in cache has been cleared.
assert(equal(r2, [0, 1, 2]));
assert(i == 3); //cache has accessed 3. It is still stored internally by cache.
```
template **map**(fun...) if (fun.length >= 1)
Implements the homonym function (also known as `transform`) present in many languages of functional flavor. The call `map!(fun)(range)` returns a range whose elements are obtained by applying `fun(a)` left to right for all elements `a` in `range`. The original ranges are not changed. Evaluation is done lazily.
Parameters:
| | |
| --- | --- |
| fun | one or more transformation functions |
See Also:
[Map (higher-order function)](http://en.wikipedia.org/wiki/Map_(higher-order_function))
Examples:
```
import std.algorithm.comparison : equal;
import std.range : chain, only;
auto squares =
chain(only(1, 2, 3, 4), only(5, 6)).map!(a => a * a);
assert(equal(squares, only(1, 4, 9, 16, 25, 36)));
```
Examples:
Multiple functions can be passed to `map`. In that case, the element type of `map` is a tuple containing one element for each function.
```
auto sums = [2, 4, 6, 8];
auto products = [1, 4, 9, 16];
size_t i = 0;
foreach (result; [ 1, 2, 3, 4 ].map!("a + a", "a * a"))
{
writeln(result[0]); // sums[i]
writeln(result[1]); // products[i]
++i;
}
```
Examples:
You may alias `map` with some function(s) to a symbol and use it separately:
```
import std.algorithm.comparison : equal;
import std.conv : to;
alias stringize = map!(to!string);
assert(equal(stringize([ 1, 2, 3, 4 ]), [ "1", "2", "3", "4" ]));
```
auto **map**(Range)(Range r)
Constraints: if (isInputRange!(Unqual!Range));
Parameters:
| | |
| --- | --- |
| Range `r` | an [input range](std_range_primitives#isInputRange) |
Returns:
A range with each fun applied to all the elements. If there is more than one fun, the element type will be `Tuple` containing one element for each fun.
template **each**(alias fun = "a")
Eagerly iterates over `r` and calls `fun` over each element.
If no function to call is specified, `each` defaults to doing nothing but consuming the entire range. `r.front` will be evaluated, but that can be avoided by specifying a lambda with a `lazy` parameter.
`each` also supports `opApply`-based types, so it works with e.g. [`std.parallelism.parallel`](std_parallelism#parallel).
Normally the entire range is iterated. If partial iteration (early stopping) is desired, `fun` needs to return a value of type [`std.typecons.Flag`](std_typecons#Flag)`!"each"` (`Yes.each` to continue iteration, or `No.each` to stop iteration).
Parameters:
| | |
| --- | --- |
| fun | function to apply to each element of the range |
| Range r | range or iterable over which `each` iterates |
Returns:
`Yes.each` if the entire range was iterated, `No.each` in case of early stopping.
See Also:
[`std.range.tee`](std_range#tee)
Examples:
```
import std.range : iota;
import std.typecons : Flag, Yes, No;
long[] arr;
iota(5).each!(n => arr ~= n);
writeln(arr); // [0, 1, 2, 3, 4]
iota(5).each!((n) { arr ~= n; return No.each; });
writeln(arr); // [0, 1, 2, 3, 4, 0]
// If the range supports it, the value can be mutated in place
arr.each!((ref n) => n++);
writeln(arr); // [1, 2, 3, 4, 5, 1]
arr.each!"a++";
writeln(arr); // [2, 3, 4, 5, 6, 2]
// by-ref lambdas are not allowed for non-ref ranges
static assert(!is(typeof(arr.map!(n => n).each!((ref n) => n++))));
// The default predicate consumes the range
auto m = arr.map!(n => n);
(&m).each();
assert(m.empty);
// Indexes are also available for in-place mutations
arr[] = 0;
arr.each!"a=i"();
writeln(arr); // [0, 1, 2, 3, 4, 5]
// opApply iterators work as well
static class S
{
int x;
int opApply(scope int delegate(ref int _x) dg) { return dg(x); }
}
auto s = new S;
s.each!"a++";
writeln(s.x); // 1
```
Examples:
`each` works with iterable objects which provide an index variable, along with each element
```
import std.range : iota, lockstep;
auto arr = [1, 2, 3, 4];
// 1 ref parameter
arr.each!((ref e) => e = 0);
writeln(arr.sum); // 0
// 1 ref parameter and index
arr.each!((i, ref e) => e = cast(int) i);
writeln(arr.sum); // 4.iota.sum
```
Flag!"**each**" **each**(Range)(Range r)
Constraints: if (!isForeachIterable!Range && (isRangeIterable!Range || \_\_traits(compiles, typeof(r.front).length)));
Flag!"**each**" **each**(Iterable)(auto ref Iterable r)
Constraints: if (isForeachIterable!Iterable || \_\_traits(compiles, Parameters!(Parameters!(r.opApply))));
Parameters:
| | |
| --- | --- |
| Range `r` | range or iterable over which each iterates |
template **filter**(alias predicate) if (is(typeof(unaryFun!predicate)))
Implements the higher order filter function. The predicate is passed to [`std.functional.unaryFun`](std_functional#unaryFun), and can either accept a string, or any callable that can be executed via `pred(element)`.
Parameters:
| | |
| --- | --- |
| predicate | Function to apply to each element of range |
Returns:
`filter!(predicate)(range)` returns a new range containing only elements `x` in `range` for which `predicate(x)` returns `true`.
See Also:
[Filter (higher-order function)](http://en.wikipedia.org/wiki/Filter_(higher-order_function))
Examples:
```
import std.algorithm.comparison : equal;
import std.math : approxEqual;
import std.range;
int[] arr = [ 1, 2, 3, 4, 5 ];
// Filter below 3
auto small = filter!(a => a < 3)(arr);
assert(equal(small, [ 1, 2 ]));
// Filter again, but with Uniform Function Call Syntax (UFCS)
auto sum = arr.filter!(a => a < 3);
assert(equal(sum, [ 1, 2 ]));
// In combination with chain() to span multiple ranges
int[] a = [ 3, -2, 400 ];
int[] b = [ 100, -101, 102 ];
auto r = chain(a, b).filter!(a => a > 0);
assert(equal(r, [ 3, 400, 100, 102 ]));
// Mixing convertible types is fair game, too
double[] c = [ 2.5, 3.0 ];
auto r1 = chain(c, a, b).filter!(a => cast(int) a != a);
assert(approxEqual(r1, [ 2.5 ]));
```
auto **filter**(Range)(Range range)
Constraints: if (isInputRange!(Unqual!Range));
Parameters:
| | |
| --- | --- |
| Range `range` | An [input range](std_range_primitives#isInputRange) of elements |
Returns:
A range containing only elements `x` in `range` for which `predicate(x)` returns `true`.
template **filterBidirectional**(alias pred)
Similar to `filter`, except it defines a [bidirectional range](std_range_primitives#isBidirectionalRange). There is a speed disadvantage - the constructor spends time finding the last element in the range that satisfies the filtering condition (in addition to finding the first one). The advantage is that the filtered range can be spanned from both directions. Also, [`std.range.retro`](std_range#retro) can be applied against the filtered range.
The predicate is passed to [`std.functional.unaryFun`](std_functional#unaryFun), and can either accept a string, or any callable that can be executed via `pred(element)`.
Parameters:
| | |
| --- | --- |
| pred | Function to apply to each element of range |
Examples:
```
import std.algorithm.comparison : equal;
import std.range;
int[] arr = [ 1, 2, 3, 4, 5 ];
auto small = filterBidirectional!("a < 3")(arr);
static assert(isBidirectionalRange!(typeof(small)));
writeln(small.back); // 2
assert(equal(small, [ 1, 2 ]));
assert(equal(retro(small), [ 2, 1 ]));
// In combination with chain() to span multiple ranges
int[] a = [ 3, -2, 400 ];
int[] b = [ 100, -101, 102 ];
auto r = filterBidirectional!("a > 0")(chain(a, b));
writeln(r.back); // 102
```
auto **filterBidirectional**(Range)(Range r)
Constraints: if (isBidirectionalRange!(Unqual!Range));
Parameters:
| | |
| --- | --- |
| Range `r` | Bidirectional range of elements |
Returns:
A range containing only the elements in `r` for which `pred` returns `true`.
Group!(pred, Range) **group**(alias pred = "a == b", Range)(Range r);
struct **Group**(alias pred, R) if (isInputRange!R);
Groups consecutively equivalent elements into a single tuple of the element and the number of its repetitions.
Similarly to `uniq`, `group` produces a range that iterates over unique consecutive elements of the given range. Each element of this range is a tuple of the element and the number of times it is repeated in the original range. Equivalence of elements is assessed by using the predicate `pred`, which defaults to `"a == b"`. The predicate is passed to [`std.functional.binaryFun`](std_functional#binaryFun), and can either accept a string, or any callable that can be executed via `pred(element, element)`.
Parameters:
| | |
| --- | --- |
| pred | Binary predicate for determining equivalence of two elements. |
| R | The range type |
| Range `r` | The [input range](std_range_primitives#isInputRange) to iterate over. |
Returns:
A range of elements of type `Tuple!(ElementType!R, uint)`, representing each consecutively unique element and its respective number of occurrences in that run. This will be an input range if `R` is an input range, and a forward range in all other cases.
See Also:
[`chunkBy`](#chunkBy), which chunks an input range into subranges of equivalent adjacent elements.
Examples:
```
import std.algorithm.comparison : equal;
import std.typecons : tuple, Tuple;
int[] arr = [ 1, 2, 2, 2, 2, 3, 4, 4, 4, 5 ];
assert(equal(group(arr), [ tuple(1, 1u), tuple(2, 4u), tuple(3, 1u),
tuple(4, 3u), tuple(5, 1u) ][]));
```
Examples:
Using group, an associative array can be easily generated with the count of each unique element in the range.
```
import std.algorithm.sorting : sort;
import std.array : assocArray;
uint[string] result;
auto range = ["a", "b", "a", "c", "b", "c", "c", "d", "e"];
result = range.sort!((a, b) => a < b)
.group
.assocArray;
writeln(result); // ["a":2U, "b":2U, "c":3U, "d":1U, "e":1U]
```
auto **chunkBy**(alias pred, Range)(Range r)
Constraints: if (isInputRange!Range);
Chunks an input range into subranges of equivalent adjacent elements. In other languages this is often called `partitionBy`, `groupBy` or `sliceWhen`.
Equivalence is defined by the predicate `pred`, which can be either binary, which is passed to [`std.functional.binaryFun`](std_functional#binaryFun), or unary, which is passed to [`std.functional.unaryFun`](std_functional#unaryFun). In the binary form, two range elements `a` and `b` are considered equivalent if `pred(a,b)` is true. In unary form, two elements are considered equivalent if `pred(a) == pred(b)` is true.
This predicate must be an equivalence relation, that is, it must be reflexive (`pred(x,x)` is always true), symmetric (`pred(x,y) == pred(y,x)`), and transitive (`pred(x,y) && pred(y,z)` implies `pred(x,z)`). If this is not the case, the range returned by chunkBy may assert at runtime or behave erratically.
Parameters:
| | |
| --- | --- |
| pred | Predicate for determining equivalence. |
| Range `r` | An [input range](std_range_primitives#isInputRange) to be chunked. |
Returns:
With a binary predicate, a range of ranges is returned in which all elements in a given subrange are equivalent under the given predicate. With a unary predicate, a range of tuples is returned, with the tuple consisting of the result of the unary predicate for each subrange, and the subrange itself.
Notes
Equivalent elements separated by an intervening non-equivalent element will appear in separate subranges; this function only considers adjacent equivalence. Elements in the subranges will always appear in the same order they appear in the original range.
See Also:
[`group`](#group), which collapses adjacent equivalent elements into a single element.
Examples:
Showing usage with binary predicate:
```
import std.algorithm.comparison : equal;
// Grouping by particular attribute of each element:
auto data = [
[1, 1],
[1, 2],
[2, 2],
[2, 3]
];
auto r1 = data.chunkBy!((a,b) => a[0] == b[0]);
assert(r1.equal!equal([
[[1, 1], [1, 2]],
[[2, 2], [2, 3]]
]));
auto r2 = data.chunkBy!((a,b) => a[1] == b[1]);
assert(r2.equal!equal([
[[1, 1]],
[[1, 2], [2, 2]],
[[2, 3]]
]));
```
Examples:
Showing usage with unary predicate:
```
import std.algorithm.comparison : equal;
import std.range.primitives;
import std.typecons : tuple;
// Grouping by particular attribute of each element:
auto range =
[
[1, 1],
[1, 1],
[1, 2],
[2, 2],
[2, 3],
[2, 3],
[3, 3]
];
auto byX = chunkBy!(a => a[0])(range);
auto expected1 =
[
tuple(1, [[1, 1], [1, 1], [1, 2]]),
tuple(2, [[2, 2], [2, 3], [2, 3]]),
tuple(3, [[3, 3]])
];
foreach (e; byX)
{
assert(!expected1.empty);
writeln(e[0]); // expected1.front[0]
assert(e[1].equal(expected1.front[1]));
expected1.popFront();
}
auto byY = chunkBy!(a => a[1])(range);
auto expected2 =
[
tuple(1, [[1, 1], [1, 1]]),
tuple(2, [[1, 2], [2, 2]]),
tuple(3, [[2, 3], [2, 3], [3, 3]])
];
foreach (e; byY)
{
assert(!expected2.empty);
writeln(e[0]); // expected2.front[0]
assert(e[1].equal(expected2.front[1]));
expected2.popFront();
}
```
auto **joiner**(RoR, Separator)(RoR r, Separator sep)
Constraints: if (isInputRange!RoR && isInputRange!(ElementType!RoR) && isForwardRange!Separator && is(ElementType!Separator : ElementType!(ElementType!RoR)));
auto **joiner**(RoR)(RoR r)
Constraints: if (isInputRange!RoR && isInputRange!(ElementType!RoR));
Lazily joins a range of ranges with a separator. The separator itself is a range. If a separator is not provided, then the ranges are joined directly without anything in between them (often called `flatten` in other languages).
Parameters:
| | |
| --- | --- |
| RoR `r` | An [input range](std_range_primitives#isInputRange) of input ranges to be joined. |
| Separator `sep` | A [forward range](std_range_primitives#isForwardRange) of element(s) to serve as separators in the joined range. |
Returns:
A range of elements in the joined range. This will be a forward range if both outer and inner ranges of `RoR` are forward ranges; otherwise it will be only an input range. The [range bidirectionality](std_range_primitives#isBidirectionalRange) is propagated if no separator is specified.
See Also:
[`std.range.chain`](std_range#chain), which chains a sequence of ranges with compatible elements into a single range.
Examples:
```
import std.algorithm.comparison : equal;
import std.conv : text;
assert(["abc", "def"].joiner.equal("abcdef"));
assert(["Mary", "has", "a", "little", "lamb"]
.joiner("...")
.equal("Mary...has...a...little...lamb"));
assert(["", "abc"].joiner("xyz").equal("xyzabc"));
assert([""].joiner("xyz").equal(""));
assert(["", ""].joiner("xyz").equal("xyz"));
```
Examples:
```
import std.algorithm.comparison : equal;
import std.range : repeat;
assert([""].joiner.equal(""));
assert(["", ""].joiner.equal(""));
assert(["", "abc"].joiner.equal("abc"));
assert(["abc", ""].joiner.equal("abc"));
assert(["abc", "def"].joiner.equal("abcdef"));
assert(["Mary", "has", "a", "little", "lamb"].joiner.equal("Maryhasalittlelamb"));
assert("abc".repeat(3).joiner.equal("abcabcabc"));
```
Examples:
joiner allows in-place mutation!
```
import std.algorithm.comparison : equal;
auto a = [ [1, 2, 3], [42, 43] ];
auto j = joiner(a);
j.front = 44;
writeln(a); // [[44, 2, 3], [42, 43]]
assert(equal(j, [44, 2, 3, 42, 43]));
```
Examples:
insert characters fully lazily into a string
```
import std.algorithm.comparison : equal;
import std.range : chain, cycle, iota, only, retro, take, zip;
import std.format : format;
static immutable number = "12345678";
static immutable delimiter = ",";
auto formatted = number.retro
.zip(3.iota.cycle.take(number.length))
.map!(z => chain(z[0].only, z[1] == 2 ? delimiter : null))
.joiner
.retro;
static immutable expected = "12,345,678";
assert(formatted.equal(expected));
```
Examples:
joiner can be bidirectional
```
import std.algorithm.comparison : equal;
import std.range : retro;
auto a = [[1, 2, 3], [4, 5]];
auto j = a.joiner;
j.back = 44;
writeln(a); // [[1, 2, 3], [4, 44]]
assert(equal(j.retro, [44, 4, 3, 2, 1]));
```
template **reduce**(fun...) if (fun.length >= 1)
Implements the homonym function (also known as `accumulate`, `compress`, `inject`, or `foldl`) present in various programming languages of functional flavor. There is also [`fold`](#fold) which does the same thing but with the opposite parameter order. The call `reduce!(fun)(seed, range)` first assigns `seed` to an internal variable `result`, also called the accumulator. Then, for each element `x` in `range`, `result = fun(result, x)` gets evaluated. Finally, `result` is returned. The one-argument version `reduce!(fun)(range)` works similarly, but it uses the first element of the range as the seed (the range must be non-empty).
Returns:
the accumulated `result`
Parameters:
| | |
| --- | --- |
| fun | one or more functions |
See Also:
[Fold (higher-order function)](http://en.wikipedia.org/wiki/Fold_(higher-order_function)) [`fold`](#fold) is functionally equivalent to [`reduce`](#reduce) with the argument order reversed, and without the need to use [`tuple`](std_typecons#tuple) for multiple seeds. This makes it easier to use in UFCS chains. [`sum`](#sum) is similar to `reduce!((a, b) => a + b)`, but offers pairwise summing of floating point numbers.
Examples:
Many aggregate range operations turn out to be solved with `reduce` quickly and easily. The example below illustrates `reduce`'s remarkable power and flexibility.
```
import std.algorithm.comparison : max, min;
import std.math : approxEqual;
import std.range;
int[] arr = [ 1, 2, 3, 4, 5 ];
// Sum all elements
auto sum = reduce!((a,b) => a + b)(0, arr);
writeln(sum); // 15
// Sum again, using a string predicate with "a" and "b"
sum = reduce!"a + b"(0, arr);
writeln(sum); // 15
// Compute the maximum of all elements
auto largest = reduce!(max)(arr);
writeln(largest); // 5
// Max again, but with Uniform Function Call Syntax (UFCS)
largest = arr.reduce!(max);
writeln(largest); // 5
// Compute the number of odd elements
auto odds = reduce!((a,b) => a + (b & 1))(0, arr);
writeln(odds); // 3
// Compute the sum of squares
auto ssquares = reduce!((a,b) => a + b * b)(0, arr);
writeln(ssquares); // 55
// Chain multiple ranges into seed
int[] a = [ 3, 4 ];
int[] b = [ 100 ];
auto r = reduce!("a + b")(chain(a, b));
writeln(r); // 107
// Mixing convertible types is fair game, too
double[] c = [ 2.5, 3.0 ];
auto r1 = reduce!("a + b")(chain(a, b, c));
assert(approxEqual(r1, 112.5));
// To minimize nesting of parentheses, Uniform Function Call Syntax can be used
auto r2 = chain(a, b, c).reduce!("a + b");
assert(approxEqual(r2, 112.5));
```
Examples:
Sometimes it is very useful to compute multiple aggregates in one pass. One advantage is that the computation is faster because the looping overhead is shared. That's why `reduce` accepts multiple functions. If two or more functions are passed, `reduce` returns a [`std.typecons.Tuple`](std_typecons#Tuple) object with one member per passed-in function. The number of seeds must be correspondingly increased.
```
import std.algorithm.comparison : max, min;
import std.math : approxEqual, sqrt;
import std.typecons : tuple, Tuple;
double[] a = [ 3.0, 4, 7, 11, 3, 2, 5 ];
// Compute minimum and maximum in one pass
auto r = reduce!(min, max)(a);
// The type of r is Tuple!(double, double)
assert(approxEqual(r[0], 2)); // minimum
assert(approxEqual(r[1], 11)); // maximum
// Compute sum and sum of squares in one pass
r = reduce!("a + b", "a + b * b")(tuple(0.0, 0.0), a);
assert(approxEqual(r[0], 35)); // sum
assert(approxEqual(r[1], 233)); // sum of squares
// Compute average and standard deviation from the above
auto avg = r[0] / a.length;
writeln(avg); // 5
auto stdev = sqrt(r[1] / a.length - avg * avg);
writeln(cast(int)stdev); // 2
```
auto **reduce**(R)(R r)
Constraints: if (isIterable!R);
No-seed version. The first element of `r` is used as the seed's value.
For each function `f` in `fun`, the corresponding seed type `S` is `Unqual!(typeof(f(e, e)))`, where `e` is an element of `r`: `ElementType!R` for ranges, and `ForeachType!R` otherwise.
Once S has been determined, then `S s = e;` and `s = f(s, e);` must both be legal.
Parameters:
| | |
| --- | --- |
| R `r` | an iterable value as defined by `isIterable` |
Returns:
the final result of the accumulator applied to the iterable
Throws:
`Exception` if `r` is empty
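For instance, a minimal sketch (illustrative, not from the original docs) of the empty-range behavior:
```
import std.algorithm.iteration : reduce;
import std.exception : assertThrown;

int[] data; // empty
// the no-seed version needs at least one element to use as the seed
assertThrown!Exception(data.reduce!((a, b) => a + b));
```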
auto **reduce**(S, R)(S seed, R r)
Constraints: if (isIterable!R);
Seed version. The seed should be a single value if `fun` is a single function. If `fun` is multiple functions, then `seed` should be a [`std.typecons.Tuple`](std_typecons#Tuple), with one field per function in `fun`.
For convenience, if the seed is const, or has qualified fields, then `reduce` will operate on an unqualified copy. If this happens then the returned type will not perfectly match `S`.
Use `fold` instead of `reduce` to use the seed version in a UFCS chain.
Parameters:
| | |
| --- | --- |
| S `seed` | the initial value of the accumulator |
| R `r` | an iterable value as defined by `isIterable` |
Returns:
the final result of the accumulator applied to the iterable
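A small sketch of the const-seed copy behavior described above (illustrative, not from the original docs):
```
import std.algorithm.iteration : reduce;
import std.math : approxEqual;

const double seed = 1.0;
auto r = reduce!((a, b) => a + b)(seed, [1.0, 2.0, 3.0]);
// the seed was copied to an unqualified double, so the result is double
static assert(is(typeof(r) == double));
assert(approxEqual(r, 7.0));
```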
template **fold**(fun...) if (fun.length >= 1)
Implements the homonym function (also known as `accumulate`, `compress`, `inject`, or `foldl`) present in various programming languages of functional flavor. The call `fold!(fun)(range, seed)` first assigns `seed` to an internal variable `result`, also called the accumulator. Then, for each element `x` in `range`, `result = fun(result, x)` gets evaluated. Finally, `result` is returned. The one-argument version `fold!(fun)(range)` works similarly, but it uses the first element of the range as the seed (the range must be non-empty).
Parameters:
| | |
| --- | --- |
| fun | the predicate function(s) to apply to the elements |
See Also:
[Fold (higher-order function)](http://en.wikipedia.org/wiki/Fold_(higher-order_function)) [`sum`](#sum) is similar to `fold!((a, b) => a + b)`, but offers precise summing of floating point numbers. `fold` is functionally equivalent to [`reduce`](#reduce) with the argument order reversed, and without the need to use [`tuple`](std_typecons#tuple) for multiple seeds.
Examples:
```
immutable arr = [1, 2, 3, 4, 5];
// Sum all elements
writeln(arr.fold!((a, b) => a + b)); // 15
// Sum all elements with explicit seed
writeln(arr.fold!((a, b) => a + b)(6)); // 21
import std.algorithm.comparison : min, max;
import std.typecons : tuple;
// Compute minimum and maximum at the same time
writeln(arr.fold!(min, max)); // tuple(1, 5)
// Compute minimum and maximum at the same time with seeds
writeln(arr.fold!(min, max)(0, 7)); // tuple(0, 7)
// Can be used in a UFCS chain
writeln(arr.map!(a => a + 1).fold!((a, b) => a + b)); // 20
// Return the last element of any range
writeln(arr.fold!((a, b) => b)); // 5
```
auto **fold**(R, S...)(R r, S seed);
Parameters:
| | |
| --- | --- |
| R `r` | the [input range](std_range_primitives#isInputRange) to fold |
| S `seed` | the initial value of the accumulator |
Returns:
the accumulated `result`
template **cumulativeFold**(fun...) if (fun.length >= 1)
Similar to `fold`, but returns a range containing the successive reduced values. The call `cumulativeFold!(fun)(range, seed)` first assigns `seed` to an internal variable `result`, also called the accumulator. The returned range contains the values `result = fun(result, x)` lazily evaluated for each element `x` in `range`. Finally, the last element has the same value as `fold!(fun)(seed, range)`. The one-argument version `cumulativeFold!(fun)(range)` works similarly, but it returns the first element unchanged and uses it as seed for the next elements. This function is also known as [partial\_sum](http://en.cppreference.com/w/cpp/algorithm/partial_sum), [accumulate](http://docs.python.org/3/library/itertools.html#itertools.accumulate), [scan](http://hackage.haskell.org/package/base-4.8.2.0/docs/Prelude.html#v:scanl), [Cumulative Sum](http://mathworld.wolfram.com/CumulativeSum.html).
Parameters:
| | |
| --- | --- |
| fun | one or more functions to use as fold operation |
Returns:
The function returns a range containing the consecutive reduced values. If there is more than one `fun`, the element type will be [`std.typecons.Tuple`](std_typecons#Tuple) containing one element for each `fun`.
See Also:
[Prefix Sum](http://en.wikipedia.org/wiki/Prefix_sum)
Note
In functional programming languages this is typically called `scan`, `scanl`, `scanLeft` or `reductions`.
Examples:
```
import std.algorithm.comparison : max, min;
import std.array : array;
import std.math : approxEqual;
import std.range : chain;
int[] arr = [1, 2, 3, 4, 5];
// Partial sum of all elements
auto sum = cumulativeFold!((a, b) => a + b)(arr, 0);
writeln(sum.array); // [1, 3, 6, 10, 15]
// Partial sum again, using a string predicate with "a" and "b"
auto sum2 = cumulativeFold!"a + b"(arr, 0);
writeln(sum2.array); // [1, 3, 6, 10, 15]
// Compute the partial maximum of all elements
auto largest = cumulativeFold!max(arr);
writeln(largest.array); // [1, 2, 3, 4, 5]
// Partial max again, but with Uniform Function Call Syntax (UFCS)
largest = arr.cumulativeFold!max;
writeln(largest.array); // [1, 2, 3, 4, 5]
// Partial count of odd elements
auto odds = arr.cumulativeFold!((a, b) => a + (b & 1))(0);
writeln(odds.array); // [1, 1, 2, 2, 3]
// Compute the partial sum of squares
auto ssquares = arr.cumulativeFold!((a, b) => a + b * b)(0);
writeln(ssquares.array); // [1, 5, 14, 30, 55]
// Chain multiple ranges into seed
int[] a = [3, 4];
int[] b = [100];
auto r = cumulativeFold!"a + b"(chain(a, b));
writeln(r.array); // [3, 7, 107]
// Mixing convertible types is fair game, too
double[] c = [2.5, 3.0];
auto r1 = cumulativeFold!"a + b"(chain(a, b, c));
assert(approxEqual(r1, [3, 7, 107, 109.5, 112.5]));
// To minimize nesting of parentheses, Uniform Function Call Syntax can be used
auto r2 = chain(a, b, c).cumulativeFold!"a + b";
assert(approxEqual(r2, [3, 7, 107, 109.5, 112.5]));
```
Examples:
Sometimes it is very useful to compute multiple aggregates in one pass. One advantage is that the computation is faster because the looping overhead is shared. That's why `cumulativeFold` accepts multiple functions. If two or more functions are passed, `cumulativeFold` returns a [`std.typecons.Tuple`](std_typecons#Tuple) object with one member per passed-in function. The number of seeds must be correspondingly increased.
```
import std.algorithm.comparison : max, min;
import std.algorithm.iteration : map;
import std.math : approxEqual;
import std.typecons : tuple;
double[] a = [3.0, 4, 7, 11, 3, 2, 5];
// Compute minimum and maximum in one pass
auto r = a.cumulativeFold!(min, max);
// The type of r is Tuple!(double, double)
assert(approxEqual(r.map!"a[0]", [3, 3, 3, 3, 3, 2, 2])); // minimum
assert(approxEqual(r.map!"a[1]", [3, 4, 7, 11, 11, 11, 11])); // maximum
// Compute sum and sum of squares in one pass
auto r2 = a.cumulativeFold!("a + b", "a + b * b")(tuple(0.0, 0.0));
assert(approxEqual(r2.map!"a[0]", [3, 7, 14, 25, 28, 30, 35])); // sum
assert(approxEqual(r2.map!"a[1]", [9, 25, 74, 195, 204, 208, 233])); // sum of squares
```
auto **cumulativeFold**(R)(R range)
Constraints: if (isInputRange!(Unqual!R));
No-seed version. The first element of `r` is used as the seed's value. For each function `f` in `fun`, the corresponding seed type `S` is `Unqual!(typeof(f(e, e)))`, where `e` is an element of `r`: `ElementType!R`. Once `S` has been determined, then `S s = e;` and `s = f(s, e);` must both be legal.
Parameters:
| | |
| --- | --- |
| R `range` | An [input range](std_range_primitives#isInputRange) |
Returns:
a range containing the consecutive reduced values.
auto **cumulativeFold**(R, S)(R range, S seed)
Constraints: if (isInputRange!(Unqual!R));
Seed version. The seed should be a single value if `fun` is a single function. If `fun` is multiple functions, then `seed` should be a [`std.typecons.Tuple`](std_typecons#Tuple), with one field per function in `f`. For convenience, if the seed is `const`, or has qualified fields, then `cumulativeFold` will operate on an unqualified copy. If this happens then the returned type will not perfectly match `S`.
Parameters:
| | |
| --- | --- |
| R `range` | An [input range](std_range_primitives#isInputRange) |
| S `seed` | the initial value of the accumulator |
Returns:
a range containing the consecutive reduced values.
auto **splitter**(alias pred = "a == b", Range, Separator)(Range r, Separator s)
Constraints: if (is(typeof(binaryFun!pred(r.front, s)) : bool) && (hasSlicing!Range && hasLength!Range || isNarrowString!Range));
auto **splitter**(alias pred = "a == b", Range, Separator)(Range r, Separator s)
Constraints: if (is(typeof(binaryFun!pred(r.front, s.front)) : bool) && (hasSlicing!Range || isNarrowString!Range) && isForwardRange!Separator && (hasLength!Separator || isNarrowString!Separator));
auto **splitter**(alias isTerminator, Range)(Range r)
Constraints: if (isForwardRange!Range && is(typeof(unaryFun!isTerminator(r.front))));
Lazily splits a range using an element or range as a separator. Separator ranges can be any narrow string type or sliceable range type.
Two adjacent separators are considered to surround an empty element in the split range. Use `filter!(a => !a.empty)` on the result to compress empty elements.
The predicate is passed to [`std.functional.binaryFun`](std_functional#binaryFun) and accepts any callable function that can be executed via `pred(element, s)`.
Notes
If splitting a string on whitespace and token compression is desired, consider using `splitter` without specifying a separator.
If no separator is passed, the predicate `isTerminator` decides whether to accept an element of `r`.
Parameters:
| | |
| --- | --- |
| pred | The predicate for comparing each element with the separator, defaulting to `"a == b"`. |
| Range `r` | The [input range](std_range_primitives#isInputRange) to be split. Must support slicing and `.length` or be a narrow string type. |
| Separator `s` | The element (or range) to be treated as the separator between range segments to be split. |
| isTerminator | The predicate for deciding where to split the range when no separator is passed |
Constraints
The predicate `pred` needs to accept an element of `r` and the separator `s`.
Returns:
An input range of the subranges of elements between separators. If `r` is a [forward range](std_range_primitives#isForwardRange) or [bidirectional range](std_range_primitives#isBidirectionalRange), the returned range will be likewise. When a range is used as a separator, bidirectionality isn't possible. If an empty range is given, the result is an empty range. If a range with one separator is given, the result is a range with two empty elements.
See Also:
[`std.regex.splitter`](std_regex#splitter) for a version that splits using a regular expression defined separator and [`std.array.split`](std_array#split) for a version that splits eagerly.
Examples:
Basic splitting with characters and numbers.
```
import std.algorithm.comparison : equal;
assert("a|bc|def".splitter('|').equal([ "a", "bc", "def" ]));
int[] a = [1, 0, 2, 3, 0, 4, 5, 6];
int[][] w = [ [1], [2, 3], [4, 5, 6] ];
assert(a.splitter(0).equal(w));
```
Examples:
Adjacent separators.
```
import std.algorithm.comparison : equal;
assert("|ab|".splitter('|').equal([ "", "ab", "" ]));
assert("ab".splitter('|').equal([ "ab" ]));
assert("a|b||c".splitter('|').equal([ "a", "b", "", "c" ]));
assert("hello world".splitter(' ').equal([ "hello", "", "world" ]));
auto a = [ 1, 2, 0, 0, 3, 0, 4, 5, 0 ];
auto w = [ [1, 2], [], [3], [4, 5], [] ];
assert(a.splitter(0).equal(w));
```
Examples:
Empty and separator-only ranges.
```
import std.algorithm.comparison : equal;
import std.range : empty;
assert("".splitter('|').empty);
assert("|".splitter('|').equal([ "", "" ]));
assert("||".splitter('|').equal([ "", "", "" ]));
```
Examples:
Use a range for splitting
```
import std.algorithm.comparison : equal;
assert("a=>bc=>def".splitter("=>").equal([ "a", "bc", "def" ]));
assert("a|b||c".splitter("||").equal([ "a|b", "c" ]));
assert("hello world".splitter(" ").equal([ "hello", "world" ]));
int[] a = [ 1, 2, 0, 0, 3, 0, 4, 5, 0 ];
int[][] w = [ [1, 2], [3, 0, 4, 5, 0] ];
assert(a.splitter([0, 0]).equal(w));
a = [ 0, 0 ];
assert(a.splitter([0, 0]).equal([ (int[]).init, (int[]).init ]));
a = [ 0, 0, 1 ];
assert(a.splitter([0, 0]).equal([ [], [1] ]));
```
Examples:
Custom predicate functions.
```
import std.algorithm.comparison : equal;
import std.ascii : toLower;
assert("abXcdxef".splitter!"a.toLower == b"('x').equal(
[ "ab", "cd", "ef" ]));
auto w = [ [0], [1], [2] ];
assert(w.splitter!"a.front == b"(1).equal([ [[0]], [[2]] ]));
```
Examples:
Use splitter without a separator
```
import std.algorithm.comparison : equal;
import std.range.primitives : front;
assert(equal(splitter!(a => a == '|')("a|bc|def"), [ "a", "bc", "def" ]));
assert(equal(splitter!(a => a == ' ')("hello  world"), [ "hello", "", "world" ]));
int[] a = [ 1, 2, 0, 0, 3, 0, 4, 5, 0 ];
int[][] w = [ [1, 2], [], [3], [4, 5], [] ];
assert(equal(splitter!(a => a == 0)(a), w));
a = [ 0 ];
assert(equal(splitter!(a => a == 0)(a), [ (int[]).init, (int[]).init ]));
a = [ 0, 1 ];
assert(equal(splitter!(a => a == 0)(a), [ [], [1] ]));
w = [ [0], [1], [2] ];
assert(equal(splitter!(a => a.front == 1)(w), [ [[0]], [[2]] ]));
```
Examples:
Leading separators, trailing separators, or no separators.
```
import std.algorithm.comparison : equal;
assert("|ab|".splitter('|').equal([ "", "ab", "" ]));
assert("ab".splitter('|').equal([ "ab" ]));
```
Examples:
Splitter returns bidirectional ranges if the delimiter is a single element
```
import std.algorithm.comparison : equal;
import std.range : retro;
assert("a|bc|def".splitter('|').retro.equal([ "def", "bc", "a" ]));
```
Examples:
Splitting by word lazily
```
import std.ascii : isWhite;
import std.algorithm.comparison : equal;
import std.algorithm.iteration : splitter;
string str = "Hello World!";
assert(str.splitter!(isWhite).equal(["Hello", "World!"]));
```
auto **splitter**(Range)(Range s)
Constraints: if (isSomeString!Range || isRandomAccessRange!Range && hasLength!Range && hasSlicing!Range && !isConvertibleToString!Range && isSomeChar!(ElementEncodingType!Range));
Lazily splits the character-based range `s` into words, using whitespace as the delimiter.
This function is character-range specific and, contrary to `splitter!(std.uni.isWhite)`, merges runs of whitespace together (no empty tokens are produced).
Parameters:
| | |
| --- | --- |
| Range `s` | The character-based range to be split. Must be a string, or a random-access range of character types. |
Returns:
An [input range](std_range_primitives#isInputRange) of slices of the original range split by whitespace.
Examples:
```
import std.algorithm.comparison : equal;
auto a = " a bcd ef gh ";
assert(equal(splitter(a), ["a", "bcd", "ef", "gh"][]));
```
template **substitute**(substs...) if (substs.length >= 2 && isExpressions!substs)
auto **substitute**(alias pred = (a, b) => a == b, R, Substs...)(R r, Substs substs)
Constraints: if (isInputRange!R && (Substs.length >= 2) && !is(CommonType!Substs == void));
Returns a range with all occurrences of `substs` in `r` replaced with their substitution.
Single value replacements (`'ö'.substitute!('ä', 'a', 'ö', 'o', 'ü', 'u')`) are supported as well, in Ο(`1`).
Parameters:
| | |
| --- | --- |
| R `r` | an [input range](std_range_primitives#isInputRange) |
| Value value | a single value which can be substituted in Ο(`1`) |
| Substs `substs` | a set of replacements/substitutions |
| pred | the equality function to test if element(s) are equal to a substitution |
Returns:
a range with the substitutions replaced.
See Also:
[`std.array.replace`](std_array#replace) for an eager replace algorithm or [`std.string.translate`](std_string#translate), and [`std.string.tr`](std_string#tr) for string algorithms with translation tables.
Examples:
```
import std.algorithm.comparison : equal;
// substitute single elements
assert("do_it".substitute('_', ' ').equal("do it"));
// substitute multiple, single elements
assert("do_it".substitute('_', ' ',
'd', 'g',
'i', 't',
't', 'o')
.equal("go to"));
// substitute subranges
assert("do_it".substitute("_", " ",
"do", "done")
.equal("done it"));
// substitution works for any ElementType
int[] x = [1, 2, 3];
auto y = x.substitute(1, 0.1);
assert(y.equal([0.1, 2, 3]));
static assert(is(typeof(y.front) == double));
import std.range : retro;
assert([1, 2, 3].substitute(1, 0.1).retro.equal([3, 2, 0.1]));
```
Examples:
Use the faster compile-time overload
```
import std.algorithm.comparison : equal;
// substitute subranges of a range
assert("apple_tree".substitute!("apple", "banana",
"tree", "shrub").equal("banana_shrub"));
// substitute subranges of a range
assert("apple_tree".substitute!('a', 'b',
't', 'f').equal("bpple_free"));
// substitute values
writeln('a'.substitute!('a', 'b', 't', 'f')); // 'b'
```
Examples:
Multiple substitutes
```
import std.algorithm.comparison : equal;
import std.range.primitives : ElementType;
int[3] x = [1, 2, 3];
auto y = x[].substitute(1, 0.1)
.substitute(0.1, 0.2);
static assert(is(typeof(y.front) == double));
assert(y.equal([0.2, 2, 3]));
auto z = "42".substitute('2', '3')
.substitute('3', '1');
static assert(is(ElementType!(typeof(z)) == dchar));
assert(equal(z, "41"));
```
auto **substitute**(Value)(Value value)
Constraints: if (isInputRange!Value || !is(CommonType!(Value, typeof(substs[0])) == void));
Substitute single values with compile-time substitution mappings.
Complexity
Ο(`1`), due to D's `switch` statement guaranteeing Ο(`1`) dispatch.
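Restating the single-value behavior as a sketch (illustrative only):
```
import std.algorithm.iteration : substitute;

// compile-time mappings lower to a switch, hence the Ο(1) lookup
assert('ä'.substitute!('ä', 'a', 'ö', 'o', 'ü', 'u') == 'a');
assert('x'.substitute!('ä', 'a', 'ö', 'o', 'ü', 'u') == 'x'); // unmapped values pass through
```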
auto **sum**(R)(R r)
Constraints: if (isInputRange!R && !isInfinite!R && is(typeof(r.front + r.front)));
auto **sum**(R, E)(R r, E seed)
Constraints: if (isInputRange!R && !isInfinite!R && is(typeof(seed = seed + r.front)));
Sums elements of `r`, which must be a finite [input range](std_range_primitives#isInputRange). Although conceptually `sum(r)` is equivalent to [`fold`](#fold)!((a, b) => a + b)(r, 0), `sum` uses specialized algorithms to maximize accuracy, as follows.
* If [`std.range.primitives.ElementType`](std_range_primitives#ElementType)!R is a floating-point type and `R` is a [random-access range](std_range_primitives#isRandomAccessRange) with length and slicing, then `sum` uses the [pairwise summation](http://en.wikipedia.org/wiki/Pairwise_summation) algorithm.
* If `ElementType!R` is a floating-point type and `R` is a finite input range (but not a random-access range with slicing), then `sum` uses the [Kahan summation](http://en.wikipedia.org/wiki/Kahan_summation) algorithm.
* In all other cases, a simple element by element addition is done.
For floating point inputs, calculations are made in [`real`](https://dlang.org/spec/type.html) precision for `real` inputs and in `double` precision otherwise (Note this is a special case that deviates from `fold`'s behavior, which would have kept `float` precision for a `float` range). For all other types, the calculations are done in the same type obtained from adding two elements of the range, which may be a different type from the elements themselves (for example, in case of [integral promotion](https://dlang.org/spec/type.html#integer-promotions)).
A seed may be passed to `sum`. Not only will this seed be used as an initial value, but its type will override all the above, and determine the algorithm and precision used for summation. If a seed is not passed, one is created with the value of `typeof(r.front + r.front)(0)`, or `typeof(r.front + r.front).zero` if no constructor exists that takes an int.
Note that these specialized summing algorithms execute more primitive operations than vanilla summation. Therefore, if in certain cases maximum speed is required at expense of precision, one can use `fold!((a, b) => a + b)(r, 0)`, which is not specialized for summation.
Parameters:
| | |
| --- | --- |
| E `seed` | the initial value of the summation |
| R `r` | a finite input range |
Returns:
The sum of all the elements in the range r.
Examples:
```
import std.range;
// simple integral summation
writeln(sum([1, 2, 3, 4])); // 10
//with integral promotion
writeln(sum([false, true, true, false, true])); // 3
writeln(sum(ubyte.max.repeat(100))); // 25500
//The result may overflow
writeln(uint.max.repeat(3).sum()); // 4294967293U
// But a seed can be used to change the summation primitive
writeln(uint.max.repeat(3).sum(ulong.init)); // 12884901885UL
// Floating point summation
writeln(sum([1.0, 2.0, 3.0, 4.0])); // 10
//Floating point operations have double precision minimum
static assert(is(typeof(sum([1F, 2F, 3F, 4F])) == double));
writeln(sum([1F, 2, 3, 4])); // 10
// Force pair-wise floating point summation on large integers
import std.math : approxEqual;
assert(iota(ulong.max / 2, ulong.max / 2 + 4096).sum(0.0)
.approxEqual((ulong.max / 2) * 4096.0 + 4096^^2 / 2));
```
T **mean**(T = double, R)(R r)
Constraints: if (isInputRange!R && isNumeric!(ElementType!R) && !isInfinite!R);
auto **mean**(R, T)(R r, T seed)
Constraints: if (isInputRange!R && !isNumeric!(ElementType!R) && is(typeof(r.front + seed)) && is(typeof(r.front / size\_t(1))) && !isInfinite!R);
Finds the mean (colloquially known as the average) of a range.
For built-in numerical types, accurate Knuth & Welford mean calculation is used. For user-defined types, element by element summation is used. Additionally an extra parameter `seed` is needed in order to correctly seed the summation with the equivalent to `0`.
The first overload of this function will return `T.init` if the range is empty. However, the second overload will return `seed` on empty ranges.
This function is Ο(`r.length`).
Parameters:
| | |
| --- | --- |
| T | The type of the return value. |
| R `r` | An [input range](std_range_primitives#isInputRange) |
| T `seed` | For user-defined types; should be equivalent to `0`. |
Returns:
The mean of `r` when `r` is non-empty.
Examples:
```
import std.math : approxEqual, isNaN;
static immutable arr1 = [1, 2, 3];
static immutable arr2 = [1.5, 2.5, 12.5];
assert(arr1.mean.approxEqual(2));
assert(arr2.mean.approxEqual(5.5));
assert(arr1[0 .. 0].mean.isNaN);
```
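A sketch of the seed overload for a non-numeric element type (illustrative; assumes `std.complex.Complex`, which supports the required `+` and `/` operations):
```
import std.algorithm.iteration : mean;
import std.complex : Complex, complex;

auto points = [complex(1.0, 2.0), complex(3.0, 4.0)];
// the seed plays the role of 0 for the element type
assert(points.mean(Complex!double(0, 0)) == complex(2.0, 3.0));

// on an empty range, the seed overload returns the seed itself
Complex!double[] none;
assert(none.mean(Complex!double(0, 0)) == complex(0.0, 0.0));
```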
auto **uniq**(alias pred = "a == b", Range)(Range r)
Constraints: if (isInputRange!Range && is(typeof(binaryFun!pred(r.front, r.front)) == bool));
Lazily iterates unique consecutive elements of the given range (functionality akin to the [uniq](http://wikipedia.org/wiki/Uniq) system utility). Equivalence of elements is assessed by using the predicate `pred`, by default `"a == b"`. The predicate is passed to [`std.functional.binaryFun`](std_functional#binaryFun), and can either accept a string, or any callable that can be executed via `pred(element, element)`. If the given range is bidirectional, `uniq` also yields a [bidirectional range](std_range_primitives#isBidirectionalRange).
Parameters:
| | |
| --- | --- |
| pred | Predicate for determining equivalence between range elements. |
| Range `r` | An [input range](std_range_primitives#isInputRange) of elements to filter. |
Returns:
An [input range](std_range_primitives#isInputRange) of consecutively unique elements in the original range. If `r` is also a forward range or bidirectional range, the returned range will be likewise.
Examples:
```
import std.algorithm.comparison : equal;
import std.algorithm.mutation : copy;
int[] arr = [ 1, 2, 2, 2, 2, 3, 4, 4, 4, 5 ];
assert(equal(uniq(arr), [ 1, 2, 3, 4, 5 ][]));
// Filter duplicates in-place using copy
arr.length -= arr.uniq().copy(arr).length;
writeln(arr); // [1, 2, 3, 4, 5]
// Note that uniqueness is only determined consecutively; duplicated
// elements separated by an intervening different element will not be
// eliminated:
assert(equal(uniq([ 1, 1, 2, 1, 1, 3, 1]), [1, 2, 1, 3, 1]));
```
Permutations!Range **permutations**(Range)(Range r)
Constraints: if (isRandomAccessRange!Range && hasLength!Range);
struct **Permutations**(Range) if (isRandomAccessRange!Range && hasLength!Range);
Lazily computes all permutations of `r` using [Heap's algorithm](http://en.wikipedia.org/wiki/Heap%27s_algorithm).
Parameters:
| | |
| --- | --- |
| Range | the range type |
| Range `r` | the [random access range](std_range_primitives#isRandomAccessRange) to find the permutations for. |
Returns:
A [forward range](std_range_primitives#isForwardRange) of elements of which are an [`std.range.indexed`](std_range#indexed) view into `r`.
See Also:
[`std.algorithm.sorting.nextPermutation`](std_algorithm_sorting#nextPermutation).
Examples:
```
import std.algorithm.comparison : equal;
import std.range : iota;
assert(equal!equal(iota(3).permutations,
[[0, 1, 2],
[1, 0, 2],
[2, 0, 1],
[0, 2, 1],
[1, 2, 0],
[2, 1, 0]]));
```
d Vector Extensions Vector Extensions
=================
**Contents** 1. [`core.simd`](#core_simd)
1. [Properties](#properties)
2. [Conversions](#conversions)
3. [Accessing Individual Vector Elements](#accessing_individual_elems)
4. [Conditional Compilation](#conditional_compilation)
2. [X86 And X86\_64 Vector Extension Implementation](#x86_64_vec)
1. [Vector Operation Intrinsics](#vector_op_intrinsics)
CPUs often support specialized vector types and vector operations (a.k.a. *media instructions*). Vector types are a fixed array of floating or integer types, and vector operations operate simultaneously on them.
Specialized [*Vector*](type#Vector) types provide access to them.
The [*VectorBaseType*](type#VectorBaseType) must be a [Static Array](https://dlang.org/arrays.html#static-arrays). The VectorElementType is the unqualified element type of the static array. The dimension of the static array is the number of elements in the vector.
**Implementation Defined:** Which vector types are supported depends on the target. The implementation is expected to only support the vector types that are implemented in the target's hardware. **Best Practices:** Use the declarations in [`core.simd`](https://dlang.org/phobos/core_simd.html) instead of the language [*Vector*](type#Vector) grammar.

`core.simd`
-----------
Vector types and operations are introduced by importing [`core.simd`](https://dlang.org/phobos/core_simd.html):
```
import core.simd;
```
**Implementation Defined:** These types and operations will be the ones defined for the architecture the compiler is targeting. If a particular CPU family has varying support for vector types, an additional runtime check may be necessary. The compiler does not emit runtime checks; those must be done by the programmer.
Depending on the architecture, compiler flags may be required to activate support for SIMD types.
The types defined will all follow the naming convention:
```
typeNN
```
where *type* is the vector element type and *NN* is the number of those elements in the vector type. The type names will not be keywords.
### Properties
Vector types have the property:
Vector Type Properties
| **Property** | **Description** |
| --- | --- |
| .array | Returns static array representation |
Vectors support the following properties based on the vector element type. The value produced is that of a vector of the same type with each element set to the value corresponding to the property value for the element type.
Integral Vector Type Properties
| **Property** | **Description** |
| --- | --- |
| .min | minimum value |
| .max | maximum value |
Floating Point Vector Type Properties
| **Property** | **Description** |
| --- | --- |
| .epsilon | smallest increment to the value 1 |
| .infinity | infinity value |
| .max | largest representable value that is not infinity |
| .min\_normal | smallest representable value that is not 0 |
| .nan | NaN value |
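A sketch of these properties in use, guarded for targets where the types exist (illustrative):
```
import core.simd;

static if (is(int4) && is(float4))
{
    // each property yields a vector with every element set to the
    // element type's corresponding scalar property
    int4 lo = int4.min;
    float4 eps = float4.epsilon;
    assert(lo.array[0] == int.min);
    assert(eps.array[0] == float.epsilon);
}
```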
### Conversions
Vector types of the same size can be implicitly converted among each other; this is done as a reinterpret cast (a type paint). Vector types can be cast to their [*VectorBaseType*](type#VectorBaseType).
Integers and floating point values can be implicitly converted to their vector equivalents:
```
int4 v = 7;
v = 3 + v; // add 3 to each element in v
```
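A sketch of the same-size reinterpret conversion described above (illustrative; assumes `int4` and `float4` are supported):
```
static if (is(int4) && is(float4))
{
    int4 i = 7;
    float4 f = i;             // same size: implicit reinterpret cast (type paint)
    int4 j = f;               // and back again; the bits are unchanged
    assert((cast(int[4])j)[0] == 7);
}
```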
### Accessing Individual Vector Elements
Individual vector elements cannot be accessed directly, but they can be when the vector is converted to an array type:
```
int4 v;
(cast(int*)&v)[3] = 2; // set the element at index 3 of the 4 int vector
(cast(int[4])v)[3] = 2; // set the element at index 3 of the 4 int vector
v.array[3] = 2; // set the element at index 3 of the 4 int vector
v.ptr[3] = 2; // set the element at index 3 of the 4 int vector
```
### Conditional Compilation
If vector extensions are implemented, the [version identifier](version#PredefinedVersions) `D_SIMD` is set.
Whether a type exists or not can be tested at compile time with an [*IsExpression*](expression#IsExpression):
```
static if (is(typeNN))
... yes, it is supported ...
else
... nope, use workaround ...
```
Whether a particular operation on a type is supported can be tested at compile time with:
```
float4 a,b;
static if (__traits(compiles, a+b))
... yes, it is supported ...
else
... nope, use workaround ...
```
For runtime testing to see if certain vector instructions are available, see the functions in [core.cpuid](https://dlang.org/phobos/core_cpuid.html).
A typical workaround would be to use array vector operations instead:
```
float4 a,b;
static if (__traits(compiles, a/b))
c = a / b;
else
c[] = a[] / b[];
```
X86 And X86\_64 Vector Extension Implementation
-----------------------------------------------
**Implementation Defined:** The following describes the specific implementation of the vector types for the X86 and X86\_64 architectures.
The vector extensions are currently implemented for the OS X 32 bit target, and all 64 bit targets.
[`core.simd`](https://dlang.org/phobos/core_simd.html) defines the following types:
Vector Types
| **Type Name** | **Description** | **gcc Equivalent** |
| --- | --- | --- |
| void16 | 16 bytes of untyped data | *no equivalent* |
| byte16 | 16 `byte`s | `signed char __attribute__((vector_size(16)))` |
| ubyte16 | 16 `ubyte`s | `unsigned char __attribute__((vector_size(16)))` |
| short8 | 8 `short`s | `short __attribute__((vector_size(16)))` |
| ushort8 | 8 `ushort`s | `ushort __attribute__((vector_size(16)))` |
| int4 | 4 `int`s | `int __attribute__((vector_size(16)))` |
| uint4 | 4 `uint`s | `unsigned __attribute__((vector_size(16)))` |
| long2 | 2 `long`s | `long __attribute__((vector_size(16)))` |
| ulong2 | 2 `ulong`s | `unsigned long __attribute__((vector_size(16)))` |
| float4 | 4 `float`s | `float __attribute__((vector_size(16)))` |
| double2 | 2 `double`s | `double __attribute__((vector_size(16)))` |
| void32 | 32 bytes of untyped data | *no equivalent* |
| byte32 | 32 `byte`s | `signed char __attribute__((vector_size(32)))` |
| ubyte32 | 32 `ubyte`s | `unsigned char __attribute__((vector_size(32)))` |
| short16 | 16 `short`s | `short __attribute__((vector_size(32)))` |
| ushort16 | 16 `ushort`s | `ushort __attribute__((vector_size(32)))` |
| int8 | 8 `int`s | `int __attribute__((vector_size(32)))` |
| uint8 | 8 `uint`s | `unsigned __attribute__((vector_size(32)))` |
| long4 | 4 `long`s | `long __attribute__((vector_size(32)))` |
| ulong4 | 4 `ulong`s | `unsigned long __attribute__((vector_size(32)))` |
| float8 | 8 `float`s | `float __attribute__((vector_size(32)))` |
| double4 | 4 `double`s | `double __attribute__((vector_size(32)))` |
Note: for 32 bit gcc, it's `long long` instead of `long`.
Supported 128-bit Vector Operators
| **Operator** | **void16** | **byte16** | **ubyte16** | **short8** | **ushort8** | **int4** | **uint4** | **long2** | **ulong2** | **float4** | **double2** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| = | × | × | × | × | × | × | × | × | × | × | × |
| + | – | × | × | × | × | × | × | × | × | × | × |
| - | – | × | × | × | × | × | × | × | × | × | × |
| \* | – | – | – | × | × | – | – | – | – | × | × |
| / | – | – | – | – | – | – | – | – | – | × | × |
| `&` | – | × | × | × | × | × | × | × | × | – | – |
| | | – | × | × | × | × | × | × | × | × | – | – |
| `^` | – | × | × | × | × | × | × | × | × | – | – |
| += | – | × | × | × | × | × | × | × | × | × | × |
| -= | – | × | × | × | × | × | × | × | × | × | × |
| \*= | – | – | – | × | × | – | – | – | – | × | × |
| /= | – | – | – | – | – | – | – | – | – | × | × |
| `&`= | – | × | × | × | × | × | × | × | × | – | – |
| |= | – | × | × | × | × | × | × | × | × | – | – |
| `^=` | – | × | × | × | × | × | × | × | × | – | – |
| *unary*`~` | – | × | × | × | × | × | × | × | × | – | – |
| *unary*+ | – | × | × | × | × | × | × | × | × | × | × |
| *unary*- | – | × | × | × | × | × | × | × | × | × | × |
Supported 256-bit Vector Operators
| **Operator** | **void32** | **byte32** | **ubyte32** | **short16** | **ushort16** | **int8** | **uint8** | **long4** | **ulong4** | **float8** | **double4** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| = | × | × | × | × | × | × | × | × | × | × | × |
| + | – | × | × | × | × | × | × | × | × | × | × |
| - | – | × | × | × | × | × | × | × | × | × | × |
| \* | – | – | – | – | – | – | – | – | – | × | × |
| / | – | – | – | – | – | – | – | – | – | × | × |
| `&` | – | × | × | × | × | × | × | × | × | – | – |
| | | – | × | × | × | × | × | × | × | × | – | – |
| `^` | – | × | × | × | × | × | × | × | × | – | – |
| += | – | × | × | × | × | × | × | × | × | × | × |
| -= | – | × | × | × | × | × | × | × | × | × | × |
| \*= | – | – | – | – | – | – | – | – | – | × | × |
| /= | – | – | – | – | – | – | – | – | – | × | × |
| `&`= | – | × | × | × | × | × | × | × | × | – | – |
| |= | – | × | × | × | × | × | × | × | × | – | – |
| `^=` | – | × | × | × | × | × | × | × | × | – | – |
| *unary*`~` | – | × | × | × | × | × | × | × | × | – | – |
| *unary*+ | – | × | × | × | × | × | × | × | × | × | × |
| *unary*- | – | × | × | × | × | × | × | × | × | × | × |
Operators not listed are not supported at all.
### Vector Operation Intrinsics
See [`core.simd`](https://dlang.org/phobos/core_simd.html) for the supported intrinsics.
d rt.sections_osx_x86_64 rt.sections\_osx\_x86\_64
=========================
Written in the D programming language. This module provides OS X x86-64 specific support for sections.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Walter Bright, Sean Kelly, Martin Nowak, Jacob Carlborg
Source
[rt/sections\_osx\_x86\_64.d](https://github.com/dlang/druntime/blob/master/src/rt/sections_osx_x86_64.d)
d core.stdcpp.vector core.stdcpp.vector
==================
D header file for interaction with C++ std::vector.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Guillaume Chatelet Manu Evans
Source
[core/stdcpp/vector.d](https://github.com/dlang/druntime/blob/master/src/core/stdcpp/vector.d)
enum DefaultConstruct **Default**;
Constructor argument for default construction
d std.experimental.allocator.showcase std.experimental.allocator.showcase
===================================
Collection of typical and useful prebuilt allocators using the given components. User code would typically import this module and use its facilities, or import individual heap building blocks and assemble them.
Source
[std/experimental/allocator/showcase.d](https://github.com/dlang/phobos/blob/master/std/experimental/allocator/showcase.d)
template **StackFront**(size\_t stackSize, Allocator = GCAllocator)
Allocator that uses stack allocation for up to `stackSize` bytes and then falls back to `Allocator`. Defined as:
```
alias StackFront(size_t stackSize, Allocator) =
FallbackAllocator!(
InSituRegion!(stackSize, Allocator.alignment,
hasMember!(Allocator, "deallocate")
? Yes.defineDeallocate
: No.defineDeallocate),
Allocator);
```
Choosing `stackSize` is as always a compromise. Too small a size exhausts the stack storage after a few allocations, after which there are no gains over the backup allocator. Too large a size increases the stack consumed by the thread and may end up worse off because it explores cold portions of the stack.
Examples:
```
StackFront!4096 a;
auto b = a.allocate(4000);
writeln(b.length); // 4000
auto c = a.allocate(4000);
writeln(c.length); // 4000
a.deallocate(b);
a.deallocate(c);
```
auto **mmapRegionList**(size\_t bytesPerRegion);
Creates a scalable `AllocatorList` of `Regions`, each having at least `bytesPerRegion` bytes. Allocation is very fast. This allocator does not offer `deallocate` but does free all regions in its destructor. It is recommended for short-lived batch applications that count on never running out of memory.
Examples:
```
auto alloc = mmapRegionList(1024 * 1024);
const b = alloc.allocate(100);
writeln(b.length); // 100
```
d core.stdc.string core.stdc.string
================
D header file for C99.
This module contains bindings to selected types and functions from the standard C header [`<string.h>`](http://pubs.opengroup.org/onlinepubs/009695399/basedefs/string.h.html). Note that this is not automatically generated, and may omit some types/functions from the original C header.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Sean Kelly
Source
[core/stdc/string.d](https://github.com/dlang/druntime/blob/master/src/core/stdc/string.d)
Standards:
ISO/IEC 9899:1999 (E)
pure nothrow @nogc @system inout(void)\* **memchr**(return inout void\* s, int c, size\_t n);
pure nothrow @nogc @system int **memcmp**(scope const void\* s1, scope const void\* s2, size\_t n);
pure nothrow @nogc @system void\* **memcpy**(return void\* s1, scope const void\* s2, size\_t n);
pure nothrow @nogc @system void\* **memmove**(return void\* s1, scope const void\* s2, size\_t n);
pure nothrow @nogc @system void\* **memset**(return void\* s, int c, size\_t n);
pure nothrow @nogc @system char\* **strcat**(return char\* s1, scope const char\* s2);
pure nothrow @nogc @system inout(char)\* **strchr**(return inout(char)\* s, int c);
pure nothrow @nogc @system int **strcmp**(scope const char\* s1, scope const char\* s2);
nothrow @nogc @system int **strcoll**(scope const char\* s1, scope const char\* s2);
pure nothrow @nogc @system char\* **strcpy**(return char\* s1, scope const char\* s2);
pure nothrow @nogc @system size\_t **strcspn**(scope const char\* s1, scope const char\* s2);
nothrow @nogc @system char\* **strdup**(scope const char\* s);
nothrow @nogc @system char\* **strerror**(int errnum);
nothrow @nogc @system const(char)\* **strerror\_r**(int errnum, return char\* buf, size\_t buflen);
pure nothrow @nogc @system size\_t **strlen**(scope const char\* s);
pure nothrow @nogc @system char\* **strncat**(return char\* s1, scope const char\* s2, size\_t n);
pure nothrow @nogc @system int **strncmp**(scope const char\* s1, scope const char\* s2, size\_t n);
pure nothrow @nogc @system char\* **strncpy**(return char\* s1, scope const char\* s2, size\_t n);
pure nothrow @nogc @system inout(char)\* **strpbrk**(return inout(char)\* s1, scope const char\* s2);
pure nothrow @nogc @system inout(char)\* **strrchr**(return inout(char)\* s, int c);
pure nothrow @nogc @system size\_t **strspn**(scope const char\* s1, scope const char\* s2);
pure nothrow @nogc @system inout(char)\* **strstr**(return inout(char)\* s1, scope const char\* s2);
nothrow @nogc @system char\* **strtok**(return char\* s1, scope const char\* s2);
nothrow @nogc @system size\_t **strxfrm**(scope char\* s1, scope const char\* s2, size\_t n);
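A small usage sketch of these bindings (illustrative, not part of the header itself):
```
import core.stdc.string : memcpy, strlen;

char[16] buf = 0;            // zero-filled, so already terminated
const(char)* src = "hello";  // D string literals are zero-terminated
memcpy(buf.ptr, src, strlen(src) + 1); // copy including the terminator
assert(strlen(buf.ptr) == 5);
```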
d core.demangle core.demangle
=============
The demangle module converts mangled D symbols to a representation similar to what would have existed in code.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Sean Kelly
Source
[core/demangle.d](https://github.com/dlang/druntime/blob/master/src/core/demangle.d)
pure nothrow @safe char[] **demangle**(const(char)[] buf, char[] dst = null);
Demangles D mangled names. If it is not a D mangled name, it returns its argument name.
Parameters:
| | |
| --- | --- |
| const(char)[] `buf` | The string to demangle. |
| char[] `dst` | An optional destination buffer. |
Returns:
The demangled name or the original string if the name is not a mangled D name.
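For instance, a sketch (the mangled name is taken from the `mangle` example below):
```
import core.demangle : demangle;

assert(demangle("_D1a1bi") == "int a.b");
assert(demangle("foo") == "foo"); // not a mangled D name: returned unchanged
```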
pure nothrow @safe char[] **demangleType**(const(char)[] buf, char[] dst = null);
Demangles a D mangled type.
Parameters:
| | |
| --- | --- |
| const(char)[] `buf` | The string to demangle. |
| char[] `dst` | An optional destination buffer. |
Returns:
The demangled type name or the original string if the name is not a mangled D type.
pure nothrow @safe char[] **reencodeMangled**(const(char)[] mangled);
Reencodes a mangled symbol name that might include duplicate occurrences of the same identifier, by replacing all but the first occurrence with a back reference.
Parameters:
| | |
| --- | --- |
| const(char)[] `mangled` | The mangled string representing the type |
Returns:
The mangled name with deduplicated identifiers
pure nothrow @safe char[] **mangle**(T)(const(char)[] fqn, char[] dst = null);
Mangles a D symbol.
Parameters:
| | |
| --- | --- |
| T | The type of the symbol. |
| const(char)[] `fqn` | The fully qualified name of the symbol. |
| char[] `dst` | An optional destination buffer. |
Returns:
The mangled name for a symbol of type T and the given fully qualified name.
Examples:
```
assert(mangle!int("a.b") == "_D1a1bi");
assert(mangle!(char[])("test.foo") == "_D4test3fooAa");
assert(mangle!(int function(int))("a.b") == "_D1a1bPFiZi");
```
pure nothrow @safe char[] **mangleFunc**(T : FT\*, FT)(const(char)[] fqn, char[] dst = null)
Constraints: if (is(FT == function));
Mangles a D function.
Parameters:
| | |
| --- | --- |
| T | function pointer type. |
| const(char)[] `fqn` | The fully qualified name of the symbol. |
| char[] `dst` | An optional destination buffer. |
Returns:
The mangled name for a function with function pointer type T and the given fully qualified name.
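For example, a sketch (note the function type `FiZi` rather than the pointer type `PFiZi` produced by `mangle`):
```
import core.demangle : mangleFunc;

assert(mangleFunc!(int function(int))("a.b") == "_D1a1bFiZi");
```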
enum string **cPrefix**;
C name mangling is done by adding a prefix on some platforms.
d std.xml std.xml
=======
Warning: This module is considered out-dated and not up to Phobos' current standards. It will be removed from Phobos in 2.101.0. If you still need it, go to <https://github.com/DigitalMars/undeaD>
Classes and functions for creating and parsing XML
The basic architecture of this module is that there are standalone functions, classes for constructing an XML document from scratch (Tag, Element and Document), and also classes for parsing a pre-existing XML file (ElementParser and DocumentParser). The parsing classes *may* be used to build a Document, but that is not their primary purpose. The handling capabilities of DocumentParser and ElementParser are sufficiently customizable that you can make them do pretty much whatever you want.
Example
This example creates a DOM (Document Object Model) tree from an XML file.
```
import std.xml;
import std.stdio;
import std.string;
import std.file;
// books.xml is used in various samples throughout the Microsoft XML Core
// Services (MSXML) SDK.
//
// See http://msdn2.microsoft.com/en-us/library/ms762271(VS.85).aspx
void main()
{
string s = cast(string) std.file.read("books.xml");
// Check for well-formedness
check(s);
// Make a DOM tree
auto doc = new Document(s);
// Plain-print it
writeln(doc);
}
```
Example
This example does much the same thing, except that the file is deconstructed and reconstructed by hand. This is more work, but the techniques involved offer vastly more power.
```
import std.xml;
import std.stdio;
import std.string;
struct Book
{
string id;
string author;
string title;
string genre;
string price;
string pubDate;
string description;
}
void main()
{
string s = cast(string) std.file.read("books.xml");
// Check for well-formedness
check(s);
// Take it apart
Book[] books;
auto xml = new DocumentParser(s);
xml.onStartTag["book"] = (ElementParser xml)
{
Book book;
book.id = xml.tag.attr["id"];
xml.onEndTag["author"] = (in Element e) { book.author = e.text(); };
xml.onEndTag["title"] = (in Element e) { book.title = e.text(); };
xml.onEndTag["genre"] = (in Element e) { book.genre = e.text(); };
xml.onEndTag["price"] = (in Element e) { book.price = e.text(); };
xml.onEndTag["publish-date"] = (in Element e) { book.pubDate = e.text(); };
xml.onEndTag["description"] = (in Element e) { book.description = e.text(); };
xml.parse();
books ~= book;
};
xml.parse();
    // Put it back together again
auto doc = new Document(new Tag("catalog"));
foreach (book;books)
{
auto element = new Element("book");
element.tag.attr["id"] = book.id;
element ~= new Element("author", book.author);
element ~= new Element("title", book.title);
element ~= new Element("genre", book.genre);
element ~= new Element("price", book.price);
element ~= new Element("publish-date",book.pubDate);
element ~= new Element("description", book.description);
doc ~= element;
}
// Pretty-print it
writefln(join(doc.pretty(3),"\n"));
}
```
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Janice Caron
Source
[std/xml.d](https://github.com/dlang/phobos/blob/master/std/xml.d)
pure nothrow @nogc @safe bool **isChar**(dchar c);
Returns true if the character is a character according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isSpace**(dchar c);
Returns true if the character is whitespace according to the XML standard
Only the following characters are considered whitespace in XML - space, tab, carriage return and linefeed
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isDigit**(dchar c);
Returns true if the character is a digit according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isLetter**(dchar c);
Returns true if the character is a letter according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isIdeographic**(dchar c);
Returns true if the character is an ideographic character according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isBaseChar**(dchar c);
Returns true if the character is a base character according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isCombiningChar**(dchar c);
Returns true if the character is a combining character according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
pure nothrow @nogc @safe bool **isExtender**(dchar c);
Returns true if the character is an extender according to the XML standard
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| dchar `c` | the character to be tested |
S **encode**(S)(S s);
Encodes a string by replacing all characters which need to be escaped with appropriate predefined XML entities.
encode() escapes certain characters (ampersand, quote, apostrophe, less-than and greater-than), and similarly, decode() unescapes them. These functions are provided for convenience only. You do not need to use them when using the std.xml classes, because then all the encoding and decoding will be done for you automatically.
If the string is not modified, the original will be returned.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| S `s` | The string to be encoded |
Returns:
The encoded string
Example
```
writefln(encode("a > b")); // writes "a > b"
```
enum **DecodeMode**: int;
Mode to use for decoding.
| | |
| --- | --- |
| NONE | Do not decode |
| LOOSE | Decode, but ignore errors |
| STRICT | Decode, and throw exception on error |
pure @safe string **decode**(string s, DecodeMode mode = DecodeMode.LOOSE);
Decodes a string by unescaping all predefined XML entities.
encode() escapes certain characters (ampersand, quote, apostrophe, less-than and greater-than), and similarly, decode() unescapes them. These functions are provided for convenience only. You do not need to use them when using the std.xml classes, because then all the encoding and decoding will be done for you automatically.
This function decodes the entities `&amp;`, `&quot;`, `&apos;`, `&lt;` and `&gt;`, as well as decimal and hexadecimal entities such as `&#x20AC;`.
If the string does not contain an ampersand, the original will be returned.
Note that the "mode" parameter can be one of DecodeMode.NONE (do not decode), DecodeMode.LOOSE (decode, but ignore errors), or DecodeMode.STRICT (decode, and throw a DecodeException in the event of an error).
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Parameters:
| | |
| --- | --- |
| string `s` | The string to be decoded |
| DecodeMode `mode` | (optional) Mode to use for decoding. (Defaults to LOOSE). |
Throws:
DecodeException if mode == DecodeMode.STRICT and decode fails
Returns:
The decoded string
Example
```
writefln(decode("a > b")); // writes "a > b"
```
class **Document**: std.xml.Element;
Class representing an XML document.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
string **prolog**;
Contains all text which occurs before the root element. Defaults to <?xml version="1.0"?>
string **epilog**;
Contains all text which occurs after the root element. Defaults to the empty string
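A short sketch of overriding these fields (illustrative):
```
auto doc = new Document(new Tag("catalog"));
doc.prolog = "<?xml version=\"1.0\" encoding=\"utf-8\"?>";
doc.epilog = "<!-- end of catalog -->";
```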
this(string s);
Constructs a Document by parsing XML text.
This function creates a complete DOM (Document Object Model) tree.
The input to this function MUST be valid XML. This is enforced by DocumentParser's in contract.
Parameters:
| | |
| --- | --- |
| string `s` | the complete XML text. |
this(const(Tag) tag);
Constructs a Document from a Tag.
Parameters:
| | |
| --- | --- |
| const(Tag) `tag` | the start tag of the document. |
const bool **opEquals**(scope const Object o);
Compares two Documents for equality
Example
```
Document d1,d2;
if (d1 == d2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two Documents
You should rarely need to call this function. It exists so that Documents can be used as associative array keys.
Example
```
Document d1,d2;
if (d1 < d2) { }
```
const scope @trusted size\_t **toHash**();
Returns the hash of a Document
You should rarely need to call this function. It exists so that Documents can be used as associative array keys.
const scope @safe string **toString**();
Returns the string representation of a Document. (That is, the complete XML of a document).
class **Element**: std.xml.Item;
Class representing an XML element.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Tag **tag**;
The start tag of the element
Item[] **items**;
The element's items
Text[] **texts**;
The element's text items
CData[] **cdatas**;
The element's CData items
Comment[] **comments**;
The element's comments
ProcessingInstruction[] **pis**;
The element's processing instructions
Element[] **elements**;
The element's child elements
pure @safe this(string name, string interior = null);
Constructs an Element given a name and a string to be used as a Text interior.
Parameters:
| | |
| --- | --- |
| string `name` | the name of the element. |
| string `interior` | (optional) the string interior. |
Example
```
auto element = new Element("title","Serenity")
// constructs the element <title>Serenity</title>
```
pure @safe this(const(Tag) tag\_);
Constructs an Element from a Tag.
Parameters:
| | |
| --- | --- |
| const(Tag) `tag_` | the start or empty tag of the element. |
pure @safe void **opOpAssign**(string op)(Text item)
Constraints: if (op == "~");
Append a text item to the interior of this element
Parameters:
| | |
| --- | --- |
| Text `item` | the item you wish to append. |
Example
```
Element element;
element ~= new Text("hello");
```
pure @safe void **opOpAssign**(string op)(CData item)
Constraints: if (op == "~");
Append a CData item to the interior of this element
Parameters:
| | |
| --- | --- |
| CData `item` | the item you wish to append. |
Example
```
Element element;
element ~= new CData("hello");
```
pure @safe void **opOpAssign**(string op)(Comment item)
Constraints: if (op == "~");
Append a comment to the interior of this element
Parameters:
| | |
| --- | --- |
| Comment `item` | the item you wish to append. |
Example
```
Element element;
element ~= new Comment("hello");
```
pure @safe void **opOpAssign**(string op)(ProcessingInstruction item)
Constraints: if (op == "~");
Append a processing instruction to the interior of this element
Parameters:
| | |
| --- | --- |
| ProcessingInstruction `item` | the item you wish to append. |
Example
```
Element element;
element ~= new ProcessingInstruction("hello");
```
pure @safe void **opOpAssign**(string op)(Element item)
Constraints: if (op == "~");
Append a complete element to the interior of this element
Parameters:
| | |
| --- | --- |
| Element `item` | the item you wish to append. |
Example
```
Element element;
Element other = new Element("br");
element ~= other;
// appends element representing <br />
```
const bool **opEquals**(scope const Object o);
Compares two Elements for equality
Example
```
Element e1,e2;
if (e1 == e2) { }
```
const @safe int **opCmp**(scope const Object o);
Compares two Elements
You should rarely need to call this function. It exists so that Elements can be used as associative array keys.
Example
```
Element e1,e2;
if (e1 < e2) { }
```
const scope @safe size\_t **toHash**();
Returns the hash of an Element
You should rarely need to call this function. It exists so that Elements can be used as associative array keys.
const string **text**(DecodeMode mode = DecodeMode.LOOSE);
Returns the decoded interior of an element.
The element is assumed to contain text *only*. So, for example, given XML such as "<title>Good &amp; Bad</title>", this function will return "Good & Bad".
Parameters:
| | |
| --- | --- |
| DecodeMode `mode` | (optional) Mode to use for decoding. (Defaults to LOOSE). |
Throws:
DecodeException if decode fails
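For illustration, a minimal sketch (the interior is stored encoded, and text decodes it on the way out):
```
auto element = new Element("title", "Good & Bad");
// The interior is encoded on construction, i.e. element.toString()
// yields "<title>Good &amp; Bad</title>"
writefln(element.text()); // writes "Good & Bad"
```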
const scope string[] **pretty**(uint indent = 2);
Returns an indented string representation of this item
Parameters:
| | |
| --- | --- |
| uint `indent` | (optional) number of spaces by which to indent this element. Defaults to 2. |
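A minimal sketch (the exact line layout shown is an assumption based on the indentation rule above):
```
auto element = new Element("books");
element ~= new Element("book", "D");
foreach (line; element.pretty(2))
    writefln(line);
// writes:
// <books>
//   <book>D</book>
// </books>
```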
const scope @safe string **toString**();
Returns the string representation of an Element
Example
```
auto element = new Element("br");
writefln(element.toString()); // writes "<br />"
```
enum **TagType**: int;
Tag types.
| | |
| --- | --- |
| START | Used for start tags |
| END | Used for end tags |
| EMPTY | Used for empty tags |
class **Tag**;
Class representing an XML tag.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
The class invariant guarantees
* that **type** is a valid enum TagType value
* that **name** consists of valid characters
* that each attribute name consists of valid characters
TagType **type**;
Type of tag
string **name**;
Tag name
string[string] **attr**;
Associative array of attributes
pure @safe this(string name, TagType type = TagType.START);
Constructs an instance of Tag with a specified name and type
The constructor does not initialize the attributes. To initialize the attributes, you access the **attr** member variable.
Parameters:
| | |
| --- | --- |
| string `name` | the Tag's name |
| TagType `type` | (optional) the Tag's type. If omitted, defaults to TagType.START. |
Example
```
auto tag = new Tag("img",Tag.EMPTY);
tag.attr["src"] = "http://example.com/example.jpg";
```
const bool **opEquals**(scope Object o);
Compares two Tags for equality
You should rarely need to call this function. It exists so that Tags can be used as associative array keys.
Example
```
Tag tag1,tag2;
if (tag1 == tag2) { }
```
const int **opCmp**(Object o);
Compares two Tags
Example
```
Tag tag1,tag2;
if (tag1 < tag2) { }
```
const size\_t **toHash**();
Returns the hash of a Tag
You should rarely need to call this function. It exists so that Tags can be used as associative array keys.
const @safe string **toString**();
Returns the string representation of a Tag
Example
```
auto tag = new Tag("book",TagType.START);
writefln(tag.toString()); // writes "<book>"
```
const pure nothrow @nogc @property @safe bool **isStart**();
Returns true if the Tag is a start tag
Example
```
if (tag.isStart) { }
```
const pure nothrow @nogc @property @safe bool **isEnd**();
Returns true if the Tag is an end tag
Example
```
if (tag.isEnd) { }
```
const pure nothrow @nogc @property @safe bool **isEmpty**();
Returns true if the Tag is an empty tag
Example
```
if (tag.isEmpty) { }
```
class **Comment**: std.xml.Item;
Class representing a comment
pure @safe this(string content);
Construct a comment
Parameters:
| | |
| --- | --- |
| string `content` | the body of the comment |
Throws:
CommentException if the comment body is illegal (contains "--" or exactly equals "-")
Example
```
auto item = new Comment("This is a comment");
// constructs <!--This is a comment-->
```
const bool **opEquals**(scope const Object o);
Compares two comments for equality
Example
```
Comment item1,item2;
if (item1 == item2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two comments
You should rarely need to call this function. It exists so that Comments can be used as associative array keys.
Example
```
Comment item1,item2;
if (item1 < item2) { }
```
const nothrow scope size\_t **toHash**();
Returns the hash of a Comment
You should rarely need to call this function. It exists so that Comments can be used as associative array keys.
const pure nothrow scope @safe string **toString**();
Returns a string representation of this comment
const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns false always
class **CData**: std.xml.Item;
Class representing a Character Data section
pure @safe this(string content);
Construct a character data section
Parameters:
| | |
| --- | --- |
| string `content` | the body of the character data segment |
Throws:
CDataException if the segment body is illegal (contains "]]>")
Example
```
auto item = new CData("<b>hello</b>");
// constructs <![CDATA[<b>hello</b>]]>
```
const bool **opEquals**(scope const Object o);
Compares two CDatas for equality
Example
```
CData item1,item2;
if (item1 == item2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two CDatas
You should rarely need to call this function. It exists so that CDatas can be used as associative array keys.
Example
```
CData item1,item2;
if (item1 < item2) { }
```
const nothrow scope size\_t **toHash**();
Returns the hash of a CData
You should rarely need to call this function. It exists so that CDatas can be used as associative array keys.
const pure nothrow scope @safe string **toString**();
Returns a string representation of this CData section
const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns false always
class **Text**: std.xml.Item;
Class representing a text (aka Parsed Character Data) section
pure @safe this(string content);
Construct a text (aka PCData) section
Parameters:
| | |
| --- | --- |
| string `content` | the text. This function encodes the text before insertion, so it is safe to insert any text |
Example
```
auto text = new Text("a < b");
// constructs a &lt; b
```
const bool **opEquals**(scope const Object o);
Compares two text sections for equality
Example
```
Text item1,item2;
if (item1 == item2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two text sections
You should rarely need to call this function. It exists so that Texts can be used as associative array keys.
Example
```
Text item1,item2;
if (item1 < item2) { }
```
const nothrow scope size\_t **toHash**();
Returns the hash of a text section
You should rarely need to call this function. It exists so that Texts can be used as associative array keys.
const pure nothrow @nogc scope @safe string **toString**();
Returns a string representation of this Text section
const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns true if the content is the empty string
class **XMLInstruction**: std.xml.Item;
Class representing an XML Instruction section
pure @safe this(string content);
Construct an XML Instruction section
Parameters:
| | |
| --- | --- |
| string `content` | the body of the instruction segment |
Throws:
XIException if the segment body is illegal (contains ">")
Example
```
auto item = new XMLInstruction("ATTLIST");
// constructs <!ATTLIST>
```
const bool **opEquals**(scope const Object o);
Compares two XML instructions for equality
Example
```
XMLInstruction item1,item2;
if (item1 == item2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two XML instructions
You should rarely need to call this function. It exists so that XMLInstructions can be used as associative array keys.
Example
```
XMLInstruction item1,item2;
if (item1 < item2) { }
```
const nothrow scope size\_t **toHash**();
Returns the hash of an XMLInstruction
You should rarely need to call this function. It exists so that XMLInstructions can be used as associative array keys.
const pure nothrow scope @safe string **toString**();
Returns a string representation of this XMLInstruction
const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns false always
class **ProcessingInstruction**: std.xml.Item;
Class representing a Processing Instruction section
pure @safe this(string content);
Construct a Processing Instruction section
Parameters:
| | |
| --- | --- |
| string `content` | the body of the instruction segment |
Throws:
PIException if the segment body is illegal (contains "?>")
Example
```
auto item = new ProcessingInstruction("php");
// constructs <?php?>
```
const bool **opEquals**(scope const Object o);
Compares two processing instructions for equality
Example
```
ProcessingInstruction item1,item2;
if (item1 == item2) { }
```
const scope int **opCmp**(scope const Object o);
Compares two processing instructions
You should rarely need to call this function. It exists so that ProcessingInstructions can be used as associative array keys.
Example
```
ProcessingInstruction item1,item2;
if (item1 < item2) { }
```
const nothrow scope size\_t **toHash**();
Returns the hash of a ProcessingInstruction
You should rarely need to call this function. It exists so that ProcessingInstructions can be used as associative array keys.
const pure nothrow scope @safe string **toString**();
Returns a string representation of this ProcessingInstruction
const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns false always
abstract class **Item**;
Abstract base class for XML items
abstract const @safe bool **opEquals**(scope const Object o);
Compares with another Item of same type for equality
abstract const @safe int **opCmp**(scope const Object o);
Compares with another Item of same type
abstract const scope @safe size\_t **toHash**();
Returns the hash of this item
abstract const scope @safe string **toString**();
Returns a string representation of this item
const scope @safe string[] **pretty**(uint indent);
Returns an indented string representation of this item
Parameters:
| | |
| --- | --- |
| uint `indent` | number of spaces by which to indent child elements |
abstract const pure nothrow @nogc @property scope @safe bool **isEmptyXML**();
Returns true if the item represents empty XML text
class **DocumentParser**: std.xml.ElementParser;
Class for parsing an XML Document.
This is a subclass of ElementParser. Most of the useful functions are documented there.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Bugs:
Currently only supports UTF documents. If there is an encoding attribute in the prolog, it is ignored.
this(string xmlText\_);
Constructs a DocumentParser.
The input to this function MUST be valid XML. This is enforced by the function's in contract.
Parameters:
| | |
| --- | --- |
| string `xmlText_` | the entire XML document as text |
class **ElementParser**;
Class for parsing an XML element.
Standards:
[XML 1.0](http://www.w3.org/TR/1998/REC-xml-19980210)
Note that you cannot construct instances of this class directly. You can construct a DocumentParser (which is a subclass of ElementParser), but otherwise, instances of ElementParser will be created for you by the library and passed your way via onStartTag handlers.
const pure nothrow @nogc @property @safe const(Tag) **tag**();
The Tag at the start of the element being parsed. You can read this to determine the tag's name and attributes.
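For example, a sketch of a start-tag handler reading this property (the element name "book" and the attribute "isbn" are arbitrary; onStartTag is described below):
```
onStartTag["book"] = (ElementParser xml)
{
    // Read the start tag's name and attributes
    writefln(xml.tag.name); // writes "book"
    if ("isbn" in xml.tag.attr)
        writefln(xml.tag.attr["isbn"]);
    xml.parse();
};
```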
ParserHandler[string] **onStartTag**;
Register a handler which will be called whenever a start tag is encountered which matches the specified name. You can also pass null as the name, in which case the handler will be called for any unmatched start tag.
Example
```
// Call this function whenever a <podcast> start tag is encountered
onStartTag["podcast"] = (ElementParser xml)
{
// Your code here
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
// call myEpisodeStartHandler (defined elsewhere) whenever an <episode>
// start tag is encountered
onStartTag["episode"] = &myEpisodeStartHandler;
// call delegate dg for all other start tags
onStartTag[null] = dg;
```
This library will supply your function with a new instance of ElementParser, which may be used to parse inside the element whose start tag was just found, or to identify the tag attributes of the element, etc. Note that your function will be called for both start tags and empty tags. That is, we make no distinction between <br></br> and <br/>.
ElementHandler[string] **onEndTag**;
Register a handler which will be called whenever an end tag is encountered which matches the specified name. You can also pass null as the name, in which case the handler will be called for any unmatched end tag.
Example
```
// Call this function whenever a </podcast> end tag is encountered
onEndTag["podcast"] = (in Element e)
{
// Your code here
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
// call myEpisodeEndHandler (defined elsewhere) whenever an </episode>
// end tag is encountered
onEndTag["episode"] = &myEpisodeEndHandler;
// call delegate dg for all other end tags
onEndTag[null] = dg;
```
Note that your function will be called for both end tags and empty tags. That is, we make no distinction between <br></br> and <br/>.
pure nothrow @nogc @property @safe void **onText**(Handler handler);
Register a handler which will be called whenever text is encountered.
Example
```
// Call this function whenever text is encountered
onText = (string s)
{
// Your code here
// The passed parameter s will have been decoded by the time you see
// it, and so may contain any character.
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
pure nothrow @nogc @safe void **onTextRaw**(Handler handler);
Register an alternative handler which will be called whenever text is encountered. This differs from onText in that onText will decode the text, whereas onTextRaw will not. This allows you to make design choices, since onText will be more accurate, but slower, while onTextRaw will be faster, but less accurate. Of course, you can still call decode() within your handler, if you want, but you'd probably want to use onTextRaw only in circumstances where you know that decoding is unnecessary.
Example
```
// Call this function whenever text is encountered
onTextRaw = (string s)
{
// Your code here
// The passed parameter s will NOT have been decoded.
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
pure nothrow @nogc @property @safe void **onCData**(Handler handler);
Register a handler which will be called whenever a character data segment is encountered.
Example
```
// Call this function whenever a CData section is encountered
onCData = (string s)
{
// Your code here
// The passed parameter s does not include the opening <![CDATA[
// nor closing ]]>
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
pure nothrow @nogc @property @safe void **onComment**(Handler handler);
Register a handler which will be called whenever a comment is encountered.
Example
```
// Call this function whenever a comment is encountered
onComment = (string s)
{
// Your code here
// The passed parameter s does not include the opening <!-- nor
// closing -->
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
pure nothrow @nogc @property @safe void **onPI**(Handler handler);
Register a handler which will be called whenever a processing instruction is encountered.
Example
```
// Call this function whenever a processing instruction is encountered
onPI = (string s)
{
// Your code here
// The passed parameter s does not include the opening <? nor
// closing ?>
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
pure nothrow @nogc @property @safe void **onXI**(Handler handler);
Register a handler which will be called whenever an XML instruction is encountered.
Example
```
// Call this function whenever an XML instruction is encountered
// (Note: XML instructions may only occur preceding the root tag of a
// document).
onXI = (string s)
{
// Your code here
// The passed parameter s does not include the opening <! nor
// closing >
//
// This is a closure, so code here may reference
// variables which are outside of this scope
};
```
void **parse**();
Parse an XML element.
Parsing will continue until the end of the current element. Any items encountered for which a handler has been registered will invoke that handler.
Throws:
various kinds of XMLException
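For illustration, a minimal end-to-end sketch (the variable names are arbitrary):
```
// Collect the text of every <title> element in a document
auto xml = new DocumentParser("<books><title>D</title></books>");
string[] titles;
xml.onEndTag["title"] = (in Element e) { titles ~= e.text(); };
xml.parse();
writeln(titles); // ["D"]
```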
const pure nothrow @nogc @safe string **toString**();
Returns that part of the element which has already been parsed
pure @safe void **check**(string s);
Check an entire XML document for well-formedness
Parameters:
| | |
| --- | --- |
| string `s` | the document to be checked, passed as a string |
Throws:
CheckException if the document is not well formed. CheckException's toString() method will yield the complete hierarchy of parse failures (the XML equivalent of a stack trace), giving the line and column number of every failure at every level.
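For illustration, a minimal sketch of checking a document and reporting failures:
```
try
{
    check("<root><child></root>"); // mismatched tags
}
catch (CheckException e)
{
    // writes the full hierarchy of parse failures,
    // with line and column numbers
    writefln(e.toString());
}
```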
class **XMLException**: object.Exception;
The base class for exceptions thrown by this module
class **CommentException**: std.xml.XMLException;
Thrown during Comment constructor
class **CDataException**: std.xml.XMLException;
Thrown during CData constructor
class **XIException**: std.xml.XMLException;
Thrown during XMLInstruction constructor
class **PIException**: std.xml.XMLException;
Thrown during ProcessingInstruction constructor
class **TextException**: std.xml.XMLException;
Thrown during Text constructor
class **DecodeException**: std.xml.XMLException;
Thrown during decode()
class **InvalidTypeException**: std.xml.XMLException;
Thrown if comparing with wrong type
class **TagException**: std.xml.XMLException;
Thrown when parsing for Tags
class **CheckException**: std.xml.XMLException;
Thrown during check()
CheckException **err**;
Parent in hierarchy
string **msg**;
Name of production rule which failed to parse, or specific error message
size\_t **line**;
Line number at which parse failure occurred
size\_t **column**;
Column number at which parse failure occurred
d core.sync.condition core.sync.condition
===================
The condition module provides a primitive for synchronized condition checking.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Sean Kelly
Source
[core/sync/condition.d](https://github.com/dlang/druntime/blob/master/src/core/sync/condition.d)
class **Condition**;
This class represents a condition variable as conceived by C.A.R. Hoare. As per Mesa type monitors however, "signal" has been replaced with "notify" to indicate that control is not transferred to the waiter when a notification is sent.
Examples:
```
import core.thread;
import core.sync.mutex;
import core.sync.semaphore;
void testNotify()
{
auto mutex = new Mutex;
auto condReady = new Condition( mutex );
auto semDone = new Semaphore;
auto synLoop = new Object;
int numWaiters = 10;
int numTries = 10;
int numReady = 0;
int numTotal = 0;
int numDone = 0;
int numPost = 0;
void waiter()
{
for ( int i = 0; i < numTries; ++i )
{
synchronized( mutex )
{
while ( numReady < 1 )
{
condReady.wait();
}
--numReady;
++numTotal;
}
synchronized( synLoop )
{
++numDone;
}
semDone.wait();
}
}
auto group = new ThreadGroup;
for ( int i = 0; i < numWaiters; ++i )
group.create( &waiter );
for ( int i = 0; i < numTries; ++i )
{
for ( int j = 0; j < numWaiters; ++j )
{
synchronized( mutex )
{
++numReady;
condReady.notify();
}
}
while ( true )
{
synchronized( synLoop )
{
if ( numDone >= numWaiters )
break;
}
Thread.yield();
}
for ( int j = 0; j < numWaiters; ++j )
{
semDone.notify();
}
}
group.joinAll();
assert( numTotal == numWaiters * numTries );
}
void testNotifyAll()
{
auto mutex = new Mutex;
auto condReady = new Condition( mutex );
int numWaiters = 10;
int numReady = 0;
int numDone = 0;
bool alert = false;
void waiter()
{
synchronized( mutex )
{
++numReady;
while ( !alert )
condReady.wait();
++numDone;
}
}
auto group = new ThreadGroup;
for ( int i = 0; i < numWaiters; ++i )
group.create( &waiter );
while ( true )
{
synchronized( mutex )
{
if ( numReady >= numWaiters )
{
alert = true;
condReady.notifyAll();
break;
}
}
Thread.yield();
}
group.joinAll();
assert( numReady == numWaiters && numDone == numWaiters );
}
void testWaitTimeout()
{
auto mutex = new Mutex;
auto condReady = new Condition( mutex );
bool waiting = false;
bool alertedOne = true;
bool alertedTwo = true;
void waiter()
{
synchronized( mutex )
{
waiting = true;
// we never want to miss the notification (30s)
alertedOne = condReady.wait( dur!"seconds"(30) );
// but we don't want to wait long for the timeout (10ms)
alertedTwo = condReady.wait( dur!"msecs"(10) );
}
}
auto thread = new Thread( &waiter );
thread.start();
while ( true )
{
synchronized( mutex )
{
if ( waiting )
{
condReady.notify();
break;
}
}
Thread.yield();
}
thread.join();
assert( waiting );
assert( alertedOne );
assert( !alertedTwo );
}
testNotify();
testNotifyAll();
testWaitTimeout();
```
nothrow @safe this(Mutex m);
shared nothrow @safe this(shared Mutex m);
Initializes a condition object which is associated with the supplied mutex object.
Parameters:
| | |
| --- | --- |
| Mutex `m` | The mutex with which this condition will be associated. |
Throws:
SyncError on error.
@property Mutex **mutex**();
shared @property shared(Mutex) **mutex**();
Gets the mutex associated with this condition.
Returns:
The mutex associated with this condition.
void **wait**();
shared void **wait**();
void **wait**(this Q)(bool \_unused\_)
Constraints: if (is(Q == Condition) || is(Q == shared(Condition)));
Wait until notified.
Throws:
SyncError on error.
bool **wait**(Duration val);
shared bool **wait**(Duration val);
bool **wait**(this Q)(Duration val, bool \_unused\_)
Constraints: if (is(Q == Condition) || is(Q == shared(Condition)));
Suspends the calling thread until a notification occurs or until the supplied time period has elapsed.
Parameters:
| | |
| --- | --- |
| Duration `val` | The time to wait. |
In
val must be non-negative.
Throws:
SyncError on error.
Returns:
true if notified before the timeout and false if not.
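For illustration, a minimal sketch of a timed wait (the associated mutex must be held while waiting, as in the larger example above):
```
import core.sync.condition;
import core.sync.mutex;
import core.time : dur;

auto mutex = new Mutex;
auto cond  = new Condition( mutex );
synchronized ( mutex )
{
    // false if 10 ms elapse without a notification
    bool notified = cond.wait( dur!"msecs"(10) );
}
```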
void **notify**();
shared void **notify**();
void **notify**(this Q)(bool \_unused\_)
Constraints: if (is(Q == Condition) || is(Q == shared(Condition)));
Notifies one waiter.
Throws:
SyncError on error.
void **notifyAll**();
shared void **notifyAll**();
void **notifyAll**(this Q)(bool \_unused\_)
Constraints: if (is(Q == Condition) || is(Q == shared(Condition)));
Notifies all waiters.
Throws:
SyncError on error.
d std.uri std.uri
=======
Encode and decode Uniform Resource Identifiers (URIs). URIs are used in internet transfer protocols. Valid URI characters consist of letters, digits, and the characters **;/?:@&=+$,-.!~\*'()**. Reserved URI characters are **;/?:@&=+$,**. Escape sequences consist of **%** followed by two hex digits.
See Also:
[RFC 3986](https://www.ietf.org/rfc/rfc3986.txt)
[Wikipedia](http://en.wikipedia.org/wiki/Uniform_resource_identifier)
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com)
Source
[std/uri.d](https://github.com/dlang/phobos/blob/master/std/uri.d)
class **URIException**: object.Exception;
This Exception is thrown if something goes wrong when encoding or decoding a URI.
Examples:
```
import std.exception : assertThrown;
assertThrown!URIException("%ab".decode);
```
string **decode**(Char)(scope const(Char)[] encodedURI)
Constraints: if (isSomeChar!Char);
Decodes the URI string encodedURI into a UTF-8 string and returns it. Escape sequences that resolve to reserved URI characters are not replaced. Escape sequences that resolve to the '#' character are not replaced.
Examples:
```
writeln("foo%20bar".decode); // "foo bar"
writeln("%3C%3E.@.%E2%84%A2".decode); // "<>.@.™"
writeln("foo&/".decode); // "foo&/"
writeln("!@#$&*(".decode); // "!@#$&*("
```
string **decodeComponent**(Char)(scope const(Char)[] encodedURIComponent)
Constraints: if (isSomeChar!Char);
Decodes the URI string encodedURIComponent into a UTF-8 string and returns it. All escape sequences are decoded.
Examples:
```
writeln("foo%2F%26".decodeComponent); // "foo/&"
writeln("dl%C3%A4ng%20r%C3%B6cks".decodeComponent); // "dläng röcks"
writeln("!%40%23%24%25%5E%26*(".decodeComponent); // "!@#$%^&*("
```
string **encode**(Char)(scope const(Char)[] uri)
Constraints: if (isSomeChar!Char);
Encodes the UTF-8 string uri into a URI and returns that URI. Any character not a valid URI character is escaped. The '#' character is not escaped.
Examples:
```
writeln("foo bar".encode); // "foo%20bar"
writeln("<>.@.™".encode); // "%3C%3E.@.%E2%84%A2"
writeln("foo/#?a=1&b=2".encode); // "foo/#?a=1&b=2"
writeln("dlang+rocks!".encode); // "dlang+rocks!"
writeln("!@#$%^&*(".encode); // "!@#$%25%5E&*("
```
string **encodeComponent**(Char)(scope const(Char)[] uriComponent)
Constraints: if (isSomeChar!Char);
Encodes the UTF-8 string uriComponent into a URI and returns that URI. Any character not a letter, digit, or one of -.!~\*'() is escaped.
Examples:
```
writeln("!@#$%^&*(".encodeComponent); // "!%40%23%24%25%5E%26*("
writeln("<>.@.™".encodeComponent); // "%3C%3E.%40.%E2%84%A2"
writeln("foo/&".encodeComponent); // "foo%2F%26"
writeln("dläng röcks".encodeComponent); // "dl%C3%A4ng%20r%C3%B6cks"
writeln("dlang+rocks!".encodeComponent); // "dlang%2Brocks!"
```
ptrdiff\_t **uriLength**(Char)(scope const(Char)[] s)
Constraints: if (isSomeChar!Char);
Does string s[] start with a URL?
Returns:
| | |
| --- | --- |
| -1 | it does not |
| len | it does, and s[0 .. len] is the slice of s[] that is that URL |
Examples:
```
string s1 = "http://www.digitalmars.com/~fred/fredsRX.html#foo end!";
writeln(uriLength(s1)); // 49
string s2 = "no uri here";
writeln(uriLength(s2)); // -1
assert(uriLength("issue 14924") < 0);
```
ptrdiff\_t **emailLength**(Char)(scope const(Char)[] s)
Constraints: if (isSomeChar!Char);
Does string s[] start with an email address?
Returns:
| | |
| --- | --- |
| -1 | it does not |
| len | it does, and s[0 .. len] is the slice of s[] that is that email address |
References
RFC2822
Examples:
```
string s1 = "[email protected] with garbage added";
writeln(emailLength(s1)); // 32
string s2 = "no email address here";
writeln(emailLength(s2)); // -1
assert(emailLength("issue 14924") < 0);
```
d std.digest.md std.digest.md
=============
Computes MD5 hashes of arbitrary data. MD5 hashes are 16-byte quantities that are like a checksum or CRC, but are more robust.
| Category | Functions |
| --- | --- |
| Template API | [*MD5*](#MD5) |
| OOP API | [*MD5Digest*](#MD5Digest) |
| Helpers | [*md5Of*](#md5Of) |
This module conforms to the APIs defined in `std.digest`. To understand the differences between the template and the OOP API, see [`std.digest`](std_digest).
This module publicly imports [`std.digest`](std_digest) and can be used as a stand-alone module.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
CTFE
Digests do not work in CTFE
Authors:
Piotr Szturmaj, Kai Nacke, Johannes Pfau
The routines and algorithms are derived from the *RSA Data Security, Inc. MD5 Message-Digest Algorithm*.
References
[Wikipedia on MD5](http://en.wikipedia.org/wiki/Md5)
Source
[std/digest/md.d](https://github.com/dlang/phobos/blob/master/std/digest/md.d)
Examples:
```
//Template API
import std.digest.md;
//Feeding data
ubyte[1024] data;
MD5 md5;
md5.start();
md5.put(data[]);
md5.start(); //Start again
md5.put(data[]);
auto hash = md5.finish();
```
Examples:
```
//OOP API
import std.digest.md;
auto md5 = new MD5Digest();
ubyte[] hash = md5.digest("abc");
writeln(toHexString(hash)); // "900150983CD24FB0D6963F7D28E17F72"
//Feeding data
ubyte[1024] data;
md5.put(data[]);
md5.reset(); //Start again
md5.put(data[]);
hash = md5.finish();
```
struct **MD5**;
Template API MD5 implementation. See `std.digest` for differences between template and OOP API.
Examples:
```
//Simple example, hashing a string using md5Of helper function
ubyte[16] hash = md5Of("abc");
//Let's get a hash string
writeln(toHexString(hash)); // "900150983CD24FB0D6963F7D28E17F72"
```
Examples:
```
//Using the basic API
MD5 hash;
hash.start();
ubyte[1024] data;
//Initialize data here...
hash.put(data);
ubyte[16] result = hash.finish();
```
Examples:
```
//Let's use the template features:
void doSomething(T)(ref T hash)
if (isDigest!T)
{
hash.put(cast(ubyte) 0);
}
MD5 md5;
md5.start();
doSomething(md5);
writeln(toHexString(md5.finish())); // "93B885ADFE0DA089CDF634904FD59F71"
```
pure nothrow @nogc @trusted void **put**(scope const(ubyte)[] data...);
Use this to feed the digest with data. Also implements the [`std.range.primitives.isOutputRange`](std_range_primitives#isOutputRange) interface for `ubyte` and `const(ubyte)[]`.
Example
```
MD5 dig;
dig.put(cast(ubyte) 0); //single ubyte
dig.put(cast(ubyte) 0, cast(ubyte) 0); //variadic
ubyte[10] buf;
dig.put(buf); //buffer
```
pure nothrow @nogc @safe void **start**();
Used to (re)initialize the MD5 digest.
Note
For this MD5 Digest implementation calling start after default construction is not necessary. Calling start is only necessary to reset the Digest.
Generic code which deals with different Digest types should always call start though.
Example
```
MD5 digest;
//digest.start(); //Not necessary
digest.put(0);
```
pure nothrow @nogc @trusted ubyte[16] **finish**();
Returns the finished MD5 hash. This also calls [`start`](#start) to reset the internal state.
Examples:
```
//Simple example
MD5 hash;
hash.start();
hash.put(cast(ubyte) 0);
ubyte[16] result = hash.finish();
```
auto **md5Of**(T...)(T data);
This is a convenience alias for [`std.digest.digest`](std_digest#digest) using the MD5 implementation.
Examples:
```
ubyte[16] hash = md5Of("abc");
writeln(hash); // digest!MD5("abc")
```
alias **MD5Digest** = std.digest.WrapperDigest!(MD5).WrapperDigest;
OOP API MD5 implementation. See `std.digest` for differences between template and OOP API.
This is an alias for `[std.digest.WrapperDigest](std_digest#WrapperDigest)!MD5`, see there for more information.
Examples:
```
//Simple example, hashing a string using Digest.digest helper function
auto md5 = new MD5Digest();
ubyte[] hash = md5.digest("abc");
//Let's get a hash string
writeln(toHexString(hash)); // "900150983CD24FB0D6963F7D28E17F72"
```
Examples:
```
//Let's use the OOP features:
void test(Digest dig)
{
dig.put(cast(ubyte) 0);
}
auto md5 = new MD5Digest();
test(md5);
//Let's use a custom buffer:
ubyte[16] buf;
ubyte[] result = md5.finish(buf[]);
writeln(toHexString(result)); // "93B885ADFE0DA089CDF634904FD59F71"
```
d std.experimental.allocator std.experimental.allocator
==========================
High-level interface for allocators. Implements bundled allocation/creation and destruction/deallocation of data including `struct`s and `class`es, and also array primitives related to allocation. This module is the entry point for both making use of allocators and for their documentation.
| Category | Functions |
| --- | --- |
| Make | [`make`](#make) [`makeArray`](#makeArray) [`makeMultidimensionalArray`](#makeMultidimensionalArray) |
| Dispose | [`dispose`](#dispose) [`disposeMultidimensionalArray`](#disposeMultidimensionalArray) |
| Modify | [`expandArray`](#expandArray) [`shrinkArray`](#shrinkArray) |
| Global | [`processAllocator`](#processAllocator) [`theAllocator`](#theAllocator) |
| Class interface | [`allocatorObject`](#allocatorObject) [`CAllocatorImpl`](#CAllocatorImpl) [`IAllocator`](#IAllocator) |
Synopsis
```
// Allocate an int, initialize it with 42
int* p = theAllocator.make!int(42);
assert(*p == 42);
// Destroy and deallocate it
theAllocator.dispose(p);
// Allocate using the global process allocator
p = processAllocator.make!int(100);
assert(*p == 100);
// Destroy and deallocate
processAllocator.dispose(p);
// Create an array of 50 doubles initialized to -1.0
double[] arr = theAllocator.makeArray!double(50, -1.0);
// Append two zeros to it
theAllocator.expandArray(arr, 2, 0.0);
// On second thought, take that back
theAllocator.shrinkArray(arr, 2);
// Destroy and deallocate
theAllocator.dispose(arr);
```
Layered Structure
-----------------
D's allocators have a layered structure in both implementation and documentation: 1. A high-level, dynamically-typed layer (described further down in this module). It consists of an interface called [`IAllocator`](#IAllocator), which concrete allocators need to implement. The interface primitives themselves are oblivious to the type of the objects being allocated; they only deal in `void[]`, by necessity of the interface being dynamic (as opposed to type-parameterized). Each thread has a current allocator it uses by default, which is a thread-local variable [`theAllocator`](#theAllocator) of type [`IAllocator`](#IAllocator). The process has a global allocator called [`processAllocator`](#processAllocator), also of type [`IAllocator`](#IAllocator). When a new thread is created, [`processAllocator`](#processAllocator) is copied into [`theAllocator`](#theAllocator). An application can change the objects to which these references point. By default, at application startup, [`processAllocator`](#processAllocator) refers to an object that uses D's garbage collected heap. This layer also include high-level functions such as [`make`](#make) and [`dispose`](#dispose) that comfortably allocate/create and respectively destroy/deallocate objects. This layer is all needed for most casual uses of allocation primitives.
2. A mid-level, statically-typed layer for assembling several allocators into one. It uses properties of the type of the objects being created to route allocation requests to possibly specialized allocators. This layer is relatively thin and implemented and documented in the [`std.experimental.allocator.typed`](std_experimental_allocator_typed) module. It allows an interested user to e.g. use different allocators for arrays versus fixed-sized objects, to the end of better overall performance.
3. A low-level collection of highly generic *heap building blocks* — Lego-like pieces that can be used to assemble application-specific allocators. The real allocation smarts occur at this level. This layer is of interest to advanced applications that want to configure their own allocators. A good illustration of typical uses of these building blocks is module [`std.experimental.allocator.showcase`](std_experimental_allocator_showcase), which defines a collection of frequently-used preassembled allocator objects. The implementation and documentation entry point is [`std.experimental.allocator.building_blocks`](std_experimental_allocator_building_blocks). By design, the primitives of the static interface have the same signatures as the [`IAllocator`](#IAllocator) primitives but are for the most part optional and driven by static introspection. The parameterized class [`CAllocatorImpl`](#CAllocatorImpl) offers an immediate and useful means to package a static low-level allocator into an implementation of [`IAllocator`](#IAllocator).
4. Core allocator objects that interface with D's garbage collected heap ([`std.experimental.allocator.gc_allocator`](std_experimental_allocator_gc_allocator)), the C `malloc` family ([`std.experimental.allocator.mallocator`](std_experimental_allocator_mallocator)), and the OS ([`std.experimental.allocator.mmap_allocator`](std_experimental_allocator_mmap_allocator)). Most custom allocators would ultimately obtain memory from one of these core allocators.
Idiomatic Use of `std.experimental.allocator`
---------------------------------------------
As of this time, `std.experimental.allocator` is not integrated with D's built-in operators that allocate memory, such as `new`, array literals, or array concatenation operators. That means `std.experimental.allocator` is opt-in — applications need to make explicit use of it. For casual creation and disposal of dynamically-allocated objects, use [`make`](#make), [`dispose`](#dispose), and the array-specific functions [`makeArray`](#makeArray), [`expandArray`](#expandArray), and [`shrinkArray`](#shrinkArray). By default these use D's garbage collected heap, but they open the application to better configuration options. These primitives work not only with `theAllocator` but also with any allocator obtained by combining heap building blocks. For example:
```
void fun(size_t n)
{
// Use the current allocator
int[] a1 = theAllocator.makeArray!int(n);
scope(exit) theAllocator.dispose(a1);
...
}
```
To experiment with alternative allocators, set [`theAllocator`](#theAllocator) for the current thread. For example, consider an application that allocates many 8-byte objects. These are not well supported by the default allocator, so a [free list allocator](std_experimental_allocator_building_blocks_free_list) would be recommended. To install one in `main`, the application would use:
```
void main()
{
import std.experimental.allocator.building_blocks.free_list
: FreeList;
theAllocator = allocatorObject(FreeList!8());
...
}
```
### Saving the `IAllocator` Reference For Later Use
As with any global resource, setting `theAllocator` and `processAllocator` should not be done often and casually. In particular, allocating memory with one allocator and deallocating with another causes undefined behavior. Typically, these variables are set during the application initialization phase and last through the application's lifetime. To avoid such mismatches, long-lived objects that need to perform allocations, reallocations, and deallocations relatively often may want to store a reference to the allocator object they use throughout their lifetime. Then, instead of using `theAllocator` for internal allocation-related tasks, they'd use the internally held reference. For example, consider a user-defined hash table:
```
struct HashTable
{
private IAllocator allocator;
this(size_t buckets, IAllocator allocator = theAllocator) {
this.allocator = allocator;
...
}
// Getter and setter
IAllocator allocator() { return allocator; }
void allocator(IAllocator a) { assert(empty); allocator = a; }
}
```
Following initialization, the `HashTable` object would consistently use its `allocator` object for acquiring memory. Furthermore, setting `HashTable.allocator` to point to a different allocator should be legal but only if the object is empty; otherwise, the object wouldn't be able to deallocate its existing state.
### Using Allocators without `IAllocator`
Allocators assembled from the heap building blocks don't need to go through `IAllocator` to be usable. They have the same primitives as `IAllocator` and they work with [`make`](#make), [`makeArray`](#makeArray), [`dispose`](#dispose) etc. So it suffices to create allocator objects wherever they fit and use them appropriately:
```
void fun(size_t n)
{
// Use a stack-installed allocator for up to 64KB
StackFront!65536 myAllocator;
int[] a2 = myAllocator.makeArray!int(n);
scope(exit) myAllocator.dispose(a2);
...
}
```
In this case, `myAllocator` does not obey the `IAllocator` interface, but implements its primitives so it can work with `makeArray` by means of duck typing. One important thing to note about this setup is that statically-typed assembled allocators are almost always faster than allocators that go through `IAllocator`. An important rule of thumb is: "assemble allocator first, adapt to `IAllocator` after". A good allocator implements intricate logic by means of template assembly, and gets wrapped with `IAllocator` (usually by means of [`allocatorObject`](#allocatorObject)) only once, at client level.
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Andrei Alexandrescu](http://erdani.com)
Source
[std/experimental/allocator](https://github.com/dlang/phobos/blob/master/std/experimental/allocator)
interface **IAllocator**;
Dynamic allocator interface. Code that defines allocators ultimately implements this interface. This should be used wherever a uniform type is required for encapsulating various allocator implementations.
Composition of allocators is not recommended at this level due to inflexibility of dynamic interfaces and inefficiencies caused by cascaded multiple calls. Instead, compose allocators using the static interface defined in [`std.experimental.allocator.building_blocks`](std_experimental_allocator_building_blocks), then adapt the composed allocator to `IAllocator` (possibly by using [`CAllocatorImpl`](#CAllocatorImpl) below).
Methods returning `Ternary` return `Ternary.yes` upon success, `Ternary.no` upon failure, and `Ternary.unknown` if the primitive is not implemented by the allocator instance.
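For illustration, a minimal sketch of the adaptation path just described, wrapping a statically-typed allocator with [`allocatorObject`](#allocatorObject):
```
import std.experimental.allocator : allocatorObject;
import std.experimental.allocator.mallocator : Mallocator;

// Wrap the static Mallocator behind the dynamic interface
auto a = allocatorObject(Mallocator.instance);
void[] b = a.allocate(100);
assert(b.length == 100);
assert(a.deallocate(b));
```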
abstract nothrow @property uint **alignment**();
Returns the alignment offered.
abstract nothrow size\_t **goodAllocSize**(size\_t s);
Returns the good allocation size that guarantees zero internal fragmentation.
abstract nothrow void[] **allocate**(size\_t, TypeInfo ti = null);
Allocates `n` bytes of memory.
abstract nothrow void[] **alignedAllocate**(size\_t n, uint a);
Allocates `n` bytes of memory with specified alignment `a`. Implementations that do not support this primitive should always return `null`.
abstract nothrow void[] **allocateAll**();
Allocates and returns all memory available to this allocator. Implementations that do not support this primitive should always return `null`.
abstract nothrow bool **expand**(ref void[], size\_t);
Expands a memory block in place and returns `true` if successful. Implementations that don't support this primitive should always return `false`.
abstract nothrow bool **reallocate**(ref void[], size\_t);
Reallocates a memory block.
abstract nothrow bool **alignedReallocate**(ref void[] b, size\_t size, uint alignment);
Reallocates a memory block with specified alignment.
abstract nothrow Ternary **owns**(void[] b);
Returns `Ternary.yes` if the allocator owns `b`, `Ternary.no` if the allocator doesn't own `b`, and `Ternary.unknown` if ownership cannot be determined. Implementations that don't support this primitive should always return `Ternary.unknown`.
abstract nothrow Ternary **resolveInternalPointer**(const void\* p, ref void[] result);
Resolves an internal pointer to the full block allocated. Implementations that don't support this primitive should always return `Ternary.unknown`.
abstract nothrow bool **deallocate**(void[] b);
Deallocates a memory block. Implementations that don't support this primitive should always return `false`. A simple way to check that an allocator supports deallocation is to call `deallocate(null)`.
abstract nothrow bool **deallocateAll**();
Deallocates all memory. Implementations that don't support this primitive should always return `false`.
abstract nothrow Ternary **empty**();
Returns `Ternary.yes` if no memory is currently allocated from this allocator, `Ternary.no` if some allocations are currently active, or `Ternary.unknown` if not supported.
abstract pure nothrow @nogc @safe void **incRef**();
Increases the reference count of the concrete class that implements this interface.
For stateless allocators, this does nothing.
abstract pure nothrow @nogc @safe bool **decRef**();
Decreases the reference count of the concrete class that implements this interface. When the reference count is `0`, the object self-destructs.
Returns:
`true` if the reference count is greater than `0` and `false` when it hits `0`. For stateless allocators, it always returns `true`.
struct **RCIAllocator**;
A reference counted struct that wraps the dynamic allocator interface. This should be used wherever a uniform type is required for encapsulating various allocator implementations.
Code that defines allocators ultimately implements the [`IAllocator`](#IAllocator) interface, possibly by using [`CAllocatorImpl`](#CAllocatorImpl) below, and then builds an `RCIAllocator` out of it.
Composition of allocators is not recommended at this level due to inflexibility of dynamic interfaces and inefficiencies caused by cascaded multiple calls. Instead, compose allocators using the static interface defined in [`std.experimental.allocator.building_blocks`](std_experimental_allocator_building_blocks), then adapt the composed allocator to `RCIAllocator` (possibly by using [`allocatorObject`](#allocatorObject) below).
interface **ISharedAllocator**;
Dynamic shared allocator interface. Code that defines allocators shareable across threads ultimately implements this interface. This should be used wherever a uniform type is required for encapsulating various allocator implementations.
Composition of allocators is not recommended at this level due to inflexibility of dynamic interfaces and inefficiencies caused by cascaded multiple calls. Instead, compose allocators using the static interface defined in [`std.experimental.allocator.building_blocks`](std_experimental_allocator_building_blocks), then adapt the composed allocator to `ISharedAllocator` (possibly by using [`CSharedAllocatorImpl`](#CSharedAllocatorImpl) below).
Methods returning `Ternary` return `Ternary.yes` upon success, `Ternary.no` upon failure, and `Ternary.unknown` if the primitive is not implemented by the allocator instance.
abstract shared nothrow @property uint **alignment**();
Returns the alignment offered.
abstract shared nothrow size\_t **goodAllocSize**(size\_t s);
Returns the good allocation size that guarantees zero internal fragmentation.
abstract shared nothrow void[] **allocate**(size\_t, TypeInfo ti = null);
Allocates `n` bytes of memory.
abstract shared nothrow void[] **alignedAllocate**(size\_t n, uint a);
Allocates `n` bytes of memory with specified alignment `a`. Implementations that do not support this primitive should always return `null`.
abstract shared nothrow void[] **allocateAll**();
Allocates and returns all memory available to this allocator. Implementations that do not support this primitive should always return `null`.
abstract shared nothrow bool **expand**(ref void[], size\_t);
Expands a memory block in place and returns `true` if successful. Implementations that don't support this primitive should always return `false`.
abstract shared nothrow bool **reallocate**(ref void[], size\_t);
Reallocates a memory block.
abstract shared nothrow bool **alignedReallocate**(ref void[] b, size\_t size, uint alignment);
Reallocates a memory block with specified alignment.
abstract shared nothrow Ternary **owns**(void[] b);
Returns `Ternary.yes` if the allocator owns `b`, `Ternary.no` if the allocator doesn't own `b`, and `Ternary.unknown` if ownership cannot be determined. Implementations that don't support this primitive should always return `Ternary.unknown`.
abstract shared nothrow Ternary **resolveInternalPointer**(const void\* p, ref void[] result);
Resolves an internal pointer to the full block allocated. Implementations that don't support this primitive should always return `Ternary.unknown`.
abstract shared nothrow bool **deallocate**(void[] b);
Deallocates a memory block. Implementations that don't support this primitive should always return `false`. A simple way to check that an allocator supports deallocation is to call `deallocate(null)`.
abstract shared nothrow bool **deallocateAll**();
Deallocates all memory. Implementations that don't support this primitive should always return `false`.
abstract shared nothrow Ternary **empty**();
Returns `Ternary.yes` if no memory is currently allocated from this allocator, `Ternary.no` if some allocations are currently active, or `Ternary.unknown` if not supported.
abstract shared pure nothrow @nogc @safe void **incRef**();
Increases the reference count of the concrete class that implements this interface.
For stateless allocators, this does nothing.
abstract shared pure nothrow @nogc @safe bool **decRef**();
Decreases the reference count of the concrete class that implements this interface. When the reference count is `0`, the object self-destructs.
For stateless allocators, this does nothing.
Returns:
`true` if the reference count is greater than `0` and `false` when it hits `0`. For stateless allocators, it always returns `true`.
struct **RCISharedAllocator**;
A reference counted struct that wraps the dynamic shared allocator interface. This should be used wherever a uniform type is required for encapsulating various allocator implementations.
Code that defines allocators shareable across threads ultimately implements the [`ISharedAllocator`](#ISharedAllocator) interface, possibly by using [`CSharedAllocatorImpl`](#CSharedAllocatorImpl) below, and then builds an `RCISharedAllocator` out of it.
Composition of allocators is not recommended at this level due to inflexibility of dynamic interfaces and inefficiencies caused by cascaded multiple calls. Instead, compose allocators using the static interface defined in [`std.experimental.allocator.building_blocks`](std_experimental_allocator_building_blocks), then adapt the composed allocator to `RCISharedAllocator` (possibly by using [`sharedAllocatorObject`](#sharedAllocatorObject) below).
nothrow @nogc @property ref @safe RCIAllocator **theAllocator**();
nothrow @nogc @property @system void **theAllocator**(RCIAllocator a);
Gets/sets the allocator for the current thread. This is the default allocator that should be used for allocating thread-local memory. For allocating memory to be shared across threads, use `processAllocator` (below). By default, `theAllocator` ultimately fetches memory from `processAllocator`, which in turn uses the garbage collected heap.
Examples:
```
// Install a new allocator that is faster for 128-byte allocations.
import std.experimental.allocator.building_blocks.free_list : FreeList;
import std.experimental.allocator.gc_allocator : GCAllocator;
auto oldAllocator = theAllocator;
scope(exit) theAllocator = oldAllocator;
theAllocator = allocatorObject(FreeList!(GCAllocator, 128)());
// Use the now changed allocator to allocate an array
const ubyte[] arr = theAllocator.makeArray!ubyte(128);
assert(arr.ptr);
//...
```
nothrow @nogc @property ref @trusted RCISharedAllocator **processAllocator**();
nothrow @nogc @property @system void **processAllocator**(ref RCISharedAllocator a);
Gets/sets the allocator for the current process. This allocator must be used for allocating memory shared across threads. Objects created using this allocator can be cast to `shared`.
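For illustration, a minimal sketch (memory obtained through `processAllocator` may be shared across threads):
```
import std.experimental.allocator : processAllocator, make, dispose;

auto p = processAllocator.make!int(7);
assert(*p == 7);
// Objects created with processAllocator can be cast to shared
auto sp = cast(shared int*) p;
processAllocator.dispose(p);
```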
auto **make**(T, Allocator, A...)(auto ref Allocator alloc, auto ref A args);
Dynamically allocates (using `alloc`) and then creates in the memory allocated an object of type `T`, using `args` (if any) for its initialization. Initialization occurs in the memory allocated and is otherwise semantically the same as `T(args)`. (Note that using `alloc.make!(T[])` creates a pointer to an (empty) array of `T`s, not an array. To use an allocator to allocate and initialize an array, use `alloc.makeArray!T` described below.)
Parameters:
| | |
| --- | --- |
| T | Type of the object being created. |
| Allocator `alloc` | The allocator used for getting the needed memory. It may be an object implementing the static interface for allocators, or an `IAllocator` reference. |
| A `args` | Optional arguments used for initializing the created object. If not present, the object is default constructed. |
Returns:
If `T` is a class type, returns a reference to the created `T` object. Otherwise, returns a `T*` pointing to the created object. In all cases, returns `null` if allocation failed.
Throws:
If `T`'s constructor throws, deallocates the allocated memory and propagates the exception.
Examples:
```
// Dynamically allocate one integer
const int* p1 = theAllocator.make!int;
// It's implicitly initialized with its .init value
writeln(*p1); // 0
// Dynamically allocate one double, initialize to 42.5
const double* p2 = theAllocator.make!double(42.5);
writeln(*p2); // 42.5
// Dynamically allocate a struct
static struct Point
{
int x, y, z;
}
// Use the generated constructor taking field values in order
const Point* p = theAllocator.make!Point(1, 2);
assert(p.x == 1 && p.y == 2 && p.z == 0);
// Dynamically allocate a class object
static class Customer
{
uint id = uint.max;
this() {}
this(uint id) { this.id = id; }
// ...
}
Customer cust = theAllocator.make!Customer;
assert(cust.id == uint.max); // default initialized
cust = theAllocator.make!Customer(42);
writeln(cust.id); // 42
// explicit passing of outer pointer
static class Outer
{
int x = 3;
class Inner
{
auto getX() { return x; }
}
}
auto outer = theAllocator.make!Outer();
auto inner = theAllocator.make!(Outer.Inner)(outer);
writeln(outer.x); // inner.getX
```
T[] **makeArray**(T, Allocator)(auto ref Allocator alloc, size\_t length);
T[] **makeArray**(T, Allocator)(auto ref Allocator alloc, size\_t length, T init);
Unqual!(ElementEncodingType!R)[] **makeArray**(Allocator, R)(auto ref Allocator alloc, R range)
Constraints: if (isInputRange!R && !isInfinite!R);
T[] **makeArray**(T, Allocator, R)(auto ref Allocator alloc, R range)
Constraints: if (isInputRange!R && !isInfinite!R);
Create an array of `T` with `length` elements using `alloc`. The array is either default-initialized, filled with copies of `init`, or initialized with values fetched from `range`.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| Allocator `alloc` | the allocator used for getting memory |
| size\_t `length` | length of the newly created array |
| T `init` | element used for filling the array |
| R `range` | range used for initializing the array elements |
Returns:
The newly-created array, or `null` if either `length` was `0` or allocation failed.
Throws:
The first two overloads throw only if `alloc`'s primitives do. The overloads that involve copy initialization deallocate memory and propagate the exception if the copy operation throws.
Examples:
```
import std.algorithm.comparison : equal;
static void test(T)()
{
T[] a = theAllocator.makeArray!T(2);
assert(a.equal([0, 0]));
a = theAllocator.makeArray!T(3, 42);
assert(a.equal([42, 42, 42]));
import std.range : only;
a = theAllocator.makeArray!T(only(42, 43, 44));
assert(a.equal([42, 43, 44]));
}
test!int();
test!(shared int)();
test!(const int)();
test!(immutable int)();
```
bool **expandArray**(T, Allocator)(auto ref Allocator alloc, ref T[] array, size\_t delta);
bool **expandArray**(T, Allocator)(auto ref Allocator alloc, ref T[] array, size\_t delta, auto ref T init);
bool **expandArray**(T, Allocator, R)(auto ref Allocator alloc, ref T[] array, R range)
Constraints: if (isInputRange!R);
Grows `array` by appending `delta` more elements. The needed memory is allocated using `alloc`. The extra elements added are either default- initialized, filled with copies of `init`, or initialized with values fetched from `range`.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| Allocator `alloc` | the allocator used for getting memory |
| T[] `array` | a reference to the array being grown |
| size\_t `delta` | number of elements to add (upon success the new length of `array` is `array.length + delta`) |
| T `init` | element used for filling the array |
| R `range` | range used for initializing the array elements |
Returns:
`true` upon success, `false` if memory could not be allocated. In the latter case `array` is left unaffected.
Throws:
The first two overloads throw only if `alloc`'s primitives do. The overloads that involve copy initialization deallocate memory and propagate the exception if the copy operation throws.
Examples:
```
auto arr = theAllocator.makeArray!int([1, 2, 3]);
assert(theAllocator.expandArray(arr, 2));
writeln(arr); // [1, 2, 3, 0, 0]
import std.range : only;
assert(theAllocator.expandArray(arr, only(4, 5)));
writeln(arr); // [1, 2, 3, 0, 0, 4, 5]
```
bool **shrinkArray**(T, Allocator)(auto ref Allocator alloc, ref T[] array, size\_t delta);
Shrinks an array by `delta` elements.
If `array.length < delta`, does nothing and returns `false`. Otherwise, destroys the last `delta` elements in the array and then reallocates the array's buffer. If reallocation fails, the vacated slice `array[$ - delta .. $]` is filled with default-initialized data.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| Allocator `alloc` | the allocator used for getting memory |
| T[] `array` | a reference to the array being shrunk |
| size\_t `delta` | number of elements to remove (upon success the new length of `array` is `array.length - delta`) |
Returns:
`true` upon success, `false` if memory could not be reallocated. In the latter case, the slice `array[$ - delta .. $]` is left with default-initialized elements.
Throws:
Throws only if `alloc`'s primitives do.
Examples:
```
int[] a = theAllocator.makeArray!int(100, 42);
writeln(a.length); // 100
assert(theAllocator.shrinkArray(a, 98));
writeln(a.length); // 2
writeln(a); // [42, 42]
```
void **dispose**(A, T)(auto ref A alloc, auto ref T\* p);
void **dispose**(A, T)(auto ref A alloc, auto ref T p)
Constraints: if (is(T == class) || is(T == interface));
void **dispose**(A, T)(auto ref A alloc, auto ref T[] array);
Destroys and then deallocates (using `alloc`) the object pointed to by a pointer, the class object referred to by a `class` or `interface` reference, or an entire array. It is assumed the respective entities had been allocated with the same allocator.
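For illustration, a minimal sketch pairing allocation with `dispose`, assuming the module's `theAllocator` and `make` as used in the examples above:
```
import std.experimental.allocator : theAllocator, make, dispose;

// Allocate a single int, then destroy and deallocate it through the
// same allocator.
int* p = theAllocator.make!int(42);
assert(*p == 42);
theAllocator.dispose(p);
```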
auto **makeMultidimensionalArray**(T, Allocator, size\_t N)(auto ref Allocator alloc, size\_t[N] lengths...);
Allocates a multidimensional array of elements of type T.
Parameters:
| | |
| --- | --- |
| N | number of dimensions |
| T | element type of an element of the multidimensional array |
| Allocator `alloc` | the allocator used for getting memory |
| size\_t[N] `lengths` | static array containing the size of each dimension |
Returns:
An N-dimensional array with individual elements of type T.
Examples:
```
import std.experimental.allocator.mallocator : Mallocator;
auto mArray = Mallocator.instance.makeMultidimensionalArray!int(2, 3, 6);
// deallocate when exiting scope
scope(exit)
{
Mallocator.instance.disposeMultidimensionalArray(mArray);
}
writeln(mArray.length); // 2
foreach (lvl2Array; mArray)
{
writeln(lvl2Array.length); // 3
foreach (lvl3Array; lvl2Array)
writeln(lvl3Array.length); // 6
}
```
void **disposeMultidimensionalArray**(T, Allocator)(auto ref Allocator alloc, auto ref T[] array);
Destroys and then deallocates a multidimensional array, assuming it was created with makeMultidimensionalArray and the same allocator was used.
Parameters:
| | |
| --- | --- |
| T | element type of an element of the multidimensional array |
| Allocator `alloc` | the allocator used for getting memory |
| T[] `array` | the multidimensional array that is to be deallocated |
Examples:
```
struct TestAllocator
{
import std.experimental.allocator.common : platformAlignment;
import std.experimental.allocator.mallocator : Mallocator;
alias allocator = Mallocator.instance;
private static struct ByteRange
{
void* ptr;
size_t length;
}
private ByteRange[] _allocations;
enum uint alignment = platformAlignment;
void[] allocate(size_t numBytes)
{
auto ret = allocator.allocate(numBytes);
_allocations ~= ByteRange(ret.ptr, ret.length);
return ret;
}
bool deallocate(void[] bytes)
{
import std.algorithm.mutation : remove;
import std.algorithm.searching : canFind;
bool pred(ByteRange other)
{ return other.ptr == bytes.ptr && other.length == bytes.length; }
assert(_allocations.canFind!pred);
_allocations = _allocations.remove!pred;
return allocator.deallocate(bytes);
}
~this()
{
assert(!_allocations.length);
}
}
TestAllocator allocator;
auto mArray = allocator.makeMultidimensionalArray!int(2, 3, 5, 6, 7, 2);
allocator.disposeMultidimensionalArray(mArray);
```
RCIAllocator **allocatorObject**(A)(auto ref A a)
Constraints: if (!isPointer!A);
RCIAllocator **allocatorObject**(A)(A\* pa);
Returns a dynamically-typed `RCIAllocator` built around a given statically-typed allocator `a` of type `A`. Passing a pointer to the allocator creates a dynamic allocator around the allocator pointed to by the pointer, without attempting to copy or move it. Passing the allocator by value or reference behaves as follows.
* If `A` has no state, the resulting object is allocated in static shared storage.
* If `A` has state, the result will [`std.algorithm.mutation.move`](std_algorithm_mutation#move) the supplied allocator `A a` within. The result itself is allocated in its own statically-typed allocator.
Examples:
```
import std.experimental.allocator.mallocator : Mallocator;
RCIAllocator a = allocatorObject(Mallocator.instance);
auto b = a.allocate(100);
writeln(b.length); // 100
assert(a.deallocate(b));
// The in-situ region must be used by pointer
import std.experimental.allocator.building_blocks.region : InSituRegion;
auto r = InSituRegion!1024();
a = allocatorObject(&r);
b = a.allocate(200);
writeln(b.length); // 200
// In-situ regions can deallocate the last allocation
assert(a.deallocate(b));
```
nothrow RCISharedAllocator **sharedAllocatorObject**(A)(auto ref A a)
Constraints: if (!isPointer!A);
RCISharedAllocator **sharedAllocatorObject**(A)(A\* pa);
Returns a dynamically-typed `RCISharedAllocator` built around a given statically-typed allocator `a` of type `A`. Passing a pointer to the allocator creates a dynamic allocator around the allocator pointed to by the pointer, without attempting to copy or move it. Passing the allocator by value or reference behaves as follows.
* If `A` has no state, the resulting object is allocated in static shared storage.
* If `A` has state and is copyable, the result will [`std.algorithm.mutation.move`](std_algorithm_mutation#move) the supplied allocator `A a` within. The result itself is allocated in its own statically-typed allocator.
* If `A` has state and is not copyable, the result will move the passed-in argument into the result. The result itself is allocated in its own statically-typed allocator.
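For illustration, a minimal sketch wrapping the stateless `Mallocator` (shareable across threads) in the dynamic interface:
```
import std.experimental.allocator.mallocator : Mallocator;

// Mallocator has no state, so the resulting object lives in static
// shared storage.
RCISharedAllocator a = sharedAllocatorObject(Mallocator.instance);
auto b = a.allocate(64);
writeln(b.length); // 64
assert(a.deallocate(b));
```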
class **CAllocatorImpl**(Allocator, Flag!"indirect" indirect = No.indirect): IAllocator;
Implementation of `IAllocator` using `Allocator`. This adapts a statically-built allocator type to `IAllocator`, which is directly usable by non-templated code.
Usually `CAllocatorImpl` is used indirectly by calling [`theAllocator`](#theAllocator).
pure @nogc ref @safe Allocator **impl**();
The implementation is available as a public member.
pure @nogc @safe this(Allocator\* pa);
Constructs the adaptor around the allocator pointed to by `pa`.
@property uint **alignment**();
Returns `impl.alignment`.
size\_t **goodAllocSize**(size\_t s);
Returns `impl.goodAllocSize(s)`.
void[] **allocate**(size\_t s, TypeInfo ti = null);
Returns `impl.allocate(s)`.
void[] **alignedAllocate**(size\_t s, uint a);
If `impl.alignedAllocate` exists, calls it and returns the result. Otherwise, always returns `null`.
Ternary **owns**(void[] b);
If `Allocator` implements `owns`, forwards to it. Otherwise, returns `Ternary.unknown`.
bool **expand**(ref void[] b, size\_t s);
Returns `impl.expand(b, s)` if defined, `false` otherwise.
bool **reallocate**(ref void[] b, size\_t s);
Returns `impl.reallocate(b, s)`.
bool **alignedReallocate**(ref void[] b, size\_t s, uint a);
Forwards to `impl.alignedReallocate` if defined, `false` otherwise.
bool **deallocate**(void[] b);
If `impl.deallocate` is not defined, returns `false`. Otherwise it forwards the call.
bool **deallocateAll**();
Calls `impl.deallocateAll()` and returns the result if defined, otherwise returns `false`.
Ternary **empty**();
Forwards to `impl.empty()` if defined, otherwise returns `Ternary.unknown`.
void[] **allocateAll**();
Returns `impl.allocateAll()` if present, `null` otherwise.
class **CSharedAllocatorImpl**(Allocator, Flag!"indirect" indirect = No.indirect): ISharedAllocator;
Implementation of `ISharedAllocator` using `Allocator`. This adapts a statically-built allocator type that is shareable across threads to `ISharedAllocator`, which is directly usable by non-templated code.
Usually `CSharedAllocatorImpl` is used indirectly by calling [`processAllocator`](#processAllocator).
shared pure @nogc ref @safe Allocator **impl**();
The implementation is available as a public member.
shared pure @nogc @safe this(Allocator\* pa);
Constructs the adaptor around the allocator pointed to by `pa`.
shared @property uint **alignment**();
Returns `impl.alignment`.
shared size\_t **goodAllocSize**(size\_t s);
Returns `impl.goodAllocSize(s)`.
shared void[] **allocate**(size\_t s, TypeInfo ti = null);
Returns `impl.allocate(s)`.
shared void[] **alignedAllocate**(size\_t s, uint a);
If `impl.alignedAllocate` exists, calls it and returns the result. Otherwise, always returns `null`.
shared Ternary **owns**(void[] b);
If `Allocator` implements `owns`, forwards to it. Otherwise, returns `Ternary.unknown`.
shared bool **expand**(ref void[] b, size\_t s);
Returns `impl.expand(b, s)` if defined, `false` otherwise.
shared bool **reallocate**(ref void[] b, size\_t s);
Returns `impl.reallocate(b, s)`.
shared bool **alignedReallocate**(ref void[] b, size\_t s, uint a);
Forwards to `impl.alignedReallocate` if defined, `false` otherwise.
shared bool **deallocate**(void[] b);
If `impl.deallocate` is not defined, returns `false`. Otherwise it forwards the call.
shared bool **deallocateAll**();
Calls `impl.deallocateAll()` and returns the result if defined, otherwise returns `false`.
shared Ternary **empty**();
Forwards to `impl.empty()` if defined, otherwise returns `Ternary.unknown`.
shared void[] **allocateAll**();
Returns `impl.allocateAll()` if present, `null` otherwise.
core.attribute
==============
This module contains UDAs (user-defined attributes) that are either used by the runtime or specially recognized by the compiler.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Jacob Carlborg
Source
[core/attribute.d](https://github.com/dlang/druntime/blob/master/src/core/attribute.d)
struct **selector**;
Use this attribute to attach an Objective-C selector to a method.
This is a special compiler-recognized attribute. It has several requirements, all of which are enforced by the compiler:
* The attribute can only be attached to methods or constructors which have Objective-C linkage, that is, a method or a constructor in a class or interface declared as `extern (Objective-C)`.
* It cannot be attached to a method or constructor that is a template.
* The number of colons in the string needs to match the number of arguments the method accepts.
* It can only be used once in a method declaration.
Examples:
```
extern (Objective-C)
class NSObject
{
this() @selector("init");
static NSObject alloc() @selector("alloc");
NSObject initWithUTF8String(in char* str) @selector("initWithUTF8String:");
ObjcObject copyScriptingValue(ObjcObject value, NSString key, NSDictionary properties)
@selector("copyScriptingValue:forKey:withProperties:");
}
```
enum **optional**;
Use this attribute to make an Objective-C interface method optional.
An optional method is a method that does **not** have to be implemented in the class that implements the interface. To safely call an optional method, a runtime check should be performed to make sure the receiver implements the method.
This is a special compiler-recognized attribute. It has several requirements, all of which are enforced by the compiler:
* The attribute can only be attached to methods which have Objective-C linkage, that is, a method inside an interface declared as `extern (Objective-C)`.
* It can only be used for methods that are declared inside an interface.
* It can only be used once in a method declaration.
* It cannot be attached to a method that is a template.
Examples:
```
import core.attribute : optional, selector;
extern (Objective-C):
struct objc_selector;
alias SEL = objc_selector*;
SEL sel_registerName(in char* str);
extern class NSObject
{
bool respondsToSelector(SEL sel) @selector("respondsToSelector:");
}
interface Foo
{
@optional void foo() @selector("foo");
@optional void bar() @selector("bar");
}
class Bar : NSObject
{
static Bar alloc() @selector("alloc");
Bar init() @selector("init");
void bar() @selector("bar")
{
}
}
extern (D) void main()
{
auto bar = Bar.alloc.init;
if (bar.respondsToSelector(sel_registerName("bar")))
bar.bar();
}
```
struct **gnuAbiTag**;
Use this attribute to declare an ABI tag on a C++ symbol.
ABI tag is an attribute introduced by the GNU C++ compiler. It modifies the mangled name of the symbol to incorporate the tag name, in order to distinguish from an earlier version with a different ABI.
This is a special compiler-recognized attribute. It has a few requirements, all of which are enforced by the compiler:
* There can only be one such attribute per symbol.
* The attribute can only be attached to an `extern(C++)` symbol (`struct`, `class`, `enum`, function, and their templated counterparts).
* The attribute cannot be applied to C++ namespaces. This is to prevent confusion with the C++ semantics, which allow it to be applied to namespaces.
* The string arguments must only contain valid characters for C++ name mangling, which currently include alphanumerics and the underscore character.
This UDA is not transitive; inner scopes do not inherit the outer scopes' ABI tags. See the examples below for how to translate a C++ declaration to D. Also note that entries in this UDA are automatically sorted alphabetically, hence `gnuAbiTag("c", "b", "a")` will appear as `@gnuAbiTag("a", "b", "c")`.
See Also:
[Itanium ABI spec](https://itanium-cxx-abi.github.io/cxx-abi/abi.html#mangle.abi-tag) [GCC attributes documentation](https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html).
Examples:
```
// ---- foo.cpp
struct [[gnu::abi_tag ("tag1", "tag2")]] Tagged1_2
{
struct [[gnu::abi_tag ("tag3")]] Tagged3
{
[[gnu::abi_tag ("tag4")]]
int Tagged4 () { return 42; }
    };
};
Tagged1_2 inst1;
// ---- foo.d
@gnuAbiTag("tag1", "tag2") struct Tagged1_2
{
// Notice the repetition
@gnuAbiTag("tag1", "tag2", "tag3") struct Tagged3
{
@gnuAbiTag("tag1", "tag2", "tag3", "tag4") int Tagged4 ();
}
}
extern __gshared Tagged1_2 inst1;
```
std.zlib
========
Compress/decompress data using the [zlib library](http://www.zlib.net).
Examples:
If you have a small buffer you can use [`compress`](#compress) and [`uncompress`](#uncompress) directly.
```
import std.zlib;
auto src =
"the quick brown fox jumps over the lazy dog\r
the quick brown fox jumps over the lazy dog\r";
ubyte[] dst;
ubyte[] result;
dst = compress(src);
result = cast(ubyte[]) uncompress(dst);
assert(result == src);
```
When the data to be compressed doesn't fit in one buffer, use [`Compress`](#Compress) and [`UnCompress`](#UnCompress).
```
import std.zlib;
import std.stdio;
import std.conv : to;
import std.algorithm.iteration : map;
UnCompress decmp = new UnCompress;
foreach (chunk; stdin.byChunk(4096).map!(x => decmp.uncompress(x)))
{
chunk.to!string.write;
}
```
References
[Wikipedia](http://en.wikipedia.org/wiki/Zlib)
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com)
Source
[std/zlib.d](https://github.com/dlang/phobos/blob/master/std/zlib.d)
class **ZlibException**: object.Exception;
Errors are reported by throwing a `ZlibException`.
uint **adler32**(uint adler, const(void)[] buf);
Compute the Adler-32 checksum of a buffer's worth of data.
Parameters:
| | |
| --- | --- |
| uint `adler` | the starting checksum for the computation. Use 1 for a new checksum. Use the output of this function for a cumulative checksum. |
| const(void)[] `buf` | buffer containing input data |
Returns:
A `uint` checksum for the provided input data and starting checksum
See Also:
<http://en.wikipedia.org/wiki/Adler-32>
Examples:
```
static ubyte[] data = [1,2,3,4,5,6,7,8,9,10];
uint adler = adler32(0u, data);
writeln(adler); // 0xdc0037
```
uint **crc32**(uint crc, const(void)[] buf);
Compute the CRC32 checksum of a buffer's worth of data.
Parameters:
| | |
| --- | --- |
| uint `crc` | the starting checksum for the computation. Use 0 for a new checksum. Use the output of this function for a cumulative checksum. |
| const(void)[] `buf` | buffer containing input data |
Returns:
A `uint` checksum for the provided input data and starting checksum
See Also:
<http://en.wikipedia.org/wiki/Cyclic_redundancy_check>
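No example is given above, so here is a minimal sketch of the cumulative usage described in the parameters:
```
static ubyte[] data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Feeding the buffer in two pieces, passing the intermediate checksum
// back in, matches the one-shot result.
uint crc = crc32(0u, data[0 .. 5]);
crc = crc32(crc, data[5 .. $]);
assert(crc == crc32(0u, data));
```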
ubyte[] **compress**(const(void)[] srcbuf, int level);
ubyte[] **compress**(const(void)[] srcbuf);
Compress data
Parameters:
| | |
| --- | --- |
| const(void)[] `srcbuf` | buffer containing the data to compress |
| int `level` | compression level. Legal values are -1 .. 9, with -1 indicating the default level (6), 0 indicating no compression, 1 being the least compression and 9 being the most. |
Returns:
the compressed data
void[] **uncompress**(const(void)[] srcbuf, size\_t destlen = 0u, int winbits = 15);
Decompresses the data in srcbuf[].
Parameters:
| | |
| --- | --- |
| const(void)[] `srcbuf` | buffer containing the compressed data. |
| size\_t `destlen` | size of the uncompressed data. It need not be accurate, but the decompression will be faster if the exact size is supplied. |
| int `winbits` | the base two logarithm of the maximum window size. |
Returns:
the decompressed data.
enum **HeaderFormat**: int;
the header format the compressed stream is wrapped in
**deflate**
a standard zlib header
**gzip**
a gzip file format header
**determineFromData**
used when decompressing. Try to automatically detect the stream format by looking at the data
class **Compress**;
Used when the data to be compressed is not all in one buffer.
this(int level, HeaderFormat header = HeaderFormat.deflate);
this(HeaderFormat header = HeaderFormat.deflate);
Constructor.
Parameters:
| | |
| --- | --- |
| int `level` | compression level. Legal values are 1 .. 9, with 1 being the least compression and 9 being the most. The default value is 6. |
| HeaderFormat `header` | sets the compression type to one of the options available in [`HeaderFormat`](#HeaderFormat). Defaults to HeaderFormat.deflate. |
See Also:
[`compress`](#compress), [`HeaderFormat`](#HeaderFormat)
const(void)[] **compress**(const(void)[] buf);
Compress the data in buf and return the compressed data.
Parameters:
| | |
| --- | --- |
| const(void)[] `buf` | data to compress |
Returns:
the compressed data. The buffers returned from successive calls to this should be concatenated together.
void[] **flush**(int mode = Z\_FINISH);
Compress and return any remaining data. The returned data should be appended to that returned by compress().
Parameters:
| | |
| --- | --- |
| int `mode` | one of the following: `Z_SYNC_FLUSH` (syncs up flushing to the next byte boundary; used when more data is to be compressed later on), `Z_FULL_FLUSH` (syncs up flushing to the next byte boundary; used when more data is to be compressed later on and the decompressor needs to be restartable at this point), or `Z_FINISH` (the default; used when finished compressing the data) |
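A minimal sketch of chunked compression, concatenating the pieces returned by `compress()` and finishing with `flush()` (the default deflate header lets the result round-trip through `uncompress`):
```
auto cmp = new Compress();
const(void)[] compressed;
compressed ~= cmp.compress("the quick brown fox ");
compressed ~= cmp.compress("jumps over the lazy dog");
compressed ~= cmp.flush(); // Z_FINISH: no more data will follow
assert(cast(string) uncompress(compressed) ==
       "the quick brown fox jumps over the lazy dog");
```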
class **UnCompress**;
Used when the data to be decompressed is not all in one buffer.
this(uint destbufsize);
this(HeaderFormat format = HeaderFormat.determineFromData);
Constructor. `destbufsize` has the same meaning as for [`uncompress`](#uncompress).
const(void)[] **uncompress**(const(void)[] buf);
Decompress the data in buf and return the decompressed data. The buffers returned from successive calls to this should be concatenated together.
void[] **flush**();
Decompress and return any remaining data. The returned data should be appended to that returned by uncompress(). The UnCompress object cannot be used further.
const @property bool **empty**();
Returns true if all input data has been decompressed and no further data can be decompressed (inflate() returned Z\_STREAM\_END)
Examples:
```
// some random data
ubyte[1024] originalData = void;
// append garbage data (or don't, this works in both cases)
auto compressedData = cast(ubyte[]) compress(originalData) ~ cast(ubyte[]) "whatever";
auto decompressor = new UnCompress();
auto uncompressedData = decompressor.uncompress(compressedData);
assert(uncompressedData[] == originalData[],
"The uncompressed and the original data differ");
assert(decompressor.empty, "The UnCompressor reports not being done");
```
Named Character Entities
========================
```
NamedCharacterEntity:
& Identifier ;
```
The full list of named character entities from the [HTML 5 Spec](https://w3.org/TR/html5/syntax.html#named-character-references) is supported except for the named entities which contain multiple code points. Below is a *partial* list of the named character entities.
**Note:** Not all glyphs will display properly in the **Glyph** column in all browsers.
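In D source code, these entities are written inside string and character literals with the `\&name;` escape sequence; a minimal sketch:
```
// \&name; escapes expand to the entity's code point.
static assert("\&copy;" == "\u00A9");
string s = "caf\&eacute;"; // "café"
```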
Named Character Entities
| **Name** | **Value** | **Glyph** |
| --- | --- | --- |
| `quot` | 34 | " |
| `amp` | 38 | & |
| `lt` | 60 | `<` |
| `gt` | 62 | `>` |
| `OElig` | 338 | Œ |
| `oelig` | 339 | œ |
| `Scaron` | 352 | Š |
| `scaron` | 353 | š |
| `Yuml` | 376 | ÿ |
| `circ` | 710 | ˆ |
| `tilde` | 732 | ˜ |
| `ensp` | 8194 | |
| `emsp` | 8195 | |
| `thinsp` | 8201 | |
| `zwnj` | 8204 | |
| `zwj` | 8205 | |
| `lrm` | 8206 | |
| `rlm` | 8207 | |
| `ndash` | 8211 | – |
| `mdash` | 8212 | — |
| `lsquo` | 8216 | ‘ |
| `rsquo` | 8217 | ’ |
| `sbquo` | 8218 | ‚ |
| `ldquo` | 8220 | “ |
| `rdquo` | 8221 | ” |
| `bdquo` | 8222 | „ |
| `dagger` | 8224 | † |
| `Dagger` | 8225 | ‡ |
| `permil` | 8240 | ‰ |
| `lsaquo` | 8249 | ‹ |
| `rsaquo` | 8250 | › |
| `euro` | 8364 | € |
Latin-1 (ISO-8859-1) Entities
| **Name** | **Value** | **Glyph** |
| --- | --- | --- |
| `nbsp` | 160 | |
| `iexcl` | 161 | ¡ |
| `cent` | 162 | ¢ |
| `pound` | 163 | £ |
| `curren` | 164 | ¤ |
| `yen` | 165 | ¥ |
| `brvbar` | 166 | ¦ |
| `sect` | 167 | § |
| `uml` | 168 | ¨ |
| `copy` | 169 | © |
| `ordf` | 170 | ª |
| `laquo` | 171 | « |
| `not` | 172 | ¬ |
| `shy` | 173 | |
| `reg` | 174 | ® |
| `macr` | 175 | ¯ |
| `deg` | 176 | ° |
| `plusmn` | 177 | ± |
| `sup2` | 178 | ² |
| `sup3` | 179 | ³ |
| `acute` | 180 | ´ |
| `micro` | 181 | µ |
| `para` | 182 | ¶ |
| `middot` | 183 | · |
| `cedil` | 184 | ¸ |
| `sup1` | 185 | ¹ |
| `ordm` | 186 | º |
| `raquo` | 187 | » |
| `frac14` | 188 | ¼ |
| `frac12` | 189 | ½ |
| `frac34` | 190 | ¾ |
| `iquest` | 191 | ¿ |
| `Agrave` | 192 | À |
| `Aacute` | 193 | Á |
| `Acirc` | 194 | Â |
| `Atilde` | 195 | Ã |
| `Auml` | 196 | Ä |
| `Aring` | 197 | Å |
| `AElig` | 198 | Æ |
| `Ccedil` | 199 | Ç |
| `Egrave` | 200 | È |
| `Eacute` | 201 | É |
| `Ecirc` | 202 | Ê |
| `Euml` | 203 | Ë |
| `Igrave` | 204 | Ì |
| `Iacute` | 205 | Í |
| `Icirc` | 206 | Î |
| `Iuml` | 207 | Ï |
| `ETH` | 208 | Ð |
| `Ntilde` | 209 | Ñ |
| `Ograve` | 210 | Ò |
| `Oacute` | 211 | Ó |
| `Ocirc` | 212 | Ô |
| `Otilde` | 213 | Õ |
| `Ouml` | 214 | Ö |
| `times` | 215 | × |
| `Oslash` | 216 | Ø |
| `Ugrave` | 217 | Ù |
| `Uacute` | 218 | Ú |
| `Ucirc` | 219 | Û |
| `Uuml` | 220 | Ü |
| `Yacute` | 221 | Ý |
| `THORN` | 222 | Þ |
| `szlig` | 223 | ß |
| `agrave` | 224 | à |
| `aacute` | 225 | á |
| `acirc` | 226 | â |
| `atilde` | 227 | ã |
| `auml` | 228 | ä |
| `aring` | 229 | å |
| `aelig` | 230 | æ |
| `ccedil` | 231 | ç |
| `egrave` | 232 | è |
| `eacute` | 233 | é |
| `ecirc` | 234 | ê |
| `euml` | 235 | ë |
| `igrave` | 236 | ì |
| `iacute` | 237 | í |
| `icirc` | 238 | î |
| `iuml` | 239 | ï |
| `eth` | 240 | ð |
| `ntilde` | 241 | ñ |
| `ograve` | 242 | ò |
| `oacute` | 243 | ó |
| `ocirc` | 244 | ô |
| `otilde` | 245 | õ |
| `ouml` | 246 | ö |
| `divide` | 247 | ÷ |
| `oslash` | 248 | ø |
| `ugrave` | 249 | ù |
| `uacute` | 250 | ú |
| `ucirc` | 251 | û |
| `uuml` | 252 | ü |
| `yacute` | 253 | ý |
| `thorn` | 254 | þ |
| `yuml` | 255 | ÿ |
Symbols and Greek letter entities
| **Name** | **Value** | **Glyph** |
| --- | --- | --- |
| `fnof` | 402 | ƒ |
| `Alpha` | 913 | Α |
| `Beta` | 914 | Β |
| `Gamma` | 915 | Γ |
| `Delta` | 916 | Δ |
| `Epsilon` | 917 | Ε |
| `Zeta` | 918 | Ζ |
| `Eta` | 919 | Η |
| `Theta` | 920 | Θ |
| `Iota` | 921 | Ι |
| `Kappa` | 922 | Κ |
| `Lambda` | 923 | Λ |
| `Mu` | 924 | Μ |
| `Nu` | 925 | Ν |
| `Xi` | 926 | Ξ |
| `Omicron` | 927 | Ο |
| `Pi` | 928 | Π |
| `Rho` | 929 | Ρ |
| `Sigma` | 931 | Σ |
| `Tau` | 932 | Τ |
| `Upsilon` | 933 | Υ |
| `Phi` | 934 | Φ |
| `Chi` | 935 | Χ |
| `Psi` | 936 | Ψ |
| `Omega` | 937 | Ω |
| `alpha` | 945 | α |
| `beta` | 946 | β |
| `gamma` | 947 | γ |
| `delta` | 948 | δ |
| `epsilon` | 949 | ε |
| `zeta` | 950 | ζ |
| `eta` | 951 | η |
| `theta` | 952 | θ |
| `iota` | 953 | ι |
| `kappa` | 954 | κ |
| `lambda` | 955 | λ |
| `mu` | 956 | μ |
| `nu` | 957 | ν |
| `xi` | 958 | ξ |
| `omicron` | 959 | ο |
| `pi` | 960 | π |
| `rho` | 961 | ρ |
| `sigmaf` | 962 | ς |
| `sigma` | 963 | σ |
| `tau` | 964 | τ |
| `upsilon` | 965 | υ |
| `phi` | 966 | φ |
| `chi` | 967 | χ |
| `psi` | 968 | ψ |
| `omega` | 969 | ω |
| `thetasym` | 977 | ϑ |
| `upsih` | 978 | ϒ |
| `piv` | 982 | ϖ |
| `bull` | 8226 | • |
| `hellip` | 8230 | … |
| `prime` | 8242 | ′ |
| `Prime` | 8243 | ″ |
| `oline` | 8254 | ‾ |
| `frasl` | 8260 | ⁄ |
| `weierp` | 8472 | ℘ |
| `image` | 8465 | ℑ |
| `real` | 8476 | ℜ |
| `trade` | 8482 | ™ |
| `alefsym` | 8501 | ℵ |
| `larr` | 8592 | ← |
| `uarr` | 8593 | ↑ |
| `rarr` | 8594 | → |
| `darr` | 8595 | ↓ |
| `harr` | 8596 | ↔ |
| `crarr` | 8629 | ↵ |
| `lArr` | 8656 | ⇐ |
| `uArr` | 8657 | ⇑ |
| `rArr` | 8658 | ⇒ |
| `dArr` | 8659 | ⇓ |
| `hArr` | 8660 | ⇔ |
| `forall` | 8704 | ∀ |
| `part` | 8706 | ∂ |
| `exist` | 8707 | ∃ |
| `empty` | 8709 | ∅ |
| `nabla` | 8711 | ∇ |
| `isin` | 8712 | ∈ |
| `notin` | 8713 | ∉ |
| `ni` | 8715 | ∋ |
| `prod` | 8719 | ∏ |
| `sum` | 8721 | ∑ |
| `minus` | 8722 | − |
| `lowast` | 8727 | ∗ |
| `radic` | 8730 | √ |
| `prop` | 8733 | ∝ |
| `infin` | 8734 | ∞ |
| `ang` | 8736 | ∠ |
| `and` | 8743 | ∧ |
| `or` | 8744 | ∨ |
| `cap` | 8745 | ∩ |
| `cup` | 8746 | ∪ |
| `int` | 8747 | ∫ |
| `there4` | 8756 | ∴ |
| `sim` | 8764 | ∼ |
| `cong` | 8773 | ≅ |
| `asymp` | 8776 | ≈ |
| `ne` | 8800 | ≠ |
| `equiv` | 8801 | ≡ |
| `le` | 8804 | ≤ |
| `ge` | 8805 | ≥ |
| `sub` | 8834 | ⊂ |
| `sup` | 8835 | ⊃ |
| `nsub` | 8836 | ⊄ |
| `sube` | 8838 | ⊆ |
| `supe` | 8839 | ⊇ |
| `oplus` | 8853 | ⊕ |
| `otimes` | 8855 | ⊗ |
| `perp` | 8869 | ⊥ |
| `sdot` | 8901 | ⋅ |
| `lceil` | 8968 | ⌈ |
| `rceil` | 8969 | ⌉ |
| `lfloor` | 8970 | ⌊ |
| `rfloor` | 8971 | ⌋ |
| `loz` | 9674 | ◊ |
| `spades` | 9824 | ♠ |
| `clubs` | 9827 | ♣ |
| `hearts` | 9829 | ♥ |
| `diams` | 9830 | ♦ |
| `lang` | 10216 | 〈 |
| `rang` | 10217 | 〉 |
std.digest.hmac
===============
This package implements the hash-based message authentication code (HMAC) algorithm as defined in [RFC2104](http://tools.ietf.org/html/rfc2104). See also the corresponding [Wikipedia article](http://en.wikipedia.org/wiki/Hash-based_message_authentication_code).
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Source
[std/digest/hmac.d](https://github.com/dlang/phobos/blob/master/std/digest/hmac.d)
Examples:
Template API HMAC implementation. This implements an HMAC over the digest `H`. If `H` doesn't provide information about the block size, it can be supplied explicitly using the second overload. This type conforms to [`std.digest.isDigest`](std_digest#isDigest). The example below computes an HMAC over an input string.
```
import std.ascii : LetterCase;
import std.digest : toHexString;
import std.digest.sha : SHA1;
import std.string : representation;
auto secret = "secret".representation;
assert("The quick brown fox jumps over the lazy dog"
.representation
.hmac!SHA1(secret)
.toHexString!(LetterCase.lower) == "198ea1ea04c435c1246b586a06d5cf11c3ffcda6");
```
struct **HMAC**(H, size\_t hashBlockSize) if (hashBlockSize % 8 == 0);
template **hmac**(H) if (isDigest!H && hasBlockSize!H)
auto **hmac**(H, size\_t blockSize)(scope const(ubyte)[] secret)
Constraints: if (isDigest!H);
Overload of HMAC to be used if H doesn't provide information about its block size.
Examples:
```
import std.digest.sha : SHA1;
import std.string : representation;
string data1 = "Hello, world", data2 = "Hola mundo";
auto hmac = HMAC!SHA1("My s3cR3T keY".representation);
auto digest = hmac.put(data1.representation)
.put(data2.representation)
.finish();
static immutable expected = [
197, 57, 52, 3, 13, 194, 13,
36, 117, 228, 8, 11, 111, 51,
165, 3, 123, 31, 251, 113];
writeln(digest); // expected
```
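A minimal sketch of the second overload, supplying SHA1's 512-bit block size explicitly (shown only for illustration; `hmac!SHA1` already knows it):
```
import std.digest.sha : SHA1;
import std.string : representation;

// The block size is given in bits and must be a multiple of 8.
auto h = hmac!(SHA1, 512)("My s3cR3T keY".representation);
h.put("Hello, world".representation);
auto mac = h.finish();
writeln(mac.length); // 20, the SHA1 digest length
```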
this(scope const(ubyte)[] secret);
Constructs the HMAC digest using the specified secret.
Examples:
```
import std.digest.sha : SHA1;
import std.string : representation;
auto hmac = HMAC!SHA1("My s3cR3T keY".representation);
hmac.put("Hello, world".representation);
static immutable expected = [
130, 32, 235, 44, 208, 141,
150, 232, 211, 214, 162, 195,
188, 127, 52, 89, 100, 68, 90, 216];
writeln(hmac.finish()); // expected
```
ref HMAC!(H, blockSize) **start**() return;
Reinitializes the digest, making it ready for reuse.
Note
The constructor leaves the digest in an initialized state, so that this method only needs to be called if an unfinished digest is to be reused.
Returns:
A reference to the digest for convenient chaining.
Examples:
```
import std.digest.sha : SHA1;
import std.string : representation;
string data1 = "Hello, world", data2 = "Hola mundo";
auto hmac = HMAC!SHA1("My s3cR3T keY".representation);
hmac.put(data1.representation);
hmac.start(); // reset digest
hmac.put(data2.representation); // start over
static immutable expected = [
122, 151, 232, 240, 249, 80,
19, 178, 186, 77, 110, 23, 208,
52, 11, 88, 34, 151, 192, 255];
writeln(hmac.finish()); // expected
```
ref HMAC!(H, blockSize) **put**(in ubyte[] data...) return;
Feeds a piece of data into the hash computation. This method allows the type to be used as an [`std.range.OutputRange`](std_range#OutputRange).
Returns:
A reference to the digest for convenient chaining.
Examples:
```
import std.digest.hmac, std.digest.sha;
import std.string : representation;
string data1 = "Hello, world", data2 = "Hola mundo";
auto hmac = HMAC!SHA1("My s3cR3T keY".representation);
hmac.put(data1.representation)
.put(data2.representation);
static immutable expected = [
197, 57, 52, 3, 13, 194, 13,
36, 117, 228, 8, 11, 111, 51,
165, 3, 123, 31, 251, 113];
writeln(hmac.finish()); // expected
```
DigestType!H **finish**();
Resets the digest and returns the finished hash.
Examples:
```
import std.digest.sha : SHA1;
import std.string : representation;
string data1 = "Hello, world", data2 = "Hola mundo";
auto hmac = HMAC!SHA1("My s3cR3T keY".representation);
auto digest = hmac.put(data1.representation)
.put(data2.representation)
.finish();
static immutable expected = [
197, 57, 52, 3, 13, 194, 13,
36, 117, 228, 8, 11, 111, 51,
165, 3, 123, 31, 251, 113];
writeln(digest); // expected
```
core.stdc.stdint
================
D header file for C99.
This module contains bindings to selected types and functions from the standard C header [`<stdint.h>`](http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html). Note that this is not automatically generated, and may omit some types/functions from the original C header.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Sean Kelly
Source
[core/stdc/stdint.d](https://github.com/dlang/druntime/blob/master/src/core/stdc/stdint.d)
Standards:
ISO/IEC 9899:1999 (E)
alias **int8\_t** = byte; alias **int16\_t** = short; alias **uint8\_t** = ubyte; alias **uint16\_t** = ushort; alias **int32\_t** = int; alias **uint32\_t** = uint; alias **int64\_t** = long; alias **uint64\_t** = ulong;
alias **int\_least8\_t** = byte; alias **uint\_least8\_t** = ubyte; alias **int\_least16\_t** = short; alias **uint\_least16\_t** = ushort; alias **int\_least32\_t** = int; alias **uint\_least32\_t** = uint; alias **int\_least64\_t** = long; alias **uint\_least64\_t** = ulong;
alias **int\_fast8\_t** = byte; alias **uint\_fast8\_t** = ubyte; alias **int\_fast16\_t** = long; alias **uint\_fast16\_t** = ulong; alias **int\_fast32\_t** = long; alias **uint\_fast32\_t** = ulong; alias **int\_fast64\_t** = long; alias **uint\_fast64\_t** = ulong;
alias **intptr\_t** = long; alias **uintptr\_t** = ulong; alias **intmax\_t** = long; alias **uintmax\_t** = ulong;
enum int8\_t **INT8\_MIN**; enum int8\_t **INT8\_MAX**; enum int16\_t **INT16\_MIN**; enum int16\_t **INT16\_MAX**; enum int32\_t **INT32\_MIN**; enum int32\_t **INT32\_MAX**; enum int64\_t **INT64\_MIN**; enum int64\_t **INT64\_MAX**; enum uint8\_t **UINT8\_MAX**; enum uint16\_t **UINT16\_MAX**; enum uint32\_t **UINT32\_MAX**; enum uint64\_t **UINT64\_MAX**;
enum int\_least8\_t **INT\_LEAST8\_MIN**; enum int\_least8\_t **INT\_LEAST8\_MAX**; enum int\_least16\_t **INT\_LEAST16\_MIN**; enum int\_least16\_t **INT\_LEAST16\_MAX**; enum int\_least32\_t **INT\_LEAST32\_MIN**; enum int\_least32\_t **INT\_LEAST32\_MAX**; enum int\_least64\_t **INT\_LEAST64\_MIN**; enum int\_least64\_t **INT\_LEAST64\_MAX**; enum uint\_least8\_t **UINT\_LEAST8\_MAX**; enum uint\_least16\_t **UINT\_LEAST16\_MAX**; enum uint\_least32\_t **UINT\_LEAST32\_MAX**; enum uint\_least64\_t **UINT\_LEAST64\_MAX**;
enum int\_fast8\_t **INT\_FAST8\_MIN**; enum int\_fast8\_t **INT\_FAST8\_MAX**; enum int\_fast16\_t **INT\_FAST16\_MIN**; enum int\_fast16\_t **INT\_FAST16\_MAX**; enum int\_fast32\_t **INT\_FAST32\_MIN**; enum int\_fast32\_t **INT\_FAST32\_MAX**; enum int\_fast64\_t **INT\_FAST64\_MIN**; enum int\_fast64\_t **INT\_FAST64\_MAX**; enum uint\_fast8\_t **UINT\_FAST8\_MAX**; enum uint\_fast16\_t **UINT\_FAST16\_MAX**; enum uint\_fast32\_t **UINT\_FAST32\_MAX**; enum uint\_fast64\_t **UINT\_FAST64\_MAX**;
enum intptr\_t **INTPTR\_MIN**; enum intptr\_t **INTPTR\_MAX**; enum uintptr\_t **UINTPTR\_MIN**; enum uintptr\_t **UINTPTR\_MAX**; enum intmax\_t **INTMAX\_MIN**; enum intmax\_t **INTMAX\_MAX**; enum uintmax\_t **UINTMAX\_MAX**; enum ptrdiff\_t **PTRDIFF\_MIN**; enum ptrdiff\_t **PTRDIFF\_MAX**; enum sig\_atomic\_t **SIG\_ATOMIC\_MIN**; enum sig\_atomic\_t **SIG\_ATOMIC\_MAX**; enum size\_t **SIZE\_MAX**; enum wchar\_t **WCHAR\_MIN**; enum wchar\_t **WCHAR\_MAX**; enum wint\_t **WINT\_MIN**; enum wint\_t **WINT\_MAX**;
alias **INT8\_C** = \_typify!byte.\_typify; alias **INT16\_C** = \_typify!short.\_typify; alias **INT32\_C** = \_typify!int.\_typify; alias **INT64\_C** = \_typify!long.\_typify; alias **UINT8\_C** = \_typify!ubyte.\_typify; alias **UINT16\_C** = \_typify!ushort.\_typify; alias **UINT32\_C** = \_typify!uint.\_typify; alias **UINT64\_C** = \_typify!ulong.\_typify; alias **INTMAX\_C** = \_typify!long.\_typify; alias **UINTMAX\_C** = \_typify!ulong.\_typify;
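A minimal sketch of how these aliases map onto D's built-in types (the exact-width mappings follow the declarations above):
```
import core.stdc.stdint;

static assert(is(int32_t == int));
static assert(is(uint64_t == ulong));
int32_t x = INT32_MAX; // 2147483647
```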
std.container.binaryheap
========================
This module provides a `BinaryHeap` (aka priority queue) adaptor that makes a binary heap out of any user-provided random-access range.
This module is a submodule of [`std.container`](std_container).
Source
[std/container/binaryheap.d](https://github.com/dlang/phobos/blob/master/std/container/binaryheap.d)
License:
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE\_1\_0.txt or copy at [boost.org/LICENSE\_1\_0.txt](http://boost.org/LICENSE_1_0.txt)).
Authors:
[Andrei Alexandrescu](http://erdani.com)
Examples:
```
import std.algorithm.comparison : equal;
import std.range : take;
auto maxHeap = heapify([4, 7, 3, 1, 5]);
assert(maxHeap.take(3).equal([7, 5, 4]));
auto minHeap = heapify!"a > b"([4, 7, 3, 1, 5]);
assert(minHeap.take(3).equal([1, 3, 4]));
```
struct **BinaryHeap**(Store, alias less = "a < b") if (isRandomAccessRange!Store || isRandomAccessRange!(typeof(Store.init[])));
Implements a [binary heap](http://en.wikipedia.org/wiki/Binary_heap) container on top of a given random-access range type (usually `T[]`) or a random-access container type (usually `Array!T`). The documentation of `BinaryHeap` will refer to the underlying range or container as the *store* of the heap.
The binary heap induces structure over the underlying store such that accessing the largest element (by using the `front` property) is a Ο(`1`) operation and extracting it (by using the `removeFront()` method) is done fast in Ο(`log n`) time.
If `less` is the less-than operator, which is the default option, then `BinaryHeap` defines a so-called max-heap that optimizes extraction of the *largest* elements. To define a min-heap, instantiate BinaryHeap with `"a > b"` as its predicate.
Simply extracting elements from a `BinaryHeap` container is tantamount to lazily fetching elements of `Store` in descending order. Extracting elements from the `BinaryHeap` to completion leaves the underlying store sorted in ascending order but, again, yields elements in descending order.
If `Store` is a range, the `BinaryHeap` cannot grow beyond the size of that range. If `Store` is a container that supports `insertBack`, the `BinaryHeap` may grow by adding elements to the container.
Examples:
Example from "Introduction to Algorithms" Cormen et al, p 146
```
import std.algorithm.comparison : equal;
int[] a = [ 4, 1, 3, 2, 16, 9, 10, 14, 8, 7 ];
auto h = heapify(a);
// largest element
writeln(h.front); // 16
// a has the heap property
assert(equal(a, [ 16, 14, 10, 8, 7, 9, 3, 2, 4, 1 ]));
```
Examples:
`BinaryHeap` implements the standard input range interface, allowing lazy iteration of the underlying range in descending order.
```
import std.algorithm.comparison : equal;
import std.range : take;
int[] a = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7];
auto top5 = heapify(a).take(5);
assert(top5.equal([16, 14, 10, 9, 8]));
```
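Because `Array!int` supports `insertBack`, a heap over it can grow past its initial store; a minimal sketch:
```
import std.container.array : Array;

auto h = BinaryHeap!(Array!int)(Array!int(1, 2, 3));
h.insert(42); // grows the underlying Array via insertBack
writeln(h.front); // 42
writeln(h.length); // 4
```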
this(Store s, size\_t initialSize = size\_t.max);
Converts the store `s` into a heap. If `initialSize` is specified, only the first `initialSize` elements in `s` are transformed into a heap, after which the heap can grow up to `s.length` (if `Store` is a range) or indefinitely (if `Store` is a container with `insertBack`). Performs Ο(`min(s.length, initialSize)`) evaluations of `less`.
void **acquire**(Store s, size\_t initialSize = size\_t.max);
Takes ownership of a store. After this, manipulating `s` may make the heap work incorrectly.
void **assume**(Store s, size\_t initialSize = size\_t.max);
Takes ownership of a store assuming it already was organized as a heap.
auto **release**();
Clears the heap. Returns the portion of the store from `0` up to `length`, which satisfies the [heap property](https://en.wikipedia.org/wiki/Heap_(data_structure)).
@property bool **empty**();
Returns `true` if the heap is empty, `false` otherwise.
@property BinaryHeap **dup**();
Returns a duplicate of the heap. The `dup` method is available only if the underlying store supports it.
@property size\_t **length**();
Returns the length of the heap.
@property size\_t **capacity**();
Returns the capacity of the heap, which is the length of the underlying store (if the store is a range) or the capacity of the underlying store (if the store is a container).
@property ElementType!Store **front**();
Returns a copy of the front of the heap, which is the largest element according to `less`.
void **clear**();
Clears the heap by detaching it from the underlying store.
size\_t **insert**(ElementType!Store value);
Inserts `value` into the store. If the underlying store is a range and `length == capacity`, throws an exception.
void **removeFront**();
alias **popFront** = removeFront;
Removes the largest element from the heap.
ElementType!Store **removeAny**();
Removes the largest element from the heap and returns a copy of it. The element still resides in the heap's store. For performance reasons you may want to use `removeFront` with heaps of objects that are expensive to copy.
void **replaceFront**(ElementType!Store value);
Replaces the largest element in the store with `value`.
bool **conditionalInsert**(ElementType!Store value);
If the heap has room to grow, inserts `value` into the store and returns `true`. Otherwise, if `less(value, front)`, calls `replaceFront(value)` and again returns `true`. Otherwise, leaves the heap unaffected and returns `false`. This method is useful in scenarios where the smallest `k` elements of a set of candidates must be collected (see the sketch after `conditionalSwap` below).
bool **conditionalSwap**(ref ElementType!Store value);
Swapping is allowed if the heap is full. If `less(value, front)`, the method exchanges store.front and value and returns `true`. Otherwise, it leaves the heap unaffected and returns `false`.
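A minimal sketch of the smallest-`k` scenario mentioned for `conditionalInsert`, using a fixed-size store as a max-heap of the current candidates:
```
import std.algorithm.comparison : equal;
import std.range : take;

// Keep the 3 smallest of a stream: front is the largest candidate and
// is replaced whenever a smaller value arrives.
auto h = BinaryHeap!(int[])(new int[3], 0);
foreach (x; [5, 1, 9, 3, 7, 2])
    h.conditionalInsert(x);
assert(h.take(3).equal([3, 2, 1])); // extraction yields descending order
```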
BinaryHeap!(Store, less) **heapify**(alias less = "a < b", Store)(Store s, size\_t initialSize = size\_t.max);
Convenience function that returns a `BinaryHeap!Store` object initialized with `s` and `initialSize`.
Examples:
```
import std.conv : to;
import std.range.primitives;
{
// example from "Introduction to Algorithms" Cormen et al., p 146
int[] a = [ 4, 1, 3, 2, 16, 9, 10, 14, 8, 7 ];
auto h = heapify(a);
h = heapify!"a < b"(a);
writeln(h.front); // 16
writeln(a); // [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
auto witness = [ 16, 14, 10, 9, 8, 7, 4, 3, 2, 1 ];
for (; !h.empty; h.removeFront(), witness.popFront())
{
assert(!witness.empty);
writeln(witness.front); // h.front
}
assert(witness.empty);
}
{
int[] a = [ 4, 1, 3, 2, 16, 9, 10, 14, 8, 7 ];
int[] b = new int[a.length];
BinaryHeap!(int[]) h = BinaryHeap!(int[])(b, 0);
foreach (e; a)
{
h.insert(e);
}
writeln(b); // [16, 14, 10, 8, 7, 3, 9, 1, 4, 2]
}
```
std.conv
========
A one-stop shop for converting values from one type to another.
| Category | Functions |
| --- | --- |
| Generic | [`asOriginalType`](#asOriginalType) [`castFrom`](#castFrom) [`emplace`](#emplace) [`parse`](#parse) [`to`](#to) [`toChars`](#toChars) |
| Strings | [`text`](#text) [`wtext`](#wtext) [`dtext`](#dtext) [`hexString`](#hexString) |
| Numeric | [`octal`](#octal) [`roundTo`](#roundTo) [`signed`](#signed) [`unsigned`](#unsigned) |
| Exceptions | [`ConvException`](#ConvException) [`ConvOverflowException`](#ConvOverflowException) |
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com), [Andrei Alexandrescu](http://erdani.org), Shin Fujishiro, Adam D. Ruppe, Kenji Hara
Source
[std/conv.d](https://github.com/dlang/phobos/blob/master/std/conv.d)
class **ConvException**: object.Exception;
Thrown on conversion errors.
Examples:
```
import std.exception : assertThrown;
assertThrown!ConvException(to!int("abc"));
```
class **ConvOverflowException**: std.conv.ConvException;
Thrown on conversion overflow errors.
Examples:
```
import std.exception : assertThrown;
assertThrown!ConvOverflowException(to!ubyte(1_000_000));
```
template **to**(T)
The `to` template converts a value from one type to another. The source type is deduced and the target type must be specified, for example the expression `to!int(42.0)` converts the number 42 from `double` to `int`. The conversion is "safe", i.e., it checks for overflow; `to!int(4.2e10)` would throw the `ConvOverflowException` exception. Overflow checks are only inserted when necessary, e.g., `to!double(42)` does not do any checking because any `int` fits in a `double`.
Conversions from string to numeric types differ from the C equivalents `atoi()` and `atol()` by checking for overflow and not allowing whitespace.
For conversion of strings to signed types, the grammar recognized is:
```
Integer: Sign UnsignedInteger
UnsignedInteger
Sign:
+
-
```
For conversion to unsigned types, the grammar recognized is:
```
UnsignedInteger:
DecimalDigit
DecimalDigit UnsignedInteger
```
Examples:
Converting a value to its own type (useful mostly for generic code) simply returns its argument.
```
int a = 42;
int b = to!int(a);
double c = to!double(3.14); // c is double with value 3.14
```
Examples:
Converting among numeric types is a safe way to cast them around. Conversions from floating-point types to integral types allow loss of precision (the fractional part of a floating-point number). The conversion is truncating towards zero, the same way a cast would truncate. (To round a floating point value when casting to an integral, use `roundTo`.)
```
import std.exception : assertThrown;
int a = 420;
writeln(to!long(a)); // a
assertThrown!ConvOverflowException(to!byte(a));
writeln(to!int(4.2e6)); // 4200000
assertThrown!ConvOverflowException(to!uint(-3.14));
writeln(to!uint(3.14)); // 3
writeln(to!uint(3.99)); // 3
writeln(to!int(-3.99)); // -3
```
Examples:
When converting strings to numeric types, note that D hexadecimal and binary literals are not handled. Neither the prefixes that indicate the base, nor the underscore used to separate groups of digits are recognized. This also applies to the suffixes that indicate the type. To work around this, you can specify a radix for conversions involving numbers.
```
auto str = to!string(42, 16);
writeln(str); // "2A"
auto i = to!int(str, 16);
writeln(i); // 42
```
Examples:
Conversions from integral types to floating-point types always succeed, but might lose accuracy. The largest integers with a predecessor representable in floating-point format are `2^24-1` for `float`, `2^53-1` for `double`, and `2^64-1` for `real` (when `real` is 80-bit, e.g. on Intel machines).
```
// 2^24 - 1, largest proper integer representable as float
int a = 16_777_215;
writeln(to!int(to!float(a))); // a
writeln(to!int(to!float(-a))); // -a
```
Examples:
Conversion from string types to char types enforces the input to consist of a single code point, and said code point must fit in the target type. Otherwise, [`ConvException`](#ConvException) is thrown.
```
import std.exception : assertThrown;
writeln(to!char("a")); // 'a'
assertThrown(to!char("ñ")); // 'ñ' does not fit into a char
writeln(to!wchar("ñ")); // 'ñ'
assertThrown(to!wchar("😃")); // '😃' does not fit into a wchar
writeln(to!dchar("😃")); // '😃'
// Using wstring or dstring as source type does not affect the result
writeln(to!char("a"w)); // 'a'
writeln(to!char("a"d)); // 'a'
// Two code points cannot be converted to a single one
assertThrown(to!char("ab"));
```
Examples:
Converting an array to another array type works by converting each element in turn. Associative arrays can be converted to associative arrays as long as keys and values can in turn be converted.
```
import std.string : split;
int[] a = [1, 2, 3];
auto b = to!(float[])(a);
writeln(b); // [1.0f, 2, 3]
string str = "1 2 3 4 5 6";
auto numbers = to!(double[])(split(str));
writeln(numbers); // [1.0, 2, 3, 4, 5, 6]
int[string] c;
c["a"] = 1;
c["b"] = 2;
auto d = to!(double[wstring])(c);
assert(d["a"w] == 1 && d["b"w] == 2);
```
Examples:
Conversions operate transitively, meaning that they work on arrays and associative arrays of any complexity. This conversion works because `to!short` applies to an `int`, `to!wstring` applies to a `string`, `to!string` applies to a `double`, and `to!(double[])` applies to an `int[]`. The conversion might throw an exception because `to!short` might fail the range check.
```
int[string][double[int[]]] a;
auto b = to!(short[wstring][string[double[]]])(a);
```
Examples:
Object-to-object conversions by dynamic casting throw an exception when the source is non-null and the target is null.
```
import std.exception : assertThrown;
// Testing object conversions
class A {}
class B : A {}
class C : A {}
A a1 = new A, a2 = new B, a3 = new C;
assert(to!B(a2) is a2);
assert(to!C(a3) is a3);
assertThrown!ConvException(to!B(a3));
```
Examples:
Stringize conversion from all types is supported.
* String to string conversion works for any two string types having (`char`, `wchar`, `dchar`) character widths and any combination of qualifiers (mutable, `const`, or `immutable`).
* Converts array (other than strings) to string. Each element is converted by calling `to!T`.
* Associative array to string conversion. Each element is converted by calling `to!T`.
* Object to string conversion calls `toString` against the object or returns `"null"` if the object is null.
* Struct to string conversion calls `toString` against the struct if it is defined.
* For structs that do not define `toString`, the conversion to string produces the list of fields.
* Enumerated types are converted to strings as their symbolic names.
* Boolean values are converted to `"true"` or `"false"`.
* `char`, `wchar`, `dchar` to a string type.
* Unsigned or signed integers to strings.
[special case] Converts an integral value to a string in radix `radix`. `radix` must be a value from 2 to 36. The value is treated as signed only if `radix` is 10. The characters A through Z are used to represent values 10 through 35, and their case is determined by the `letterCase` parameter.
* All floating point types to all string types.
* Pointer to string conversions convert the pointer to a `size_t` value. If the pointer is `char*`, it is treated as a C-style string. In that case, this function is `@system`.
See [`std.format.formatValue`](std_format#formatValue) on how toString should be defined.
```
// Conversion representing dynamic/static array with string
long[] a = [ 1, 3, 5 ];
writeln(to!string(a)); // "[1, 3, 5]"
// Conversion representing associative array with string
int[string] associativeArray = ["0":1, "1":2];
assert(to!string(associativeArray) == `["0":1, "1":2]` ||
to!string(associativeArray) == `["1":2, "0":1]`);
// char* to string conversion
writeln(to!string(cast(char*)null)); // ""
writeln(to!string("foo\0".ptr)); // "foo"
// Conversion reinterpreting void array to string
auto w = "abcx"w;
const(void)[] b = w;
writeln(b.length); // 8
auto c = to!(wchar[])(b);
writeln(c); // "abcx"
```
template **roundTo**(Target)
Rounded conversion from floating point to integral.
Rounded conversions do not work with non-integral target types.
Examples:
```
writeln(roundTo!int(3.14)); // 3
writeln(roundTo!int(3.49)); // 3
writeln(roundTo!int(3.5)); // 4
writeln(roundTo!int(3.999)); // 4
writeln(roundTo!int(-3.14)); // -3
writeln(roundTo!int(-3.49)); // -3
writeln(roundTo!int(-3.5)); // -4
writeln(roundTo!int(-3.999)); // -4
writeln(roundTo!(const int)(to!(const double)(-3.999))); // -4
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source source)
Constraints: if (isInputRange!Source && isSomeChar!(ElementType!Source) && is(immutable(Target) == immutable(bool)));
The `parse` family of functions works quite like the `to` family, except that: 1. It only works with character ranges as input.
2. It takes the input by reference. (This means that rvalues - such as string literals - are not accepted: use `to` instead.)
3. It advances the input to the position following the conversion.
4. It does not throw if it could not convert the entire input.
This overload converts a character input range to a `bool`.
Parameters:
| | |
| --- | --- |
| Target | the type to convert to |
| Source `source` | the lvalue of an [input range](std_range_primitives#isInputRange) |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* A `bool` if `doCount` is set to `No.doCount`
* A `tuple` containing a `bool` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) if the range does not represent a `bool`.
Note
All character input range conversions using [`to`](#to) are forwarded to `parse` and do not require lvalues.
Examples:
```
import std.typecons : Flag, Yes, No;
auto s = "true";
bool b = parse!bool(s);
assert(b);
auto s2 = "true";
bool b2 = parse!(bool, string, No.doCount)(s2);
assert(b2);
auto s3 = "true";
auto b3 = parse!(bool, string, Yes.doCount)(s3);
assert(b3.data && b3.count == 4);
auto s4 = "falSE";
auto b4 = parse!(bool, string, Yes.doCount)(s4);
assert(!b4.data && b4.count == 5);
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s)
Constraints: if (isSomeChar!(ElementType!Source) && isIntegral!Target && !is(Target == enum));
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source source, uint radix)
Constraints: if (isSomeChar!(ElementType!Source) && isIntegral!Target && !is(Target == enum));
Parses a character [input range](std_range_primitives#isInputRange) to an integral value.
Parameters:
| | |
| --- | --- |
| Target | the integral type to convert to |
| Source `s` | the lvalue of an input range |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* A number of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing a number of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) If an overflow occurred during conversion or if no character of the input was meaningfully converted.
Examples:
```
import std.typecons : Flag, Yes, No;
string s = "123";
auto a = parse!int(s);
writeln(a); // 123
string s1 = "123";
auto a1 = parse!(int, string, Yes.doCount)(s1);
assert(a1.data == 123 && a1.count == 3);
// parse only accepts lvalues
static assert(!__traits(compiles, parse!int("123")));
```
Examples:
```
import std.string : tr;
import std.typecons : Flag, Yes, No;
string test = "123 \t 76.14";
auto a = parse!uint(test);
writeln(a); // 123
assert(test == " \t 76.14"); // parse bumps string
test = tr(test, " \t\n\r", "", "d"); // skip ws
writeln(test); // "76.14"
auto b = parse!double(test);
writeln(b); // 76.14
writeln(test); // ""
string test2 = "123 \t 76.14";
auto a2 = parse!(uint, string, Yes.doCount)(test2);
assert(a2.data == 123 && a2.count == 3);
assert(test2 == " \t 76.14");// parse bumps string
test2 = tr(test2, " \t\n\r", "", "d"); // skip ws
writeln(test2); // "76.14"
auto b2 = parse!(double, string, Yes.doCount)(test2);
assert(b2.data == 76.14 && b2.count == 5);
writeln(test2); // ""
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s)
Constraints: if (isSomeString!Source && !is(Source == enum) && is(Target == enum));
Takes a string representing an `enum` type and returns that type.
Parameters:
| | |
| --- | --- |
| Target | the `enum` type to convert to |
| Source `s` | the lvalue of the range to parse |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* An `enum` of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing an `enum` of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) if type `Target` does not have a member represented by `s`.
Examples:
```
import std.typecons : Flag, Yes, No, tuple;
enum EnumType : bool { a = true, b = false, c = a }
auto str = "a";
writeln(parse!EnumType(str)); // EnumType.a
auto str2 = "a";
writeln(parse!(EnumType, string, No.doCount)(str2)); // EnumType.a
auto str3 = "a";
writeln(parse!(EnumType, string, Yes.doCount)(str3)); // tuple(EnumType.a, 1)
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source source)
Constraints: if (isInputRange!Source && isSomeChar!(ElementType!Source) && !is(Source == enum) && isFloatingPoint!Target && !is(Target == enum));
Parses a character range to a floating point number.
Parameters:
| | |
| --- | --- |
| Target | a floating point type |
| Source `source` | the lvalue of the range to parse |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* A floating point number of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing a floating point number of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) if `source` is empty, if no number could be parsed, or if an overflow occurred.
Examples:
```
import std.math : approxEqual;
import std.typecons : Flag, Yes, No;
import std.math : isNaN, isInfinity;
auto str = "123.456";
assert(parse!double(str).approxEqual(123.456));
auto str2 = "123.456";
assert(parse!(double, string, No.doCount)(str2).approxEqual(123.456));
auto str3 = "123.456";
auto r = parse!(double, string, Yes.doCount)(str3);
assert(r.data.approxEqual(123.456));
writeln(r.count); // 7
auto str4 = "-123.456";
r = parse!(double, string, Yes.doCount)(str4);
assert(r.data.approxEqual(-123.456));
writeln(r.count); // 8
auto str5 = "+123.456";
r = parse!(double, string, Yes.doCount)(str5);
assert(r.data.approxEqual(123.456));
writeln(r.count); // 8
auto str6 = "inf0";
r = parse!(double, string, Yes.doCount)(str6);
assert(isInfinity(r.data) && r.count == 3 && str6 == "0");
auto str7 = "-0";
auto r2 = parse!(float, string, Yes.doCount)(str7);
assert(r2.data.approxEqual(0.0) && r2.count == 2);
auto str8 = "nan";
auto r3 = parse!(real, string, Yes.doCount)(str8);
assert(isNaN(r3.data) && r3.count == 3);
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s)
Constraints: if (isSomeString!Source && !is(Source == enum) && (staticIndexOf!(immutable(Target), immutable(dchar), immutable(ElementEncodingType!Source)) >= 0));
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s)
Constraints: if (!isSomeString!Source && isInputRange!Source && isSomeChar!(ElementType!Source) && isSomeChar!Target && (Target.sizeof >= ElementType!Source.sizeof) && !is(Target == enum));
Parsing one character off a range returns the first element and calls `popFront`.
Parameters:
| | |
| --- | --- |
| Target | the type to convert to |
| Source `s` | the lvalue of an [input range](std_range_primitives#isInputRange) |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* A character of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing a character of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) if the range is empty.
Examples:
```
import std.typecons : Flag, Yes, No;
auto s = "Hello, World!";
char first = parse!char(s);
writeln(first); // 'H'
writeln(s); // "ello, World!"
char second = parse!(char, string, No.doCount)(s);
writeln(second); // 'e'
writeln(s); // "llo, World!"
auto third = parse!(char, string, Yes.doCount)(s);
assert(third.data == 'l' && third.count == 1);
writeln(s); // "lo, World!"
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s)
Constraints: if (isInputRange!Source && isSomeChar!(ElementType!Source) && is(immutable(Target) == immutable(typeof(null))));
Parsing a character range to `typeof(null)` returns `null` if the range spells `"null"`. This function is case insensitive.
Parameters:
| | |
| --- | --- |
| Target | the type to convert to |
| Source `s` | the lvalue of an [input range](std_range_primitives#isInputRange) |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* `null` if `doCount` is set to `No.doCount`
* A `tuple` containing `null` and a `size_t` if `doCount` is set to `Yes.doCount`
Throws:
A [`ConvException`](#ConvException) if the range doesn't represent `null`.
Examples:
```
import std.exception : assertThrown;
import std.typecons : Flag, Yes, No;
alias NullType = typeof(null);
auto s1 = "null";
assert(parse!NullType(s1) is null);
writeln(s1); // ""
auto s2 = "NUll"d;
assert(parse!NullType(s2) is null);
writeln(s2); // ""
auto s3 = "nuLlNULl";
assert(parse!(NullType, string, No.doCount)(s3) is null);
auto r = parse!(NullType, string, Yes.doCount)(s3);
assert(r.data is null && r.count == 4);
auto m = "maybe";
assertThrown!ConvException(parse!NullType(m));
assertThrown!ConvException(parse!(NullType, string, Yes.doCount)(m));
assert(m == "maybe"); // m shouldn't change on failure
auto s = "NULL";
assert(parse!(const NullType)(s) is null);
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s, dchar lbracket = '[', dchar rbracket = ']', dchar comma = ',')
Constraints: if (isSomeString!Source && !is(Source == enum) && isDynamicArray!Target && !is(Target == enum));
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s, dchar lbracket = '[', dchar rbracket = ']', dchar comma = ',')
Constraints: if (isExactSomeString!Source && isStaticArray!Target && !is(Target == enum));
Parses an array from a string given the left bracket (default `'['`), right bracket (default `']'`), and element separator (default `','`). A trailing separator is allowed.
Parameters:
| | |
| --- | --- |
| Source `s` | The string to parse |
| dchar `lbracket` | the character that starts the array |
| dchar `rbracket` | the character that ends the array |
| dchar `comma` | the character that separates the elements of the array |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* An array of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing an array of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Examples:
```
import std.typecons : Flag, Yes, No;
auto s1 = `[['h', 'e', 'l', 'l', 'o'], "world"]`;
auto a1 = parse!(string[])(s1);
writeln(a1); // ["hello", "world"]
auto s2 = `["aaa", "bbb", "ccc"]`;
auto a2 = parse!(string[])(s2);
writeln(a2); // ["aaa", "bbb", "ccc"]
auto s3 = `[['h', 'e', 'l', 'l', 'o'], "world"]`;
auto len3 = s3.length;
auto a3 = parse!(string[], string, Yes.doCount)(s3);
writeln(a3.data); // ["hello", "world"]
writeln(a3.count); // len3
```
auto **parse**(Target, Source, Flag!"doCount" doCount = No.doCount)(ref Source s, dchar lbracket = '[', dchar rbracket = ']', dchar keyval = ':', dchar comma = ',')
Constraints: if (isSomeString!Source && !is(Source == enum) && isAssociativeArray!Target && !is(Target == enum));
Parses an associative array from a string given the left bracket (default `'['`), right bracket (default `']'`), key-value separator (default `':'`), and element separator (default `','`).
Parameters:
| | |
| --- | --- |
| Source `s` | the string to parse |
| dchar `lbracket` | the character that starts the associative array |
| dchar `rbracket` | the character that ends the associative array |
| dchar `keyval` | the character that associates the key with the value |
| dchar `comma` | the character that separates the elements of the associative array |
| doCount | the flag for deciding to report the number of consumed characters |
Returns:
* An associative array of type `Target` if `doCount` is set to `No.doCount`
* A `tuple` containing an associative array of type `Target` and a `size_t` if `doCount` is set to `Yes.doCount`
Examples:
```
import std.typecons : Flag, Yes, No, tuple;
import std.range.primitives : save;
import std.array : assocArray;
auto s1 = "[1:10, 2:20, 3:30]";
auto copyS1 = s1.save;
auto aa1 = parse!(int[int])(s1);
writeln(aa1); // [1:10, 2:20, 3:30]
// parse!(int[int], string, Yes.doCount)(copyS1)
writeln(tuple([1:10, 2:20, 3:30], copyS1.length));
auto s2 = `["aaa":10, "bbb":20, "ccc":30]`;
auto copyS2 = s2.save;
auto aa2 = parse!(int[string])(s2);
writeln(aa2); // ["aaa":10, "bbb":20, "ccc":30]
assert(tuple(["aaa":10, "bbb":20, "ccc":30], copyS2.length) ==
parse!(int[string], string, Yes.doCount)(copyS2));
auto s3 = `["aaa":[1], "bbb":[2,3], "ccc":[4,5,6]]`;
auto copyS3 = s3.save;
auto aa3 = parse!(int[][string])(s3);
writeln(aa3); // ["aaa":[1], "bbb":[2, 3], "ccc":[4, 5, 6]]
assert(tuple(["aaa":[1], "bbb":[2,3], "ccc":[4,5,6]], copyS3.length) ==
parse!(int[][string], string, Yes.doCount)(copyS3));
auto s4 = `[]`;
int[int] emptyAA;
writeln(tuple(emptyAA, s4.length)); // parse!(int[int], string, Yes.doCount)(s4)
```
string **text**(T...)(T args)
Constraints: if (T.length > 0);
wstring **wtext**(T...)(T args)
Constraints: if (T.length > 0);
dstring **dtext**(T...)(T args)
Constraints: if (T.length > 0);
Convenience functions for converting one or more arguments of any type into text (the three character widths).
Examples:
```
writeln(text(42, ' ', 1.5, ": xyz")); // "42 1.5: xyz"c
writeln(wtext(42, ' ', 1.5, ": xyz")); // "42 1.5: xyz"w
writeln(dtext(42, ' ', 1.5, ": xyz")); // "42 1.5: xyz"d
```
template **octal**(string num) if (isOctalLiteral(num))
template **octal**(alias decimalInteger) if (is(typeof(decimalInteger)) && isIntegral!(typeof(decimalInteger)))
The `octal` facility provides a means to declare a number in base 8. Use `octal!177` or `octal!"177"` for 127 represented in octal (the same as 0177 in C).
The rules for strings are the usual ones for literals: if the value fits in an `int`, it is an `int`; otherwise, it is a `long`. But if the user specifically asks for a `long` with the `L` suffix, always give the `long`. Give an unsigned iff it is asked for with the `U` or `u` suffix. Octals created from integers preserve the type of the passed-in integral.
See Also:
[`parse`](#parse) for parsing octal strings at runtime.
Examples:
```
// same as 0177
auto x = octal!177;
// octal is a compile-time device
enum y = octal!160;
// Create an unsigned octal
auto z = octal!"1_000_000u";
```
pure nothrow @safe T\* **emplace**(T)(T\* chunk);
Given a pointer `chunk` to uninitialized memory (but already typed as `T`), constructs an object of non-`class` type `T` at that address. If `T` is a class, initializes the class reference to null.
Returns:
A pointer to the newly constructed object (which is the same as `chunk`).
Examples:
```
static struct S
{
int i = 42;
}
S[2] s2 = void;
emplace(&s2);
assert(s2[0].i == 42 && s2[1].i == 42);
```
Examples:
```
interface I {}
class K : I {}
K k = void;
emplace(&k);
assert(k is null);
I i = void;
emplace(&i);
assert(i is null);
```
T\* **emplace**(T, Args...)(T\* chunk, auto ref Args args)
Constraints: if (is(T == struct) || Args.length == 1);
Given a pointer `chunk` to uninitialized memory (but already typed as a non-class type `T`), constructs an object of type `T` at that address from arguments `args`. If `T` is a class, initializes the class reference to `args[0]`.
This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
A pointer to the newly constructed object (which is the same as `chunk`).
Examples:
```
int a;
int b = 42;
writeln(*emplace!int(&a, b)); // 42
```
T **emplace**(T, Args...)(T chunk, auto ref Args args)
Constraints: if (is(T == class));
Given a raw memory area `chunk` (but already typed as a class type `T`), constructs an object of `class` type `T` at that address. The constructor is passed the arguments `Args`.
If `T` is an inner class whose `outer` field can be used to access an instance of the enclosing class, then `Args` must not be empty, and the first member of it must be a valid initializer for that `outer` field. Correct initialization of this field is essential to access members of the outer class inside `T` methods.
Note
This function is `@safe` if the corresponding constructor of `T` is `@safe`.
Returns:
The newly constructed object.
Examples:
```
() @safe {
class SafeClass
{
int x;
@safe this(int x) { this.x = x; }
}
auto buf = new void[__traits(classInstanceSize, SafeClass)];
auto support = (() @trusted => cast(SafeClass)(buf.ptr))();
auto safeClass = emplace!SafeClass(support, 5);
writeln(safeClass.x); // 5
class UnsafeClass
{
int x;
@system this(int x) { this.x = x; }
}
auto buf2 = new void[__traits(classInstanceSize, UnsafeClass)];
auto support2 = (() @trusted => cast(UnsafeClass)(buf2.ptr))();
static assert(!__traits(compiles, emplace!UnsafeClass(support2, 5)));
static assert(!__traits(compiles, emplace!UnsafeClass(buf2, 5)));
}();
```
T **emplace**(T, Args...)(void[] chunk, auto ref Args args)
Constraints: if (is(T == class));
Given a raw memory area `chunk`, constructs an object of `class` type `T` at that address. The constructor is passed the arguments `Args`.
If `T` is an inner class whose `outer` field can be used to access an instance of the enclosing class, then `Args` must not be empty, and the first member of it must be a valid initializer for that `outer` field. Correct initialization of this field is essential to access members of the outer class inside `T` methods.
Preconditions
`chunk` must be at least as large as `T` needs and should have an alignment multiple of `T`'s alignment. (The size of a `class` instance is obtained by using `__traits(classInstanceSize, T)`.)
Note
This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
The newly constructed object.
Examples:
```
static class C
{
int i;
this(int i){this.i = i;}
}
auto buf = new void[__traits(classInstanceSize, C)];
auto c = emplace!C(buf, 5);
writeln(c.i); // 5
```
T\* **emplace**(T, Args...)(void[] chunk, auto ref Args args)
Constraints: if (!is(T == class));
Given a raw memory area `chunk`, constructs an object of non-`class` type `T` at that address. The constructor is passed the arguments `args`, if any.
Preconditions
`chunk` must be at least as large as `T` needs and should have an alignment multiple of `T`'s alignment.
Note
This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
A pointer to the newly constructed object.
Examples:
```
struct S
{
int a, b;
}
auto buf = new void[S.sizeof];
S s;
s.a = 42;
s.b = 43;
auto s1 = emplace!S(buf, s);
assert(s1.a == 42 && s1.b == 43);
```
auto **unsigned**(T)(T x)
Constraints: if (isIntegral!T);
auto **unsigned**(T)(T x)
Constraints: if (isSomeChar!T);
Returns the corresponding unsigned value for `x` (e.g. if `x` has type `int`, it returns `cast(uint) x`). The advantage compared to the cast is that you do not need to rewrite the cast if `x` later changes type (e.g. from `int` to `long`).
Note that the result is always mutable even if the original type was const or immutable. In order to retain the constness, use [`std.traits.Unsigned`](std_traits#Unsigned).
Examples:
```
import std.traits : Unsigned;
immutable int s = 42;
auto u1 = unsigned(s); //not qualified
static assert(is(typeof(u1) == uint));
Unsigned!(typeof(s)) u2 = unsigned(s); //same qualification
static assert(is(typeof(u2) == immutable uint));
immutable u3 = unsigned(s); //explicitly qualified
```
auto **signed**(T)(T x)
Constraints: if (isIntegral!T);
Returns the corresponding signed value for `x` (e.g. if `x` has type `uint`, it returns `cast(int) x`). The advantage compared to the cast is that you do not need to rewrite the cast if `x` later changes type (e.g. from `uint` to `ulong`).
Note that the result is always mutable even if the original type was const or immutable. In order to retain the constness, use [`std.traits.Signed`](std_traits#Signed).
Examples:
```
import std.traits : Signed;
immutable uint u = 42;
auto s1 = signed(u); //not qualified
static assert(is(typeof(s1) == int));
Signed!(typeof(u)) s2 = signed(u); //same qualification
static assert(is(typeof(s2) == immutable int));
immutable s3 = signed(u); //explicitly qualified
```
OriginalType!E **asOriginalType**(E)(E value)
Constraints: if (is(E == enum));
Returns the representation of an enumerated value, i.e. the value converted to the base type of the enumeration.
Examples:
```
enum A { a = 42 }
static assert(is(typeof(A.a.asOriginalType) == int));
writeln(A.a.asOriginalType); // 42
enum B : double { a = 43 }
static assert(is(typeof(B.a.asOriginalType) == double));
writeln(B.a.asOriginalType); // 43
```
template **castFrom**(From)
A wrapper on top of the built-in cast operator that allows one to restrict casting of the original type of the value.
A common issue with using a raw cast is that it may silently continue to compile even if the value's type has changed during refactoring, which breaks the initial assumption about the cast.
Parameters:
| | |
| --- | --- |
| From | The type to cast from. The programmer must ensure it is legal to make this cast. |
Examples:
```
// Regular cast, which has been verified to be legal by the programmer:
{
long x;
auto y = cast(int) x;
}
// However this will still compile if 'x' is changed to be a pointer:
{
long* x;
auto y = cast(int) x;
}
// castFrom provides a more reliable alternative to casting:
{
long x;
auto y = castFrom!long.to!int(x);
}
// Changing the type of 'x' will now issue a compiler error,
// allowing bad casts to be caught before it's too late:
{
long* x;
static assert(
!__traits(compiles, castFrom!long.to!int(x))
);
// if cast is still needed, must be changed to:
auto y = castFrom!(long*).to!int(x);
}
```
ref @system auto **to**(To, T)(auto ref T value);
Parameters:
| | |
| --- | --- |
| To | The type to cast to. |
| T `value` | The value to cast. It must be of type `From`, otherwise a compile-time error is emitted. |
Returns:
the value after the cast, returned by reference if possible.
template **hexString**(string hexData) if (hexData.isHexLiteral)
template **hexString**(wstring hexData) if (hexData.isHexLiteral)
template **hexString**(dstring hexData) if (hexData.isHexLiteral)
Converts a hex literal to a string at compile time.
Takes a string made of hexadecimal digits and returns the matching string by converting each pair of digits to a character. The input string can also include whitespace characters, which can be used to keep the literal string readable in the source code.
The function is intended to replace the hexadecimal literal strings starting with `'x'`, which could be removed to simplify the core language.
Parameters:
| | |
| --- | --- |
| hexData | string to be converted. |
Returns:
a `string`, a `wstring` or a `dstring`, according to the type of hexData.
Examples:
```
// conversion at compile time
auto string1 = hexString!"304A314B";
writeln(string1); // "0J1K"
auto string2 = hexString!"304A314B"w;
writeln(string2); // "0J1K"w
auto string3 = hexString!"304A314B"d;
writeln(string3); // "0J1K"d
```
pure nothrow @nogc @safe auto **toChars**(ubyte radix = 10, Char = char, LetterCase letterCase = LetterCase.lower, T)(T value)
Constraints: if ((radix == 2 || radix == 8 || radix == 10 || radix == 16) && (is(immutable(T) == immutable(uint)) || is(immutable(T) == immutable(ulong)) || radix == 10 && (is(immutable(T) == immutable(int)) || is(immutable(T) == immutable(long)))));
Convert integer to a range of characters. Intended to be lightweight and fast.
Parameters:
| | |
| --- | --- |
| radix | 2, 8, 10, 16 |
| Char | character type for output |
| letterCase | lower for deadbeef, upper for DEADBEEF |
| T `value` | integer to convert. Can be uint or ulong. If radix is 10, can also be int or long. |
Returns:
A random access range with slicing and the usual range primitives
Examples:
```
import std.algorithm.comparison : equal;
assert(toChars(1).equal("1"));
assert(toChars(1_000_000).equal("1000000"));
assert(toChars!(2)(2U).equal("10"));
assert(toChars!(16)(255U).equal("ff"));
assert(toChars!(16, char, LetterCase.upper)(255U).equal("FF"));
```
std.json
========
JavaScript Object Notation
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Jeremie Pelletier, David Herberth
References
<http://json.org/>, <http://seriot.ch/parsing_json.html>
Source
[std/json.d](https://github.com/dlang/phobos/blob/master/std/json.d)
Examples:
```
import std.conv : to;
// parse a file or string of json into a usable structure
string s = `{ "language": "D", "rating": 3.5, "code": "42" }`;
JSONValue j = parseJSON(s);
// j and j["language"] return JSONValue,
// j["language"].str returns a string
writeln(j["language"].str); // "D"
writeln(j["rating"].floating); // 3.5
// check a type
long x;
if (const(JSONValue)* code = "code" in j)
{
if (code.type() == JSONType.integer)
x = code.integer;
else
x = to!int(code.str);
}
// create a json struct
JSONValue jj = [ "language": "D" ];
// rating doesn't exist yet, so use .object to assign
jj.object["rating"] = JSONValue(3.5);
// create an array to assign to list
jj.object["list"] = JSONValue( ["a", "b", "c"] );
// list already exists, so .object optional
jj["list"].array ~= JSONValue("D");
string jjStr = `{"language":"D","list":["a","b","c","D"],"rating":3.5}`;
writeln(jj.toString); // jjStr
```
enum **JSONFloatLiteral**: string;
String literals used to represent special float values within JSON strings.
**nan**
string representation of floating-point NaN
**inf**
string representation of floating-point Infinity
**negativeInf**
string representation of floating-point negative Infinity
enum **JSONOptions**: int;
Flags that control how json is encoded and parsed.
**none**
standard parsing
**specialFloatLiterals**
encode NaN and Inf float values as strings
**escapeNonAsciiChars**
encode non-ASCII characters with a Unicode escape sequence
**doNotEscapeSlashes**
do not escape slashes ('/')
**strictParsing**
Strictly follow RFC-8259 grammar when parsing
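For instance, here is a minimal sketch of round-tripping a NaN value with `specialFloatLiterals` (following the style of the other examples in this module):
```
import std.math : isNaN;

JSONValue v = JSONValue(double.nan);
// Without specialFloatLiterals, serializing NaN is an error;
// with the flag, NaN is encoded as the string "nan".
string s = v.toString(JSONOptions.specialFloatLiterals);
writeln(s); // "nan"
// The same flag lets the parser decode it back into a float value.
JSONValue parsed = parseJSON(s, JSONOptions.specialFloatLiterals);
assert(isNaN(parsed.floating));
```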
enum **JSONType**: byte;
JSON type enumeration
**null\_**
**string**
**integer**
**uinteger**
**float\_**
**array**
**object**
**true\_**
**false\_**
Indicates the type of a `JSONValue`.
struct **JSONValue**;
JSON value node
const pure nothrow @nogc @property @safe JSONType **type**();
Returns the JSONType of the value stored in this structure.
Examples:
```
string s = "{ \"language\": \"D\" }";
JSONValue j = parseJSON(s);
writeln(j.type); // JSONType.object
writeln(j["language"].type); // JSONType.string
```
const pure @property @trusted string **str**();
pure nothrow @nogc @property @safe string **str**(string v);
Value getter/setter for `JSONType.string`.
Throws:
`JSONException` for read access if `type` is not `JSONType.string`.
Examples:
```
JSONValue j = [ "language": "D" ];
// get value
writeln(j["language"].str); // "D"
// change existing key to new string
j["language"].str = "Perl";
writeln(j["language"].str); // "Perl"
```
const pure @property @safe long **integer**();
pure nothrow @nogc @property @safe long **integer**(long v);
Value getter/setter for `JSONType.integer`.
Throws:
`JSONException` for read access if `type` is not `JSONType.integer`.
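For example, a minimal sketch in the style of the `str` and `boolean` examples:
```
import std.exception : assertThrown;

JSONValue j = JSONValue(42);
writeln(j.integer); // 42
j.integer = -7;
writeln(j.integer); // -7
// read access throws if the stored type is not JSONType.integer
JSONValue s = JSONValue("text");
assertThrown!JSONException(s.integer);
```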
const pure @property @safe ulong **uinteger**();
pure nothrow @nogc @property @safe ulong **uinteger**(ulong v);
Value getter/setter for `JSONType.uinteger`.
Throws:
`JSONException` for read access if `type` is not `JSONType.uinteger`.
const pure @property @safe double **floating**();
pure nothrow @nogc @property @safe double **floating**(double v);
Value getter/setter for `JSONType.float_`. Note that despite the name, this is a **64**-bit `double`, not a 32-bit `float`.
Throws:
`JSONException` for read access if `type` is not `JSONType.float_`.
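For example, a minimal sketch:
```
JSONValue j = JSONValue(1.5);
writeln(j.floating); // 1.5
j.floating = 2.5; // assign a new double value
writeln(j.floating); // 2.5
```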
const pure @property @safe bool **boolean**();
pure nothrow @nogc @property @safe bool **boolean**(bool v);
Value getter/setter for boolean stored in JSON.
Throws:
`JSONException` for read access if `this.type` is not `JSONType.true_` or `JSONType.false_`.
Examples:
```
JSONValue j = true;
writeln(j.boolean); // true
j.boolean = false;
writeln(j.boolean); // false
j.integer = 12;
import std.exception : assertThrown;
assertThrown!JSONException(j.boolean);
```
inout pure @property ref @system inout(JSONValue[string]) **object**();
pure nothrow @nogc @property @safe JSONValue[string] **object**(JSONValue[string] v);
Value getter/setter for `JSONType.object`.
Throws:
`JSONException` for read access if `type` is not `JSONType.object`.
Note
this is @system because of the following pattern:
```
auto a = &(json.object());
json.uinteger = 0; // overwrite AA pointer
(*a)["hello"] = "world"; // segmentation fault
```
inout pure @property @trusted inout(JSONValue[string]) **objectNoRef**();
Value getter for `JSONType.object`. Unlike `object`, this retrieves the object by value and can be used in @safe code.
A caveat is that, if the returned value is null, modifications will not be visible:
```
JSONValue json;
json.object = null;
json.objectNoRef["hello"] = JSONValue("world");
assert("hello" !in json.object);
```
Throws:
`JSONException` for read access if `type` is not `JSONType.object`.
inout pure @property ref @system inout(JSONValue[]) **array**();
pure nothrow @nogc @property @safe JSONValue[] **array**(JSONValue[] v);
Value getter/setter for `JSONType.array`.
Throws:
`JSONException` for read access if `type` is not `JSONType.array`.
Note
this is @system because of the following pattern:
```
auto a = &(json.array());
json.uinteger = 0; // overwrite array pointer
(*a)[0] = "world"; // segmentation fault
```
inout pure @property @trusted inout(JSONValue[]) **arrayNoRef**();
Value getter for `JSONType.array`. Unlike `array`, this retrieves the array by value and can be used in @safe code.
A caveat is that, if you append to the returned array, the new values aren't visible in the `JSONValue`:
```
JSONValue json;
json.array = [JSONValue("hello")];
json.arrayNoRef ~= JSONValue("world");
assert(json.array.length == 1);
```
Throws:
`JSONException` for read access if `type` is not `JSONType.array`.
const pure nothrow @nogc @property @safe bool **isNull**();
Test whether the type is `JSONType.null_`
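For example, a minimal sketch:
```
JSONValue j; // a default-constructed JSONValue holds null
assert(j.isNull);
j = JSONValue(["language": "D"]);
assert(!j.isNull);
assert(parseJSON("null").isNull);
```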
inout const pure @property @safe inout(T) **get**(T)();
inout pure @property @trusted inout(T) **get**(T : JSONValue[string])();
Generic type value getter. A convenience getter that returns this `JSONValue` as the specified D type.
Note
only numeric, `bool`, `string`, `JSONValue[string]` and `JSONValue[]` types are accepted
Throws:
`JSONException` if `T` cannot hold the contents of this `JSONValue`; `ConvException` in case of integer overflow when converting to `T`.
Examples:
```
import std.exception;
import std.conv;
string s =
`{
"a": 123,
"b": 3.1415,
"c": "text",
"d": true,
"e": [1, 2, 3],
"f": { "a": 1 },
"g": -45,
"h": ` ~ ulong.max.to!string ~ `,
}`;
struct a { }
immutable json = parseJSON(s);
writeln(json["a"].get!double); // 123.0
writeln(json["a"].get!int); // 123
writeln(json["a"].get!uint); // 123
writeln(json["b"].get!double); // 3.1415
assertThrown!JSONException(json["b"].get!int);
writeln(json["c"].get!string); // "text"
writeln(json["d"].get!bool); // true
assertNotThrown(json["e"].get!(JSONValue[]));
assertNotThrown(json["f"].get!(JSONValue[string]));
static assert(!__traits(compiles, json["a"].get!a));
assertThrown!JSONException(json["e"].get!float);
assertThrown!JSONException(json["d"].get!(JSONValue[string]));
assertThrown!JSONException(json["f"].get!(JSONValue[]));
writeln(json["g"].get!int); // -45
assertThrown!ConvException(json["g"].get!uint);
writeln(json["h"].get!ulong); // ulong.max
assertThrown!ConvException(json["h"].get!uint);
assertNotThrown(json["h"].get!float);
```
this(T)(T arg)
Constraints: if (!isStaticArray!T);
this(T)(ref T arg)
Constraints: if (isStaticArray!T);
inout this(T : JSONValue)(inout T arg);
Constructor for `JSONValue`. If `arg` is a `JSONValue` its value and type will be copied to the new `JSONValue`. Note that this is a shallow copy: if type is `JSONType.object` or `JSONType.array` then only the reference to the data will be copied. Otherwise, `arg` must be implicitly convertible to one of the following types: `typeof(null)`, `string`, `ulong`, `long`, `double`, an associative array `V[K]` for any `V` and `K` (i.e. a JSON object), any array, or `bool`. The type will be set accordingly.
Examples:
```
JSONValue j = JSONValue( "a string" );
j = JSONValue(42);
j = JSONValue( [1, 2, 3] );
writeln(j.type); // JSONType.array
j = JSONValue( ["language": "D"] );
writeln(j.type); // JSONType.object
```
inout pure ref @safe inout(JSONValue) **opIndex**(size\_t i);
Array syntax for json arrays.
Throws:
`JSONException` if `type` is not `JSONType.array`.
Examples:
```
JSONValue j = JSONValue( [42, 43, 44] );
writeln(j[0].integer); // 42
writeln(j[1].integer); // 43
```
inout pure ref @safe inout(JSONValue) **opIndex**(string k);
Hash syntax for json objects.
Throws:
`JSONException` if `type` is not `JSONType.object`.
Examples:
```
JSONValue j = JSONValue( ["language": "D"] );
writeln(j["language"].str); // "D"
```
void **opIndexAssign**(T)(auto ref T value, string key);
Operator sets `value` for element of JSON object by `key`.
If JSON value is null, then operator initializes it with object and then sets `value` for it.
Throws:
`JSONException` if `type` is not `JSONType.object` or `JSONType.null_`.
Examples:
```
JSONValue j = JSONValue( ["language": "D"] );
j["language"].str = "Perl";
writeln(j["language"].str); // "Perl"
```
const @safe auto **opBinaryRight**(string op : "in")(string k);
Support for the `in` operator.
Tests whether a key can be found in an object.
Returns:
when found, the `const(JSONValue)*` that matches to the key, otherwise `null`.
Throws:
`JSONException` if the right hand side argument `JSONType` is not `object`.
Examples:
```
JSONValue j = [ "language": "D", "author": "walter" ];
string a = ("author" in j).str;
```
@system int **opApply**(scope int delegate(size\_t index, ref JSONValue) dg);
Implements the foreach `opApply` interface for json arrays.
@system int **opApply**(scope int delegate(string key, ref JSONValue) dg);
Implements the foreach `opApply` interface for json objects.
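A short sketch of both `foreach` forms:
```
JSONValue arr = JSONValue([10, 20, 30]);
foreach (size_t i, ref JSONValue v; arr)
    writeln(i, ": ", v.integer); // 0: 10 / 1: 20 / 2: 30
JSONValue obj = JSONValue(["language": "D"]);
foreach (string key, ref JSONValue v; obj)
    writeln(key, " = ", v.str); // language = D
```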
const @safe string **toString**(in JSONOptions options = JSONOptions.none);
Implicitly calls `toJSON` on this JSONValue.
*options* can be used to tweak the conversion behavior.
const void **toString**(Out)(Out sink, in JSONOptions options = JSONOptions.none);
const @safe string **toPrettyString**(in JSONOptions options = JSONOptions.none);
Implicitly calls `toJSON` on this JSONValue, like `toString`, but also passes *true* as the *pretty* argument.
*options* can be used to tweak the conversion behavior
const void **toPrettyString**(Out)(Out sink, in JSONOptions options = JSONOptions.none);
JSONValue **parseJSON**(T)(T json, int maxDepth = -1, JSONOptions options = JSONOptions.none)
Constraints: if (isInputRange!T && !isInfinite!T && isSomeChar!(ElementEncodingType!T));
Parses a serialized string and returns a tree of JSON values.
Throws:
[`JSONException`](#JSONException) if string does not follow the JSON grammar or the depth exceeds the max depth, [`ConvException`](#ConvException) if a number in the input cannot be represented by a native D type.
Parameters:
| | |
| --- | --- |
| T `json` | json-formatted string to parse |
| int `maxDepth` | maximum depth of nesting allowed, -1 disables depth checking |
| JSONOptions `options` | enable decoding string representations of NaN/Inf as float values |
JSONValue **parseJSON**(T)(T json, JSONOptions options)
Constraints: if (isInputRange!T && !isInfinite!T && isSomeChar!(ElementEncodingType!T));
Parses a serialized string and returns a tree of JSON values.
Throws:
[`JSONException`](#JSONException) if the depth exceeds the max depth.
Parameters:
| | |
| --- | --- |
| T `json` | json-formatted string to parse |
| JSONOptions `options` | enable decoding string representations of NaN/Inf as float values |
@safe string **toJSON**(ref const JSONValue root, in bool pretty = false, in JSONOptions options = JSONOptions.none);
Takes a tree of JSON values and returns the serialized string.
Any Object types will be serialized in a key-sorted order.
If `pretty` is false no whitespace is generated. If `pretty` is true the serialized string is formatted to be human-readable. Set the [`JSONOptions.specialFloatLiterals`](#JSONOptions.specialFloatLiterals) flag in `options` to encode NaN/Infinity as strings.
void **toJSON**(Out)(auto ref Out json, ref const JSONValue root, in bool pretty = false, in JSONOptions options = JSONOptions.none)
Constraints: if (isOutputRange!(Out, char));
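For example, a minimal sketch (object keys are serialized in sorted order, as noted above):
```
JSONValue j = JSONValue(["b": 2, "a": 1]);
writeln(toJSON(j)); // {"a":1,"b":2}
writeln(toJSON(j, true)); // same content, formatted over multiple lines
```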
class **JSONException**: object.Exception;
Exception thrown on JSON errors
Memory-Safe-D-Spec
==================
**Contents** 1. [Usage](#usage)
1. [Scope and Return Parameters](#scope-return-params)
2. [Limitations](#limitations)
*Memory Safety* for a program is defined as it being impossible for the program to corrupt memory. Therefore, the safe subset of D consists only of programming language features that are guaranteed to never result in memory corruption. See [this article](https://dlang.org/safed.html) for a rationale.
Memory-safe code [cannot use certain language features](function#function-safety), such as:
* Casts that break the type system.
* Modification of pointer values.
* Taking the address of a local variable or function parameter.
Usage
-----
There are three categories of functions from the perspective of memory safety:
* [`@safe`](function#safe-functions) functions
* [`@trusted`](function#trusted-functions) functions
* [`@system`](function#system-functions) functions
`@system` functions may perform any operation legal from the perspective of the language including inherently memory unsafe operations like returning pointers to expired stackframes. These functions may not be called directly from `@safe` functions.
`@trusted` functions have all the capabilities of `@system` functions but may be called from `@safe` functions. For this reason they should be very limited in the scope of their use. Typical uses of `@trusted` functions include wrapping system calls that take buffer pointer and length arguments separately so that `@safe` functions may call them with arrays.
`@safe` functions have a number of restrictions on what they may do and are intended to disallow operations that may cause memory corruption. See [`@safe` functions](function#safe-functions).
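As a sketch of the wrapping pattern described above (the C function here is hypothetical):
```
// Hypothetical C function taking a raw pointer/length pair.
extern (C) @system size_t fillBuffer(char* buf, size_t len);

// A @trusted wrapper exposes an array interface that @safe code can call.
// The author must verify that passing .ptr and .length together is sound.
@trusted size_t fill(char[] buf)
{
    return fillBuffer(buf.ptr, buf.length);
}

@safe void example()
{
    char[128] buf;
    auto n = fill(buf[]);
}
```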
These attributes may be inferred when the compiler has the function body available, such as with templates.
Array bounds checks are necessary to enforce memory safety, so these are enabled (by default) for `@safe` code even in **-release** mode.
### Scope and Return Parameters
The function parameter attributes `return` and `scope` are used to track what happens to low-level pointers passed to functions. Such pointers include: raw pointers, arrays, `this`, classes, `ref` parameters, delegate/lazy parameters, and aggregates containing a pointer.
`scope` ensures that no references to the pointed-to object are retained, in global variables or pointers passed to the function (and recursively to other functions called in the function), as a result of calling the function. Variables in the function body and parameter list that are `scope` may have their allocations elided as a result.
`return` indicates that either the return value of the function or the first parameter is a pointer derived from the `return` parameter or any other parameters also marked `return`. For constructors, `return` applies to the (implicitly returned) `this` reference. For void functions, `return` applies to the first parameter *iff* it is `ref`; this is to support UFCS, property setters and non-member functions (e.g. `put` used like `put(dest, source)`).
These attributes may appear after the formal parameter list, in which case they apply either to a method's `this` parameter, or to a free function's first parameter *iff* it is `ref`. `return` or `scope` is ignored when applied to a type that is not a low-level pointer.
**Note:** Checks for `scope` parameters are currently enabled only for `@safe` code compiled with the `-dip1000` command-line flag.
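A minimal sketch of these attributes in use (compile with `-dip1000` to see the checks in `@safe` code):
```
@safe:

int* global;

void consume(scope int* p)
{
    // global = p; // error with -dip1000: scope variable p assigned to global
}

int* identity(return scope int* p)
{
    return p; // ok: the result is known to be derived from p
}
```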
Limitations
-----------
Memory safety does not imply that code is portable, uses only sound programming practices, is free of byte order dependencies, or other bugs. It is focused only on eliminating memory corruption possibilities.
Arrays
======
**Contents** 1. [Kinds](#array-kinds)
1. [Pointers](#pointers)
2. [Static Arrays](#static-arrays)
3. [Dynamic Arrays](#dynamic-arrays)
2. [Array Declarations](#declarations)
3. [Array Usage](#usage)
4. [Indexing](#indexing)
5. [Slicing](#slicing)
6. [Array Copying](#array-copying)
1. [Overlapping Copying](#overlapping-copying)
7. [Array Setting](#array-setting)
8. [Array Concatenation](#array-concatenation)
9. [Array Operations](#array-operations)
10. [Pointer Arithmetic](#pointer-arithmetic)
11. [Rectangular Arrays](#rectangular-arrays)
12. [Array Length](#array-length)
13. [Array Properties](#array-properties)
1. [Setting Dynamic Array Length](#resize)
2. [Functions as Array Properties](#func-as-property)
14. [Array Bounds Checking](#bounds)
1. [Disabling Array Bounds Checking](#disable-bounds-check)
15. [Array Initialization](#array-initialization)
1. [Default Initialization](#default-initialization)
2. [Void Initialization](#void-initialization)
3. [Static Initialization of Statically Allocated Arrays](#static-init-static)
16. [Special Array Types](#special-array)
1. [Strings](#strings)
2. [Void Arrays](#void_arrays)
17. [Implicit Conversions](#implicit-conversions)
Kinds
-----
There are four kinds of arrays:
Kinds of Arrays
| **Syntax** | **Description** |
| --- | --- |
| *type*\* | [Pointers to data](#pointers) |
| *type*[*integer*] | [Static arrays](#static-arrays) |
| *type*[] | [Dynamic arrays](#dynamic-arrays) |
| *type*[*type*] | [Associative arrays](hash-map) |
### Pointers
```
int* p;
```
A pointer to type *T* has a value which is a reference (address) to another object of type *T*. It is commonly called a *pointer to T*.
If a pointer contains a *null* value, it is not pointing to a valid object.
When a pointer to *T* is dereferenced, it must either contain a *null* value, or point to a valid object of type *T*.
**Implementation Defined:** 1. The behavior when a *null* pointer is dereferenced. Typically the program will be aborted.
**Undefined Behavior:** 1. Dereferencing a pointer that is not *null* and does not point to a valid object of type *T*.
**Best Practices:** These are simple pointers to data. Pointers are provided for interfacing with C and for specialized systems work. There is no length associated with a pointer, so there is no way for the compiler or runtime to do bounds checking, etc., on it. Most conventional uses for pointers can be replaced with dynamic arrays, `out` and `ref` parameters, and reference types.
### Static Arrays
```
int[3] s;
```
Static arrays have a length fixed at compile time.
The total size of a static array cannot exceed 16MB.
A static array with a dimension of 0 is allowed, but no space is allocated for it.
Static arrays are value types. They are passed to and returned by functions by value.
**Best Practices:** 1. Use dynamic arrays for larger arrays.
2. Static arrays with 0 elements are useful as the last member of a variable length struct, or as the degenerate case of a template expansion.
3. Because static arrays are passed to functions by value, a larger array can consume a lot of stack space. Use dynamic arrays instead.
### Dynamic Arrays
```
int[] a;
```
Dynamic arrays consist of a length and a pointer to the array data. Multiple dynamic arrays can share all or parts of the array data.
**Best Practices:** 1. Use dynamic arrays instead of pointers to arrays as much as practical. Indexing of dynamic arrays are bounds checked, avoiding buffer underflow and overflow problems.
Array Declarations
------------------
Declarations appear before the identifier being declared and read right to left, so:
```
int[] a; // dynamic array of ints
int[4][3] b; // array of 3 arrays of 4 ints each
int[][5] c; // array of 5 dynamic arrays of ints.
int*[]*[3] d; // array of 3 pointers to dynamic arrays of pointers to ints
int[]* e; // pointer to dynamic array of ints
```
Array Usage
-----------
There are two broad kinds of operations to do on an array - affecting the handle to the array, and affecting the contents of the array.
The handle to an array is specified by naming the array, as in p, s or a:
```
int* p;
int[3] s;
int[] a;
int[] b;
p = s.ptr; // p points to the first element of the array s.
p = a.ptr; // p points to the first element of the array a.
a = p; // error, since the length of the array pointed
// to by p is unknown
a = s; // a is initialized to point to the s array
a = b; // a points to the same array as b does
```
Indexing
--------
See also [*IndexExpression*](expression#IndexExpression).
Slicing
-------
*Slicing* an array means to specify a subarray of it. An array slice does not copy the data, it is only another reference to it. For example:
```
void foo(int value)
{
writeln("value=", value);
}
int[10] a; // declare array of 10 ints
int[] b;
b = a[1..3]; // a[1..3] is a 2 element array consisting of
// a[1] and a[2]
foo(b[1]); // equivalent to foo(0)
a[2] = 3;
foo(b[1]); // equivalent to foo(3)
```
The [] is shorthand for a slice of the entire array. For example, the assignments to b:
```
int[10] a = [ 1,2,3,4,5,6,7,8,9,10 ];
int[] b1, b2, b3, b4;
b1 = a;
b2 = a[];
b3 = a[0 .. a.length];
b4 = a[0 .. $];
writeln(b1);
writeln(b2);
writeln(b3);
writeln(b4);
```
are all semantically equivalent.
Slicing is not only handy for referring to parts of other arrays, but for converting pointers into bounds-checked arrays:
```
int[10] a = [ 1,2,3,4,5,6,7,8,9,10 ];
int* p = &a[2];
int[] b = p[0..8];
writeln(b);
writeln(p[7]); // 10
writeln(p[8]); // undefined behaviour
writeln(b[7]); // 10
//writeln(b[8]); // range error
```
See also [*SliceExpression*](expression#SliceExpression).
Array Copying
-------------
When the slice operator appears as the left-hand side of an assignment expression, it means that the contents of the array are the target of the assignment rather than a reference to the array. Array copying happens when the left-hand side is a slice, and the right-hand side is an array of or pointer to the same type.
```
int[3] s;
int[3] t;
s = t; // the 3 elements of t are copied into s
s[] = t; // the 3 elements of t are copied into s
s[] = t[]; // the 3 elements of t are copied into s
s[1..2] = t[0..1]; // same as s[1] = t[0]
s[0..2] = t[1..3]; // same as s[0] = t[1], s[1] = t[2]
s[0..4] = t[0..4]; // error, only 3 elements in s
s[0..2] = t; // error, operands have different lengths
```
### Overlapping Copying
Overlapping copies are an error:
```
s[0..2] = s[1..3]; // error, overlapping copy
s[1..3] = s[0..2]; // error, overlapping copy
```
Disallowing overlapping makes it possible for more aggressive parallel code optimizations than possible with the serial semantics of C.
If overlapping is required, use [`std.algorithm.mutation.copy`](https://dlang.org/phobos/std_algorithm_mutation.html#copy):
```
import std.algorithm;
int[] s = [1, 2, 3, 4];
copy(s[1..3], s[0..2]);
assert(s == [2, 3, 3, 4]);
```
Array Setting
-------------
If a slice operator appears as the left-hand side of an assignment expression, and the type of the right-hand side is the same as the element type of the left-hand side, then the array contents of the left-hand side are set to the right-hand side.
```
int[3] s;
int* p;
s[] = 3; // same as s[0] = 3, s[1] = 3, s[2] = 3
p[0..2] = 3; // same as p[0] = 3, p[1] = 3
```
Array Concatenation
-------------------
The binary operator ~ is the *cat* operator. It is used to concatenate arrays:
```
int[] a;
int[] b;
int[] c;
a = b ~ c; // Create an array from the concatenation
// of the b and c arrays
```
Many languages overload the + operator to mean concatenation. This leads to confusion: does
```
"10" + 3 + 4
```
produce the number 17, the string "1034", or the string "107"? It isn't obvious, and the language designers wind up carefully writing rules to disambiguate it - rules that get incorrectly implemented, overlooked, forgotten, and ignored. It's much better to have + mean addition, and a separate operator for array concatenation.
Similarly, the ~= operator means append, as in:
```
a ~= b; // a becomes the concatenation of a and b
```
Concatenation always creates a copy of its operands, even if one of the operands is a 0 length array, so:
```
a = b; // a refers to b
a = b ~ c[0..0]; // a refers to a copy of b
```
Appending does not always create a copy, see [setting dynamic array length](#resize) for details.
Array Operations
----------------
Many array operations, also known as vector operations, can be expressed at a high level rather than as a loop. For example, the loop:
```
T[] a, b;
...
for (size_t i = 0; i < a.length; i++)
a[i] = b[i] + 4;
```
assigns to the elements of `a` the elements of `b` with `4` added to each. This can also be expressed in vector notation as:
```
T[] a, b;
...
a[] = b[] + 4;
```
A vector operation is indicated by the slice operator appearing as the left-hand side of an assignment or an op-assignment expression. The right-hand side can be an expression consisting either of an array slice of the same length and type as the left-hand side or a scalar expression of the element type of the left-hand side, in any combination.
```
T[] a, b, c;
...
a[] -= (b[] + 4) * c[];
```
The slice on the left and any slices on the right must not overlap. All operands are evaluated exactly once, even if the array slice has zero elements in it.
The order in which the array elements are computed is implementation defined, and may even occur in parallel. An application must not depend on this order.
Implementation note: Many vector operations are expected to take advantage of any vector math instructions available on the target computer.
Pointer Arithmetic
------------------
```
int[3] abc; // static array of 3 ints
int[] def = [ 1, 2, 3 ]; // dynamic array of 3 ints
void dibb(int* array)
{
array[2]; // means same thing as *(array + 2)
*(array + 2); // get 3rd element
}
void diss(int[] array)
{
array[2]; // ok
*(array + 2); // error, array is not a pointer
}
void ditt(int[3] array)
{
array[2]; // ok
*(array + 2); // error, array is not a pointer
}
```
Rectangular Arrays
------------------
Experienced FORTRAN numerics programmers know that multidimensional "rectangular" arrays for things like matrix operations are much faster than trying to access them via pointers to pointers resulting from "array of pointers to array" semantics. For example, the D syntax:
```
double[][] matrix;
```
declares matrix as an array of pointers to arrays. (Dynamic arrays are implemented as pointers to the array data.) Since the arrays can have varying sizes (being dynamically sized), these are sometimes called "jagged" arrays. Even worse for optimizing the code, the array rows can sometimes point to each other! Fortunately, D static arrays, while using the same syntax, are implemented as a fixed rectangular layout in a contiguous block of memory:
```
double[6][3] matrix = 0; // Sets all elements to 0.
writeln(matrix); // [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
```
Note that dimensions and indices appear in opposite orders. Dimensions in the [declaration](#declarations) are read right to left whereas indices are read left to right:
```
double[6][3] matrix = 0;
matrix[2][5] = 3.14; // Assignment to bottom right element.
writeln(matrix); // [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 3.14]]
static assert(!__traits(compiles, matrix[5][2])); // Array index out of bounds.
```
Array Length
------------
Within the [ ] of a static or a dynamic array, the symbol `$` represents the length of the array.
```
int[4] foo;
int[] bar = foo;
int* p = &foo[0];
// These expressions are equivalent:
bar[]
bar[0 .. 4]
bar[0 .. $]
bar[0 .. bar.length]
p[0 .. $] // '$' is not defined, since p is not an array
bar[0]+$ // '$' is not defined, out of scope of [ ]
bar[$-1] // retrieves last element of the array
```
Array Properties
----------------
Static array properties are:
Static Array Properties
| **Property** | **Description** |
| --- | --- |
| `.init` | Returns an array literal with each element of the literal being the `.init` property of the array element type. |
| `.sizeof` | Returns the array length multiplied by the number of bytes per array element. |
| `.length` | Returns the number of elements in the array. This is a fixed quantity for static arrays. It is of type `size_t`. |
| `.ptr` | Returns a pointer to the first element of the array. |
| `.dup` | Create a dynamic array of the same size and copy the contents of the array into it. The copy will have any immutability or const stripped. If this conversion is invalid the call will not compile. |
| `.idup` | Create a dynamic array of the same size and copy the contents of the array into it. The copy is typed as being immutable. If this conversion is invalid the call will not compile. |
Dynamic array properties are:
Dynamic Array Properties
| **Property** | **Description** |
| --- | --- |
| `.init` | Returns `null`. |
| `.sizeof` | Returns the size of the dynamic array reference, which is 8 in 32-bit builds and 16 in 64-bit builds. |
| `.length` | Get/set number of elements in the array. It is of type `size_t`. |
| `.ptr` | Returns a pointer to the first element of the array. |
| `.dup` | Create a dynamic array of the same size and copy the contents of the array into it. The copy will have any immutability or const stripped. If this conversion is invalid the call will not compile. |
| `.idup` | Create a dynamic array of the same size and copy the contents of the array into it. The copy is typed as being immutable. If this conversion is invalid the call will not compile. |
Examples:
```
int* p;
int[3] s;
int[] a;
p.length; // error, length not known for pointer
s.length; // compile time constant 3
a.length; // runtime value
p.dup; // error, length not known
s.dup; // creates an array of 3 elements, copies
// elements s into it
a.dup; // creates an array of a.length elements, copies
// elements of a into it
```
### Setting Dynamic Array Length
The `.length` property of a dynamic array can be set as the left-hand side of an = operator:
```
array.length = 7;
```
This may cause the array to be reallocated, with the existing contents copied over to the new array. If the new array length is shorter, the array is not reallocated, and no data is copied. It is equivalent to slicing the array:
```
array = array[0..7];
```
If the new array length is longer, the remainder is filled out with the default initializer.
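For example:
```
int[] a = [1, 2, 3];
a.length = 5;
writeln(a); // [1, 2, 3, 0, 0] - new elements are set to int.init
a.length = 2;
writeln(a); // [1, 2] - no reallocation, no copying
```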
To maximize efficiency, the runtime always tries to resize the array in place to avoid extra copying. It will always do a copy if the new size is larger and the array was not allocated via the new operator or resizing in place would overwrite valid data in the array. For example:
```
char[] a = new char[20];
char[] b = a[0..10];
char[] c = a[10..20];
char[] d = a;
b.length = 15; // always reallocates because extending in place would
// overwrite other data in a.
b[11] = 'x'; // a[11] and c[1] are not affected
d.length = 1;
d.length = 20; // also reallocates, because doing this will overwrite a and
// c
c.length = 12; // may reallocate in place if space allows, because nothing
// was allocated after c.
c[5] = 'y'; // may affect contents of a, but not b or d because those
// were reallocated.
a.length = 25; // This always reallocates because if c extended in place,
// then extending a would overwrite c. If c didn't
// reallocate in place, it means there was not enough space,
// which will still be true for a.
a[15] = 'z'; // does not affect c, because either a or c has reallocated.
```
To guarantee copying behavior, use the .dup property to ensure a unique array that can be resized. Also, one may use the `.capacity` property to determine how many elements can be appended to the array without reallocating.
These issues also apply to appending arrays with the ~= operator. Concatenation using the ~ operator is not affected since it always reallocates.
Resizing a dynamic array is a relatively expensive operation. So, while the following method of filling an array:
```
int[] array;
while (1)
{
import core.stdc.stdio : getchar, EOF;
auto c = getchar;
if (c == EOF)
break;
++array.length;
array[array.length - 1] = c;
}
```
will work, it will be inefficient. A more practical approach would be to minimize the number of resizes:
```
int[] array;
array.length = 100; // guess
int i;
for (i = 0; ; i++)
{
import core.stdc.stdio : getchar, EOF;
auto c = getchar;
if (c == EOF)
break;
if (i == array.length)
array.length *= 2;
array[i] = c;
}
array.length = i;
```
Base selection of the initial size on expected common use cases, which can be determined by instrumenting the code, or simply by using good judgement. For example, when gathering user input from the console, a line is unlikely to be longer than 80 characters.
The `reserve` function expands an array's capacity for use by the append operator.
```
int[] array;
size_t cap = array.reserve(10); // request
array ~= [1, 2, 3, 4, 5];
assert(cap >= 10); // allocated may be more than request
assert(cap == array.capacity);
```
### Functions as Array Properties
See [Uniform Function Call Syntax (UFCS)](https://dlang.org/function.html#pseudo-member).
Array Bounds Checking
---------------------
It is an error to index an array with an index that is less than 0 or greater than or equal to the array length. If an index is out of bounds, a RangeError exception is raised if detected at runtime, and an error if detected at compile time. A program may not rely on array bounds checking happening, for example, the following program is incorrect:
```
import core.exception;
try
{
auto array = [1, 2];
for (auto i = 0; ; i++)
{
array[i] = 5;
}
}
catch (RangeError)
{
// terminate loop
}
```
The loop is correctly written:
```
auto array = [1, 2];
for (auto i = 0; i < array.length; i++)
{
array[i] = 5;
}
```
**Implementation Note:** Compilers should attempt to detect array bounds errors at compile time, for example:
```
int[3] foo;
int x = foo[3]; // error, out of bounds
```
Insertion of array bounds checking code at runtime should be turned on and off with a compile time switch.
**Undefined Behavior:** An out of bounds memory access will cause undefined behavior, therefore array bounds checking is normally enabled in `@safe` functions. The runtime behavior is part of the language semantics. See also [Safe Functions](https://dlang.org/function.html#safe-functions).
### Disabling Array Bounds Checking
Insertion of array bounds checking code at runtime may be turned off with a compiler switch [`-boundscheck`](https://dlang.org/dmd.html#switch-boundscheck).
If the bounds check in `@system` or `@trusted` code is disabled, the code correctness must still be guaranteed by the code author.
On the other hand, disabling the bounds check in `@safe` code will break the memory safety guaranteed by the compiler. It's not recommended unless motivated by speed measurements.
Array Initialization
--------------------
### Default Initialization
* Pointers are initialized to `null`.
* Static array contents are initialized to the default initializer for the array element type.
* Dynamic arrays are initialized to having 0 elements.
* Associative arrays are initialized to having 0 elements.
### Void Initialization
Void initialization happens when the *Initializer* for an array is `void`. What it means is that no initialization is done, i.e. the contents of the array will be undefined. This is most useful as an efficiency optimization. Void initializations are an advanced technique and should only be used when profiling indicates that it matters.
To void-initialize the elements of a dynamic array, use [`std.array.uninitializedArray`](https://dlang.org/phobos/std_array.html#uninitializedArray).
### Static Initialization of Statically Allocated Arrays
Static initializations are supplied by a list of array element values enclosed in [ ]. The values can optionally be preceded by an index and a :. If an index is not supplied, it is set to the previous index plus 1, or 0 if it is the first value.
```
int[3] a = [ 1:2, 3 ]; // a[0] = 0, a[1] = 2, a[2] = 3
```
This is most handy when the array indices are given by enums:
```
enum Color { red, blue, green }
int[Color.max + 1] value =
[ Color.blue :6,
Color.green:2,
Color.red :5 ];
```
These arrays are statically allocated when they appear in global scope. Otherwise, they need to be marked with `const` or `static` storage classes to make them statically allocated arrays.
Special Array Types
-------------------
### Strings
A string is an array of characters. String literals are just an easy way to write character arrays. String literals are immutable (read only).
```
char[] str1 = "abc"; // error, "abc" is not mutable
char[] str2 = "abc".dup; // ok, make mutable copy
immutable(char)[] str3 = "abc"; // ok
immutable(char)[] str4 = str1; // error, str4 is not mutable
immutable(char)[] str5 = str1.idup; // ok, make immutable copy
```
The name `string` is aliased to `immutable(char)[]`, so the above declarations could be equivalently written as:
```
char[] str1 = "abc"; // error, "abc" is not mutable
char[] str2 = "abc".dup; // ok, make mutable copy
string str3 = "abc"; // ok
string str4 = str1; // error, str4 is not mutable
string str5 = str1.idup; // ok, make immutable copy
```
`char[]` strings are in UTF-8 format. `wchar[]` strings are in UTF-16 format. `dchar[]` strings are in UTF-32 format.
Strings can be copied, compared, concatenated, and appended:
```
str1 = str2;
if (str1 < str3) { ... }
func(str3 ~ str4);
str4 ~= str1;
```
with the obvious semantics. Any generated temporaries get cleaned up by the garbage collector (or by using `alloca()`). Not only that, this works with any array, not just a special String array.
A pointer to a char can be generated:
```
char* p = &str[3]; // pointer to 4th element
char* q = str.ptr; // pointer to 1st element
```
Since strings, however, are not 0 terminated in D, when transferring a pointer to a string to C, add a terminating 0:
```
str ~= "\0";
```
or use the function `std.string.toStringz`.
The type of a string is determined by the semantic phase of compilation. The type is one of: char[], wchar[], dchar[], and is determined by implicit conversion rules. If there are two equally applicable implicit conversions, the result is an error. To disambiguate these cases, a cast or a postfix of `c`, `w` or `d` can be used:
```
cast(immutable(wchar) [])"abc" // this is an array of wchar characters
"abc"w // so is this
```
String literals that do not have a postfix character and that have not been cast can be implicitly converted between string, wstring, and dstring as necessary.
```
char c;
wchar w;
dchar d;
c = 'b'; // c is assigned the character 'b'
w = 'b'; // w is assigned the wchar character 'b'
//w = 'bc'; // error - only one wchar character at a time
w = "b"[0]; // w is assigned the wchar character 'b'
w = "\r"[0]; // w is assigned the carriage return wchar character
d = 'd'; // d is assigned the character 'd'
```
#### Strings and Unicode
Note that built-in comparison operators operate on a [code unit](http://www.unicode.org/glossary/#code_unit) basis. The end result for valid strings is the same as that of [code point](http://www.unicode.org/glossary/#code_point) for [code point](http://www.unicode.org/glossary/#code_point) comparison as long as both strings are in the same [normalization form](http://www.unicode.org/glossary/#normalization_form). Since normalization is a costly operation not suitable for language primitives it's assumed to be enforced by the user.
The standard library lends a hand for comparing strings with mixed encodings (by transparently decoding, see [`std.algorithm.cmp`](https://dlang.org/phobos/std_algorithm.html#cmp)), [case-insensitive comparison](https://dlang.org/phobos/std_uni.html#icmp) and [normalization](https://dlang.org/phobos/std_uni.html#normalize).
Last but not least, a desired string sorting order differs by culture and language and is usually nothing like code point for code point comparison. The natural order of strings is obtained by applying [the Unicode collation algorithm](http://www.unicode.org/reports/tr10/) that should be implemented in the standard library.
#### C's printf() and Strings
`printf()` is a C function and is not part of D. `printf()` will print C strings, which are 0 terminated. There are two ways to use `printf()` with D strings. The first is to add a terminating 0, and cast the result to a char\*:
```
str ~= "\0";
printf("the string is '%s'\n", str.ptr);
```
or:
```
import std.string;
printf("the string is '%s'\n", std.string.toStringz(str));
```
String literals already have a 0 appended to them, so can be used directly:
```
printf("the string is '%s'\n", "string literal".ptr);
```
So why does the first string literal passed to printf not need the `.ptr`? The first parameter is prototyped as a `const(char)*`, and a string literal can be implicitly converted to a `const(char)*`. The rest of the arguments to printf, however, are variadic (specified by ...), and a string literal is passed to variadic parameters as a (length, pointer) pair rather than a zero-terminated pointer.
The second way is to use the precision specifier. The length comes first, followed by the pointer:
```
printf("the string is '%.*s'\n", str.length, str.ptr);
```
The best way is to use std.stdio.writefln, which can handle D strings:
```
import std.stdio;
writefln("the string is '%s'", str);
```
### Void Arrays
There is a special type of array which acts as a wildcard that can hold arrays of any kind, declared as `void[]`. Void arrays are used for low-level operations where some kind of array data is being handled, but the exact type of the array elements is unimportant. The `.length` of a void array is the length of the data in bytes, rather than the number of elements in its original type. Array indices in indexing and slicing operations are interpreted as byte indices.
Arrays of any type can be implicitly converted to a void array; the compiler inserts the appropriate calculations so that the `.length` of the resulting array is in bytes rather than the number of elements. Void arrays cannot be converted back to the original type without using a cast, and it is an error to convert to an array type whose element size does not evenly divide the length of the void array.
```
void main()
{
int[] data1 = [1,2,3];
long[] data2;
void[] arr = data1; // OK, int[] implicit converts to void[].
assert(data1.length == 3);
assert(arr.length == 12); // length is implicitly converted to bytes.
//data1 = arr; // Illegal: void[] does not implicitly
// convert to int[].
int[] data3 = cast(int[]) arr; // OK, can convert with explicit cast.
data2 = cast(long[]) arr; // Runtime error: long.sizeof == 8, which
// does not divide arr.length, which is 12
// bytes.
}
```
Void arrays can also be static if their length is known at compile-time. The length is specified in bytes:
```
void main()
{
byte[2] x;
int[2] y;
void[2] a = x; // OK, lengths match
void[2] b = y; // Error: int[2] is 8 bytes long, doesn't fit in 2 bytes.
}
```
While it may seem that void arrays are just fancy syntax for `ubyte[]`, there is a subtle distinction. The garbage collector generally will not scan `ubyte[]` arrays for pointers, `ubyte[]` being presumed to contain only pure byte data, not pointers. However, it *will* scan `void[]` arrays for pointers, since such an array may have been implicitly converted from an array of pointers or an array of elements that contain pointers. Allocating an array that contains pointers as `ubyte[]` may run the risk of the GC collecting live memory if these pointers are the only remaining references to their targets.
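A minimal sketch of the hazard (deliberately unsafe, for illustration only):
```
void main()
{
    // This block is allocated as ubyte[], so the GC records it as
    // pointer-free and will never scan it.
    ubyte[] buf = new ubyte[(int*).sizeof];
    int* p = new int;
    *p = 42;
    // Hiding the only reference to *p inside buf is unsafe: a later
    // collection may free the int because the GC does not scan buf.
    *cast(int**) buf.ptr = p;
}
```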
Implicit Conversions
--------------------
A pointer `T*` can be implicitly converted to one of the following:
* `void*`
A static array `T[dim]` can be implicitly converted to one of the following:
* `T[]`
* `const(U)[]`
* `const(U[])`
* `void[]`
A dynamic array `T[]` can be implicitly converted to one of the following (`U` is a base class of `T`):
* `const(U)[]`
* `const(U[])`
* `void[]`
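A sketch exercising each of these conversions:
```
void main()
{
    int* p;
    void* vp = p;         // T* to void*
    int[3] sa = [1, 2, 3];
    int[] da = sa;        // static array to dynamic array
    const(int)[] ca = da; // T[] to const(U)[]
    void[] va = da;       // any array to void[]
}
```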
d std.string std.string
==========
String handling functions.
| Category | Functions |
| --- | --- |
| Searching | [*column*](#column) [*indexOf*](#indexOf) [*indexOfAny*](#indexOfAny) [*indexOfNeither*](#indexOfNeither) [*lastIndexOf*](#lastIndexOf) [*lastIndexOfAny*](#lastIndexOfAny) [*lastIndexOfNeither*](#lastIndexOfNeither) |
| Comparison | [*isNumeric*](#isNumeric) |
| Mutation | [*capitalize*](#capitalize) |
| Pruning and Filling | [*center*](#center) [*chomp*](#chomp) [*chompPrefix*](#chompPrefix) [*chop*](#chop) [*detabber*](#detabber) [*detab*](#detab) [*entab*](#entab) [*entabber*](#entabber) [*leftJustify*](#leftJustify) [*outdent*](#outdent) [*rightJustify*](#rightJustify) [*strip*](#strip) [*stripLeft*](#stripLeft) [*stripRight*](#stripRight) [*wrap*](#wrap) |
| Substitution | [*abbrev*](#abbrev) [*soundex*](#soundex) [*soundexer*](#soundexer) [*succ*](#succ) [*tr*](#tr) [*translate*](#translate) |
| Miscellaneous | [*assumeUTF*](#assumeUTF) [*fromStringz*](#fromStringz) [*lineSplitter*](#lineSplitter) [*representation*](#representation) [*splitLines*](#splitLines) [*toStringz*](#toStringz) |
Objects of types `string`, `wstring`, and `dstring` are value types and cannot be mutated element-by-element. To build strings with mutation, use `char[]`, `wchar[]`, or `dchar[]`. The `xxxstring` types are preferable because they don't exhibit undesired aliasing, thus making code more robust.
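A minimal sketch of building a string through a mutable `char[]`:
```
void main()
{
    char[] buf = "hello".dup; // mutable working copy
    buf[0] = 'H';             // element-wise mutation is allowed on char[]
    string s = buf.idup;      // freeze the result into an immutable string
    assert(s == "Hello");
}
```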
The following functions are publicly imported:
| Module | Functions |
| --- | --- |
| *Publicly imported functions* |
| std.algorithm | [`cmp`](std_algorithm_comparison#cmp) [`count`](std_algorithm_searching#count) [`endsWith`](std_algorithm_searching#endsWith) [`startsWith`](std_algorithm_searching#startsWith) |
| std.array | [`join`](std_array#join) [`replace`](std_array#replace) [`replaceInPlace`](std_array#replaceInPlace) [`split`](std_array#split) [`empty`](std_array#empty) |
| std.format | [`format`](std_format#format) [`sformat`](std_format#sformat) |
| std.uni | [`icmp`](std_uni#icmp) [`toLower`](std_uni#toLower) [`toLowerInPlace`](std_uni#toLowerInPlace) [`toUpper`](std_uni#toUpper) [`toUpperInPlace`](std_uni#toUpperInPlace) |
There is a rich set of functions for string handling defined in other modules. Functions related to Unicode and ASCII are found in [`std.uni`](std_uni) and [`std.ascii`](std_ascii), respectively. Other functions that have a wider generality than just strings can be found in [`std.algorithm`](std_algorithm) and [`std.range`](std_range).
See Also:
* [`std.algorithm`](std_algorithm) and [`std.range`](std_range) for generic range algorithms
* [`std.ascii`](std_ascii) for functions that work with ASCII strings
* [`std.uni`](std_uni) for functions that work with unicode strings
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com), [Andrei Alexandrescu](http://erdani.org), [Jonathan M Davis](http://jmdavisprog.com), and David L. 'SpottedTiger' Davis
Source
[std/string.d](https://github.com/dlang/phobos/blob/master/std/string.d)
class **StringException**: object.Exception;
Exception thrown on errors in std.string functions.
Examples:
```
import std.exception : assertThrown;
auto bad = " a\n\tb\n c";
assertThrown!StringException(bad.outdent);
```
pure nothrow @nogc @system inout(Char)[] **fromStringz**(Char)(return scope inout(Char)\* cString)
Constraints: if (isSomeChar!Char);
Parameters:
| | |
| --- | --- |
| inout(Char)\* `cString` | A null-terminated c-style string. |
Returns:
A D-style array of `char`, `wchar` or `dchar` referencing the same string. The returned array will retain the same type qualifiers as the input. Important Note: The returned array is a slice of the original buffer. The original data is not changed and not copied.
Examples:
```
writeln(fromStringz("foo\0"c.ptr)); // "foo"c
writeln(fromStringz("foo\0"w.ptr)); // "foo"w
writeln(fromStringz("foo\0"d.ptr)); // "foo"d
writeln(fromStringz("福\0"c.ptr)); // "福"c
writeln(fromStringz("福\0"w.ptr)); // "福"w
writeln(fromStringz("福\0"d.ptr)); // "福"d
```
pure nothrow @trusted immutable(char)\* **toStringz**(scope const(char)[] s);
pure nothrow @trusted immutable(char)\* **toStringz**(return scope string s);
Parameters:
| | |
| --- | --- |
| const(char)[] `s` | A D-style string. |
Returns:
A C-style null-terminated string equivalent to `s`. `s` must not contain embedded `'\0'`'s as any C function will treat the first `'\0'` that it sees as the end of the string. If `s.empty` is `true`, then a string containing only `'\0'` is returned. Important Note: When passing a `char*` to a C function, and the C function keeps it around for any reason, make sure that you keep a reference to it in your D code. Otherwise, it may become invalid during a garbage collection cycle and cause a nasty bug when the C code tries to use it.
Examples:
```
import core.stdc.string : strlen;
import std.conv : to;
auto p = toStringz("foo");
writeln(strlen(p)); // 3
const(char)[] foo = "abbzxyzzy";
p = toStringz(foo[3 .. 5]);
writeln(strlen(p)); // 2
string test = "";
p = toStringz(test);
writeln(*p); // 0
test = "\0";
p = toStringz(test);
writeln(*p); // 0
test = "foo\0";
p = toStringz(test);
assert(p[0] == 'f' && p[1] == 'o' && p[2] == 'o' && p[3] == 0);
const string test2 = "";
p = toStringz(test2);
writeln(*p); // 0
```
alias **CaseSensitive** = std.typecons.Flag!"caseSensitive".Flag;
Flag indicating whether a search is case-sensitive.
ptrdiff\_t **indexOf**(Range)(Range s, dchar c, CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isInputRange!Range && isSomeChar!(ElementType!Range) && !isSomeString!Range);
ptrdiff\_t **indexOf**(C)(scope const(C)[] s, dchar c, CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!C);
ptrdiff\_t **indexOf**(Range)(Range s, dchar c, size\_t startIdx, CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isInputRange!Range && isSomeChar!(ElementType!Range) && !isSomeString!Range);
ptrdiff\_t **indexOf**(C)(scope const(C)[] s, dchar c, size\_t startIdx, CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!C);
Searches for character in range.
Parameters:
| | |
| --- | --- |
| Range `s` | string or InputRange of characters to search in correct UTF format |
| dchar `c` | character to search for |
| size\_t `startIdx` | starting index to a well-formed code point |
| CaseSensitive `cs` | `Yes.caseSensitive` or `No.caseSensitive` |
Returns:
the index of the first occurrence of `c` in `s` with respect to the start index `startIdx`. If `c` is not found, then `-1` is returned. If `c` is found the value of the returned index is at least `startIdx`. If the parameters are not valid UTF, the result will still be in the range [-1 .. s.length], but will not be reliable otherwise.
Throws:
If the sequence starting at `startIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown.
See Also:
[`std.algorithm.searching.countUntil`](std_algorithm_searching#countUntil)
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(indexOf(s, 'W')); // 6
writeln(indexOf(s, 'Z')); // -1
writeln(indexOf(s, 'w', No.caseSensitive)); // 6
```
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(indexOf(s, 'W', 4)); // 6
writeln(indexOf(s, 'Z', 100)); // -1
writeln(indexOf(s, 'w', 3, No.caseSensitive)); // 6
```
ptrdiff\_t **indexOf**(Range, Char)(Range s, const(Char)[] sub)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && isSomeChar!Char);
ptrdiff\_t **indexOf**(Range, Char)(Range s, const(Char)[] sub, in CaseSensitive cs)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && isSomeChar!Char);
@safe ptrdiff\_t **indexOf**(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, in size\_t startIdx)
Constraints: if (isSomeChar!Char1 && isSomeChar!Char2);
@safe ptrdiff\_t **indexOf**(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, in size\_t startIdx, in CaseSensitive cs)
Constraints: if (isSomeChar!Char1 && isSomeChar!Char2);
Searches for substring in `s`.
Parameters:
| | |
| --- | --- |
| Range `s` | string or ForwardRange of characters to search in correct UTF format |
| const(Char)[] `sub` | substring to search for |
| size\_t `startIdx` | the index into s to start searching from |
| CaseSensitive `cs` | `Yes.caseSensitive` (default) or `No.caseSensitive` |
Returns:
the index of the first occurrence of `sub` in `s` with respect to the start index `startIdx`. If `sub` is not found, then `-1` is returned. If the arguments are not valid UTF, the result will still be in the range [-1 .. s.length], but will not be reliable otherwise. If `sub` is found the value of the returned index is at least `startIdx`.
Throws:
If the sequence starting at `startIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown.
Bugs:
Does not work with case insensitive strings where the mapping of tolower and toupper is not 1:1.
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(indexOf(s, "Wo", 4)); // 6
writeln(indexOf(s, "Zo", 100)); // -1
writeln(indexOf(s, "wo", 3, No.caseSensitive)); // 6
```
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(indexOf(s, "Wo")); // 6
writeln(indexOf(s, "Zo")); // -1
writeln(indexOf(s, "wO", No.caseSensitive)); // 6
```
pure @safe ptrdiff\_t **lastIndexOf**(Char)(const(Char)[] s, in dchar c, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char);
pure @safe ptrdiff\_t **lastIndexOf**(Char)(const(Char)[] s, in dchar c, in size\_t startIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char);
Parameters:
| | |
| --- | --- |
| const(Char)[] `s` | string to search |
| dchar `c` | character to search for |
| size\_t `startIdx` | the index into s to start searching from |
| CaseSensitive `cs` | `Yes.caseSensitive` or `No.caseSensitive` |
Returns:
The index of the last occurrence of `c` in `s`. If `c` is not found, then `-1` is returned. The `startIdx` slices `s` in the following way `s[0 .. startIdx]`. `startIdx` represents a codeunit index in `s`.
Throws:
If the sequence ending at `startIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown. `cs` indicates whether the comparisons are case sensitive.
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(lastIndexOf(s, 'l')); // 9
writeln(lastIndexOf(s, 'Z')); // -1
writeln(lastIndexOf(s, 'L', No.caseSensitive)); // 9
```
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(lastIndexOf(s, 'l', 4)); // 3
writeln(lastIndexOf(s, 'Z', 1337)); // -1
writeln(lastIndexOf(s, 'L', 7, No.caseSensitive)); // 3
```
pure @safe ptrdiff\_t **lastIndexOf**(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char1 && isSomeChar!Char2);
pure @safe ptrdiff\_t **lastIndexOf**(Char1, Char2)(const(Char1)[] s, const(Char2)[] sub, in size\_t startIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char1 && isSomeChar!Char2);
Parameters:
| | |
| --- | --- |
| const(Char1)[] `s` | string to search |
| const(Char2)[] `sub` | substring to search for |
| size\_t `startIdx` | the index into s to start searching from |
| CaseSensitive `cs` | `Yes.caseSensitive` or `No.caseSensitive` |
Returns:
the index of the last occurrence of `sub` in `s`. If `sub` is not found, then `-1` is returned. The `startIdx` slices `s` in the following way `s[0 .. startIdx]`. `startIdx` represents a codeunit index in `s`.
Throws:
If the sequence ending at `startIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown. `cs` indicates whether the comparisons are case sensitive.
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(lastIndexOf(s, "ll")); // 2
writeln(lastIndexOf(s, "Zo")); // -1
writeln(lastIndexOf(s, "lL", No.caseSensitive)); // 2
```
Examples:
```
import std.typecons : No;
string s = "Hello World";
writeln(lastIndexOf(s, "ll", 4)); // 2
writeln(lastIndexOf(s, "Zo", 128)); // -1
writeln(lastIndexOf(s, "lL", 3, No.caseSensitive)); // -1
```
pure @safe ptrdiff\_t **indexOfAny**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
pure @safe ptrdiff\_t **indexOfAny**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in size\_t startIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
Returns the index of the first occurrence of any of the elements in `needles` in `haystack`. If no element of `needles` is found, then `-1` is returned. The `startIdx` slices `haystack` in the following way `haystack[startIdx .. $]`. `startIdx` represents a codeunit index in `haystack`. If the sequence ending at `startIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown.
Parameters:
| | |
| --- | --- |
| const(Char)[] `haystack` | String to search for needles in. |
| const(Char2)[] `needles` | Strings to search for in haystack. |
| size\_t `startIdx` | slices haystack like this `haystack[startIdx .. $]`. If `startIdx` is greater than or equal to the length of haystack, the function returns `-1`. |
| CaseSensitive `cs` | Indicates whether the comparisons are case sensitive. |
Examples:
```
import std.conv : to;
ptrdiff_t i = "helloWorld".indexOfAny("Wr");
writeln(i); // 5
i = "öällo world".indexOfAny("lo ");
writeln(i); // 4
```
Examples:
```
import std.conv : to;
ptrdiff_t i = "helloWorld".indexOfAny("Wr", 4);
writeln(i); // 5
i = "Foo öällo world".indexOfAny("lh", 3);
writeln(i); // 8
```
pure @safe ptrdiff\_t **lastIndexOfAny**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
pure @safe ptrdiff\_t **lastIndexOfAny**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in size\_t stopIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
Returns the index of the last occurrence of any of the elements in `needles` in `haystack`. If no element of `needles` is found, then `-1` is returned. The `stopIdx` slices `haystack` in the following way `haystack[0 .. stopIdx]`. `stopIdx` represents a codeunit index in `haystack`. If the sequence ending at `stopIdx` does not represent a well formed codepoint, then a [`std.utf.UTFException`](std_utf#UTFException) may be thrown.
Parameters:
| | |
| --- | --- |
| const(Char)[] `haystack` | String to search for needles in. |
| const(Char2)[] `needles` | Strings to search for in haystack. |
| size\_t `stopIdx` | slices haystack like this `haystack[0 .. stopIdx]`. If `stopIdx` is greater than or equal to the length of haystack, the function returns `-1`. |
| CaseSensitive `cs` | Indicates whether the comparisons are case sensitive. |
Examples:
```
ptrdiff_t i = "helloWorld".lastIndexOfAny("Wlo");
writeln(i); // 8
i = "Foo öäöllo world".lastIndexOfAny("öF");
writeln(i); // 8
```
Examples:
```
import std.conv : to;
ptrdiff_t i = "helloWorld".lastIndexOfAny("Wlo", 4);
writeln(i); // 3
i = "Foo öäöllo world".lastIndexOfAny("öF", 3);
writeln(i); // 0
```
pure @safe ptrdiff\_t **indexOfNeither**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
pure @safe ptrdiff\_t **indexOfNeither**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in size\_t startIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
Returns the index of the first occurrence of any character that is not an element of `needles` in `haystack`. If every element of `haystack` is an element of `needles`, `-1` is returned.
Parameters:
| | |
| --- | --- |
| const(Char)[] `haystack` | String to search for needles in. |
| const(Char2)[] `needles` | Strings to search for in haystack. |
| size\_t `startIdx` | slices haystack like this `haystack[startIdx .. $]`. If `startIdx` is greater than or equal to the length of haystack, the function returns `-1`. |
| CaseSensitive `cs` | Indicates whether the comparisons are case sensitive. |
Examples:
```
writeln(indexOfNeither("abba", "a", 2)); // 2
writeln(indexOfNeither("def", "de", 1)); // 2
writeln(indexOfNeither("dfefffg", "dfe", 4)); // 6
```
Examples:
```
writeln(indexOfNeither("def", "a")); // 0
writeln(indexOfNeither("def", "de")); // 2
writeln(indexOfNeither("dfefffg", "dfe")); // 6
```
pure @safe ptrdiff\_t **lastIndexOfNeither**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
pure @safe ptrdiff\_t **lastIndexOfNeither**(Char, Char2)(const(Char)[] haystack, const(Char2)[] needles, in size\_t stopIdx, in CaseSensitive cs = Yes.caseSensitive)
Constraints: if (isSomeChar!Char && isSomeChar!Char2);
Returns the index of the last occurrence of any character that is not an element of `needles` in `haystack`. If every element of `haystack` is an element of `needles`, `-1` is returned.
Parameters:
| | |
| --- | --- |
| const(Char)[] `haystack` | String to search for needles in. |
| const(Char2)[] `needles` | Strings to search for in haystack. |
| size\_t `stopIdx` | slices haystack like this `haystack[0 .. stopIdx]`. If `stopIdx` is greater than or equal to the length of haystack, the function returns `-1`. |
| CaseSensitive `cs` | Indicates whether the comparisons are case sensitive. |
Examples:
```
writeln(lastIndexOfNeither("abba", "a")); // 2
writeln(lastIndexOfNeither("def", "f")); // 1
```
Examples:
```
writeln(lastIndexOfNeither("def", "rsa", 3)); // -1
writeln(lastIndexOfNeither("abba", "a", 2)); // 1
```
pure nothrow @nogc @safe auto **representation**(Char)(Char[] s)
Constraints: if (isSomeChar!Char);
Returns the representation of a string, which has the same type as the string except the character type is replaced by `ubyte`, `ushort`, or `uint` depending on the character width.
Parameters:
| | |
| --- | --- |
| Char[] `s` | The string to return the representation of. |
Returns:
The representation of the passed string.
Examples:
```
string s = "hello";
static assert(is(typeof(representation(s)) == immutable(ubyte)[]));
assert(representation(s) is cast(immutable(ubyte)[]) s);
writeln(representation(s)); // [0x68, 0x65, 0x6c, 0x6c, 0x6f]
```
pure @trusted S **capitalize**(S)(S input)
Constraints: if (isSomeString!S);
Capitalize the first character of `s` and convert the rest of `s` to lowercase.
Parameters:
| | |
| --- | --- |
| S `input` | The string to capitalize. |
Returns:
The capitalized string.
See Also:
[`std.uni.asCapitalized`](std_uni#asCapitalized) for a lazy range version that doesn't allocate memory
Examples:
```
writeln(capitalize("hello")); // "Hello"
writeln(capitalize("World")); // "World"
```
alias **KeepTerminator** = std.typecons.Flag!"keepTerminator".Flag;
pure @safe C[][] **splitLines**(C)(C[] s, KeepTerminator keepTerm = No.keepTerminator)
Constraints: if (isSomeChar!C);
Split `s` into an array of lines according to the Unicode standard using `'\r'`, `'\n'`, `"\r\n"`, [`std.uni.lineSep`](std_uni#lineSep), [`std.uni.paraSep`](std_uni#paraSep), `U+0085` (NEL), `'\v'` and `'\f'` as delimiters. If `keepTerm` is set to `KeepTerminator.yes`, then the delimiter is included in the strings returned.
Does not throw on invalid UTF; such is simply passed unchanged to the output.
Allocates memory; use [`lineSplitter`](#lineSplitter) for an alternative that does not.
Adheres to [Unicode 7.0](http://www.unicode.org/versions/Unicode7.0.0/ch05.pdf).
Parameters:
| | |
| --- | --- |
| C[] `s` | a string of `chars`, `wchars`, or `dchars`, or any custom type that casts to a `string` type |
| KeepTerminator `keepTerm` | whether delimiter is included or not in the results |
Returns:
array of strings, each element is a line that is a slice of `s`
See Also:
[`lineSplitter`](#lineSplitter) [`std.algorithm.splitter`](std_algorithm#splitter) [`std.regex.splitter`](std_regex#splitter)
Examples:
```
string s = "Hello\nmy\rname\nis";
writeln(splitLines(s)); // ["Hello", "my", "name", "is"]
```
auto **lineSplitter**(KeepTerminator keepTerm = No.keepTerminator, Range)(Range r)
Constraints: if (hasSlicing!Range && hasLength!Range && isSomeChar!(ElementType!Range) && !isSomeString!Range);
auto **lineSplitter**(KeepTerminator keepTerm = No.keepTerminator, C)(C[] r)
Constraints: if (isSomeChar!C);
Split an array or slicable range of characters into a range of lines using `'\r'`, `'\n'`, `'\v'`, `'\f'`, `"\r\n"`, [`std.uni.lineSep`](std_uni#lineSep), [`std.uni.paraSep`](std_uni#paraSep) and `'\u0085'` (NEL) as delimiters. If `keepTerm` is set to `Yes.keepTerminator`, then the delimiter is included in the slices returned.
Does not throw on invalid UTF; such is simply passed unchanged to the output.
Adheres to [Unicode 7.0](http://www.unicode.org/versions/Unicode7.0.0/ch05.pdf).
Does not allocate memory.
Parameters:
| | |
| --- | --- |
| Range `r` | array of `chars`, `wchars`, or `dchars` or a slicable range |
| keepTerm | whether delimiter is included or not in the results |
Returns:
range of slices of the input range `r`
See Also:
[`splitLines`](#splitLines) [`std.algorithm.splitter`](std_algorithm#splitter) [`std.regex.splitter`](std_regex#splitter)
Examples:
```
import std.array : array;
string s = "Hello\nmy\rname\nis";
/* Notice the call to 'array' to turn the lazy range created by
   lineSplitter into an array comparable to the string[] created by
   splitLines.
*/
writeln(lineSplitter(s).array); // splitLines(s)
```
Examples:
```
auto s = "\rpeter\n\rpaul\r\njerry\u2028ice\u2029cream\n\nsunday\nmon\u2030day\n";
auto lines = s.lineSplitter();
static immutable witness = ["", "peter", "", "paul", "jerry", "ice", "cream", "", "sunday", "mon\u2030day"];
uint i;
foreach (line; lines)
{
writeln(line); // witness[i++]
}
writeln(i); // witness.length
```
auto **stripLeft**(Range)(Range input)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && !isInfinite!Range && !isConvertibleToString!Range);
auto **stripLeft**(Range, Char)(Range input, const(Char)[] chars)
Constraints: if ((isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) || isConvertibleToString!Range) && isSomeChar!Char);
Strips leading whitespace (as defined by [`std.uni.isWhite`](std_uni#isWhite)) or as specified in the second argument.
Parameters:
| | |
| --- | --- |
| Range `input` | string or [forward range](std_range_primitives#isForwardRange) of characters |
| const(Char)[] `chars` | string of characters to be stripped |
Returns:
`input` stripped of leading whitespace or characters specified in the second argument.
Postconditions
`input` and the returned value will share the same tail (see [`std.array.sameTail`](std_array#sameTail)).
See Also:
Generic stripping on ranges: [`std.algorithm.mutation.stripLeft`](std_algorithm_mutation#stripLeft)
Examples:
```
import std.uni : lineSep, paraSep;
assert(stripLeft(" hello world ") ==
"hello world ");
assert(stripLeft("\n\t\v\rhello world\n\t\v\r") ==
"hello world\n\t\v\r");
assert(stripLeft(" \u2028hello world") ==
"hello world");
assert(stripLeft("hello world") ==
"hello world");
assert(stripLeft([lineSep] ~ "hello world" ~ lineSep) ==
"hello world" ~ [lineSep]);
assert(stripLeft([paraSep] ~ "hello world" ~ paraSep) ==
"hello world" ~ [paraSep]);
import std.array : array;
import std.utf : byChar;
assert(stripLeft(" hello world "w.byChar).array ==
"hello world ");
assert(stripLeft(" \u2022hello world ".byChar).array ==
"\u2022hello world ");
```
Examples:
```
assert(stripLeft(" hello world ", " ") ==
"hello world ");
assert(stripLeft("xxxxxhello world ", "x") ==
"hello world ");
assert(stripLeft("xxxyy hello world ", "xy ") ==
"hello world ");
```
Examples:
```
import std.array : array;
import std.utf : byChar, byWchar, byDchar;
assert(stripLeft(" xxxyy hello world "w.byChar, "xy ").array ==
"hello world ");
assert(stripLeft("\u2028\u2020hello world\u2028"w.byWchar,
"\u2028").array == "\u2020hello world\u2028");
assert(stripLeft("\U00010001hello world"w.byWchar, " ").array ==
"\U00010001hello world"w);
assert(stripLeft("\U00010001 xyhello world"d.byDchar,
"\U00010001 xy").array == "hello world"d);
writeln(stripLeft("\u2020hello"w, "\u2020"w)); // "hello"w
writeln(stripLeft("\U00010001hello"d, "\U00010001"d)); // "hello"d
writeln(stripLeft(" hello ", "")); // " hello "
```
auto **stripRight**(Range)(Range str)
Constraints: if (isSomeString!Range || isRandomAccessRange!Range && hasLength!Range && hasSlicing!Range && !isConvertibleToString!Range && isSomeChar!(ElementEncodingType!Range));
auto **stripRight**(Range, Char)(Range str, const(Char)[] chars)
Constraints: if ((isBidirectionalRange!Range && isSomeChar!(ElementEncodingType!Range) || isConvertibleToString!Range) && isSomeChar!Char);
Strips trailing whitespace (as defined by [`std.uni.isWhite`](std_uni#isWhite)) or as specified in the second argument.
Parameters:
| | |
| --- | --- |
| Range `str` | string or random access range of characters |
| const(Char)[] `chars` | string of characters to be stripped |
Returns:
slice of `str` stripped of trailing whitespace or characters specified in the second argument.
See Also:
Generic stripping on ranges: [`std.algorithm.mutation.stripRight`](std_algorithm_mutation#stripRight)
Examples:
```
import std.uni : lineSep, paraSep;
assert(stripRight(" hello world ") ==
" hello world");
assert(stripRight("\n\t\v\rhello world\n\t\v\r") ==
"\n\t\v\rhello world");
assert(stripRight("hello world") ==
"hello world");
assert(stripRight([lineSep] ~ "hello world" ~ lineSep) ==
[lineSep] ~ "hello world");
assert(stripRight([paraSep] ~ "hello world" ~ paraSep) ==
[paraSep] ~ "hello world");
```
Examples:
```
assert(stripRight(" hello world ", "x") ==
" hello world ");
assert(stripRight(" hello world ", " ") ==
" hello world");
assert(stripRight(" hello worldxy ", "xy ") ==
" hello world");
```
auto **strip**(Range)(Range str)
Constraints: if (isSomeString!Range || isRandomAccessRange!Range && hasLength!Range && hasSlicing!Range && !isConvertibleToString!Range && isSomeChar!(ElementEncodingType!Range));
auto **strip**(Range, Char)(Range str, const(Char)[] chars)
Constraints: if ((isBidirectionalRange!Range && isSomeChar!(ElementEncodingType!Range) || isConvertibleToString!Range) && isSomeChar!Char);
auto **strip**(Range, Char)(Range str, const(Char)[] leftChars, const(Char)[] rightChars)
Constraints: if ((isBidirectionalRange!Range && isSomeChar!(ElementEncodingType!Range) || isConvertibleToString!Range) && isSomeChar!Char);
Strips both leading and trailing whitespace (as defined by [`std.uni.isWhite`](std_uni#isWhite)) or as specified in the second argument.
Parameters:
| | |
| --- | --- |
| Range `str` | string or random access range of characters |
| const(Char)[] `chars` | string of characters to be stripped |
| const(Char)[] `leftChars` | string of leading characters to be stripped |
| const(Char)[] `rightChars` | string of trailing characters to be stripped |
Returns:
slice of `str` stripped of leading and trailing whitespace or characters as specified in the second argument.
See Also:
Generic stripping on ranges: [`std.algorithm.mutation.strip`](std_algorithm_mutation#strip)
Examples:
```
import std.uni : lineSep, paraSep;
assert(strip(" hello world ") ==
"hello world");
assert(strip("\n\t\v\rhello world\n\t\v\r") ==
"hello world");
assert(strip("hello world") ==
"hello world");
assert(strip([lineSep] ~ "hello world" ~ [lineSep]) ==
"hello world");
assert(strip([paraSep] ~ "hello world" ~ [paraSep]) ==
"hello world");
```
Examples:
```
assert(strip(" hello world ", "x") ==
" hello world ");
assert(strip(" hello world ", " ") ==
"hello world");
assert(strip(" xyxyhello worldxyxy ", "xy ") ==
"hello world");
writeln(strip("\u2020hello\u2020"w, "\u2020"w)); // "hello"w
writeln(strip("\U00010001hello\U00010001"d, "\U00010001"d)); // "hello"d
writeln(strip(" hello ", "")); // " hello "
```
Examples:
```
writeln(strip("xxhelloyy", "x", "y")); // "hello"
assert(strip(" xyxyhello worldxyxyzz ", "xy ", "xyz ") ==
"hello world");
writeln(strip("\u2020hello\u2028"w, "\u2020"w, "\u2028"w)); // "hello"w
assert(strip("\U00010001hello\U00010002"d, "\U00010001"d, "\U00010002"d) ==
"hello"d);
writeln(strip(" hello ", "", "")); // " hello "
```
Range **chomp**(Range)(Range str)
Constraints: if ((isRandomAccessRange!Range && isSomeChar!(ElementEncodingType!Range) || isNarrowString!Range) && !isConvertibleToString!Range);
Range **chomp**(Range, C2)(Range str, const(C2)[] delimiter)
Constraints: if ((isBidirectionalRange!Range && isSomeChar!(ElementEncodingType!Range) || isNarrowString!Range) && !isConvertibleToString!Range && isSomeChar!C2);
If `str` ends with `delimiter`, then `str` is returned without `delimiter` on its end. If `str` does *not* end with `delimiter`, then it is returned unchanged.
If no `delimiter` is given, then one trailing `'\r'`, `'\n'`, `"\r\n"`, `'\f'`, `'\v'`, [`std.uni.lineSep`](std_uni#lineSep), [`std.uni.paraSep`](std_uni#paraSep), or [`std.uni.nelSep`](std_uni#nelSep) is removed from the end of `str`. If `str` does not end with any of those characters, then it is returned unchanged.
Parameters:
| | |
| --- | --- |
| Range `str` | string or indexable range of characters |
| const(C2)[] `delimiter` | string of characters to be sliced off end of str[] |
Returns:
slice of str
Examples:
```
import std.uni : lineSep, paraSep, nelSep;
import std.utf : decode;
writeln(chomp(" hello world \n\r")); // " hello world \n"
writeln(chomp(" hello world \r\n")); // " hello world "
writeln(chomp(" hello world \f")); // " hello world "
writeln(chomp(" hello world \v")); // " hello world "
writeln(chomp(" hello world \n\n")); // " hello world \n"
writeln(chomp(" hello world \n\n ")); // " hello world \n\n "
writeln(chomp(" hello world \n\n" ~ [lineSep])); // " hello world \n\n"
writeln(chomp(" hello world \n\n" ~ [paraSep])); // " hello world \n\n"
writeln(chomp(" hello world \n\n" ~ [nelSep])); // " hello world \n\n"
writeln(chomp(" hello world")); // " hello world"
writeln(chomp("")); // ""
writeln(chomp(" hello world", "orld")); // " hello w"
writeln(chomp(" hello world", " he")); // " hello world"
writeln(chomp("", "hello")); // ""
// Don't decode pointlessly
writeln(chomp("hello\xFE", "\r")); // "hello\xFE"
```
Range **chompPrefix**(Range, C2)(Range str, const(C2)[] delimiter)
Constraints: if ((isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) || isNarrowString!Range) && !isConvertibleToString!Range && isSomeChar!C2);
If `str` starts with `delimiter`, then the part of `str` following `delimiter` is returned. If `str` does *not* start with
`delimiter`, then it is returned unchanged.
Parameters:
| | |
| --- | --- |
| Range `str` | string or [forward range](std_range_primitives#isForwardRange) of characters |
| const(C2)[] `delimiter` | string of characters to be sliced off front of str[] |
Returns:
slice of str
Examples:
```
writeln(chompPrefix("hello world", "he")); // "llo world"
writeln(chompPrefix("hello world", "hello w")); // "orld"
writeln(chompPrefix("hello world", " world")); // "hello world"
writeln(chompPrefix("", "hello")); // ""
```
Range **chop**(Range)(Range str)
Constraints: if ((isBidirectionalRange!Range && isSomeChar!(ElementEncodingType!Range) || isNarrowString!Range) && !isConvertibleToString!Range);
Returns `str` without its last character, if there is one. If `str` ends with `"\r\n"`, then both are removed. If `str` is empty, then it is returned unchanged.
Parameters:
| | |
| --- | --- |
| Range `str` | string (must be valid UTF) |
Returns:
slice of str
Examples:
```
writeln(chop("hello world")); // "hello worl"
writeln(chop("hello world\n")); // "hello world"
writeln(chop("hello world\r")); // "hello world"
writeln(chop("hello world\n\r")); // "hello world\n"
writeln(chop("hello world\r\n")); // "hello world"
writeln(chop("Walter Bright")); // "Walter Brigh"
writeln(chop("")); // ""
```
S **leftJustify**(S)(S s, size\_t width, dchar fillChar = ' ')
Constraints: if (isSomeString!S);
Left justify `s` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `s` doesn't fill.
Parameters:
| | |
| --- | --- |
| S `s` | string |
| size\_t `width` | minimum field width |
| dchar `fillChar` | used to pad end up to `width` characters |
Returns:
GC allocated string
See Also:
[`leftJustifier`](#leftJustifier), which does not allocate
Examples:
```
writeln(leftJustify("hello", 7, 'X')); // "helloXX"
writeln(leftJustify("hello", 2, 'X')); // "hello"
writeln(leftJustify("hello", 9, 'X')); // "helloXXXX"
```
auto **leftJustifier**(Range)(Range r, size\_t width, dchar fillChar = ' ')
Constraints: if (isInputRange!Range && isSomeChar!(ElementEncodingType!Range) && !isConvertibleToString!Range);
Left justify `s` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `s` doesn't fill.
Parameters:
| | |
| --- | --- |
| Range `r` | string or range of characters |
| size\_t `width` | minimum field width |
| dchar `fillChar` | used to pad end up to `width` characters |
Returns:
a lazy range of the left justified result
See Also:
[`rightJustifier`](#rightJustifier)
Examples:
```
import std.algorithm.comparison : equal;
import std.utf : byChar;
assert(leftJustifier("hello", 2).equal("hello".byChar));
assert(leftJustifier("hello", 7).equal("hello ".byChar));
assert(leftJustifier("hello", 7, 'x').equal("helloxx".byChar));
```
S **rightJustify**(S)(S s, size\_t width, dchar fillChar = ' ')
Constraints: if (isSomeString!S);
Right justify `s` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `s` doesn't fill.
Parameters:
| | |
| --- | --- |
| S `s` | string |
| size\_t `width` | minimum field width |
| dchar `fillChar` | used to pad end up to `width` characters |
Returns:
GC allocated string
See Also:
[`rightJustifier`](#rightJustifier), which does not allocate
Examples:
```
writeln(rightJustify("hello", 7, 'X')); // "XXhello"
writeln(rightJustify("hello", 2, 'X')); // "hello"
writeln(rightJustify("hello", 9, 'X')); // "XXXXhello"
```
auto **rightJustifier**(Range)(Range r, size\_t width, dchar fillChar = ' ')
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && !isConvertibleToString!Range);
Right justify `s` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `s` doesn't fill.
Parameters:
| | |
| --- | --- |
| Range `r` | string or [forward range](std_range_primitives#isForwardRange) of characters |
| size\_t `width` | minimum field width |
| dchar `fillChar` | used to pad end up to `width` characters |
Returns:
a lazy range of the right justified result
See Also:
[`leftJustifier`](#leftJustifier)
Examples:
```
import std.algorithm.comparison : equal;
import std.utf : byChar;
assert(rightJustifier("hello", 2).equal("hello".byChar));
assert(rightJustifier("hello", 7).equal(" hello".byChar));
assert(rightJustifier("hello", 7, 'x').equal("xxhello".byChar));
```
S **center**(S)(S s, size\_t width, dchar fillChar = ' ')
Constraints: if (isSomeString!S);
Center `s` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `s` doesn't fill.
Parameters:
| | |
| --- | --- |
| S `s` | The string to center |
| size\_t `width` | Width of the field to center `s` in |
| dchar `fillChar` | The character to use for filling excess space in the field |
Returns:
The resulting center-justified string. The returned string is GC-allocated. To avoid GC allocation, use [`centerJustifier`](#centerJustifier) instead.
Examples:
```
writeln(center("hello", 7, 'X')); // "XhelloX"
writeln(center("hello", 2, 'X')); // "hello"
writeln(center("hello", 9, 'X')); // "XXhelloXX"
```
auto **centerJustifier**(Range)(Range r, size\_t width, dchar fillChar = ' ')
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && !isConvertibleToString!Range);
Center justify `r` in a field `width` characters wide. `fillChar` is the character that will be used to fill up the space in the field that `r` doesn't fill.
Parameters:
| | |
| --- | --- |
| Range `r` | string or [forward range](std_range_primitives#isForwardRange) of characters |
| size\_t `width` | minimum field width |
| dchar `fillChar` | used to pad end up to `width` characters |
Returns:
a lazy range of the center justified result
See Also:
[`leftJustifier`](#leftJustifier) [`rightJustifier`](#rightJustifier)
Examples:
```
import std.algorithm.comparison : equal;
import std.utf : byChar;
assert(centerJustifier("hello", 2).equal("hello".byChar));
assert(centerJustifier("hello", 8).equal(" hello ".byChar));
assert(centerJustifier("hello", 7, 'x').equal("xhellox".byChar));
```
pure auto **detab**(Range)(auto ref Range s, size\_t tabSize = 8)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) || \_\_traits(compiles, StringTypeOf!Range));
Replace each tab character in `s` with the number of spaces necessary to align the following character at the next tab stop.
Parameters:
| | |
| --- | --- |
| Range `s` | string |
| size\_t `tabSize` | distance between tab stops |
Returns:
GC allocated string with tabs replaced with spaces
Examples:
```
writeln(detab(" \n\tx", 9)); // " \n x"
```
auto **detabber**(Range)(Range r, size\_t tabSize = 8)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range) && !isConvertibleToString!Range);
Replace each tab character in `r` with the number of spaces necessary to align the following character at the next tab stop.
Parameters:
| | |
| --- | --- |
| Range `r` | string or [forward range](std_range_primitives#isForwardRange) |
| size\_t `tabSize` | distance between tab stops |
Returns:
lazy forward range with tabs replaced with spaces
Examples:
```
import std.array : array;
writeln(detabber(" \n\tx", 9).array); // " \n x"
```
auto **entab**(Range)(Range s, size\_t tabSize = 8)
Constraints: if (isForwardRange!Range && isSomeChar!(ElementEncodingType!Range));
Replaces spaces in `s` with the optimal number of tabs. All spaces and tabs at the end of a line are removed.
Parameters:
| | |
| --- | --- |
| Range `s` | String to convert. |
| size\_t `tabSize` | Tab columns are `tabSize` spaces apart. |
Returns:
GC allocated string with spaces replaced with tabs; use [`entabber`](#entabber) to not allocate.
See Also:
[`entabber`](#entabber)
Examples:
```
writeln(entab(" x \n")); // "\tx\n"
```
auto **entabber**(Range)(Range r, size\_t tabSize = 8)
Constraints: if (isForwardRange!Range && !isConvertibleToString!Range);
Replaces spaces in range `r` with the optimal number of tabs. All spaces and tabs at the end of a line are removed.
Parameters:
| | |
| --- | --- |
| Range `r` | string or [forward range](std_range_primitives#isForwardRange) |
| size\_t `tabSize` | distance between tab stops |
Returns:
lazy forward range with spaces replaced with tabs
See Also:
[`entab`](#entab)
Examples:
```
import std.array : array;
writeln(entabber(" x \n").array); // "\tx\n"
```
pure @safe C1[] **translate**(C1, C2 = immutable(char))(C1[] str, in dchar[dchar] transTable, const(C2)[] toRemove = null)
Constraints: if (isSomeChar!C1 && isSomeChar!C2);
pure @safe C1[] **translate**(C1, S, C2 = immutable(char))(C1[] str, in S[dchar] transTable, const(C2)[] toRemove = null)
Constraints: if (isSomeChar!C1 && isSomeString!S && isSomeChar!C2);
Replaces the characters in `str` which are keys in `transTable` with their corresponding values in `transTable`. `transTable` is an AA where its keys are `dchar` and its values are either `dchar` or some type of string. Also, if `toRemove` is given, the characters in it are removed from `str` prior to translation. `str` itself is unaltered. A copy with the changes is returned.
See Also:
[`tr`](#tr), [`std.array.replace`](std_array#replace), [`std.algorithm.iteration.substitute`](std_algorithm_iteration#substitute)
Parameters:
| | |
| --- | --- |
| C1[] `str` | The original string. |
| dchar[dchar] `transTable` | The AA indicating which characters to replace and what to replace them with. |
| const(C2)[] `toRemove` | The characters to remove from the string. |
Examples:
```
dchar[dchar] transTable1 = ['e' : '5', 'o' : '7', '5': 'q'];
writeln(translate("hello world", transTable1)); // "h5ll7 w7rld"
writeln(translate("hello world", transTable1, "low")); // "h5 rd"
string[dchar] transTable2 = ['e' : "5", 'o' : "orange"];
writeln(translate("hello world", transTable2)); // "h5llorange worangerld"
```
void **translate**(C1, C2 = immutable(char), Buffer)(const(C1)[] str, in dchar[dchar] transTable, const(C2)[] toRemove, Buffer buffer)
Constraints: if (isSomeChar!C1 && isSomeChar!C2 && isOutputRange!(Buffer, C1));
void **translate**(C1, S, C2 = immutable(char), Buffer)(C1[] str, in S[dchar] transTable, const(C2)[] toRemove, Buffer buffer)
Constraints: if (isSomeChar!C1 && isSomeString!S && isSomeChar!C2 && isOutputRange!(Buffer, S));
This is an overload of `translate` which takes an existing buffer to write the contents to.
Parameters:
| | |
| --- | --- |
| const(C1)[] `str` | The original string. |
| dchar[dchar] `transTable` | The AA indicating which characters to replace and what to replace them with. |
| const(C2)[] `toRemove` | The characters to remove from the string. |
| Buffer `buffer` | An output range to write the contents to. |
Examples:
```
import std.array : appender;
dchar[dchar] transTable1 = ['e' : '5', 'o' : '7', '5': 'q'];
auto buffer = appender!(dchar[])();
translate("hello world", transTable1, null, buffer);
writeln(buffer.data); // "h5ll7 w7rld"
buffer.clear();
translate("hello world", transTable1, "low", buffer);
writeln(buffer.data); // "h5 rd"
buffer.clear();
string[dchar] transTable2 = ['e' : "5", 'o' : "orange"];
translate("hello world", transTable2, null, buffer);
writeln(buffer.data); // "h5llorange worangerld"
```
pure nothrow @trusted C[] **translate**(C = immutable(char))(scope const(char)[] str, scope const(char)[] transTable, scope const(char)[] toRemove = null)
Constraints: if (is(immutable(C) == immutable(char)));
This is an *ASCII-only* overload of [`translate`](#translate). It will *not* work with Unicode. It exists as an optimization for the cases where Unicode processing is not necessary.
Unlike the other overloads of [`translate`](#translate), this one does not take an AA. Rather, it takes a `string` generated by [`makeTransTable`](#makeTransTable).
The array generated by `makeTransTable` is `256` elements long such that the index is equal to the ASCII character being replaced and the value is equal to the character that it's being replaced with. Note that translate does not decode any of the characters, so you can actually pass it Extended ASCII characters if you want to (ASCII only actually uses `128` characters), but be warned that Extended ASCII characters are not valid Unicode and therefore will result in a `UTFException` being thrown from most other Phobos functions.
Also, because no decoding occurs, it is possible to use this overload to translate ASCII characters within a proper UTF-8 string without altering the other, non-ASCII characters. Note, however, that replacing any code unit greater than `127` with another code unit, or replacing any code unit with a code unit greater than `127`, will cause UTF validation issues.
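As a sketch of the table layout just described (indexing the returned table directly):
```
char[256] table = makeTransTable("abc", "xyz");
assert(table['a'] == 'x'); // mapped characters are replaced
assert(table['q'] == 'q'); // unmapped characters map to themselves
```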
See Also:
[`tr`](#tr), [`std.array.replace`](std_array#replace), [`std.algorithm.iteration.substitute`](std_algorithm_iteration#substitute)
Parameters:
| | |
| --- | --- |
| const(char)[] `str` | The original string. |
| const(char)[] `transTable` | The string indicating which characters to replace and what to replace them with. It is generated by [`makeTransTable`](#makeTransTable). |
| const(char)[] `toRemove` | The characters to remove from the string. |
Examples:
```
auto transTable1 = makeTrans("eo5", "57q");
writeln(translate("hello world", transTable1)); // "h5ll7 w7rld"
writeln(translate("hello world", transTable1, "low")); // "h5 rd"
```
pure nothrow @trusted string **makeTrans**(scope const(char)[] from, scope const(char)[] to);
Do same thing as [`makeTransTable`](#makeTransTable) but allocate the translation table on the GC heap.
Use [`makeTransTable`](#makeTransTable) instead.
Examples:
```
auto transTable1 = makeTrans("eo5", "57q");
writeln(translate("hello world", transTable1)); // "h5ll7 w7rld"
writeln(translate("hello world", transTable1, "low")); // "h5 rd"
```
pure nothrow @nogc @safe char[256] **makeTransTable**(scope const(char)[] from, scope const(char)[] to);
Constructs a 256-character translation table, where characters in from[] are replaced by corresponding characters in to[].
Parameters:
| | |
| --- | --- |
| const(char)[] `from` | array of chars, less than or equal to 256 in length |
| const(char)[] `to` | corresponding array of chars to translate to |
Returns:
translation array
Examples:
```
writeln(translate("hello world", makeTransTable("hl", "q5"))); // "qe55o wor5d"
writeln(translate("hello world", makeTransTable("12345", "67890"))); // "hello world"
```
pure @trusted void **translate**(C = immutable(char), Buffer)(scope const(char)[] str, scope const(char)[] transTable, scope const(char)[] toRemove, Buffer buffer)
Constraints: if (is(immutable(C) == immutable(char)) && isOutputRange!(Buffer, char));
This is an *ASCII-only* overload of `translate` which takes an existing buffer to write the contents to.
Parameters:
| | |
| --- | --- |
| const(char)[] `str` | The original string. |
| const(char)[] `transTable` | The string indicating which characters to replace and what to replace them with. It is generated by [`makeTransTable`](#makeTransTable). |
| const(char)[] `toRemove` | The characters to remove from the string. |
| Buffer `buffer` | An output range to write the contents to. |
Examples:
```
import std.array : appender;
auto buffer = appender!(char[])();
auto transTable1 = makeTransTable("eo5", "57q");
translate("hello world", transTable1, null, buffer);
writeln(buffer.data); // "h5ll7 w7rld"
buffer.clear();
translate("hello world", transTable1, "low", buffer);
writeln(buffer.data); // "h5 rd"
```
pure @safe S **succ**(S)(S s)
Constraints: if (isSomeString!S);
Return string that is the 'successor' to s[]. If the rightmost character is a-zA-Z0-9, it is incremented within its case or digits. If it generates a carry, the process is repeated with the one to its immediate left.
Examples:
```
writeln(succ("1")); // "2"
writeln(succ("9")); // "10"
writeln(succ("999")); // "1000"
writeln(succ("zz99")); // "aaa00"
```
C1[] **tr**(C1, C2, C3, C4 = immutable(char))(C1[] str, const(C2)[] from, const(C3)[] to, const(C4)[] modifiers = null);
Replaces the characters in `str` which are in `from` with the corresponding characters in `to` and returns the resulting string.
`tr` is based on [Posix's tr](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/tr.html), though it doesn't do everything that the Posix utility does.
Parameters:
| | |
| --- | --- |
| C1[] `str` | The original string. |
| const(C2)[] `from` | The characters to replace. |
| const(C3)[] `to` | The characters to replace with. |
| const(C4)[] `modifiers` | String containing modifiers. |
Modifiers
| | |
| --- | --- |
| Modifier | Description |
| `'c'` | Complement the list of characters in `from` |
| `'d'` | Removes matching characters with no corresponding replacement in `to` |
| `'s'` | Removes adjacent duplicates in the replaced characters |
If the modifier `'d'` is present, then the number of characters in `to` may be only `0` or `1`. If the modifier `'d'` is *not* present, and `to` is empty, then `to` is taken to be the same as `from`. If the modifier `'d'` is *not* present, and `to` is shorter than `from`, then `to` is extended by replicating the last character in `to`. Both `from` and `to` may contain ranges using the `'-'` character (e.g. `"a-d"` is synonymous with `"abcd"`.) Neither accept a leading `'^'` as meaning the complement of the string (use the `'c'` modifier for that).
See Also:
[`translate`](#translate), [`std.array.replace`](std_array#replace), [`std.algorithm.iteration.substitute`](std_algorithm_iteration#substitute)
Examples:
```
writeln(tr("abcdef", "cd", "CD")); // "abCDef"
writeln(tr("1st March, 2018", "March", "MAR", "s")); // "1st MAR, 2018"
writeln(tr("abcdef", "ef", "", "d")); // "abcd"
writeln(tr("14-Jul-87", "a-zA-Z", " ", "cs")); // " Jul "
```
bool **isNumeric**(S)(S s, bool bAllowSep = false)
Constraints: if (isSomeString!S || isRandomAccessRange!S && hasSlicing!S && isSomeChar!(ElementType!S) && !isInfinite!S);
Takes a string `s` and determines if it represents a number. This function also takes an optional parameter, `bAllowSep`, which will accept the separator characters `','` and `'_'` within the string. These characters must be stripped from the string before using any of the conversion functions like `to!int()` or `to!float()`, otherwise an error will occur.
Also note that no spaces are allowed anywhere in the string, whether leading, trailing, or embedded, so they too must be stripped from the string before using this function or any of the conversion functions.
Parameters:
| | |
| --- | --- |
| S `s` | the string or random access range to check |
| bool `bAllowSep` | accept separator characters or not |
Returns:
`bool`
Examples:
Integer Whole Number: (byte, ubyte, short, ushort, int, uint, long, and ulong) ['+'|'-']digit(s)[U|L|UL]
```
assert(isNumeric("123"));
assert(isNumeric("123UL"));
assert(isNumeric("123L"));
assert(isNumeric("+123U"));
assert(isNumeric("-123L"));
```
Examples:
Floating-Point Number: (float, double, real, ifloat, idouble, and ireal) ['+'|'-']digit(s)[.][digit(s)][[e-|e+]digit(s)][i|f|L|Li|fi]] or [nan|nani|inf|-inf]
```
assert(isNumeric("+123"));
assert(isNumeric("-123.01"));
assert(isNumeric("123.3e-10f"));
assert(isNumeric("123.3e-10fi"));
assert(isNumeric("123.3e-10L"));
assert(isNumeric("nan"));
assert(isNumeric("nani"));
assert(isNumeric("-inf"));
```
Examples:
Floating-Point Number: (cfloat, cdouble, and creal) ['+'|'-']digit(s)[.][digit(s)][[e-|e+]digit(s)][+] [digit(s)[.][digit(s)][[e-|e+]digit(s)][i|f|L|Li|fi]] or [nan|nani|nan+nani|inf|-inf]
```
assert(isNumeric("-123e-1+456.9e-10Li"));
assert(isNumeric("+123e+10+456i"));
assert(isNumeric("123+456"));
```
Examples:
isNumeric works with CTFE
```
enum a = isNumeric("123.00E-5+1234.45E-12Li");
enum b = isNumeric("12345xxxx890");
static assert( a);
static assert(!b);
```
char[4] **soundexer**(Range)(Range str)
Constraints: if (isInputRange!Range && isSomeChar!(ElementEncodingType!Range) && !isConvertibleToString!Range);
char[4] **soundexer**(Range)(auto ref Range str)
Constraints: if (isConvertibleToString!Range);
Soundex algorithm.
The Soundex algorithm converts a word into 4 characters based on how the word sounds phonetically. The idea is that two spellings that sound alike will have the same Soundex value, which means that Soundex can be used for fuzzy matching of names.
Parameters:
| | |
| --- | --- |
| Range `str` | String or InputRange to convert to Soundex representation. |
Returns:
The four character array with the Soundex result in it. The array contains zeros if there is no Soundex representation for the string.
See Also:
[Wikipedia](http://en.wikipedia.org/wiki/Soundex), [The Soundex Indexing System](https://google.com/search?btnI=I%27m+Feeling+Lucky&ie=UTF-8&oe=UTF-8&q=The%20Soundex%20Indexing%20System) [`soundex`](#soundex)
Note
Only works well with English names.
Examples:
```
writeln(soundexer("Gauss")); // "G200"
writeln(soundexer("Ghosh")); // "G200"
writeln(soundexer("Robert")); // "R163"
writeln(soundexer("Rupert")); // "R163"
writeln(soundexer("0123^&^^**&^")); // ['\0', '\0', '\0', '\0']
```
pure nothrow @safe char[] **soundex**(scope const(char)[] str, char[] buffer = null);
Like [`soundexer`](#soundexer), but with different parameters and return value.
Parameters:
| | |
| --- | --- |
| const(char)[] `str` | String to convert to Soundex representation. |
| char[] `buffer` | Optional 4 char array to put the resulting Soundex characters into. If null, the return value buffer will be allocated on the heap. |
Returns:
The four character array with the Soundex result in it. Returns null if there is no Soundex representation for the string.
See Also:
[`soundexer`](#soundexer)
Examples:
```
writeln(soundex("Gauss")); // "G200"
writeln(soundex("Ghosh")); // "G200"
writeln(soundex("Robert")); // "R163"
writeln(soundex("Rupert")); // "R163"
writeln(soundex("0123^&^^**&^")); // null
```
pure @safe string[string] **abbrev**(string[] values);
Construct an associative array consisting of all abbreviations that uniquely map to the strings in values.
This is useful in cases where the user is expected to type in one of a known set of strings, and the program will helpfully auto-complete the string once sufficient characters have been entered that uniquely identify it.
Examples:
```
import std.string;
static string[] list = [ "food", "foxy" ];
auto abbrevs = abbrev(list);
assert(abbrevs == ["fox": "foxy", "food": "food",
"foxy": "foxy", "foo": "food"]);
```
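A short sketch of the auto-completion use case described above (the command names are made up for illustration):
```
static string[] commands = ["start", "status", "stop"];
auto table = abbrev(commands);
// "sta" is ambiguous between "start" and "status", so it is absent
assert("sta" !in table);
// "sto" uniquely identifies "stop"
assert(table["sto"] == "stop");
// full names always map to themselves
assert(table["status"] == "status");
```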
size\_t **column**(Range)(Range str, in size\_t tabsize = 8)
Constraints: if ((isInputRange!Range && isSomeChar!(Unqual!(ElementEncodingType!Range)) || isNarrowString!Range) && !isConvertibleToString!Range);
Compute column number at the end of the printed form of the string, assuming the string starts in the leftmost column, which is numbered starting from 0.
Tab characters are expanded into enough spaces to bring the column number to the next multiple of tabsize. If there are multiple lines in the string, the column number of the last line is returned.
Parameters:
| | |
| --- | --- |
| Range `str` | string or InputRange to be analyzed |
| size\_t `tabsize` | number of columns a tab character represents |
Returns:
column number
Examples:
```
import std.utf : byChar, byWchar, byDchar;
writeln(column("1234 ")); // 5
writeln(column("1234 "w)); // 5
writeln(column("1234 "d)); // 5
writeln(column("1234 ".byChar())); // 5
writeln(column("1234 "w.byWchar())); // 5
writeln(column("1234 "d.byDchar())); // 5
// Tab stops are set at 8 spaces by default; tab characters insert enough
// spaces to bring the column position to the next multiple of 8.
writeln(column("\t")); // 8
writeln(column("1\t")); // 8
writeln(column("\t1")); // 9
writeln(column("123\t")); // 8
// Other tab widths are possible by specifying it explicitly:
writeln(column("\t", 4)); // 4
writeln(column("1\t", 4)); // 4
writeln(column("\t1", 4)); // 5
writeln(column("123\t", 4)); // 4
// New lines reset the column number.
writeln(column("abc\n")); // 0
writeln(column("abc\n1")); // 1
writeln(column("abcdefg\r1234")); // 4
writeln(column("abc\u20281")); // 1
writeln(column("abc\u20291")); // 1
writeln(column("abc\u00851")); // 1
writeln(column("abc\u00861")); // 5
```
S **wrap**(S)(S s, in size\_t columns = 80, S firstindent = null, S indent = null, in size\_t tabsize = 8)
Constraints: if (isSomeString!S);
Wrap text into a paragraph.
The input text string s is formed into a paragraph by breaking it up into a sequence of lines, delineated by \n, such that the number of columns is not exceeded on each line. The last line is terminated with a \n.
Parameters:
| | |
| --- | --- |
| S `s` | text string to be wrapped |
| size\_t `columns` | maximum number of columns in the paragraph |
| S `firstindent` | string used to indent first line of the paragraph |
| S `indent` | string to use to indent following lines of the paragraph |
| size\_t `tabsize` | column spacing of tabs in firstindent[] and indent[] |
Returns:
resulting paragraph as an allocated string
Examples:
```
writeln(wrap("a short string", 7)); // "a short\nstring\n"
// wrap will not break inside of a word, but at the next space
writeln(wrap("a short string", 4)); // "a\nshort\nstring\n"
writeln(wrap("a short string", 7, "\t")); // "\ta\nshort\nstring\n"
writeln(wrap("a short string", 7, "\t", " ")); // "\ta\n short\n string\n"
```
pure @safe S **outdent**(S)(S str)
Constraints: if (isSomeString!S);
Removes one level of indentation from a multi-line string.
This uniformly outdents the text as much as possible. Whitespace-only lines are always converted to blank lines.
Does not allocate memory if it does not throw.
Parameters:
| | |
| --- | --- |
| S `str` | multi-line string |
Returns:
outdented string
Throws:
StringException if indentation is done with different sequences of whitespace characters.
Examples:
```
enum pretty = q{
import std.stdio;
void main() {
writeln("Hello");
}
}.outdent();
enum ugly = q{
import std.stdio;
void main() {
writeln("Hello");
}
};
writeln(pretty); // ugly
```
pure @safe S[] **outdent**(S)(S[] lines)
Constraints: if (isSomeString!S);
Removes one level of indentation from an array of single-line strings.
This uniformly outdents the text as much as possible. Whitespace-only lines are always converted to blank lines.
Parameters:
| | |
| --- | --- |
| S[] `lines` | array of single-line strings |
Returns:
lines[] is rewritten in place with outdented lines
Throws:
StringException if indentation is done with different sequences of whitespace characters.
Examples:
```
auto str1 = [
" void main()\n",
" {\n",
" test();\n",
" }\n"
];
auto str1Expected = [
"void main()\n",
"{\n",
" test();\n",
"}\n"
];
writeln(str1.outdent); // str1Expected
auto str2 = [
"void main()\n",
" {\n",
" test();\n",
" }\n"
];
writeln(str2.outdent); // str2
```
auto **assumeUTF**(T)(T[] arr)
Constraints: if (staticIndexOf!(immutable(T), immutable(ubyte), immutable(ushort), immutable(uint)) != -1);
Assume the given array of integers `arr` is a well-formed UTF string and return it typed as a UTF string.
`ubyte` becomes `char`, `ushort` becomes `wchar` and `uint` becomes `dchar`. Type qualifiers are preserved.
When compiled in debug mode, this function performs an extra check to make sure the return value is a valid Unicode string.
Parameters:
| | |
| --- | --- |
| T[] `arr` | array of bytes, ubytes, shorts, ushorts, ints, or uints |
Returns:
arr retyped as an array of chars, wchars, or dchars
Throws:
In debug mode `AssertError`, when the result is not a well-formed UTF string.
See Also:
[`representation`](#representation)
Examples:
```
string a = "Hölo World";
immutable(ubyte)[] b = a.representation;
string c = b.assumeUTF;
writeln(c); // "Hölo World"
```
d std.experimental.allocator.gc_allocator std.experimental.allocator.gc\_allocator
========================================
D's built-in garbage-collected allocator.
Source
[std/experimental/allocator/gc\_allocator.d](https://github.com/dlang/phobos/blob/master/std/experimental/allocator/gc_allocator.d)
struct **GCAllocator**;
D's built-in garbage-collected allocator.
Examples:
```
auto buffer = GCAllocator.instance.allocate(1024 * 1024 * 4);
// deallocate upon scope's end (alternatively: leave it to collection)
scope(exit) GCAllocator.instance.deallocate(buffer);
//...
```
enum uint **alignment**;
The alignment is a static constant equal to `platformAlignment`, which ensures proper alignment for any D data type.
shared const pure nothrow @trusted void[] **allocate**(size\_t bytes);
shared const pure nothrow @trusted bool **expand**(ref void[] b, size\_t delta);
shared const pure nothrow @system bool **reallocate**(ref void[] b, size\_t newSize);
shared const pure nothrow @nogc @trusted Ternary **resolveInternalPointer**(const void\* p, ref void[] result);
shared const pure nothrow @nogc @system bool **deallocate**(void[] b);
shared const pure nothrow @nogc @safe size\_t **goodAllocSize**(size\_t n);
Standard allocator methods per the semantics defined above. The `deallocate` and `reallocate` methods are `@system` because they may move memory around, leaving dangling pointers in user code.
static shared const GCAllocator **instance**;
Returns the global instance of this allocator type. The garbage collected allocator is thread-safe, therefore all of its methods and `instance` itself are `shared`.
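A brief sketch of the primitives documented above; `expand` is allowed to fail, so the in-place growth is only attempted, never assumed:
```
import std.experimental.allocator.gc_allocator : GCAllocator;

auto b = GCAllocator.instance.allocate(100);
assert(b.length == 100);
// expand tries to grow the block in place; it may legitimately fail
if (GCAllocator.instance.expand(b, 20))
    assert(b.length == 120);
// reallocate resizes (possibly moving the block) and updates the slice
assert(GCAllocator.instance.reallocate(b, 200));
assert(b.length == 200);
GCAllocator.instance.deallocate(b);
```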
d std.container.rbtree std.container.rbtree
====================
This module implements a red-black tree container.
This module is a submodule of [`std.container`](std_container).
Source
[std/container/rbtree.d](https://github.com/dlang/phobos/blob/master/std/container/rbtree.d)
License:
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE\_1\_0.txt or copy at [boost.org/LICENSE\_1\_0.txt](http://boost.org/LICENSE_1_0.txt)).
Authors:
Steven Schveighoffer, [Andrei Alexandrescu](http://erdani.com)
Examples:
```
import std.algorithm.comparison : equal;
import std.container.rbtree;
auto rbt = redBlackTree(3, 1, 4, 2, 5);
writeln(rbt.front); // 1
assert(equal(rbt[], [1, 2, 3, 4, 5]));
rbt.removeKey(1, 4);
assert(equal(rbt[], [2, 3, 5]));
rbt.removeFront();
assert(equal(rbt[], [3, 5]));
rbt.insert([1, 2, 4]);
assert(equal(rbt[], [1, 2, 3, 4, 5]));
// Query bounds in O(log(n))
assert(rbt.lowerBound(3).equal([1, 2]));
assert(rbt.equalRange(3).equal([3]));
assert(rbt.upperBound(3).equal([4, 5]));
// A Red Black tree with the highest element at front:
import std.range : iota;
auto maxTree = redBlackTree!"a > b"(iota(5));
assert(equal(maxTree[], [4, 3, 2, 1, 0]));
// adding duplicates will not add them, but return 0
auto rbt2 = redBlackTree(1, 3);
writeln(rbt2.insert(1)); // 0
assert(equal(rbt2[], [1, 3]));
writeln(rbt2.insert(2)); // 1
// however you can allow duplicates
auto ubt = redBlackTree!true([0, 1, 0, 1]);
assert(equal(ubt[], [0, 0, 1, 1]));
```
class **RedBlackTree**(T, alias less = "a < b", bool allowDuplicates = false) if (is(typeof(binaryFun!less(T.init, T.init))));
Implementation of a [red-black tree](https://en.wikipedia.org/wiki/Red%E2%80%93black_tree) container.
Inserts, removes, searches, and indeed most operations have complexity Ο(`lg(n)`).
To use a different comparison than `"a < b"`, pass a different operator string that can be used by [`std.functional.binaryFun`](std_functional#binaryFun), or pass in a function, delegate, functor, or any type where `less(a, b)` results in a `bool` value.
Note that less should produce a strict ordering. That is, for two unequal elements `a` and `b`, `less(a, b) == !less(b, a)`. `less(a, a)` should always equal `false`.
If `allowDuplicates` is set to `true`, then inserting the same element more than once continues to add more elements. If it is `false`, duplicate elements are ignored on insertion. If duplicates are allowed, then new elements are inserted after all existing duplicate elements.
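For example, a tree keyed on a user-defined struct can supply a custom strict ordering as a lambda (a sketch; the `Point` type is made up for illustration):
```
struct Point { int x, y; }
// strict ordering on x, then y; note less(a, a) is false as required
auto pts = redBlackTree!((a, b) => a.x != b.x ? a.x < b.x : a.y < b.y)(
    Point(1, 2), Point(0, 5), Point(1, 1));
assert(pts.front == Point(0, 5));
```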
alias **Elem** = T;
Element type for the tree
alias **Range** = RBRange!(RBNode\*);
alias **ConstRange** = RBRange!(const(RBNode)\*);
alias **ImmutableRange** = RBRange!(immutable(RBNode)\*);
The range types for `RedBlackTree`
const @property bool **empty**();
Check if any elements exist in the container. Returns `true` if the container is empty, `false` if at least one element exists.
const @property size\_t **length**();
Returns the number of elements in the container.
Complexity
Ο(`1`).
@property RedBlackTree **dup**();
Duplicate this container. The resulting container contains a shallow copy of the elements.
Complexity
Ο(`n`)
Range **opSlice**();
const ConstRange **opSlice**();
immutable ImmutableRange **opSlice**();
Fetch a range that spans all the elements in the container.
Complexity
Ο(`1`)
inout inout(Elem) **front**();
The front element in the container
Complexity
Ο(`1`)
inout inout(Elem) **back**();
The last element in the container
Complexity
Ο(`log(n)`)
const bool **opBinaryRight**(string op)(Elem e)
Constraints: if (op == "in");
`in` operator. Check to see if the given element exists in the container.
Complexity
Ο(`log(n)`)
bool **opEquals**(Object rhs);
Compares two trees for equality.
Complexity
Ο(`n`)
nothrow @safe size\_t **toHash**();
Generates a hash for the tree. Note that with a custom comparison function it may not hold that if two rbtrees are equal, the hashes of the trees will be equal.
void **clear**();
Removes all elements from the container.
Complexity
Ο(`1`)
size\_t **stableInsert**(Stuff)(Stuff stuff)
Constraints: if (isImplicitlyConvertible!(Stuff, Elem));
Insert a single element in the container. Note that this does not invalidate any ranges currently iterating the container.
Returns:
The number of elements inserted.
Complexity
Ο(`log(n)`)
size\_t **stableInsert**(Stuff)(scope Stuff stuff)
Constraints: if (isInputRange!Stuff && isImplicitlyConvertible!(ElementType!Stuff, Elem));
alias **insert** = stableInsert;
Insert a range of elements in the container. Note that this does not invalidate any ranges currently iterating the container.
Returns:
The number of elements inserted.
Complexity
Ο(`m * log(n)`)
Elem **removeAny**();
Remove an element from the container and return its value.
Complexity
Ο(`log(n)`)
void **removeFront**();
Remove the front element from the container.
Complexity
Ο(`log(n)`)
void **removeBack**();
Remove the back element from the container.
Complexity
Ο(`log(n)`)
Range **remove**(Range r);
Removes the given range from the container.
Returns:
A range containing all of the elements that were after the given range.
Complexity
Ο(`m * log(n)`) (where m is the number of elements in the range)
Range **remove**(Take!Range r);
Removes the given `Take!Range` from the container
Returns:
A range containing all of the elements that were after the given range.
Complexity
Ο(`m * log(n)`) (where m is the number of elements in the range)
size\_t **removeKey**(U...)(U elems)
Constraints: if (allSatisfy!(isImplicitlyConvertibleToElem, U));
size\_t **removeKey**(U)(scope U[] elems)
Constraints: if (isImplicitlyConvertible!(U, Elem));
size\_t **removeKey**(Stuff)(Stuff stuff)
Constraints: if (isInputRange!Stuff && isImplicitlyConvertible!(ElementType!Stuff, Elem) && !isDynamicArray!Stuff);
Removes elements from the container that are equal to the given values according to the less comparator. One element is removed for each value given which is in the container. If `allowDuplicates` is true, duplicates are removed only if duplicate values are given.
Returns:
The number of elements removed.
Complexity
Ο(`m log(n)`) (where m is the number of elements to remove)
Example
```
auto rbt = redBlackTree!true(0, 1, 1, 1, 4, 5, 7);
rbt.removeKey(1, 4, 7);
assert(equal(rbt[], [0, 1, 1, 5]));
rbt.removeKey(1, 1, 0);
assert(equal(rbt[], [5]));
```
Range **upperBound**(Elem e);
const ConstRange **upperBound**(Elem e);
immutable ImmutableRange **upperBound**(Elem e);
Get a range from the container with all elements that are > e according to the less comparator
Complexity
Ο(`log(n)`)
Range **lowerBound**(Elem e);
const ConstRange **lowerBound**(Elem e);
immutable ImmutableRange **lowerBound**(Elem e);
Get a range from the container with all elements that are < e according to the less comparator
Complexity
Ο(`log(n)`)
auto **equalRange**(this This)(Elem e);
Get a range from the container with all elements that are == e according to the less comparator
Complexity
Ο(`log(n)`)
const void **toString**(scope void delegate(const(char)[]) sink, ref scope const FormatSpec!char fmt);
Formats the RedBlackTree into a sink function. For more info see `std.format.formatValue`. Note that this is only available when the element type can be formatted. Otherwise, the default toString from Object is used.
this(Elem[] elems...);
Constructor. Pass in an array of elements, or individual elements to initialize the tree with.
this(Stuff)(Stuff stuff)
Constraints: if (isInputRange!Stuff && isImplicitlyConvertible!(ElementType!Stuff, Elem));
Constructor. Pass in a range of elements to initialize the tree with.
this();
auto **redBlackTree**(E)(E[] elems...);
auto **redBlackTree**(bool allowDuplicates, E)(E[] elems...);
auto **redBlackTree**(alias less, E)(E[] elems...)
Constraints: if (is(typeof(binaryFun!less(E.init, E.init))));
auto **redBlackTree**(alias less, bool allowDuplicates, E)(E[] elems...)
Constraints: if (is(typeof(binaryFun!less(E.init, E.init))));
auto **redBlackTree**(Stuff)(Stuff range)
Constraints: if (isInputRange!Stuff && !isArray!Stuff);
auto **redBlackTree**(bool allowDuplicates, Stuff)(Stuff range)
Constraints: if (isInputRange!Stuff && !isArray!Stuff);
auto **redBlackTree**(alias less, Stuff)(Stuff range)
Constraints: if (is(typeof(binaryFun!less((ElementType!Stuff).init, (ElementType!Stuff).init))) && isInputRange!Stuff && !isArray!Stuff);
auto **redBlackTree**(alias less, bool allowDuplicates, Stuff)(Stuff range)
Constraints: if (is(typeof(binaryFun!less((ElementType!Stuff).init, (ElementType!Stuff).init))) && isInputRange!Stuff && !isArray!Stuff);
Convenience function for creating a `RedBlackTree!E` from a list of values.
Parameters:
| | |
| --- | --- |
| allowDuplicates | Whether duplicates should be allowed (optional, default: false) |
| less | predicate to sort by (optional) |
| E[] `elems` | elements to insert into the rbtree (variadic arguments) |
| Stuff `range` | range elements to insert into the rbtree (alternative to elems) |
Examples:
```
import std.range : iota;
auto rbt1 = redBlackTree(0, 1, 5, 7);
auto rbt2 = redBlackTree!string("hello", "world");
auto rbt3 = redBlackTree!true(0, 1, 5, 7, 5);
auto rbt4 = redBlackTree!"a > b"(0, 1, 5, 7);
auto rbt5 = redBlackTree!("a > b", true)(0.1, 1.3, 5.9, 7.2, 5.9);
// also works with ranges
auto rbt6 = redBlackTree(iota(3));
auto rbt7 = redBlackTree!true(iota(3));
auto rbt8 = redBlackTree!"a > b"(iota(3));
auto rbt9 = redBlackTree!("a > b", true)(iota(3));
```
d std.experimental.allocator.typed std.experimental.allocator.typed
================================
This module defines `TypedAllocator`, a statically-typed allocator that aggregates multiple untyped allocators and uses them depending on the static properties of the types allocated. For example, distinct allocators may be used for thread-local vs. thread-shared data, or for fixed-size data (`struct`, `class` objects) vs. resizable data (arrays).
Source
[std/experimental/allocator/typed.d](https://github.com/dlang/phobos/blob/master/std/experimental/allocator/typed.d)
enum **AllocFlag**: uint;
Allocation-related flags dictated by type characteristics. `TypedAllocator` deduces these flags from the type being allocated and uses the appropriate allocator accordingly.
**fixedSize**
Fixed-size allocation (unlikely to get reallocated later). Examples: `int`, `double`, any `struct` or `class` type. By default it is assumed that the allocation is variable-size, i.e. susceptible to later reallocation (for example all array types). This flag is advisory, i.e. in-place resizing may be attempted for `fixedSize` allocations and may succeed. The flag is just a hint that the allocator may use allocation strategies that work well with objects of fixed size.
**hasNoIndirections**
The type being allocated embeds no pointers. Examples: `int`, `int[]`, `Tuple!(int, float)`. The implicit conservative assumption is that the type has members with indirections so it needs to be scanned if garbage collected. Example of types with pointers: `int*[]`, `Tuple!(int, string)`.
**immutableShared**
**threadLocal**
By default it is conservatively assumed that allocated memory may be `cast` to `shared`, passed across threads, and deallocated in a different thread than the one that allocated it. If that's not the case, there are two options. First, `immutableShared` means the memory is allocated for `immutable` data and will be deallocated in the same thread it was allocated in. Second, `threadLocal` means the memory is not to be shared across threads at all. The two flags cannot be simultaneously present.
struct **TypedAllocator**(PrimaryAllocator, Policies...);
`TypedAllocator` acts like a chassis on which several specialized allocators can be assembled. To let the system make a choice about a particular kind of allocation, use `Default` for the respective parameters.
There is a hierarchy of allocation kinds. When an allocator is implemented for a given combination of flags, it is used. Otherwise, the next down the list is chosen.
| `AllocFlag` combination | Description |
| --- | --- |
| `AllocFlag.threadLocal | AllocFlag.hasNoIndirections | AllocFlag.fixedSize` | This is the most specific allocation policy: the memory being allocated is thread local, has no indirections at all, and will not be reallocated. Examples of types fitting this description: `int`, `double`, `Tuple!(int, long)`, but not `Tuple!(int, string)`, which contains an indirection. |
| `AllocFlag.threadLocal | AllocFlag.hasNoIndirections` | As above, but may be reallocated later. Examples of types fitting this description are `int[]`, `double[]`, `Tuple!(int, long)[]`, but not `Tuple!(int, string)[]`, which contains an indirection. |
| `AllocFlag.threadLocal` | As above, but may embed indirections. Examples of types fitting this description are `int*[]`, `Object[]`, `Tuple!(int, string)[]`. |
| `AllocFlag.immutableShared | AllocFlag.hasNoIndirections | AllocFlag.fixedSize` | The type being allocated is `immutable` and has no pointers. The thread that allocated it must also deallocate it. Example: `immutable(int)`. |
| `AllocFlag.immutableShared | AllocFlag.hasNoIndirections` | As above, but the type may be appended to in the future. Example: `string`. |
| `AllocFlag.immutableShared` | As above, but the type may embed references. Example: `immutable(Object)[]`. |
| `AllocFlag.hasNoIndirections | AllocFlag.fixedSize` | The type being allocated may be shared across threads, embeds no indirections, and has fixed size. |
| `AllocFlag.hasNoIndirections` | The type being allocated may be shared across threads, may embed indirections, and has variable size. |
| `AllocFlag.fixedSize` | The type being allocated may be shared across threads, may embed indirections, and has fixed size. |
| `0` | The most conservative/general allocation: memory may be shared, deallocated in a different thread, may or may not be resized, and may embed references. |
Parameters:
| | |
| --- | --- |
| PrimaryAllocator | The default allocator. |
| Policies | Zero or more pairs consisting of an `AllocFlag` and an allocator type. |
Examples:
```
import std.experimental.allocator.gc_allocator : GCAllocator;
import std.experimental.allocator.mallocator : Mallocator;
import std.experimental.allocator.mmap_allocator : MmapAllocator;
alias MyAllocator = TypedAllocator!(GCAllocator,
AllocFlag.fixedSize | AllocFlag.threadLocal, Mallocator,
AllocFlag.fixedSize | AllocFlag.threadLocal
| AllocFlag.hasNoIndirections,
MmapAllocator,
);
MyAllocator a;
auto b = &a.allocatorFor!0();
static assert(is(typeof(*b) == shared const(GCAllocator)));
enum f1 = AllocFlag.fixedSize | AllocFlag.threadLocal;
auto c = &a.allocatorFor!f1();
static assert(is(typeof(*c) == Mallocator));
enum f2 = AllocFlag.fixedSize | AllocFlag.threadLocal;
static assert(is(typeof(a.allocatorFor!f2()) == Mallocator));
// Partial match
enum f3 = AllocFlag.threadLocal;
static assert(is(typeof(a.allocatorFor!f3()) == Mallocator));
int* p = a.make!int;
scope(exit) a.dispose(p);
int[] arr = a.makeArray!int(42);
scope(exit) a.dispose(arr);
assert(a.expandArray(arr, 3));
assert(a.shrinkArray(arr, 4));
```
ref auto **allocatorFor**(uint flags)();
ref auto **allocatorFor**(T)();
Given `flags` as a combination of `AllocFlag` values, or a type `T`, returns the allocator that's a closest fit in capabilities.
uint **type2flags**(T)();
Given a type `T`, returns its allocation-related flags as a combination of `AllocFlag` values.
auto **make**(T, A...)(auto ref A args);
Dynamically allocates (using the appropriate allocator chosen with `allocatorFor!T`) and then creates in the memory allocated an object of type `T`, using `args` (if any) for its initialization. Initialization occurs in the memory allocated and is otherwise semantically the same as `T(args)`. (Note that using `make!(T[])` creates a pointer to an (empty) array of `T`s, not an array. To allocate and initialize an array, use `makeArray!T` described below.)
Parameters:
| | |
| --- | --- |
| T | Type of the object being created. |
| A `args` | Optional arguments used for initializing the created object. If not present, the object is default constructed. |
Returns:
If `T` is a class type, returns a reference to the created `T` object. Otherwise, returns a `T*` pointing to the created object. In all cases, returns `null` if allocation failed.
Throws:
If `T`'s constructor throws, deallocates the allocated memory and propagates the exception.
T[] **makeArray**(T)(size\_t length);
T[] **makeArray**(T)(size\_t length, auto ref T init);
T[] **makeArray**(T, R)(R range)
Constraints: if (isInputRange!R);
Create an array of `T` with `length` elements. The array is either default-initialized, filled with copies of `init`, or initialized with values fetched from `range`.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| size\_t `length` | length of the newly created array |
| T `init` | element used for filling the array |
| R `range` | range used for initializing the array elements |
Returns:
The newly-created array, or `null` if either `length` was `0` or allocation failed.
Throws:
The first two overloads throw only if the used allocator's primitives do. The overloads that involve copy initialization deallocate memory and propagate the exception if the copy operation throws.
bool **expandArray**(T)(ref T[] array, size\_t delta);
bool **expandArray**(T)(T[] array, size\_t delta, auto ref T init);
bool **expandArray**(T, R)(ref T[] array, R range)
Constraints: if (isInputRange!R);
Grows `array` by appending `delta` more elements. The needed memory is allocated using the same allocator that was used for the array type. The extra elements added are either default-initialized, filled with copies of `init`, or initialized with values fetched from `range`.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| T[] `array` | a reference to the array being grown |
| size\_t `delta` | number of elements to add (upon success the new length of `array` is `array.length + delta`) |
| T `init` | element used for filling the array |
| R `range` | range used for initializing the array elements |
Returns:
`true` upon success, `false` if memory could not be allocated. In the latter case `array` is left unaffected.
Throws:
The first two overloads throw only if the used allocator's primitives do. The overloads that involve copy initialization deallocate memory and propagate the exception if the copy operation throws.
bool **shrinkArray**(T)(ref T[] arr, size\_t delta);
Shrinks an array by `delta` elements using `allocatorFor!(T[])`.
If `arr.length < delta`, does nothing and returns `false`. Otherwise, destroys the last `delta` elements in the array and then reallocates the array's buffer. If reallocation fails, fills the array with default-initialized data.
Parameters:
| | |
| --- | --- |
| T | element type of the array being created |
| T[] `arr` | a reference to the array being shrunk |
| size\_t `delta` | number of elements to remove (upon success the new length of `arr` is `arr.length - delta`) |
Returns:
`true` upon success, `false` if memory could not be reallocated. In the latter case `arr[$ - delta .. $]` is left with default-initialized elements.
Throws:
Only if the used allocator's primitives throw.
void **dispose**(T)(T\* p);
void **dispose**(T)(T p)
Constraints: if (is(T == class) || is(T == interface));
void **dispose**(T)(T[] array);
Destroys and then deallocates (using `allocatorFor!T`) the object pointed to by a pointer, the class object referred to by a `class` or `interface` reference, or an entire array. It is assumed the respective entities had been allocated with the same allocator.
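A minimal end-to-end sketch of `make`, `makeArray`, and `dispose` on a typed allocator with no extra policies (so everything falls back to the primary allocator):
```
import std.experimental.allocator.gc_allocator : GCAllocator;

TypedAllocator!GCAllocator alloc;
static class Node { int value = 42; }
auto n = alloc.make!Node();      // class type: returns a reference
assert(n.value == 42);
alloc.dispose(n);                // destroys and deallocates
int[] a = alloc.makeArray!int(8);
assert(a.length == 8);
alloc.dispose(a);
```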
d core.volatile core.volatile
=============
This module declares intrinsics for volatile operations.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Walter Bright, Ernesto Castellotti
Source
[core/volatile.d](https://github.com/dlang/druntime/blob/master/src/core/volatile.d)
nothrow @nogc @safe ubyte **volatileLoad**(ubyte\* ptr);
nothrow @nogc @safe ushort **volatileLoad**(ushort\* ptr);
nothrow @nogc @safe uint **volatileLoad**(uint\* ptr);
nothrow @nogc @safe ulong **volatileLoad**(ulong\* ptr);
nothrow @nogc @safe void **volatileStore**(ubyte\* ptr, ubyte value);
nothrow @nogc @safe void **volatileStore**(ushort\* ptr, ushort value);
nothrow @nogc @safe void **volatileStore**(uint\* ptr, uint value);
nothrow @nogc @safe void **volatileStore**(ulong\* ptr, ulong value);
Read/write value from/to the memory location indicated by ptr.
These functions are recognized by the compiler, and calls to them are guaranteed not to be removed (by dead assignment elimination, or by being presumed to have no effect) or reordered within the same thread.
These reordering guarantees are only made with regards to other operations done through these functions; the compiler is free to reorder regular loads/stores with regards to loads/stores done through these functions.
This is useful when dealing with memory-mapped I/O (MMIO) where a store can have an effect other than just writing a value, or where sequential loads with no intervening stores can retrieve different values from the same location due to external stores to the location.
These functions will, when possible, do the load/store as a single operation. In general, this is possible when the size of the operation is less than or equal to `(void*).sizeof`, although some targets may support larger operations. If the load/store cannot be done as a single operation, multiple smaller operations will be used.
These are not to be conflated with atomic operations. They do not guarantee any atomicity. This may be provided by coincidence as a result of the instructions used on the target, but this should not be relied on for portable programs. Further, no memory fences are implied by these functions. They should not be used for communication between threads. They may be used to guarantee a write or read cycle occurs at a specified address.
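A sketch of the MMIO pattern described above; the register addresses are invented for illustration and would come from the target's datasheet:
```
import core.volatile : volatileLoad, volatileStore;

void waitAndStart()
{
    auto status = cast(uint*) 0x4000_0004; // assumed status register
    auto ctrl   = cast(uint*) 0x4000_0000; // assumed control register
    // each load is a real access; the compiler cannot cache or elide it
    while ((volatileLoad(status) & 1) == 0) {}
    // the store cannot be elided or reordered past other volatile ops
    volatileStore(ctrl, volatileLoad(ctrl) | 2);
}
```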
d Type Qualifiers Type Qualifiers
===============
**Contents** 1. [Const and Immutable](#const_and_immutable)
2. [Immutable Storage Class](#immutable_storage_class)
3. [Const Storage Class](#const_storage_class)
4. [Immutable Type](#immutable_type)
5. [Creating Immutable Data](#creating_immutable_data)
6. [Removing Immutable or Const with a Cast](#removing_with_cast)
7. [Immutable Member Functions](#immutable_member_functions)
8. [Const Type](#const_type)
9. [Const Member Functions](#const_member_functions)
10. [Combining Qualifiers](#combining_qualifiers)
11. [Implicit Qualifier Conversions](#implicit_qualifier_conversions)
Type qualifiers modify a type by applying a [*TypeCtor*](type#TypeCtor). *TypeCtor*s are: `const`, `immutable`, `shared`, and `inout`. Each applies transitively to all subtypes.
Const and Immutable
-------------------
When examining a data structure or interface, it is very helpful to be able to easily tell which data can be expected to not change, which data might change, and who may change that data. This is done with the aid of the language typing system. Data can be marked as const or immutable, with the default being changeable (or *mutable*).
`immutable` applies to data that cannot change. Immutable data values, once constructed, remain the same for the duration of the program's execution. Immutable data can be placed in ROM (Read Only Memory) or in memory pages marked by the hardware as read only. Since immutable data does not change, it enables many opportunities for program optimization, and has applications in functional style programming.
`const` applies to data that cannot be changed by the const reference to that data. It may, however, be changed by another reference to that same data. Const finds applications in passing data through interfaces that promise not to modify them.
Both immutable and const are *transitive*, which means that any data reachable through an immutable reference is also immutable, and likewise for const.
Immutable Storage Class
-----------------------
The simplest immutable declarations use it as a storage class. It can be used to declare manifest constants.
```
immutable int x = 3; // x is set to 3
x = 4; // error, x is immutable
char[x] s; // s is an array of 3 chars
```
The type can be inferred from the initializer:
```
immutable y = 4; // y is of type int
y = 5; // error, y is immutable
```
If the initializer is not present, the immutable can be initialized from the corresponding constructor:
```
immutable int z;
void test()
{
z = 3; // error, z is immutable
}
static this()
{
z = 3; // ok, can set immutable that doesn't
// have static initializer
}
```
The initializer for a non-local immutable declaration must be evaluatable at compile time:
```
int foo(int f) { return f * 3; }
int i = 5;
immutable x = 3 * 4; // ok, 12
immutable y = i + 1; // error, cannot evaluate at compile time
immutable z = foo(2) + 1; // ok, foo(2) can be evaluated at compile time, 7
```
The initializer for a non-static local immutable declaration is evaluated at run time:
```
int foo(int f)
{
immutable x = f + 1; // evaluated at run time
x = 3; // error, x is immutable
}
```
Because immutable is transitive, data referred to by an immutable is also immutable:
```
immutable char[] s = "foo";
s[0] = 'a'; // error, s refers to immutable data
s = "bar"; // error, s is immutable
```
Immutable declarations can appear as lvalues, i.e. they can have their address taken, and occupy storage.
Const Storage Class
-------------------
A const declaration is exactly like an immutable declaration, with the following differences:
* Any data referenced by the const declaration cannot be changed from the const declaration, but it might be changed by other references to the same data.
* The type of a const declaration is itself const.
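For example, a minimal sketch of the first point:
```
int x = 3;
const int* p = &x; // const view of mutable data
// *p = 4;         // error: cannot modify through the const reference
x = 4;             // ok: changed through another, mutable reference
assert(*p == 4);
```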
Immutable Type
--------------
Data that will never change its value can be typed as immutable. The immutable keyword can be used as a *type qualifier*:
```
immutable(char)[] s = "hello";
```
The immutable applies to the type within the following parentheses. So, while `s` can be assigned new values, the contents of `s[]` cannot be:
```
s[0] = 'b'; // error, s[] is immutable
s = null; // ok, s itself is not immutable
```
Immutability is transitive, meaning it applies to anything that can be referenced from the immutable type:
```
immutable(char*)** p = ...;
p = ...; // ok, p is not immutable
*p = ...; // ok, *p is not immutable
**p = ...; // error, **p is immutable
***p = ...; // error, ***p is immutable
```
Immutable used as a storage class is equivalent to using immutable as a type qualifier for the entire type of a declaration:
```
immutable int x = 3; // x is typed as immutable(int)
immutable(int) y = 3; // y is immutable
```
Creating Immutable Data
-----------------------
The first way to create immutable data is to use a literal that is already immutable, such as a string literal. String literals are always immutable.
```
auto s = "hello"; // s is string, i.e. immutable(char)[]
char[] p = "world"; // error, cannot implicitly convert immutable
// to mutable
```
The second way is to cast data to immutable. When doing so, it is up to the programmer to ensure that any mutable references to the same data are not used to modify the data after the cast.
```
char[] s = ['a'];
s[0] = 'b'; // ok
immutable(char)[] p = cast(immutable)s; // ok, if data is not mutated
// through s anymore
s[0] = 'c'; // undefined behavior
immutable(char)[] q = cast(immutable)s.dup; // always ok, unique reference
char[][] s2 = [['a', 'b'], ['c', 'd']];
immutable(char[][]) p2 = cast(immutable)s2.dup; // dangerous, only the first
// level of elements is unique
s2[0] = ['x', 'y']; // ok, doesn't affect p2
s2[1][0] = 'z'; // undefined behavior
immutable(char[][]) q2 = [s2[0].dup, s2[1].dup]; // always ok, unique references
```
The `.idup` property is a convenient way to create an immutable copy of an array:
```
auto p = s.idup;
p[0] = ...; // error, p[] is immutable
```
Removing Immutable or Const with a Cast
---------------------------------------
An immutable or const type qualifier can be removed with a cast:
```
immutable int* p = ...;
int* q = cast(int*)p;
```
This does not mean, however, that one can change the data:
```
*q = 3; // allowed by compiler, but result is undefined behavior
```
The ability to cast away immutable-correctness is necessary in some cases where the static typing is incorrect and not fixable, such as when referencing code in a library one cannot change. Casting is, as always, a blunt and effective instrument, and when using it to cast away immutable-correctness, one must assume the responsibility to ensure the immutability of the data, as the compiler will no longer be able to statically do so.
Note that casting away a const qualifier and then mutating is undefined behavior, too, even when the referenced data is mutable. This is so that compilers and programmers can make assumptions based on const alone. For example, here it may be assumed that `f` does not alter `x`:
```
void f(const int* a);
void main()
{
int x = 1;
f(&x);
assert(x == 1); // guaranteed to hold
}
```
Immutable Member Functions
--------------------------
Immutable member functions are guaranteed that the object and anything referred to by the `this` reference is immutable. They are declared as:
```
struct S
{
int x;
void foo() immutable
{
x = 4; // error, x is immutable
this.x = 4; // error, x is immutable
}
}
```
Note that using immutable on the left hand side of a method does not apply to the return type:
```
struct S
{
immutable int[] bar() // bar is still immutable, return type is not!
{
}
}
```
To make the return type immutable, surround it with parentheses:
```
struct S
{
immutable(int[]) bar() // bar is now mutable, return type is immutable.
{
}
}
```
To make both the return type and the method immutable, write:
```
struct S
{
immutable(int[]) bar() immutable
{
}
}
```
Const Type
----------
Const types are like immutable types, except that const forms a read-only *view* of data. Other aliases to that same data may change it at any time.
Const Member Functions
----------------------
Const member functions are functions that are not allowed to change any part of the object through the member function's this reference.
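For example:
```
struct S
{
    int x;
    int get() const
    {
        // x = 1; // error: cannot modify a member through a const this
        return x;
    }
}
```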
Combining Qualifiers
--------------------
More than one qualifier may apply to a type. The order of application is irrelevant, for example given an unqualified type `T`, `const shared T` and `shared const T` are the same type. For that reason, this document depicts qualifier combinations without parentheses unless necessary and in alphabetic order.
Applying a qualifier to a type that already has that qualifier is legal but has no effect, e.g. given an unqualified type `T`, `shared(const shared T)` yields the type `const shared T`.
Applying the `immutable` qualifier to any type (qualified or not) results in `immutable T`. Applying any qualifier to `immutable T` results in `immutable T`. This makes `immutable` a fixed point of qualifier combinations and makes types such as `const(immutable(shared T))` impossible to create.
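These combination rules can be checked directly with `static assert` (a minimal sketch):
```
alias T = int;
static assert(is(const shared T == shared const T));         // order irrelevant
static assert(is(shared(const shared T) == const shared T)); // reapplying is a no-op
static assert(is(const(immutable T) == immutable T));        // immutable is a fixed point
```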
Assuming `T` is an unqualified type, the graph below illustrates how qualifiers combine (combinations with `immutable` are omitted). For each node, applying the qualifier labeling the edge leads to the resulting type.
Implicit Qualifier Conversions
------------------------------
Values that have no mutable indirections (including structs that don't contain any field with mutable indirections) can be implicitly converted across *mutable*, `const`, `immutable`, `const shared`, `inout` and `inout shared`.
References to qualified objects can be implicitly converted according to the following rules:
In the graph above, any directed path is a legal implicit conversion. No qualifier combinations other than the ones shown are valid. If a directed path exists between two sets of qualifiers, the types thus qualified are called [qualifier-convertible](http://dlang.org/glossary.html#qualifier-convertible). The same information is shown below in tabular format:
Implicit Conversion of Reference Types
| from/to | *mutable* | `const` | `shared` | `const shared` | `inout` | `const inout` | `inout shared` | `const inout shared` | `immutable` |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *mutable* | ✔ | ✔ | | | | | | | |
| `const` | | ✔ | | | | | | | |
| `const inout` | | ✔ | | | | ✔ | | | |
| `const shared` | | | | ✔ | | | | | |
| `const inout shared` | | | | ✔ | | | | ✔ | |
| `immutable` | | ✔ | | ✔ | | ✔ | | ✔ | ✔ |
| `inout` | | ✔ | | | ✔ | ✔ | | | |
| `shared` | | | ✔ | ✔ | | | | | |
| `inout shared` | | | | ✔ | | | ✔ | ✔ | |
If an implicit conversion is disallowed by the table, an [*Expression*](expression#Expression) may be converted if:
* An expression may be converted from mutable or shared to immutable if the expression is unique and all expressions it transitively refers to are either unique or immutable.
* An expression may be converted from mutable to shared if the expression is unique and all expressions it transitively refers to are either unique, immutable, or shared.
* An expression may be converted from immutable to mutable if the expression is unique.
* An expression may be converted from shared to mutable if the expression is unique.
A *Unique Expression* is one for which there are no other references to the value of the expression and all expressions it transitively refers to are either also unique or are immutable. For example:
```
void main()
{
immutable int** p = new int*(null); // ok, unique
int x;
immutable int** q = new int*(&x); // error, there may be other references to x
immutable int y;
immutable int** r = new immutable(int)*(&y); // ok, y is immutable
}
```
Otherwise, a [*CastExpression*](expression#CastExpression) can be used to force a conversion when an implicit version is disallowed, but this cannot be done in `@safe` code, and the correctness of it must be verified by the user.
d core.vararg core.vararg
===========
The vararg module is intended to facilitate vararg manipulation in D. It should be interface compatible with the C module "stdarg," and the two modules may share a common implementation if possible (as is done here).
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Walter Bright, Hauke Duden
Source
[core/vararg.d](https://github.com/dlang/druntime/blob/master/src/core/vararg.d)
alias **va\_arg** = core.stdc.stdarg.**va\_arg**;
void **va\_arg**()(ref va\_list ap, TypeInfo ti, void\* parmn);
Retrieve and store through parmn the next value that is of TypeInfo ti. Used when the static type is not known.
d core.memory core.memory
===========
This module provides an interface to the garbage collector used by applications written in the D programming language. It allows the garbage collector in the runtime to be swapped without affecting binary compatibility of applications.
Using this module is not necessary in typical D code. It is mostly useful when doing low-level memory management.
Notes to users
1. The GC is a conservative mark-and-sweep collector. It only runs a collection cycle when an allocation is requested of it, never otherwise. Hence, if the program is not doing allocations, there will be no GC collection pauses. The pauses occur because all threads the GC knows about are halted so the threads' stacks and registers can be scanned for references to GC allocated data.
2. The GC does not know about threads that were created by directly calling the OS/C runtime thread creation APIs and D threads that were detached from the D runtime after creation. Such threads will not be paused for a GC collection, and the GC might not detect references to GC allocated data held by them. This can cause memory corruption. There are several ways to resolve this issue:
1. Do not hold references to GC allocated data in such threads.
2. Register/unregister such data with calls to [`addRoot`](#addRoot)/[`removeRoot`](#removeRoot) and [`addRange`](#addRange)/[`removeRange`](#removeRange).
3. Maintain another reference to that same data in another thread that the GC does know about.
4. Disable GC collection cycles while that thread is active with [`disable`](#disable)/[`enable`](#enable).
5. Register the thread with the GC using [`core.thread.thread_attachThis`](core_thread#thread_attachThis)/[`core.thread.thread_detachThis`](core_thread#thread_detachThis).
Notes to implementors
* On POSIX systems, the signals SIGUSR1 and SIGUSR2 are reserved by this module for use in the garbage collector implementation. Typically, they will be used to stop and resume other threads when performing a collection, but an implementation may choose not to use this mechanism (or not stop the world at all, in the case of concurrent garbage collectors).
* Registers, the stack, and any other memory locations added through the `GC.[addRange](#addRange)` function are always scanned conservatively. This means that even if a variable is e.g. of type `float`, it will still be scanned for possible GC pointers. And, if the word-interpreted representation of the variable matches a GC-managed memory block's address, that memory block is considered live.
* Implementations are free to scan the non-root heap in a precise manner, so that fields of types like `float` will not be considered relevant when scanning the heap. Thus, casting a GC pointer to an integral type (e.g. `size_t`) and storing it in a field of that type inside the GC heap may mean that it will not be recognized if the memory block was allocated with precise type info or with the `GC.BlkAttr.[NO\_SCAN](#NO_SCAN)` attribute.
* Destructors will always be executed while other threads are active; that is, an implementation that stops the world must not execute destructors until the world has been resumed.
* A destructor of an object must not access object references within the object. This means that an implementation is free to optimize based on this rule.
* An implementation is free to perform heap compaction and copying so long as no valid GC pointers are invalidated in the process. However, memory allocated with `GC.BlkAttr.[NO\_MOVE](#NO_MOVE)` must not be moved/copied.
* Implementations must support interior pointers. That is, if the only reference to a GC-managed memory block points into the middle of the block rather than the beginning (for example), the GC must consider the memory block live. The exception to this rule is when a memory block is allocated with the `GC.BlkAttr.[NO\_INTERIOR](#NO_INTERIOR)` attribute; it is the user's responsibility to make sure such memory blocks have a proper pointer to them when they should be considered live.
* It is acceptable for an implementation to store bit flags into pointer values and GC-managed memory blocks, so long as such a trick is not visible to the application. In practice, this means that only a stop-the-world collector can do this.
* Implementations are free to assume that GC pointers are only stored on word boundaries. Unaligned pointers may be ignored entirely.
* Implementations are free to run collections at any point. It is, however, recommendable to only do so when an allocation attempt happens and there is insufficient memory available.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Sean Kelly, Alex Rønne Petersen
Source
[core/memory.d](https://github.com/dlang/druntime/blob/master/src/core/memory.d)
immutable size\_t **pageSize**;
The size of a system page in bytes.
This value is set at startup time of the application. It's safe to use early in the start process, like in shared module constructors and initialization of the D runtime itself.
Examples:
```
ubyte[] buffer = new ubyte[pageSize];
```
struct **GC**;
This struct encapsulates all garbage collection functionality for the D programming language.
struct **Stats**;
Aggregation of GC stats to be exposed via public API
size\_t **usedSize**;
number of used bytes on the GC heap (might only get updated after a collection)
size\_t **freeSize**;
number of free bytes on the GC heap (might only get updated after a collection)
ulong **allocatedInCurrentThread**;
number of bytes allocated for current thread since program start
struct **ProfileStats**;
Aggregation of current profile information
size\_t **numCollections**;
total number of GC cycles
Duration **totalCollectionTime**;
total time spent doing GC
Duration **totalPauseTime**;
total time threads were paused doing GC
Duration **maxPauseTime**;
largest time threads were paused during one GC cycle
Duration **maxCollectionTime**;
largest time spent doing one GC cycle
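These statistics can be read at run time; a small sketch assuming the `GC.stats` and `GC.profileStats` accessors (present in recent druntime versions):
```
import core.memory : GC;
import std.stdio : writeln;

auto s = GC.stats();
writeln("used bytes: ", s.usedSize);
writeln("free bytes: ", s.freeSize);
auto p = GC.profileStats();
writeln("collections so far: ", p.numCollections);
writeln("total pause time:   ", p.totalPauseTime);
```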
static nothrow void **enable**();
Enables automatic garbage collection behavior if collections have previously been suspended by a call to disable. This function is reentrant, and must be called once for every call to disable before automatic collections are enabled.
static nothrow void **disable**();
Disables automatic garbage collections performed to minimize the process footprint. Collections may continue to occur in instances where the implementation deems necessary for correct program behavior, such as during an out of memory condition. This function is reentrant, but enable must be called once for each call to disable.
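A common pattern is to bracket a latency-sensitive region with balanced `disable`/`enable` calls:
```
GC.disable();            // suspend collection cycles
scope(exit) GC.enable(); // re-enable even if an exception is thrown
// ... latency-sensitive work; allocations still succeed, but no
// collection pause will interrupt this region (barring OOM pressure) ...
```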
static nothrow void **collect**();
Begins a full collection. While the meaning of this may change based on the garbage collector implementation, typical behavior is to scan all stack segments for roots, mark accessible memory blocks as alive, and then to reclaim free space. This action may need to suspend all running threads for at least part of the collection process.
static nothrow void **minimize**();
Indicates that the managed memory space be minimized by returning free physical memory to the operating system. The amount of free memory returned depends on the allocator design and on program behavior.
enum **BlkAttr**: uint;
Elements for a bit field representing memory block attributes. These are manipulated via the getAttr, setAttr, clrAttr functions.
**NONE**
No attributes set.
**FINALIZE**
Finalize the data in this block on collect.
**NO\_SCAN**
Do not scan through this block on collect.
**NO\_MOVE**
Do not move this memory block on collect.
**APPENDABLE**
This block contains the info to allow appending.
This can be used to manually allocate arrays. Initial slice size is 0.
Note
The slice's usable size will not match the block size. Use [`capacity`](#capacity) to retrieve actual usable capacity.
Example
```
// Allocate the underlying array.
int* pToArray = cast(int*)GC.malloc(10 * int.sizeof, GC.BlkAttr.NO_SCAN | GC.BlkAttr.APPENDABLE);
// Bind a slice. Check the slice has capacity information.
int[] slice = pToArray[0 .. 0];
assert(capacity(slice) > 0);
// Appending to the slice will not relocate it.
slice.length = 5;
slice ~= 1;
assert(slice.ptr == pToArray);
```
**NO\_INTERIOR**
This block is guaranteed to have a pointer to its base while it is alive. Interior pointers can be safely ignored. This attribute is useful for eliminating false pointers in very large data structures and is only implemented for data structures at least a page in size.
alias **BlkInfo** = .BlkInfo\_;
Contains aggregate information about a block of managed memory. The purpose of this struct is to support a more efficient query style in instances where detailed information is needed.
* `base` - A pointer to the base of the block in question.
* `size` - The size of the block, calculated from base.
* `attr` - Attribute bits set on the memory block.
static nothrow uint **getAttr**(scope const void\* p);
static pure nothrow uint **getAttr**(void\* p);
Returns a bit field representing all block attributes set for the memory referenced by p. If p references memory not originally allocated by this garbage collector, points to the interior of a memory block, or if p is null, zero will be returned.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
Returns:
A bit field containing any bits set for the memory block referenced by p or zero on error.
static nothrow uint **setAttr**(scope const void\* p, uint a);
static pure nothrow uint **setAttr**(void\* p, uint a);
Sets the specified bits for the memory referenced by p. If p references memory not originally allocated by this garbage collector, points to the interior of a memory block, or if p is null, no action will be performed.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
| uint `a` | A bit field containing any bits to set for this memory block. |
Returns:
The result of a call to getAttr after the specified bits have been set.
static nothrow uint **clrAttr**(scope const void\* p, uint a);
static pure nothrow uint **clrAttr**(void\* p, uint a);
Clears the specified bits for the memory referenced by p. If p references memory not originally allocated by this garbage collector, points to the interior of a memory block, or if p is null, no action will be performed.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
| uint `a` | A bit field containing any bits to clear for this memory block. |
Returns:
The result of a call to getAttr after the specified bits have been cleared.
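A short sketch tying the three attribute functions together:
```
import core.memory : GC;

auto p = GC.malloc(64);
GC.setAttr(p, GC.BlkAttr.NO_SCAN);   // mark the block as pointer-free
assert(GC.getAttr(p) & GC.BlkAttr.NO_SCAN);
GC.clrAttr(p, GC.BlkAttr.NO_SCAN);
assert(!(GC.getAttr(p) & GC.BlkAttr.NO_SCAN));
GC.free(p);
```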
static pure nothrow void\* **malloc**(size\_t sz, uint ba = 0, const TypeInfo ti = null);
Requests an aligned block of managed memory from the garbage collector. This memory may be deleted at will with a call to free, or it may be discarded and cleaned up automatically during a collection run. If allocation fails, this function will call onOutOfMemory which is expected to throw an OutOfMemoryError.
Parameters:
| | |
| --- | --- |
| size\_t `sz` | The desired allocation size in bytes. |
| uint `ba` | A bitmask of the attributes to set on this block. |
| TypeInfo `ti` | TypeInfo to describe the memory. The GC might use this information to improve scanning for pointers or to call finalizers. |
Returns:
A reference to the allocated memory or null if insufficient memory is available.
Throws:
OutOfMemoryError on allocation failure.
static pure nothrow BlkInfo **qalloc**(size\_t sz, uint ba = 0, const TypeInfo ti = null);
Requests an aligned block of managed memory from the garbage collector. This memory may be deleted at will with a call to free, or it may be discarded and cleaned up automatically during a collection run. If allocation fails, this function will call onOutOfMemory which is expected to throw an OutOfMemoryError.
Parameters:
| | |
| --- | --- |
| size\_t `sz` | The desired allocation size in bytes. |
| uint `ba` | A bitmask of the attributes to set on this block. |
| TypeInfo `ti` | TypeInfo to describe the memory. The GC might use this information to improve scanning for pointers or to call finalizers. |
Returns:
Information regarding the allocated memory block or BlkInfo.init on error.
Throws:
OutOfMemoryError on allocation failure.
static pure nothrow void\* **calloc**(size\_t sz, uint ba = 0, const TypeInfo ti = null);
Requests an aligned block of managed memory from the garbage collector, which is initialized with all bits set to zero. This memory may be deleted at will with a call to free, or it may be discarded and cleaned up automatically during a collection run. If allocation fails, this function will call onOutOfMemory which is expected to throw an OutOfMemoryError.
Parameters:
| | |
| --- | --- |
| size\_t `sz` | The desired allocation size in bytes. |
| uint `ba` | A bitmask of the attributes to set on this block. |
| TypeInfo `ti` | TypeInfo to describe the memory. The GC might use this information to improve scanning for pointers or to call finalizers. |
Returns:
A reference to the allocated memory or null if insufficient memory is available.
Throws:
OutOfMemoryError on allocation failure.
static pure nothrow void\* **realloc**(void\* p, size\_t sz, uint ba = 0, const TypeInfo ti = null);
Extend, shrink or allocate a new block of memory keeping the contents of an existing block
If `sz` is zero, the memory referenced by p will be deallocated as if by a call to `free`. If `p` is `null`, new memory will be allocated via `malloc`. If `p` is pointing to memory not allocated from the GC or to the interior of an allocated memory block, no operation is performed and null is returned.
Otherwise, a new memory block of size `sz` will be allocated as if by a call to `malloc`, or the implementation may instead resize or shrink the memory block in place. The contents of the new memory block will be the same as the contents of the old memory block, up to the lesser of the new and old sizes.
The caller guarantees that there are no other live pointers to the passed memory block, still it might not be freed immediately by `realloc`. The garbage collector can reclaim the memory block in a later collection if it is unused. If allocation fails, this function will throw an `OutOfMemoryError`.
If `ba` is zero (the default), the attributes of the existing memory will be used for the allocation. If `ba` is not zero and no new memory is allocated, the bits in `ba` will replace those of the current memory block.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the base of a valid memory block or to `null`. |
| size\_t `sz` | The desired allocation size in bytes. |
| uint `ba` | A bitmask of the BlkAttr attributes to set on this block. |
| TypeInfo `ti` | TypeInfo to describe the memory. The GC might use this information to improve scanning for pointers or to call finalizers. |
Returns:
A reference to the allocated memory on success or `null` if `sz` is zero or the pointer does not point to the base of a GC-allocated memory block.
Throws:
`OutOfMemoryError` on allocation failure.
Examples:
```
enum size1 = 1 << 11 + 1; // page in large object pool
enum size2 = 1 << 22 + 1; // larger than large object pool size
auto data1 = cast(ubyte*)GC.calloc(size1);
auto data2 = cast(ubyte*)GC.realloc(data1, size2);
GC.BlkInfo info = GC.query(data2);
assert(info.size >= size2);
```
static pure nothrow size\_t **extend**(void\* p, size\_t mx, size\_t sz, const TypeInfo ti = null);
Requests that the managed memory block referenced by p be extended in place by at least mx bytes, with a desired extension of sz bytes. If an extension of the required size is not possible or if p references memory not originally allocated by this garbage collector, no action will be taken.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
| size\_t `mx` | The minimum extension size in bytes. |
| size\_t `sz` | The desired extension size in bytes. |
| TypeInfo `ti` | TypeInfo to describe the full memory block. The GC might use this information to improve scanning for pointers or to call finalizers. |
Returns:
The size in bytes of the extended memory block referenced by p or zero if no extension occurred.
Note
Extend may also be used to extend slices (or memory blocks with [`APPENDABLE`](#APPENDABLE) info). However, use the return value only as an indicator of success. [`capacity`](#capacity) should be used to retrieve actual usable slice capacity.
Examples:
Standard extending
```
size_t size = 1000;
int* p = cast(int*)GC.malloc(size * int.sizeof, GC.BlkAttr.NO_SCAN);
//Try to extend the allocated data by 1000 elements, preferred 2000.
size_t u = GC.extend(p, 1000 * int.sizeof, 2000 * int.sizeof);
if (u != 0)
size = u / int.sizeof;
```
Examples:
slice extending
```
int[] slice = new int[](1000);
int* p = slice.ptr;
//Check we have access to capacity before attempting the extend
if (slice.capacity)
{
//Try to extend slice by 1000 elements, preferred 2000.
size_t u = GC.extend(p, 1000 * int.sizeof, 2000 * int.sizeof);
if (u != 0)
{
slice.length = slice.capacity;
assert(slice.length >= 2000);
}
}
```
static nothrow size\_t **reserve**(size\_t sz);
Requests that at least sz bytes of memory be obtained from the operating system and marked as free.
Parameters:
| | |
| --- | --- |
| size\_t `sz` | The desired size in bytes. |
Returns:
The actual number of bytes reserved or zero on error.
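A minimal sketch; pre-reserving before a known allocation burst can reduce the number of collections triggered by heap growth (the 1 MiB figure is arbitrary):
```
import core.memory : GC;

size_t reserved = GC.reserve(1024 * 1024);
if (reserved == 0)
{
    // Reservation failed; allocations still work,
    // just without the pre-grown heap.
}
```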
static pure nothrow @nogc void **free**(void\* p);
Deallocates the memory referenced by p. If p is null, no action occurs. If p references memory not originally allocated by this garbage collector, if p points to the interior of a memory block, or if this method is called from a finalizer, no action will be taken. The block will not be finalized regardless of whether the FINALIZE attribute is set. If finalization is desired, call [`destroy`](object#destroy) prior to `GC.free`.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
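A minimal sketch; note the explicit `destroy` call, since `GC.free` never runs finalizers:
```
import core.memory : GC;

auto o = new Object;
destroy(o);             // finalize first: GC.free will not
GC.free(cast(void*) o); // the reference is now dangling
o = null;
```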
static nothrow @nogc inout(void)\* **addrOf**(inout(void)\* p);
static pure nothrow @nogc void\* **addrOf**(void\* p);
Returns the base address of the memory block containing p. This value is useful to determine whether p is an interior pointer, and the result may be passed to routines such as sizeOf which may otherwise fail. If p references memory not originally allocated by this garbage collector, if p is null, or if the garbage collector does not support this operation, null will be returned.
Parameters:
| | |
| --- | --- |
| inout(void)\* `p` | A pointer to the root or the interior of a valid memory block or to null. |
Returns:
The base address of the memory block referenced by p or null on error.
static nothrow @nogc size\_t **sizeOf**(scope const void\* p);
static pure nothrow @nogc size\_t **sizeOf**(void\* p);
Returns the true size of the memory block referenced by p. This value represents the maximum number of bytes for which a call to realloc may resize the existing block in place. If p references memory not originally allocated by this garbage collector, points to the interior of a memory block, or if p is null, zero will be returned.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root of a valid memory block or to null. |
Returns:
The size in bytes of the memory block referenced by p or zero on error.
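A minimal sketch of how `addrOf` and `sizeOf` interact with interior pointers (the 64-byte size is arbitrary):
```
import core.memory : GC;

void* base = GC.malloc(64);
void* interior = cast(ubyte*) base + 8;

assert(GC.addrOf(interior) == base); // recovers the block base
assert(GC.sizeOf(interior) == 0);    // interior pointers yield zero
assert(GC.sizeOf(base) >= 64);       // true block size at the base
```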
static nothrow BlkInfo **query**(scope const void\* p);
static pure nothrow BlkInfo **query**(void\* p);
Returns aggregate information about the memory block containing p. If p references memory not originally allocated by this garbage collector, if p is null, or if the garbage collector does not support this operation, BlkInfo.init will be returned. Typically, support for this operation is dependent on support for addrOf.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to the root or the interior of a valid memory block or to null. |
Returns:
Information regarding the memory block referenced by p or BlkInfo.init on error.
static nothrow Stats **stats**();
Returns runtime stats for the currently active GC implementation. See `core.memory.GC.Stats` for the list of available metrics.
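A minimal sketch (field names per `core.memory.GC.Stats`):
```
import core.memory : GC;
import std.stdio : writeln;

auto s = GC.stats();
writeln(s.usedSize); // bytes in use on the GC heap
writeln(s.freeSize); // bytes free on the GC heap
writeln(s.allocatedInCurrentThread); // lifetime bytes for this thread
```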
static nothrow @nogc @safe ProfileStats **profileStats**();
Returns runtime profile stats for the currently active GC implementation. See `core.memory.GC.ProfileStats` for the list of available metrics.
static nothrow @nogc void **addRoot**(const void\* p);
Adds an internal root pointing to the GC memory block referenced by p. As a result, the block referenced by p itself and any blocks accessible via it will be considered live until the root is removed again.
If p is null, no operation is performed.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer into a GC-managed memory block or null. |
Example
```
// Typical C-style callback mechanism; the passed function
// is invoked with the user-supplied context pointer at a
// later point.
extern(C) void addCallback(void function(void*), void*);
// Allocate an object on the GC heap (this would usually be
// some application-specific context data).
auto context = new Object;
// Make sure that it is not collected even if it is no
// longer referenced from D code (stack, GC heap, …).
GC.addRoot(cast(void*)context);
// Also ensure that a moving collector does not relocate
// the object.
GC.setAttr(cast(void*)context, GC.BlkAttr.NO_MOVE);
// Now context can be safely passed to the C library.
addCallback(&myHandler, cast(void*)context);
extern(C) void myHandler(void* ctx)
{
// Assuming that the callback is invoked only once, the
// added root can be removed again now to allow the GC
// to collect it later.
GC.removeRoot(ctx);
GC.clrAttr(ctx, GC.BlkAttr.NO_MOVE);
auto context = cast(Object)ctx;
// Use context here…
}
```
static nothrow @nogc void **removeRoot**(const void\* p);
Removes the memory block referenced by p from an internal list of roots to be scanned during a collection. If p is null or is not a value previously passed to addRoot() then no operation is performed.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer into a GC-managed memory block or null. |
static nothrow @nogc void **addRange**(const void\* p, size\_t sz, const TypeInfo ti = null);
Adds `p[0 .. sz]` to the list of memory ranges to be scanned for pointers during a collection. If p is null, no operation is performed.
Note that `p[0 .. sz]` is treated as an opaque range of memory assumed to be suitably managed by the caller. In particular, if p points into a GC-managed memory block, addRange does *not* mark this block as live.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to a valid memory address or to null. |
| size\_t `sz` | The size in bytes of the block to add. If sz is zero then no operation will occur. If p is null then sz must be zero. |
| TypeInfo `ti` | TypeInfo to describe the memory. The GC might use this information to improve scanning for pointers or to call finalizers. |
Example
```
// Allocate a piece of memory on the C heap.
enum size = 1_000;
auto rawMemory = core.stdc.stdlib.malloc(size);
// Add it as a GC range.
GC.addRange(rawMemory, size);
// Now, pointers to GC-managed memory stored in
// rawMemory will be recognized on collection.
```
static nothrow @nogc void **removeRange**(const void\* p);
Removes the memory range starting at p from an internal list of ranges to be scanned during a collection. If p is null or does not represent a value previously passed to addRange() then no operation is performed.
Parameters:
| | |
| --- | --- |
| void\* `p` | A pointer to a valid memory address or to null. |
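A minimal sketch pairing `addRange` with `removeRange` around a C-heap buffer's lifetime (the buffer size is arbitrary):
```
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

enum size = 1024;
void* buf = malloc(size);
GC.addRange(buf, size); // GC now scans buf for pointers
// ... store references to GC-allocated objects in buf ...
GC.removeRange(buf);    // must precede the free
free(buf);
```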
static void **runFinalizers**(scope const void[] segment);
Runs any finalizer that is located in the address range of the given code segment. This is used before unloading shared libraries. All matching objects which have a finalizer in this code segment are assumed to be dead; using them during or after a call to this method is undefined behavior.
Parameters:
| | |
| --- | --- |
| void[] `segment` | address range of a code segment. |
static nothrow @nogc @safe bool **inFinalizer**();
Queries the GC whether the current thread is running object finalization as part of a GC collection, or an explicit call to runFinalizers.
As some GC implementations (such as the current conservative one) don't support GC memory allocation during object finalization, this function can be used to guard against such programming errors.
Returns:
true if the current thread is in a finalizer, a destructor invoked by the GC.
Examples:
```
// Only code called from a destructor is executed during finalization.
assert(!GC.inFinalizer);
```
Examples:
```
enum Outcome
{
notCalled,
calledManually,
calledFromDruntime
}
static class Resource
{
static Outcome outcome;
this()
{
outcome = Outcome.notCalled;
}
~this()
{
if (GC.inFinalizer)
{
outcome = Outcome.calledFromDruntime;
import core.exception : InvalidMemoryOperationError;
try
{
/*
* Presently, allocating GC memory during finalization
* is forbidden and leads to
* `InvalidMemoryOperationError` being thrown.
*
* `GC.inFinalizer` can be used to guard against
* programming errors such as these and is also a more
* efficient way to verify whether a destructor was
* invoked by the GC.
*/
cast(void) GC.malloc(1);
assert(false);
}
catch (InvalidMemoryOperationError e)
{
return;
}
assert(false);
}
else
outcome = Outcome.calledManually;
}
}
static void createGarbage()
{
auto r = new Resource;
r = null;
}
assert(Resource.outcome == Outcome.notCalled);
createGarbage();
GC.collect;
assert(
Resource.outcome == Outcome.notCalled ||
Resource.outcome == Outcome.calledFromDruntime);
auto r = new Resource;
GC.runFinalizers((cast(const void*)typeid(Resource).destructor)[0..1]);
assert(Resource.outcome == Outcome.calledFromDruntime);
Resource.outcome = Outcome.notCalled;
debug(MEMSTOMP) {} else
{
// assume Resource data is still available
r.destroy;
assert(Resource.outcome == Outcome.notCalled);
}
r = new Resource;
assert(Resource.outcome == Outcome.notCalled);
r.destroy;
assert(Resource.outcome == Outcome.calledManually);
```
static nothrow ulong **allocatedInCurrentThread**();
Returns the number of bytes allocated for the current thread since program start. It is the same as GC.stats().allocatedInCurrentThread, but faster.
Examples:
Using allocatedInCurrentThread
```
ulong currentlyAllocated = GC.allocatedInCurrentThread();
struct DataStruct
{
long l1;
long l2;
long l3;
long l4;
}
DataStruct* unused = new DataStruct;
assert(GC.allocatedInCurrentThread() == currentlyAllocated + 32);
assert(GC.stats().allocatedInCurrentThread == currentlyAllocated + 32);
```
pure nothrow @nogc @trusted void\* **pureMalloc**()(size\_t size);
pure nothrow @nogc @trusted void\* **pureCalloc**()(size\_t nmemb, size\_t size);
pure nothrow @nogc @system void\* **pureRealloc**()(void\* ptr, size\_t size);
pure nothrow @nogc @system void **pureFree**()(void\* ptr);
Pure variants of C's memory allocation functions `malloc`, `calloc`, and `realloc` and deallocation function `free`.
UNIX 98 requires that errno be set to ENOMEM upon failure. Purity is achieved by saving and restoring the value of `errno`, thus behaving as if it were never changed.
See Also:
[D's rules for purity](https://dlang.org/spec/function.html#pure-functions), which allow for memory allocation under specific circumstances.
Examples:
```
ubyte[] fun(size_t n) pure
{
void* p = pureMalloc(n);
p !is null || n == 0 || assert(0);
scope(failure) p = pureRealloc(p, 0);
p = pureRealloc(p, n *= 2);
p !is null || n == 0 || assert(0);
return cast(ubyte[]) p[0 .. n];
}
auto buf = fun(100);
assert(buf.length == 200);
pureFree(buf.ptr);
```
@system void **\_\_delete**(T)(ref T x);
Destroys and then deallocates an object.
In detail, `__delete(x)` returns with no effect if `x` is `null`. Otherwise, it performs the following actions in sequence: * Calls the destructor `~this()` for the object referred to by `x` (if `x` is a class or interface reference) or for the object pointed to by `x` (if `x` is a pointer to a `struct`). Arrays of structs call the destructor, if defined, for each element in the array. If no destructor is defined, this step has no effect.
* Frees the memory allocated for `x`. If `x` is a reference to a class or interface, the memory allocated for the underlying instance is freed. If `x` is a pointer, the memory allocated for the pointed-to object is freed. If `x` is a built-in array, the memory allocated for the array is freed. If `x` does not refer to memory previously allocated with `new` (or the lower-level equivalents in the GC API), the behavior is undefined.
* Lastly, `x` is set to `null`. Any attempt to read or write the freed memory via other references will result in undefined behavior.
Note
Users should prefer [`destroy`](object#destroy) to explicitly finalize objects, and only resort to [`core.memory.__delete`](core_memory#_delete) when [`object.destroy`](object#destroy) wouldn't be a feasible option.
Parameters:
| | |
| --- | --- |
| T `x` | aggregate object that should be destroyed |
See Also:
[`destroy`](object#destroy), [`core.GC.free`](core_gc#free)
History:
The `delete` keyword allowed freeing GC-allocated memory. As this is inherently not `@safe`, it has been deprecated. This function has been added to provide an easy transition from `delete`; it performs the same functionality as the former `delete` keyword.
Examples:
Deleting classes
```
bool dtorCalled;
class B
{
int test;
~this()
{
dtorCalled = true;
}
}
B b = new B();
B a = b;
b.test = 10;
assert(GC.addrOf(cast(void*) b) != null);
__delete(b);
assert(b is null);
assert(dtorCalled);
assert(GC.addrOf(cast(void*) b) == null);
// but be careful, a still points to it
assert(a !is null);
assert(GC.addrOf(cast(void*) a) == null); // but not a valid GC pointer
```
Examples:
Deleting interfaces
```
bool dtorCalled;
interface A
{
int quack();
}
class B : A
{
int a;
int quack()
{
a++;
return a;
}
~this()
{
dtorCalled = true;
}
}
A a = new B();
a.quack();
assert(GC.addrOf(cast(void*) a) != null);
__delete(a);
assert(a is null);
assert(dtorCalled);
assert(GC.addrOf(cast(void*) a) == null);
```
Examples:
Deleting structs
```
bool dtorCalled;
struct A
{
string test;
~this()
{
dtorCalled = true;
}
}
auto a = new A("foo");
assert(GC.addrOf(cast(void*) a) != null);
__delete(a);
assert(a is null);
assert(dtorCalled);
assert(GC.addrOf(cast(void*) a) == null);
```
Examples:
Deleting arrays
```
int[] a = [1, 2, 3];
auto b = a;
assert(GC.addrOf(b.ptr) != null);
__delete(b);
assert(b is null);
assert(GC.addrOf(b.ptr) == null);
// but be careful, a still points to it
assert(a !is null);
assert(GC.addrOf(a.ptr) == null); // but not a valid GC pointer
```
Examples:
Deleting arrays of structs
```
int dtorCalled;
struct A
{
int a;
~this()
{
assert(dtorCalled == a);
dtorCalled++;
}
}
auto arr = [A(1), A(2), A(3)];
arr[0].a = 2;
arr[1].a = 1;
arr[2].a = 0;
assert(GC.addrOf(arr.ptr) != null);
__delete(arr);
assert(dtorCalled == 3);
assert(GC.addrOf(arr.ptr) == null);
```
d std.range.primitives std.range.primitives
====================
This module is a submodule of [`std.range`](std_range).
It defines the bidirectional and forward range primitives for arrays: [`empty`](#empty), [`front`](#front), [`back`](#back), [`popFront`](#popFront), [`popBack`](#popBack) and [`save`](#save).
It provides basic range functionality by defining several templates for testing whether a given object is a range, and what kind of range it is:
| | |
| --- | --- |
| [`isInputRange`](#isInputRange) | Tests if something is an *input range*, defined to be something from which one can sequentially read data using the primitives `front`, `popFront`, and `empty`. |
| [`isOutputRange`](#isOutputRange) | Tests if something is an *output range*, defined to be something to which one can sequentially write data using the [`put`](#put) primitive. |
| [`isForwardRange`](#isForwardRange) | Tests if something is a *forward range*, defined to be an input range with the additional capability that one can save one's current position with the `save` primitive, thus allowing one to iterate over the same range multiple times. |
| [`isBidirectionalRange`](#isBidirectionalRange) | Tests if something is a *bidirectional range*, that is, a forward range that allows reverse traversal using the primitives `back` and `popBack`. |
| [`isRandomAccessRange`](#isRandomAccessRange) | Tests if something is a *random access range*, which is a bidirectional range that also supports the array subscripting operation via the primitive `opIndex`. |
It also provides a number of templates that test for various range capabilities:
| | |
| --- | --- |
| [`hasMobileElements`](#hasMobileElements) | Tests if a given range's elements can be moved around using the primitives `moveFront`, `moveBack`, or `moveAt`. |
| [`ElementType`](#ElementType) | Returns the element type of a given range. |
| [`ElementEncodingType`](#ElementEncodingType) | Returns the encoding element type of a given range. |
| [`hasSwappableElements`](#hasSwappableElements) | Tests if a range is a forward range with swappable elements. |
| [`hasAssignableElements`](#hasAssignableElements) | Tests if a range is a forward range with mutable elements. |
| [`hasLvalueElements`](#hasLvalueElements) | Tests if a range is a forward range with elements that can be passed by reference and have their address taken. |
| [`hasLength`](#hasLength) | Tests if a given range has the `length` attribute. |
| [`isInfinite`](#isInfinite) | Tests if a given range is an *infinite range*. |
| [`hasSlicing`](#hasSlicing) | Tests if a given range supports the array slicing operation `R[x .. y]`. |
Finally, it includes some convenience functions for manipulating ranges:
| | |
| --- | --- |
| [`popFrontN`](#popFrontN) | Advances a given range by up to *n* elements. |
| [`popBackN`](#popBackN) | Advances a given bidirectional range from the right by up to *n* elements. |
| [`popFrontExactly`](#popFrontExactly) | Advances a given range by exactly *n* elements. |
| [`popBackExactly`](#popBackExactly) | Advances a given bidirectional range from the right by exactly *n* elements. |
| [`moveFront`](#moveFront) | Moves the front element of a range out and returns it. |
| [`moveBack`](#moveBack) | Moves the back element of a bidirectional range out and returns it. |
| [`moveAt`](#moveAt) | Moves the *i*'th element of a random-access range out and returns it. |
| [`walkLength`](#walkLength) | Computes the length of any range in O(n) time. |
| [`put`](#put) | Outputs element `e` to a range. |
Source
[std/range/primitives.d](https://github.com/dlang/phobos/blob/master/std/range/primitives.d)
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Andrei Alexandrescu](http://erdani.com), David Simcha, and [Jonathan M Davis](http://jmdavisprog.com). Credit for some of the ideas in building this module goes to [Leonardo Maffi](http://fantascienza.net/leonardo/so/).
enum bool **isInputRange**(R);
Returns `true` if `R` is an input range. An input range must define the primitives `empty`, `popFront`, and `front`. The following code should compile for any input range.
```
R r; // can define a range object
if (r.empty) {} // can test for empty
r.popFront(); // can invoke popFront()
auto h = r.front; // can get the front of the range of non-void type
```
The following rules of input ranges are assumed to hold true in all Phobos code. These rules are not checkable at compile time, so not conforming to them when writing ranges or range-based code will result in undefined behavior.
* `r.empty` returns `false` if and only if there is more data available in the range.
* `r.empty` evaluated multiple times, without calling `r.popFront`, or otherwise mutating the range object or the underlying data, yields the same result for every evaluation.
* `r.front` returns the current element in the range. It may return by value or by reference.
* `r.front` can be legally evaluated if and only if evaluating `r.empty` has, or would have, equaled `false`.
* `r.front` evaluated multiple times, without calling `r.popFront`, or otherwise mutating the range object or the underlying data, yields the same result for every evaluation.
* `r.popFront` advances to the next element in the range.
* `r.popFront` can be called if and only if evaluating `r.empty` has, or would have, equaled `false`.
Also, note that Phobos code assumes that the primitives `r.front` and `r.empty` are Ο(`1`) time complexity wise or "cheap" in terms of running time. Ο() statements in the documentation of range functions are made with this assumption.
See Also:
The header of [`std.range`](std_range) for tutorials on ranges.
Parameters:
| | |
| --- | --- |
| R | type to be tested |
Returns:
`true` if R is an input range, `false` if not
Examples:
```
struct A {}
struct B
{
void popFront();
@property bool empty();
@property int front();
}
static assert(!isInputRange!A);
static assert( isInputRange!B);
static assert( isInputRange!(int[]));
static assert( isInputRange!(char[]));
static assert(!isInputRange!(char[4]));
static assert( isInputRange!(inout(int)[]));
static struct NotDefaultConstructible
{
@disable this();
void popFront();
@property bool empty();
@property int front();
}
static assert( isInputRange!NotDefaultConstructible);
static struct NotDefaultConstructibleOrCopyable
{
@disable this();
@disable this(this);
void popFront();
@property bool empty();
@property int front();
}
static assert(isInputRange!NotDefaultConstructibleOrCopyable);
static struct Frontless
{
void popFront();
@property bool empty();
}
static assert(!isInputRange!Frontless);
static struct VoidFront
{
void popFront();
@property bool empty();
void front();
}
static assert(!isInputRange!VoidFront);
```
void **put**(R, E)(ref R r, E e);
Outputs `e` to `r`. The exact effect is dependent upon the two types. Several cases are accepted, as described below. The code snippets are attempted in order, and the first to compile "wins" and gets evaluated.
In this table "doPut" is a method that places `e` into `r`, using the correct primitive: `r.put(e)` if `R` defines `put`, `r.front = e` if `r` is an input range (followed by `r.popFront()`), or `r(e)` otherwise.
| Code Snippet | Scenario |
| --- | --- |
| `r.doPut(e);` | `R` specifically accepts an `E`. |
| `r.doPut([ e ]);` | `R` specifically accepts an `E[]`. |
| `r.putChar(e);` | `R` accepts some form of string or character. put will transcode the character `e` accordingly. |
| `for (; !e.empty; e.popFront()) put(r, e.front);` | Copying range `E` into `R`. |
Tip
`put` should *not* be used "UFCS-style", e.g. `r.put(e)`. Doing so may call `R.put` directly, bypassing any transformation feature provided by `Range.put`. `put(r, e)` is preferred.
Examples:
When an output range's `put` method only accepts elements of type `T`, use the global `put` to handle outputting a `T[]` to the range or vice-versa.
```
import std.traits : isSomeChar;
static struct A
{
string data;
void put(C)(C c) if (isSomeChar!C)
{
data ~= c;
}
}
static assert(isOutputRange!(A, char));
auto a = A();
put(a, "Hello");
writeln(a.data); // "Hello"
```
Examples:
`put` treats dynamic arrays as array slices, and will call `popFront` on the slice after an element has been copied. Be sure to save the position of the array before calling `put`.
```
int[] a = [1, 2, 3], b = [10, 20];
auto c = a;
put(a, b);
writeln(c); // [10, 20, 3]
// at this point, a was advanced twice, so it only contains
// its last element while c represents the whole array
writeln(a); // [3]
```
Examples:
It's also possible to `put` any width strings or characters into narrow strings -- put does the conversion for you. Note that putting the same width character as the target buffer type is `nothrow`, but transcoding can throw a [`std.utf.UTFException`](std_utf#UTFException).
```
// the elements must be mutable, so using string or const(char)[]
// won't compile
char[] s1 = new char[13];
auto r1 = s1;
put(r1, "Hello, World!"w);
writeln(s1); // "Hello, World!"
```
enum bool **isOutputRange**(R, E);
Returns `true` if `R` is an output range for elements of type `E`. An output range is defined functionally as a range that supports the operation `put(r, e)` as defined above.
See Also:
The header of [`std.range`](std_range) for tutorials on ranges.
Examples:
```
void myprint(scope const(char)[] s) { }
static assert(isOutputRange!(typeof(&myprint), char));
static assert( isOutputRange!(char[], char));
static assert( isOutputRange!(dchar[], wchar));
static assert( isOutputRange!(dchar[], dchar));
```
enum bool **isForwardRange**(R);
Returns `true` if `R` is a forward range. A forward range is an input range `r` that can save "checkpoints" by saving `r.save` to another value of type `R`. Notable examples of input ranges that are *not* forward ranges are file/socket ranges; copying such a range will not save the position in the stream, and they most likely reuse an internal buffer as the entire stream does not sit in memory. Subsequently, advancing either the original or the copy will advance the stream, so the copies are not independent.
The following code should compile for any forward range.
```
static assert(isInputRange!R);
R r1;
auto s1 = r1.save;
static assert(is(typeof(s1) == R));
```
Saving a range is not duplicating it; in the example above, `r1` and `s1` still refer to the same underlying data. They just navigate that data independently.
The semantics of a forward range (not checkable during compilation) are the same as for an input range, with the additional requirement that backtracking must be possible by saving a copy of the range object with `save` and using it later.
See Also:
The header of [`std.range`](std_range) for tutorials on ranges.
Examples:
```
static assert(!isForwardRange!(int));
static assert( isForwardRange!(int[]));
static assert( isForwardRange!(inout(int)[]));
```
enum bool **isBidirectionalRange**(R);
Returns `true` if `R` is a bidirectional range. A bidirectional range is a forward range that also offers the primitives `back` and `popBack`. The following code should compile for any bidirectional range.
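A generic sketch of that requirement, mirroring the concrete `int[]` example below:
```
static assert(isForwardRange!R);
R r;
r.popBack();     // can invoke popBack
auto t = r.back; // can get the back of the range
auto w = r.front;
static assert(is(typeof(t) == typeof(w))); // same type for front and back
```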
The semantics of a bidirectional range (not checkable during compilation) are assumed to be the following (`r` is an object of type `R`):
* `r.back` returns (possibly a reference to) the last element in the range. Calling `r.back` is allowed only if calling `r.empty` has, or would have, returned `false`.
See Also:
The header of [`std.range`](std_range) for tutorials on ranges.
Examples:
```
alias R = int[];
R r = [0,1];
static assert(isForwardRange!R); // is forward range
r.popBack(); // can invoke popBack
auto t = r.back; // can get the back of the range
auto w = r.front;
static assert(is(typeof(t) == typeof(w))); // same type for front and back
```
enum bool **isRandomAccessRange**(R);
Returns `true` if `R` is a random-access range. A random-access range is a bidirectional range that also offers the primitive `opIndex`, OR an infinite forward range that offers `opIndex`. In either case, the range must either offer `length` or be infinite. The following code should compile for any random-access range.
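A generic sketch of that requirement, mirroring the concrete `int[]` example below:
```
// a random-access range is finite and bidirectional,
// or infinite and forward
static assert(isBidirectionalRange!R ||
    isForwardRange!R && isInfinite!R);
R r;
auto e = r[1]; // can index
auto f = r.front;
static assert(is(typeof(e) == typeof(f))); // same type for indexed and front
static assert(hasLength!R || isInfinite!R); // length, or infinite
```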
The semantics of a random-access range (not checkable during compilation) are assumed to be the following (`r` is an object of type `R`): * `r.opIndex(n)` returns a reference to the `n`th element in the range.
Although `char[]` and `wchar[]` (as well as their qualified versions including `string` and `wstring`) are arrays, `isRandomAccessRange` yields `false` for them because they use variable-length encodings (UTF-8 and UTF-16 respectively). These types are bidirectional ranges only.
See Also:
The header of [`std.range`](std_range) for tutorials on ranges.
Examples:
```
import std.traits : isAggregateType, isAutodecodableString;
alias R = int[];
// range is finite and bidirectional or infinite and forward.
static assert(isBidirectionalRange!R ||
isForwardRange!R && isInfinite!R);
R r = [0,1];
auto e = r[1]; // can index
auto f = r.front;
static assert(is(typeof(e) == typeof(f))); // same type for indexed and front
static assert(!(isAutodecodableString!R && !isAggregateType!R)); // narrow strings cannot be indexed as ranges
static assert(hasLength!R || isInfinite!R); // must have length or be infinite
// $ must work as it does with arrays if opIndex works with $
static if (is(typeof(r[$])))
{
static assert(is(typeof(f) == typeof(r[$])));
// $ - 1 doesn't make sense with infinite ranges but needs to work
// with finite ones.
static if (!isInfinite!R)
static assert(is(typeof(f) == typeof(r[$ - 1])));
}
```
enum bool **hasMobileElements**(R);
Returns `true` iff `R` is an input range that supports the `moveFront` primitive, as well as `moveBack` and `moveAt` if it's a bidirectional or random access range. These may be explicitly implemented, or may work via the default behavior of the module level functions `moveFront` and friends. The following code should compile for any range with mobile elements.
```
alias E = ElementType!R;
R r;
static assert(isInputRange!R);
static assert(is(typeof(moveFront(r)) == E));
static if (isBidirectionalRange!R)
static assert(is(typeof(moveBack(r)) == E));
static if (isRandomAccessRange!R)
static assert(is(typeof(moveAt(r, 0)) == E));
```
Examples:
```
import std.algorithm.iteration : map;
import std.range : iota, repeat;
static struct HasPostblit
{
this(this) {}
}
auto nonMobile = map!"a"(repeat(HasPostblit.init));
static assert(!hasMobileElements!(typeof(nonMobile)));
static assert( hasMobileElements!(int[]));
static assert( hasMobileElements!(inout(int)[]));
static assert( hasMobileElements!(typeof(iota(1000))));
static assert( hasMobileElements!( string));
static assert( hasMobileElements!(dstring));
static assert( hasMobileElements!( char[]));
static assert( hasMobileElements!(dchar[]));
```
template **ElementType**(R)
The element type of `R`. `R` does not have to be a range. The element type is determined as the type yielded by `r.front` for an object `r` of type `R`. For example, `ElementType!(T[])` is `T` if `T[]` isn't a narrow string; if it is, the element type is `dchar`. If `R` doesn't have `front`, `ElementType!R` is `void`.
Examples:
```
import std.range : iota;
// Standard arrays: returns the type of the elements of the array
static assert(is(ElementType!(int[]) == int));
// Accessing .front retrieves the decoded dchar
static assert(is(ElementType!(char[]) == dchar)); // rvalue
static assert(is(ElementType!(dchar[]) == dchar)); // lvalue
// Ditto
static assert(is(ElementType!(string) == dchar));
static assert(is(ElementType!(dstring) == immutable(dchar)));
// For ranges it gets the type of .front.
auto range = iota(0, 10);
static assert(is(ElementType!(typeof(range)) == int));
```
template **ElementEncodingType**(R)
The encoding element type of `R`. For narrow strings (`char[]`, `wchar[]` and their qualified variants including `string` and `wstring`), `ElementEncodingType` is the character type of the string. For all other types, `ElementEncodingType` is the same as `ElementType`.
Examples:
```
import std.range : iota;
// internally the range stores the encoded type
static assert(is(ElementEncodingType!(char[]) == char));
static assert(is(ElementEncodingType!(wstring) == immutable(wchar)));
static assert(is(ElementEncodingType!(byte[]) == byte));
auto range = iota(0, 10);
static assert(is(ElementEncodingType!(typeof(range)) == int));
```
enum bool **hasSwappableElements**(R);
Returns `true` if `R` is an input range and has swappable elements. The following code should compile for any range with swappable elements.
```
R r;
static assert(isInputRange!R);
swap(r.front, r.front);
static if (isBidirectionalRange!R) swap(r.back, r.front);
static if (isRandomAccessRange!R) swap(r[0], r.front);
```
Examples:
```
static assert(!hasSwappableElements!(const int[]));
static assert(!hasSwappableElements!(const(int)[]));
static assert(!hasSwappableElements!(inout(int)[]));
static assert( hasSwappableElements!(int[]));
static assert(!hasSwappableElements!( string));
static assert(!hasSwappableElements!(dstring));
static assert(!hasSwappableElements!( char[]));
static assert( hasSwappableElements!(dchar[]));
```
enum bool **hasAssignableElements**(R);
Returns `true` if `R` is an input range and has mutable elements. The following code should compile for any range with assignable elements.
```
R r;
static assert(isInputRange!R);
r.front = r.front;
static if (isBidirectionalRange!R) r.back = r.front;
static if (isRandomAccessRange!R) r[0] = r.front;
```
Examples:
```
static assert(!hasAssignableElements!(const int[]));
static assert(!hasAssignableElements!(const(int)[]));
static assert( hasAssignableElements!(int[]));
static assert(!hasAssignableElements!(inout(int)[]));
static assert(!hasAssignableElements!( string));
static assert(!hasAssignableElements!(dstring));
static assert(!hasAssignableElements!( char[]));
static assert( hasAssignableElements!(dchar[]));
```
enum bool **hasLvalueElements**(R);
Tests whether the range `R` has lvalue elements. These are defined as elements that can be passed by reference and have their address taken. The following code should compile for any range with lvalue elements.
```
void passByRef(ref ElementType!R stuff);
...
static assert(isInputRange!R);
passByRef(r.front);
static if (isBidirectionalRange!R) passByRef(r.back);
static if (isRandomAccessRange!R) passByRef(r[0]);
```
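A brief sketch of outcomes (`iota` computes its elements on demand, so its `front` is an rvalue and cannot be passed by reference):
```
import std.range : iota;

static assert( hasLvalueElements!(int[]));
static assert(!hasLvalueElements!(typeof(iota(0, 10))));
```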
template **hasLength**(R)
Yields `true` if `R` has a `length` member that returns a value of `size_t` type. `R` does not have to be a range. If `R` is a range, algorithms in the standard library are only guaranteed to support `length` with type `size_t`.
Note that `length` is an optional primitive, as no range is required to implement it. Some ranges do not store their length explicitly, some cannot compute it without actually exhausting the range (e.g. socket streams), and some other ranges may be infinite.
Although narrow string types (`char[]`, `wchar[]`, and their qualified derivatives) do define a `length` property, `hasLength` yields `false` for them. This is because a narrow string's length does not reflect the number of characters, but instead the number of encoding units, and as such is not useful with range-oriented algorithms. To use strings as random-access ranges with length, use [`std.string.representation`](std_string#representation) or [`std.utf.byCodeUnit`](std_utf#byCodeUnit).
Examples:
```
static assert(!hasLength!(char[]));
static assert( hasLength!(int[]));
static assert( hasLength!(inout(int)[]));
struct A { size_t length() { return 0; } }
struct B { @property size_t length() { return 0; } }
static assert( hasLength!(A));
static assert( hasLength!(B));
```
template **isInfinite**(R)
Returns `true` if `R` is an infinite input range. An infinite input range is an input range that has a statically-defined enumerated member called `empty` that is always `false`, for example:
```
struct MyInfiniteRange
{
enum bool empty = false;
...
}
```
Examples:
```
import std.range : Repeat;
static assert(!isInfinite!(int[]));
static assert( isInfinite!(Repeat!(int)));
```
enum bool **hasSlicing**(R);
Returns `true` if `R` offers a slicing operator with integral boundaries that returns a forward range type.
For finite ranges, the result of `opSlice` must be of the same type as the original range type. If the range defines `opDollar`, then it must support subtraction.
For infinite ranges, when *not* using `opDollar`, the result of `opSlice` must be the result of [`take`](#take) or [`takeExactly`](#takeExactly) on the original range (they both return the same type for infinite ranges). However, when using `opDollar`, the result of `opSlice` must be that of the original range type.
The following expression must be true for `hasSlicing` to be `true`:
```
isForwardRange!R
&& !isNarrowString!R
&& is(ReturnType!((R r) => r[1 .. 1].length) == size_t)
&& (is(typeof(lvalueOf!R[1 .. 1]) == R) || isInfinite!R)
&& (!is(typeof(lvalueOf!R[0 .. $])) || is(typeof(lvalueOf!R[0 .. $]) == R))
&& (!is(typeof(lvalueOf!R[0 .. $])) || isInfinite!R
|| is(typeof(lvalueOf!R[0 .. $ - 1]) == R))
&& is(typeof((ref R r)
{
static assert(isForwardRange!(typeof(r[1 .. 2])));
}));
```
Examples:
```
import std.range : takeExactly;
static assert( hasSlicing!(int[]));
static assert( hasSlicing!(const(int)[]));
static assert(!hasSlicing!(const int[]));
static assert( hasSlicing!(inout(int)[]));
static assert(!hasSlicing!(inout int []));
static assert( hasSlicing!(immutable(int)[]));
static assert(!hasSlicing!(immutable int[]));
static assert(!hasSlicing!string);
static assert( hasSlicing!dstring);
enum rangeFuncs = "@property int front();" ~
"void popFront();" ~
"@property bool empty();" ~
"@property auto save() { return this; }" ~
"@property size_t length();";
struct A { mixin(rangeFuncs); int opSlice(size_t, size_t); }
struct B { mixin(rangeFuncs); B opSlice(size_t, size_t); }
struct C { mixin(rangeFuncs); @disable this(); C opSlice(size_t, size_t); }
struct D { mixin(rangeFuncs); int[] opSlice(size_t, size_t); }
static assert(!hasSlicing!(A));
static assert( hasSlicing!(B));
static assert( hasSlicing!(C));
static assert(!hasSlicing!(D));
struct InfOnes
{
enum empty = false;
void popFront() {}
@property int front() { return 1; }
@property InfOnes save() { return this; }
auto opSlice(size_t i, size_t j) { return takeExactly(this, j - i); }
auto opSlice(size_t i, Dollar d) { return this; }
struct Dollar {}
Dollar opDollar() const { return Dollar.init; }
}
static assert(hasSlicing!InfOnes);
```
auto **walkLength**(Range)(Range range)
Constraints: if (isInputRange!Range && !isInfinite!Range);
auto **walkLength**(Range)(Range range, const size\_t upTo)
Constraints: if (isInputRange!Range);
This is a best-effort implementation of `length` for any kind of range.
If `hasLength!Range`, simply returns `range.length` without checking `upTo` (when specified).
Otherwise, walks the range through its length and returns the number of elements seen. Performs Ο(`n`) evaluations of `range.empty` and `range.popFront()`, where `n` is the effective length of `range`.
The `upTo` parameter is useful to "cut the losses" in case the interest is in seeing whether the range has at least some number of elements. If the parameter `upTo` is specified, stops if `upTo` steps have been taken and returns `upTo`.
Infinite ranges are compatible, provided the parameter `upTo` is specified, in which case the implementation simply returns upTo.
Examples:
```
import std.range : iota;
writeln(10.iota.walkLength); // 10
// iota has a length function, so the range
// doesn't have to be walked and the upTo
// parameter is ignored
writeln(10.iota.walkLength(5)); // 10
```
size\_t **popFrontN**(Range)(ref Range r, size\_t n)
Constraints: if (isInputRange!Range);
size\_t **popBackN**(Range)(ref Range r, size\_t n)
Constraints: if (isBidirectionalRange!Range);
`popFrontN` eagerly advances `r` itself (not a copy) up to `n` times (by calling `r.popFront`). `popFrontN` takes `r` by `ref`, so it mutates the original range. Completes in Ο(`1`) steps for ranges that support slicing and have length. Completes in Ο(`n`) time for all other ranges.
`popBackN` behaves the same as `popFrontN` but instead removes elements from the back of the (bidirectional) range instead of the front.
Returns:
How much `r` was actually advanced, which may be less than `n` if `r` did not have at least `n` elements.
See Also:
[`std.range.drop`](std_range#drop), [`std.range.dropBack`](std_range#dropBack)
Examples:
```
int[] a = [ 1, 2, 3, 4, 5 ];
a.popFrontN(2);
writeln(a); // [3, 4, 5]
a.popFrontN(7);
writeln(a); // []
```
Examples:
```
import std.algorithm.comparison : equal;
import std.range : iota;
auto LL = iota(1L, 7L);
auto r = popFrontN(LL, 2);
assert(equal(LL, [3L, 4L, 5L, 6L]));
writeln(r); // 2
```
Examples:
```
int[] a = [ 1, 2, 3, 4, 5 ];
a.popBackN(2);
writeln(a); // [1, 2, 3]
a.popBackN(7);
writeln(a); // []
```
Examples:
```
import std.algorithm.comparison : equal;
import std.range : iota;
auto LL = iota(1L, 7L);
auto r = popBackN(LL, 2);
assert(equal(LL, [1L, 2L, 3L, 4L]));
writeln(r); // 2
```
void **popFrontExactly**(Range)(ref Range r, size\_t n)
Constraints: if (isInputRange!Range);
void **popBackExactly**(Range)(ref Range r, size\_t n)
Constraints: if (isBidirectionalRange!Range);
Eagerly advances `r` itself (not a copy) exactly `n` times (by calling `r.popFront`). `popFrontExactly` takes `r` by `ref`, so it mutates the original range. Completes in Ο(`1`) steps for ranges that support slicing, and have either length or are infinite. Completes in Ο(`n`) time for all other ranges.
Note
Unlike [`popFrontN`](#popFrontN), `popFrontExactly` will assume that the range holds at least `n` elements. This makes `popFrontExactly` faster than `popFrontN`, but it also means that if `range` does not contain at least `n` elements, it will attempt to call `popFront` on an empty range, which is undefined behavior. So, only use `popFrontExactly` when it is guaranteed that `range` holds at least `n` elements.
`popBackExactly` will behave the same but instead removes elements from the back of the (bidirectional) range instead of the front.
See Also:
[`std.range.dropExactly`](std_range#dropExactly), [`std.range.dropBackExactly`](std_range#dropBackExactly)
Examples:
```
import std.algorithm.comparison : equal;
import std.algorithm.iteration : filterBidirectional;
auto a = [1, 2, 3];
a.popFrontExactly(1);
writeln(a); // [2, 3]
a.popBackExactly(1);
writeln(a); // [2]
string s = "日本語";
s.popFrontExactly(1);
writeln(s); // "本語"
s.popBackExactly(1);
writeln(s); // "本"
auto bd = filterBidirectional!"true"([1, 2, 3]);
bd.popFrontExactly(1);
assert(bd.equal([2, 3]));
bd.popBackExactly(1);
assert(bd.equal([2]));
```
ElementType!R **moveFront**(R)(R r);
Moves the front of `r` out and returns it. Leaves `r.front` in a destroyable state that does not allocate any resources (usually equal to its `.init` value).
Examples:
```
auto a = [ 1, 2, 3 ];
writeln(moveFront(a)); // 1
writeln(a.length); // 3
// define a perfunctory input range
struct InputRange
{
enum bool empty = false;
enum int front = 7;
void popFront() {}
int moveFront() { return 43; }
}
InputRange r;
// calls r.moveFront
writeln(moveFront(r)); // 43
```
ElementType!R **moveBack**(R)(R r);
Moves the back of `r` out and returns it. Leaves `r.back` in a destroyable state that does not allocate any resources (usually equal to its `.init` value).
Examples:
```
struct TestRange
{
int payload = 5;
@property bool empty() { return false; }
@property TestRange save() { return this; }
@property ref int front() return { return payload; }
@property ref int back() return { return payload; }
void popFront() { }
void popBack() { }
}
static assert(isBidirectionalRange!TestRange);
TestRange r;
auto x = moveBack(r);
writeln(x); // 5
```
ElementType!R **moveAt**(R)(R r, size\_t i);
Moves element at index `i` of `r` out and returns it. Leaves `r[i]` in a destroyable state that does not allocate any resources (usually equal to its `.init` value).
Examples:
```
auto a = [1,2,3,4];
foreach (idx, it; a)
{
assert(moveAt(a, idx) == it);
}
```
@property bool **empty**(T)(auto ref scope T a)
Constraints: if (is(typeof(a.length) : size\_t));
Implements the range interface primitive `empty` for types that obey the [`hasLength`](#hasLength) property and for narrow strings. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `a.empty` is equivalent to `empty(a)`.
Examples:
```
auto a = [ 1, 2, 3 ];
assert(!a.empty);
assert(a[3 .. $].empty);
int[string] b;
assert(b.empty);
b["zero"] = 0;
assert(!b.empty);
```
pure nothrow @nogc @property @safe inout(T)[] **save**(T)(return scope inout(T)[] a);
Implements the range interface primitive `save` for built-in arrays. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `array.save` is equivalent to `save(array)`. The function does not duplicate the content of the array, it simply returns its argument.
Examples:
```
auto a = [ 1, 2, 3 ];
auto b = a.save;
assert(b is a);
```
pure nothrow @nogc @safe void **popFront**(T)(ref scope inout(T)[] a)
Constraints: if (!isAutodecodableString!(T[]) && !is(T[] == void[]));
pure nothrow @trusted void **popFront**(C)(ref scope inout(C)[] str)
Constraints: if (isAutodecodableString!(C[]));
Implements the range interface primitive `popFront` for built-in arrays. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `array.popFront` is equivalent to `popFront(array)`. For [narrow strings](http://dlang.org/glossary.html#narrow%20strings), `popFront` automatically advances to the next [code point](http://dlang.org/glossary.html#code%20point).
Examples:
```
auto a = [ 1, 2, 3 ];
a.popFront();
writeln(a); // [2, 3]
```
pure nothrow @nogc @safe void **popBack**(T)(ref scope inout(T)[] a)
Constraints: if (!isAutodecodableString!(T[]) && !is(T[] == void[]));
pure @safe void **popBack**(T)(ref scope inout(T)[] a)
Constraints: if (isAutodecodableString!(T[]));
Implements the range interface primitive `popBack` for built-in arrays. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `array.popBack` is equivalent to `popBack(array)`. For [narrow strings](http://dlang.org/glossary.html#narrow%20strings), `popBack` automatically eliminates the last [code point](http://dlang.org/glossary.html#code%20point).
Examples:
```
auto a = [ 1, 2, 3 ];
a.popBack();
writeln(a); // [1, 2]
```
enum bool **autodecodeStrings**;
EXPERIMENTAL
To try out removing autodecoding, set the version `NoAutodecodeStrings`. Most things are expected to fail with this version currently.
pure nothrow @nogc @property ref @safe inout(T) **front**(T)(return scope inout(T)[] a)
Constraints: if (!isAutodecodableString!(T[]) && !is(T[] == void[]));
pure @property @safe dchar **front**(T)(scope const(T)[] a)
Constraints: if (isAutodecodableString!(T[]));
Implements the range interface primitive `front` for built-in arrays. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `array.front` is equivalent to `front(array)`. For [narrow strings](http://dlang.org/glossary.html#narrow%20strings), `front` automatically returns the first [code point](http://dlang.org/glossary.html#code%20point) as a `dchar`.
Examples:
```
int[] a = [ 1, 2, 3 ];
writeln(a.front); // 1
```
pure nothrow @nogc @property ref @safe inout(T) **back**(T)(return scope inout(T)[] a)
Constraints: if (!isAutodecodableString!(T[]) && !is(T[] == void[]));
pure @property @safe dchar **back**(T)(scope const(T)[] a)
Constraints: if (isAutodecodableString!(T[]));
Implements the range interface primitive `back` for built-in arrays. Due to the fact that nonmember functions can be called with the first argument using the dot notation, `array.back` is equivalent to `back(array)`. For [narrow strings](http://dlang.org/glossary.html#narrow%20strings), `back` automatically returns the last [code point](http://dlang.org/glossary.html#code%20point) as a `dchar`.
Examples:
```
int[] a = [ 1, 2, 3 ];
writeln(a.back); // 3
a.back += 4;
writeln(a.back); // 7
```
d std.datetime.systime std.datetime.systime
====================
| Category | Functions |
| --- | --- |
| Types | [`Clock`](#Clock) [`SysTime`](#SysTime) [`DosFileTime`](#DosFileTime) |
| Conversion | [`parseRFC822DateTime`](#parseRFC822DateTime) [`DosFileTimeToSysTime`](#DosFileTimeToSysTime) [`FILETIMEToStdTime`](#FILETIMEToStdTime) [`FILETIMEToSysTime`](#FILETIMEToSysTime) [`stdTimeToFILETIME`](#stdTimeToFILETIME) [`stdTimeToUnixTime`](#stdTimeToUnixTime) [`SYSTEMTIMEToSysTime`](#SYSTEMTIMEToSysTime) [`SysTimeToDosFileTime`](#SysTimeToDosFileTime) [`SysTimeToFILETIME`](#SysTimeToFILETIME) [`SysTimeToSYSTEMTIME`](#SysTimeToSYSTEMTIME) [`unixTimeToStdTime`](#unixTimeToStdTime) |
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Jonathan M Davis](http://jmdavisprog.com)
Source
[std/datetime/systime.d](https://github.com/dlang/phobos/blob/master/std/datetime/systime.d)
class **Clock**;
Effectively a namespace to make it clear that the methods it contains are getting the time from the system clock. It cannot be instantiated.
Examples:
Get the current time as a [`SysTime`](#SysTime)
```
import std.datetime.timezone : LocalTime;
SysTime today = Clock.currTime();
assert(today.timezone is LocalTime());
```
@safe SysTime **currTime**(ClockType clockType = ClockType.normal)(immutable TimeZone tz = LocalTime());
Returns the current time in the given time zone.
Parameters:
| | |
| --- | --- |
| clockType | The [`core.time.ClockType`](core_time#ClockType) indicates which system clock to use to get the current time. Very few programs need to use anything other than the default. |
| TimeZone `tz` | The time zone for the SysTime that's returned. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if it fails to get the time.
@property @trusted long **currStdTime**(ClockType clockType = ClockType.normal)();
Returns the number of hnsecs since midnight, January 1st, 1 A.D. for the current time.
Parameters:
| | |
| --- | --- |
| clockType | The [`core.time.ClockType`](core_time#ClockType) indicates which system clock to use to get the current time. Very few programs need to use anything other than the default. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if it fails to get the time.
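A minimal sketch:
```
import std.datetime.systime : Clock;

// hnsecs (units of 100 ns) since midnight, January 1st, 1 A.D. UTC
long now = Clock.currStdTime;
assert(now > 0);
```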
struct **SysTime**;
`SysTime` is the type used to get the current time from the system or doing anything that involves time zones. Unlike [`std.datetime.date.DateTime`](std_datetime_date#DateTime), the time zone is an integral part of `SysTime` (though for local time applications, time zones can be ignored and it will work, since it defaults to using the local time zone). It holds its internal time in std time (hnsecs since midnight, January 1st, 1 A.D. UTC), so it interfaces well with the system time. However, that means that, unlike [`std.datetime.date.DateTime`](std_datetime_date#DateTime), it is not optimized for calendar-based operations, and getting individual units from it such as years or days is going to involve conversions and be less efficient.
For calendar-based operations that don't care about time zones, then [`std.datetime.date.DateTime`](std_datetime_date#DateTime) would be the type to use. For system time, use `SysTime`.
[`Clock.currTime`](#Clock.currTime) will return the current time as a `SysTime`. To convert a `SysTime` to a [`std.datetime.date.Date`](std_datetime_date#Date) or [`std.datetime.date.DateTime`](std_datetime_date#DateTime), simply cast it. To convert a [`std.datetime.date.Date`](std_datetime_date#Date) or [`std.datetime.date.DateTime`](std_datetime_date#DateTime) to a `SysTime`, use `SysTime`'s constructor, and pass in the intended time zone with it (or don't pass in a [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone), and the local time zone will be used). Be aware, however, that converting from a [`std.datetime.date.DateTime`](std_datetime_date#DateTime) to a `SysTime` will not necessarily be 100% accurate due to DST (one hour of the year doesn't exist and another occurs twice). To not risk any conversion errors, keep times as `SysTime`s. Aside from DST though, there shouldn't be any conversion problems.
For using time zones other than local time or UTC, use [`std.datetime.timezone.PosixTimeZone`](std_datetime_timezone#PosixTimeZone) on Posix systems (or on Windows, if providing the TZ Database files), and use [`std.datetime.timezone.WindowsTimeZone`](std_datetime_timezone#WindowsTimeZone) on Windows systems. The time in `SysTime` is kept internally in hnsecs from midnight, January 1st, 1 A.D. UTC. Conversion error cannot happen when changing the time zone of a `SysTime`. [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) is the [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) class which represents the local time, and `UTC` is the [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) class which represents UTC. `SysTime` uses [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) if no [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) is provided. For more details on time zones, see the documentation for [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone), [`std.datetime.timezone.PosixTimeZone`](std_datetime_timezone#PosixTimeZone), and [`std.datetime.timezone.WindowsTimeZone`](std_datetime_timezone#WindowsTimeZone).
`SysTime`'s range is from approximately 29,000 B.C. to approximately 29,000 A.D.
Examples:
```
import core.time : days, hours, seconds;
import std.datetime.date : DateTime;
import std.datetime.timezone : SimpleTimeZone, UTC;
// make a specific point in time in the UTC timezone
auto st = SysTime(DateTime(2018, 1, 1, 10, 30, 0), UTC());
// make a specific point in time in the New York timezone
auto ny = SysTime(
DateTime(2018, 1, 1, 10, 30, 0),
new immutable SimpleTimeZone(-5.hours, "America/New_York")
);
// ISO standard time strings
writeln(st.toISOString()); // "20180101T103000Z"
writeln(st.toISOExtString()); // "2018-01-01T10:30:00Z"
// add two days and 30 seconds
st += 2.days + 30.seconds;
writeln(st.toISOExtString()); // "2018-01-03T10:30:30Z"
```
nothrow @safe this(DateTime dateTime, immutable TimeZone tz = null);
Parameters:
| | |
| --- | --- |
| DateTime `dateTime` | The [`std.datetime.date.DateTime`](std_datetime_date#DateTime) to use to set this [`SysTime`](#SysTime)'s internal std time. As [`std.datetime.date.DateTime`](std_datetime_date#DateTime) has no concept of time zone, tz is used as its time zone. |
| TimeZone `tz` | The [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) to use for this [`SysTime`](#SysTime). If null, [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) will be used. The given [`std.datetime.date.DateTime`](std_datetime_date#DateTime) is assumed to be in the given time zone. |
@safe this(DateTime dateTime, Duration fracSecs, immutable TimeZone tz = null);
Parameters:
| | |
| --- | --- |
| DateTime `dateTime` | The [`std.datetime.date.DateTime`](std_datetime_date#DateTime) to use to set this [`SysTime`](#SysTime)'s internal std time. As [`std.datetime.date.DateTime`](std_datetime_date#DateTime) has no concept of time zone, tz is used as its time zone. |
| Duration `fracSecs` | The fractional seconds portion of the time. |
| TimeZone `tz` | The [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) to use for this [`SysTime`](#SysTime). If null, [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) will be used. The given [`std.datetime.date.DateTime`](std_datetime_date#DateTime) is assumed to be in the given time zone. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if `fracSecs` is negative or if it's greater than or equal to one second.
nothrow @safe this(Date date, immutable TimeZone tz = null);
Parameters:
| | |
| --- | --- |
| Date `date` | The [`std.datetime.date.Date`](std_datetime_date#Date) to use to set this [`SysTime`](#SysTime)'s internal std time. As [`std.datetime.date.Date`](std_datetime_date#Date) has no concept of time zone, tz is used as its time zone. |
| TimeZone `tz` | The [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) to use for this [`SysTime`](#SysTime). If null, [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) will be used. The given [`std.datetime.date.Date`](std_datetime_date#Date) is assumed to be in the given time zone. |
pure nothrow @safe this(long stdTime, immutable TimeZone tz = null);
Note
Whereas the other constructors take in the given date/time, assume that it's in the given time zone, and convert it to hnsecs in UTC since midnight, January 1st, 1 A.D. UTC - i.e. std time - this constructor takes a std time, which is specifically already in UTC, so no conversion takes place. Of course, the various getter properties and functions will use the given time zone's conversion function to convert the results to that time zone, but no conversion of the arguments to this constructor takes place.
Parameters:
| | |
| --- | --- |
| long `stdTime` | The number of hnsecs since midnight, January 1st, 1 A.D. UTC. |
| TimeZone `tz` | The [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) to use for this [`SysTime`](#SysTime). If null, [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) will be used. |
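A short sketch; note that the argument is interpreted as already being in UTC:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
// 0 hnsecs is midnight, January 1st, 1 A.D. UTC - the epoch of std time.
assert(SysTime(0L, UTC()) == SysTime(DateTime(1, 1, 1, 0, 0, 0), UTC()));
// unixTimeToStdTime (documented below) gives the std time for a unix time.
assert(SysTime(unixTimeToStdTime(0)) == SysTime(DateTime(1970, 1, 1), UTC()));
```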
pure nothrow ref scope @safe SysTime **opAssign**()(auto ref const(SysTime) rhs) return;
Parameters:
| | |
| --- | --- |
| const(SysTime) `rhs` | The [`SysTime`](#SysTime) to assign to this one. |
Returns:
A reference to this `SysTime`.
const pure nothrow scope @safe bool **opEquals**()(auto ref const(SysTime) rhs);
Checks for equality between this [`SysTime`](#SysTime) and the given [`SysTime`](#SysTime).
Note that the time zone is ignored. Only the internal std times (which are in UTC) are compared.
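For instance, two `SysTime`s that represent the same instant in different time zones compare equal (a short sketch):
```
import core.time : hours;
import std.datetime.date : DateTime;
import std.datetime.timezone : SimpleTimeZone, UTC;
auto utc = SysTime(DateTime(2020, 1, 1, 12, 0, 0), UTC());
// The same instant, expressed in a zone 5 hours behind UTC.
auto offset = SysTime(DateTime(2020, 1, 1, 7, 0, 0),
                      new immutable SimpleTimeZone(-5.hours));
assert(utc == offset);
```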
const pure nothrow scope @safe int **opCmp**()(auto ref const(SysTime) rhs);
Compares this [`SysTime`](#SysTime) with the given [`SysTime`](#SysTime).
Time zone is irrelevant when comparing [`SysTime`](#SysTime)s.
Returns:
| | |
| --- | --- |
| this < rhs | < 0 |
| this == rhs | 0 |
| this > rhs | > 0 |
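A short sketch:
```
import core.time : hours;
import std.datetime.date : DateTime;
auto a = SysTime(DateTime(2020, 1, 1, 0, 0, 0));
auto b = a + 1.hours;
assert(a < b);
assert(b > a);
assert(a.opCmp(a) == 0);
```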
const pure nothrow @nogc scope @safe size\_t **toHash**();
Returns:
A hash of the [`SysTime`](#SysTime).
const nothrow @property scope @safe short **year**();
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
@property scope @safe void **year**(int **year**);
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year to set this [`SysTime`](#SysTime)'s year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the new year is not a leap year and the resulting date would be on February 29th.
Examples:
```
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1999, 7, 6, 9, 7, 5)).year); // 1999
writeln(SysTime(DateTime(2010, 10, 4, 0, 0, 30)).year); // 2010
writeln(SysTime(DateTime(-7, 4, 5, 7, 45, 2)).year); // -7
```
const @property scope @safe ushort **yearBC**();
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if `isAD` is true.
Examples:
```
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(0, 1, 1, 12, 30, 33)).yearBC); // 1
writeln(SysTime(DateTime(-1, 1, 1, 10, 7, 2)).yearBC); // 2
writeln(SysTime(DateTime(-100, 1, 1, 4, 59, 0)).yearBC); // 101
```
@property scope @safe void **yearBC**(int year);
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year B.C. to set this [`SysTime`](#SysTime)'s year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if a non-positive value is given.
const nothrow @property scope @safe Month **month**();
Month of a Gregorian Year.
Examples:
```
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1999, 7, 6, 9, 7, 5)).month); // 7
writeln(SysTime(DateTime(2010, 10, 4, 0, 0, 30)).month); // 10
writeln(SysTime(DateTime(-7, 4, 5, 7, 45, 2)).month); // 4
```
@property scope @safe void **month**(Month **month**);
Month of a Gregorian Year.
Parameters:
| | |
| --- | --- |
| Month `month` | The month to set this [`SysTime`](#SysTime)'s month to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given month is not a valid month.
const nothrow @property scope @safe ubyte **day**();
Day of a Gregorian Month.
Examples:
```
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1999, 7, 6, 9, 7, 5)).day); // 6
writeln(SysTime(DateTime(2010, 10, 4, 0, 0, 30)).day); // 4
writeln(SysTime(DateTime(-7, 4, 5, 7, 45, 2)).day); // 5
```
@property scope @safe void **day**(int **day**);
Day of a Gregorian Month.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the month to set this [`SysTime`](#SysTime)'s day to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given day is not a valid day of the current month.
const nothrow @property scope @safe ubyte **hour**();
Hours past midnight.
@property scope @safe void **hour**(int **hour**);
Hours past midnight.
Parameters:
| | |
| --- | --- |
| int `hour` | The hours to set this [`SysTime`](#SysTime)'s hour to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given hour is not a valid hour of the day.
const nothrow @property scope @safe ubyte **minute**();
Minutes past the current hour.
@property scope @safe void **minute**(int **minute**);
Minutes past the current hour.
Parameters:
| | |
| --- | --- |
| int `minute` | The minute to set this [`SysTime`](#SysTime)'s minute to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given minute is not a valid minute of an hour.
const nothrow @property scope @safe ubyte **second**();
Seconds past the current minute.
@property scope @safe void **second**(int **second**);
Seconds past the current minute.
Parameters:
| | |
| --- | --- |
| int `second` | The second to set this [`SysTime`](#SysTime)'s second to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given second is not a valid second of a minute.
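A short sketch of these time-of-day getters and setters (illustrative values):
```
import std.datetime.date : DateTime;
auto st = SysTime(DateTime(2020, 5, 1, 9, 30, 0));
st.hour = 18;
st.minute = 45;
st.second = 59;
writeln(st.hour);   // 18
writeln(st.minute); // 45
writeln(st.second); // 59
```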
const nothrow @property scope @safe Duration **fracSecs**();
Fractional seconds past the second (i.e. the portion of a [`SysTime`](#SysTime) which is less than a second).
Examples:
```
import core.time : msecs, usecs, hnsecs, nsecs;
import std.datetime.date : DateTime;
auto dt = DateTime(1982, 4, 1, 20, 59, 22);
writeln(SysTime(dt, msecs(213)).fracSecs); // msecs(213)
writeln(SysTime(dt, usecs(5202)).fracSecs); // usecs(5202)
writeln(SysTime(dt, hnsecs(1234567)).fracSecs); // hnsecs(1234567)
// SysTime and Duration both have a precision of hnsecs (100 ns),
// so nsecs are going to be truncated.
writeln(SysTime(dt, nsecs(123456789)).fracSecs); // nsecs(123456700)
```
@property scope @safe void **fracSecs**(Duration **fracSecs**);
Fractional seconds past the second (i.e. the portion of a [`SysTime`](#SysTime) which is less than a second).
Parameters:
| | |
| --- | --- |
| Duration `fracSecs` | The duration to set this [`SysTime`](#SysTime)'s fractional seconds to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given duration is negative or if it's greater than or equal to one second.
Examples:
```
import core.time : Duration, msecs, hnsecs, nsecs;
import std.datetime.date : DateTime;
auto st = SysTime(DateTime(1982, 4, 1, 20, 59, 22));
writeln(st.fracSecs); // Duration.zero
st.fracSecs = msecs(213);
writeln(st.fracSecs); // msecs(213)
st.fracSecs = hnsecs(1234567);
writeln(st.fracSecs); // hnsecs(1234567)
// SysTime has a precision of hnsecs (100 ns), so nsecs are
// going to be truncated.
st.fracSecs = nsecs(123456789);
writeln(st.fracSecs); // hnsecs(1234567)
```
const pure nothrow @nogc @property scope @safe long **stdTime**();
The total hnsecs from midnight, January 1st, 1 A.D. UTC. This is the internal representation of [`SysTime`](#SysTime).
pure nothrow @property scope @safe void **stdTime**(long **stdTime**);
The total hnsecs from midnight, January 1st, 1 A.D. UTC. This is the internal representation of [`SysTime`](#SysTime).
Parameters:
| | |
| --- | --- |
| long `stdTime` | The number of hnsecs since January 1st, 1 A.D. UTC. |
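A short sketch of the `stdTime` getter and setter:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
auto st = SysTime(DateTime(1, 1, 1, 0, 0, 0), UTC());
writeln(st.stdTime); // 0
st.stdTime = 621_355_968_000_000_000L; // midnight, January 1st, 1970 UTC
assert(st == SysTime(DateTime(1970, 1, 1), UTC()));
```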
const pure nothrow @property scope @safe immutable(TimeZone) **timezone**();
The current time zone of this [`SysTime`](#SysTime). Its internal time is always kept in UTC, so there are no conversion issues between time zones due to DST. Functions which return all or part of the time - such as hours - adjust the time to this [`SysTime`](#SysTime)'s time zone before returning.
pure nothrow @property scope @safe void **timezone**(immutable TimeZone **timezone**);
The current time zone of this [`SysTime`](#SysTime). Its internal time is always kept in UTC, so there are no conversion issues between time zones due to DST. Functions which return all or part of the time - such as hours - adjust the time to this [`SysTime`](#SysTime)'s time zone before returning.
Parameters:
| | |
| --- | --- |
| TimeZone `timezone` | The [`std.datetime.timezone.TimeZone`](std_datetime_timezone#TimeZone) to set this [`SysTime`](#SysTime)'s time zone to. |
const nothrow @property scope @safe bool **dstInEffect**();
Returns whether DST is in effect for this [`SysTime`](#SysTime).
const nothrow @property scope @safe Duration **utcOffset**();
Returns what the offset from UTC is for this [`SysTime`](#SysTime). It includes the DST offset in effect at that time (if any).
const pure nothrow scope @safe SysTime **toLocalTime**();
Returns a [`SysTime`](#SysTime) with the same std time as this one, but with [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) as its time zone.
const pure nothrow scope @safe SysTime **toUTC**();
Returns a [`SysTime`](#SysTime) with the same std time as this one, but with `UTC` as its time zone.
const pure nothrow scope @safe SysTime **toOtherTZ**(immutable TimeZone tz);
Returns a [`SysTime`](#SysTime) with the same std time as this one, but with the given time zone as its time zone.
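A short sketch of converting between time zones:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : LocalTime, UTC;
auto utc = SysTime(DateTime(2020, 1, 1, 12, 0, 0), UTC());
auto local = utc.toLocalTime();
// The same instant; only the time zone used for display differs.
assert(local == utc);
assert(local.timezone is LocalTime());
```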
const pure nothrow scope @safe T **toUnixTime**(T = time\_t)()
Constraints: if (is(T == int) || is(T == long));
Converts this [`SysTime`](#SysTime) to unix time (i.e. seconds from midnight, January 1st, 1970 in UTC).
The C standard does not specify the representation of time\_t, so it is implementation defined. On POSIX systems, unix time is equivalent to time\_t, but that's not necessarily true on other systems (e.g. it is not true for the Digital Mars C runtime). So, be careful when using unix time with C functions on non-POSIX systems.
By default, the return type is time\_t (which is normally an alias for int on 32-bit systems and long on 64-bit systems), but if a different size is required, then either int or long can be passed as a template argument to get the desired size.
If the return type is int, and the result can't fit in an int, then the closest value that can be held in 32 bits will be used (so `int.max` if it goes over and `int.min` if it goes under). However, no attempt is made to deal with integer overflow if the return type is long.
Parameters:
| | |
| --- | --- |
| T | The return type (int or long). It defaults to time\_t, which is normally 32 bits on a 32-bit system and 64 bits on a 64-bit system. |
Returns:
A signed integer representing the unix time which is equivalent to this SysTime.
Examples:
```
import core.time : hours;
import std.datetime.date : DateTime;
import std.datetime.timezone : SimpleTimeZone, UTC;
writeln(SysTime(DateTime(1970, 1, 1), UTC()).toUnixTime()); // 0
auto pst = new immutable SimpleTimeZone(hours(-8));
writeln(SysTime(DateTime(1970, 1, 1), pst).toUnixTime()); // 28800
auto utc = SysTime(DateTime(2007, 12, 22, 8, 14, 45), UTC());
writeln(utc.toUnixTime()); // 1_198_311_285
auto ca = SysTime(DateTime(2007, 12, 22, 8, 14, 45), pst);
writeln(ca.toUnixTime()); // 1_198_340_085
static void testScope(scope ref SysTime st) @safe
{
auto result = st.toUnixTime();
}
```
static pure nothrow @safe SysTime **fromUnixTime**(long unixTime, immutable TimeZone tz = LocalTime());
Converts from unix time (i.e. seconds from midnight, January 1st, 1970 in UTC) to a [`SysTime`](#SysTime).
The C standard does not specify the representation of time\_t, so it is implementation defined. On POSIX systems, unix time is equivalent to time\_t, but that's not necessarily true on other systems (e.g. it is not true for the Digital Mars C runtime). So, be careful when using unix time with C functions on non-POSIX systems.
Parameters:
| | |
| --- | --- |
| long `unixTime` | Seconds from midnight, January 1st, 1970 in UTC. |
| TimeZone `tz` | The time zone for the SysTime that's returned. |
Examples:
```
import core.time : hours;
import std.datetime.date : DateTime;
import std.datetime.timezone : SimpleTimeZone, UTC;
assert(SysTime.fromUnixTime(0) ==
SysTime(DateTime(1970, 1, 1), UTC()));
auto pst = new immutable SimpleTimeZone(hours(-8));
assert(SysTime.fromUnixTime(28800) ==
SysTime(DateTime(1970, 1, 1), pst));
auto st1 = SysTime.fromUnixTime(1_198_311_285, UTC());
assert(st1 == SysTime(DateTime(2007, 12, 22, 8, 14, 45), UTC()));
assert(st1.timezone is UTC());
assert(st1 == SysTime(DateTime(2007, 12, 22, 0, 14, 45), pst));
auto st2 = SysTime.fromUnixTime(1_198_311_285, pst);
assert(st2 == SysTime(DateTime(2007, 12, 22, 8, 14, 45), UTC()));
assert(st2.timezone is pst);
assert(st2 == SysTime(DateTime(2007, 12, 22, 0, 14, 45), pst));
```
const pure nothrow scope @safe timeval **toTimeVal**();
Returns a `timeval` which represents this [`SysTime`](#SysTime).
Note that like all conversions in std.datetime, this is a truncating conversion.
If `timeval.tv_sec` is int, and the result can't fit in an int, then the closest value that can be held in 32 bits will be used for `tv_sec` (so `int.max` if it goes over and `int.min` if it goes under).
const pure nothrow scope @safe timespec **toTimeSpec**();
Returns a `timespec` which represents this [`SysTime`](#SysTime).
This function is Posix-Only.
const nothrow scope @safe tm **toTM**();
Returns a `tm` which represents this [`SysTime`](#SysTime).
nothrow ref scope @safe SysTime **add**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years" || units == "months");
Adds the given number of years or months to this [`SysTime`](#SysTime). A negative number will subtract.
Note that if day overflow is allowed, and the date with the adjusted year/month overflows the number of days in the new month, then the month will be incremented by one, and the day set to the number of days overflowed. (e.g. if the day were 31 and the new month were June, then the month would be incremented to July, and the new day would be 1). If day overflow is not allowed, then the day will be set to the last valid day in the month (e.g. June 31st would become June 30th).
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`SysTime`](#SysTime). |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow, causing the month to increment. |
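A short sketch (illustrative values):
```
import std.datetime.date : DateTime;
auto st = SysTime(DateTime(2010, 1, 1, 12, 30, 33));
st.add!"months"(11);
writeln(st); // SysTime(DateTime(2010, 12, 1, 12, 30, 33))
st.add!"months"(-11);
writeln(st); // SysTime(DateTime(2010, 1, 1, 12, 30, 33))
st.add!"years"(1);
writeln(st); // SysTime(DateTime(2011, 1, 1, 12, 30, 33))
```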
nothrow ref scope @safe SysTime **roll**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years");
Adds the given number of years or months to this [`SysTime`](#SysTime). A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. Rolling a [`SysTime`](#SysTime) 12 months gets the exact same [`SysTime`](#SysTime). However, the days can still be affected due to the differing number of days in each month.
Because there are no units larger than years, there is no difference between adding and rolling years.
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`SysTime`](#SysTime). |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow, causing the month to increment. |
Examples:
```
import std.datetime.date : AllowDayOverflow, DateTime;
auto st1 = SysTime(DateTime(2010, 1, 1, 12, 33, 33));
st1.roll!"months"(1);
writeln(st1); // SysTime(DateTime(2010, 2, 1, 12, 33, 33))
auto st2 = SysTime(DateTime(2010, 1, 1, 12, 33, 33));
st2.roll!"months"(-1);
writeln(st2); // SysTime(DateTime(2010, 12, 1, 12, 33, 33))
auto st3 = SysTime(DateTime(1999, 1, 29, 12, 33, 33));
st3.roll!"months"(1);
writeln(st3); // SysTime(DateTime(1999, 3, 1, 12, 33, 33))
auto st4 = SysTime(DateTime(1999, 1, 29, 12, 33, 33));
st4.roll!"months"(1, AllowDayOverflow.no);
writeln(st4); // SysTime(DateTime(1999, 2, 28, 12, 33, 33))
auto st5 = SysTime(DateTime(2000, 2, 29, 12, 30, 33));
st5.roll!"years"(1);
writeln(st5); // SysTime(DateTime(2001, 3, 1, 12, 30, 33))
auto st6 = SysTime(DateTime(2000, 2, 29, 12, 30, 33));
st6.roll!"years"(1, AllowDayOverflow.no);
writeln(st6); // SysTime(DateTime(2001, 2, 28, 12, 30, 33))
```
nothrow ref scope @safe SysTime **roll**(string units)(long value)
Constraints: if (units == "days");
Adds the given number of units to this [`SysTime`](#SysTime). A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. For instance, rolling a [`SysTime`](#SysTime) one year's worth of days gets the exact same [`SysTime`](#SysTime).
Accepted units are `"days"`, `"hours"`, `"minutes"`, `"seconds"`, `"msecs"`, `"usecs"`, and `"hnsecs"`.
Note that when rolling msecs, usecs or hnsecs, they all add up to a second. So, for example, rolling 1000 msecs is exactly the same as rolling 100,000 usecs.
Parameters:
| | |
| --- | --- |
| units | The units to add. |
| long `value` | The number of units to add to this [`SysTime`](#SysTime). |
Examples:
```
import core.time : msecs, hnsecs;
import std.datetime.date : DateTime;
auto st1 = SysTime(DateTime(2010, 1, 1, 11, 23, 12));
st1.roll!"days"(1);
writeln(st1); // SysTime(DateTime(2010, 1, 2, 11, 23, 12))
st1.roll!"days"(365);
writeln(st1); // SysTime(DateTime(2010, 1, 26, 11, 23, 12))
st1.roll!"days"(-32);
writeln(st1); // SysTime(DateTime(2010, 1, 25, 11, 23, 12))
auto st2 = SysTime(DateTime(2010, 7, 4, 12, 0, 0));
st2.roll!"hours"(1);
writeln(st2); // SysTime(DateTime(2010, 7, 4, 13, 0, 0))
auto st3 = SysTime(DateTime(2010, 2, 12, 12, 0, 0));
st3.roll!"hours"(-1);
writeln(st3); // SysTime(DateTime(2010, 2, 12, 11, 0, 0))
auto st4 = SysTime(DateTime(2009, 12, 31, 0, 0, 0));
st4.roll!"minutes"(1);
writeln(st4); // SysTime(DateTime(2009, 12, 31, 0, 1, 0))
auto st5 = SysTime(DateTime(2010, 1, 1, 0, 0, 0));
st5.roll!"minutes"(-1);
writeln(st5); // SysTime(DateTime(2010, 1, 1, 0, 59, 0))
auto st6 = SysTime(DateTime(2009, 12, 31, 0, 0, 0));
st6.roll!"seconds"(1);
writeln(st6); // SysTime(DateTime(2009, 12, 31, 0, 0, 1))
auto st7 = SysTime(DateTime(2010, 1, 1, 0, 0, 0));
st7.roll!"seconds"(-1);
writeln(st7); // SysTime(DateTime(2010, 1, 1, 0, 0, 59))
auto dt = DateTime(2010, 1, 1, 0, 0, 0);
auto st8 = SysTime(dt);
st8.roll!"msecs"(1);
writeln(st8); // SysTime(dt, msecs(1))
auto st9 = SysTime(dt);
st9.roll!"msecs"(-1);
writeln(st9); // SysTime(dt, msecs(999))
auto st10 = SysTime(dt);
st10.roll!"hnsecs"(1);
writeln(st10); // SysTime(dt, hnsecs(1))
auto st11 = SysTime(dt);
st11.roll!"hnsecs"(-1);
writeln(st11); // SysTime(dt, hnsecs(9_999_999))
```
const pure nothrow scope @safe SysTime **opBinary**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`SysTime`](#SysTime).
The legal types of arithmetic for [`SysTime`](#SysTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| SysTime | + | Duration | --> | SysTime |
| SysTime | - | Duration | --> | SysTime |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`SysTime`](#SysTime). |
Examples:
```
import core.time : hours, seconds;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(2015, 12, 31, 23, 59, 59)) + seconds(1) ==
SysTime(DateTime(2016, 1, 1, 0, 0, 0)));
assert(SysTime(DateTime(2015, 12, 31, 23, 59, 59)) + hours(1) ==
SysTime(DateTime(2016, 1, 1, 0, 59, 59)));
assert(SysTime(DateTime(2016, 1, 1, 0, 0, 0)) - seconds(1) ==
SysTime(DateTime(2015, 12, 31, 23, 59, 59)));
assert(SysTime(DateTime(2016, 1, 1, 0, 59, 59)) - hours(1) ==
SysTime(DateTime(2015, 12, 31, 23, 59, 59)));
```
pure nothrow ref scope @safe SysTime **opOpAssign**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`SysTime`](#SysTime), as well as assigning the result to this [`SysTime`](#SysTime).
The legal types of arithmetic for [`SysTime`](#SysTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| SysTime | + | Duration | --> | SysTime |
| SysTime | - | Duration | --> | SysTime |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`SysTime`](#SysTime). |
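A short sketch:
```
import core.time : hours, seconds;
import std.datetime.date : DateTime;
auto st = SysTime(DateTime(2015, 12, 31, 23, 59, 59));
st += seconds(1);
writeln(st); // SysTime(DateTime(2016, 1, 1, 0, 0, 0))
st -= hours(1);
writeln(st); // SysTime(DateTime(2015, 12, 31, 23, 0, 0))
```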
const pure nothrow scope @safe Duration **opBinary**(string op)(SysTime rhs)
Constraints: if (op == "-");
Gives the difference between two [`SysTime`](#SysTime)s.
The legal types of arithmetic for [`SysTime`](#SysTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| SysTime | - | SysTime | --> | Duration |
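A short sketch:
```
import core.time : days;
import std.datetime.date : DateTime;
auto a = SysTime(DateTime(2020, 1, 2, 12, 0, 0));
auto b = SysTime(DateTime(2020, 1, 1, 12, 0, 0));
assert(a - b == days(1));
assert(b - a == -days(1));
```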
const nothrow scope @safe int **diffMonths**(scope SysTime rhs);
Returns the difference between the two [`SysTime`](#SysTime)s in months.
To get the difference in years, subtract the year property of two [`SysTime`](#SysTime)s. To get the difference in days or weeks, subtract the [`SysTime`](#SysTime)s themselves and use the [`core.time.Duration`](core_time#Duration) that results. Because converting between months and smaller units requires a specific date (which [`core.time.Duration`](core_time#Duration)s don't have), getting the difference in months requires some math using both the year and month properties, so this is a convenience function for getting the difference in months.
Note that the number of days in the months and how far into the month either date is are irrelevant. It is the difference in the month property combined with the difference in years \* 12. So, for instance, December 31st and January 1st are one month apart just as December 1st and January 31st are one month apart.
Parameters:
| | |
| --- | --- |
| SysTime `rhs` | The [`SysTime`](#SysTime) to subtract from this one. |
Examples:
```
import core.time;
import std.datetime.date : Date;
assert(SysTime(Date(1999, 2, 1)).diffMonths(
SysTime(Date(1999, 1, 31))) == 1);
assert(SysTime(Date(1999, 1, 31)).diffMonths(
SysTime(Date(1999, 2, 1))) == -1);
assert(SysTime(Date(1999, 3, 1)).diffMonths(
SysTime(Date(1999, 1, 1))) == 2);
assert(SysTime(Date(1999, 1, 1)).diffMonths(
SysTime(Date(1999, 3, 31))) == -2);
```
const nothrow @property scope @safe bool **isLeapYear**();
Whether this [`SysTime`](#SysTime) is in a leap year.
const nothrow @property scope @safe DayOfWeek **dayOfWeek**();
Day of the week this [`SysTime`](#SysTime) is on.
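A short sketch of `isLeapYear` and `dayOfWeek` (illustrative dates):
```
import std.datetime.date : DateTime, DayOfWeek;
assert(SysTime(DateTime(2020, 2, 29, 12, 0, 0)).isLeapYear);
assert(!SysTime(DateTime(2019, 3, 1, 12, 0, 0)).isLeapYear);
writeln(SysTime(DateTime(2021, 4, 5, 12, 0, 0)).dayOfWeek); // DayOfWeek.mon
```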
const nothrow @property scope @safe ushort **dayOfYear**();
Day of the year this [`SysTime`](#SysTime) is on.
Examples:
```
import core.time;
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1999, 1, 1, 12, 22, 7)).dayOfYear); // 1
writeln(SysTime(DateTime(1999, 12, 31, 7, 2, 59)).dayOfYear); // 365
writeln(SysTime(DateTime(2000, 12, 31, 21, 20, 0)).dayOfYear); // 366
```
@property scope @safe void **dayOfYear**(int day);
Day of the year.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the year to set which day of the year this [`SysTime`](#SysTime) is on. |
const nothrow @property scope @safe int **dayOfGregorianCal**();
The Xth day of the Gregorian Calendar that this [`SysTime`](#SysTime) is on.
Examples:
```
import core.time;
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1, 1, 1, 0, 0, 0)).dayOfGregorianCal); // 1
writeln(SysTime(DateTime(1, 12, 31, 23, 59, 59)).dayOfGregorianCal); // 365
writeln(SysTime(DateTime(2, 1, 1, 2, 2, 2)).dayOfGregorianCal); // 366
writeln(SysTime(DateTime(0, 12, 31, 7, 7, 7)).dayOfGregorianCal); // 0
writeln(SysTime(DateTime(0, 1, 1, 19, 30, 0)).dayOfGregorianCal); // -365
writeln(SysTime(DateTime(-1, 12, 31, 4, 7, 0)).dayOfGregorianCal); // -366
writeln(SysTime(DateTime(2000, 1, 1, 9, 30, 20)).dayOfGregorianCal); // 730_120
writeln(SysTime(DateTime(2010, 12, 31, 15, 45, 50)).dayOfGregorianCal); // 734_137
```
nothrow @property scope @safe void **dayOfGregorianCal**(int days);
The Xth day of the Gregorian Calendar that this [`SysTime`](#SysTime) is on. Setting this property does not affect the time portion of [`SysTime`](#SysTime).
Parameters:
| | |
| --- | --- |
| int `days` | The day of the Gregorian Calendar to set this [`SysTime`](#SysTime) to. |
Examples:
```
import core.time;
import std.datetime.date : DateTime;
auto st = SysTime(DateTime(0, 1, 1, 12, 0, 0));
st.dayOfGregorianCal = 1;
writeln(st); // SysTime(DateTime(1, 1, 1, 12, 0, 0))
st.dayOfGregorianCal = 365;
writeln(st); // SysTime(DateTime(1, 12, 31, 12, 0, 0))
st.dayOfGregorianCal = 366;
writeln(st); // SysTime(DateTime(2, 1, 1, 12, 0, 0))
st.dayOfGregorianCal = 0;
writeln(st); // SysTime(DateTime(0, 12, 31, 12, 0, 0))
st.dayOfGregorianCal = -365;
writeln(st); // SysTime(DateTime(-0, 1, 1, 12, 0, 0))
st.dayOfGregorianCal = -366;
writeln(st); // SysTime(DateTime(-1, 12, 31, 12, 0, 0))
st.dayOfGregorianCal = 730_120;
writeln(st); // SysTime(DateTime(2000, 1, 1, 12, 0, 0))
st.dayOfGregorianCal = 734_137;
writeln(st); // SysTime(DateTime(2010, 12, 31, 12, 0, 0))
```
const nothrow @property scope @safe ubyte **isoWeek**();
The ISO 8601 week of the year that this [`SysTime`](#SysTime) is in.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date).
Examples:
```
import core.time;
import std.datetime.date : Date;
auto st = SysTime(Date(1999, 7, 6));
const cst = SysTime(Date(2010, 5, 1));
immutable ist = SysTime(Date(2015, 10, 10));
writeln(st.isoWeek); // 27
writeln(cst.isoWeek); // 17
writeln(ist.isoWeek); // 41
```
const nothrow @property scope @safe SysTime **endOfMonth**();
[`SysTime`](#SysTime) for the last day in the month that this [`SysTime`](#SysTime) is in. The time portion of endOfMonth is always 23:59:59.9999999.
Examples:
```
import core.time : msecs, usecs, hnsecs;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(1999, 1, 6, 0, 0, 0)).endOfMonth ==
SysTime(DateTime(1999, 1, 31, 23, 59, 59), hnsecs(9_999_999)));
assert(SysTime(DateTime(1999, 2, 7, 19, 30, 0), msecs(24)).endOfMonth ==
SysTime(DateTime(1999, 2, 28, 23, 59, 59), hnsecs(9_999_999)));
assert(SysTime(DateTime(2000, 2, 7, 5, 12, 27), usecs(5203)).endOfMonth ==
SysTime(DateTime(2000, 2, 29, 23, 59, 59), hnsecs(9_999_999)));
assert(SysTime(DateTime(2000, 6, 4, 12, 22, 9), hnsecs(12345)).endOfMonth ==
SysTime(DateTime(2000, 6, 30, 23, 59, 59), hnsecs(9_999_999)));
```
const nothrow @property scope @safe ubyte **daysInMonth**();
The last day in the month that this [`SysTime`](#SysTime) is in.
Examples:
```
import core.time;
import std.datetime.date : DateTime;
writeln(SysTime(DateTime(1999, 1, 6, 0, 0, 0)).daysInMonth); // 31
writeln(SysTime(DateTime(1999, 2, 7, 19, 30, 0)).daysInMonth); // 28
writeln(SysTime(DateTime(2000, 2, 7, 5, 12, 27)).daysInMonth); // 29
writeln(SysTime(DateTime(2000, 6, 4, 12, 22, 9)).daysInMonth); // 30
```
const nothrow @property scope @safe bool **isAD**();
Whether the current year is a date in A.D.
Examples:
```
import core.time;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(1, 1, 1, 12, 7, 0)).isAD);
assert(SysTime(DateTime(2010, 12, 31, 0, 0, 0)).isAD);
assert(!SysTime(DateTime(0, 12, 31, 23, 59, 59)).isAD);
assert(!SysTime(DateTime(-2010, 1, 1, 2, 2, 2)).isAD);
```
const nothrow @property scope @safe long **julianDay**();
The [Julian day](http://en.wikipedia.org/wiki/Julian_day) for this [`SysTime`](#SysTime) at the given time. For example, prior to noon, 1996-03-31 would be the Julian day number 2\_450\_173, so this function returns 2\_450\_173, while from noon onward, the Julian day number would be 2\_450\_174, so this function returns 2\_450\_174.
const nothrow @property scope @safe long **modJulianDay**();
The modified [Julian day](http://en.wikipedia.org/wiki/Julian_day) for any time on this date (since the modified Julian day changes at midnight).
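A short sketch using the example date above:
```
import std.datetime.date : DateTime;
// The Julian day number increments at noon.
writeln(SysTime(DateTime(1996, 3, 31, 11, 59, 59)).julianDay); // 2_450_173
writeln(SysTime(DateTime(1996, 3, 31, 12, 0, 0)).julianDay); // 2_450_174
// The modified Julian day increments at midnight; its day 0 began 1858-11-17.
writeln(SysTime(DateTime(1858, 11, 17, 0, 0, 0)).modJulianDay); // 0
```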
const nothrow scope @safe Date **opCast**(T)()
Constraints: if (is(immutable(T) == immutable(Date)));
Returns a [`std.datetime.date.Date`](std_datetime_date#Date) equivalent to this [`SysTime`](#SysTime).
const nothrow scope @safe DateTime **opCast**(T)()
Constraints: if (is(immutable(T) == immutable(DateTime)));
Returns a [`std.datetime.date.DateTime`](std_datetime_date#DateTime) equivalent to this [`SysTime`](#SysTime).
const nothrow scope @safe TimeOfDay **opCast**(T)()
Constraints: if (is(immutable(T) == immutable(TimeOfDay)));
Returns a [`std.datetime.date.TimeOfDay`](std_datetime_date#TimeOfDay) equivalent to this [`SysTime`](#SysTime).
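A short sketch of the three casts:
```
import std.datetime.date : Date, DateTime, TimeOfDay;
auto st = SysTime(DateTime(1999, 7, 6, 12, 30, 33));
writeln(cast(Date) st); // Date(1999, 7, 6)
writeln(cast(DateTime) st); // DateTime(1999, 7, 6, 12, 30, 33)
writeln(cast(TimeOfDay) st); // TimeOfDay(12, 30, 33)
```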
const nothrow scope @safe string **toISOString**();
const scope void **toISOString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`SysTime`](#SysTime) to a string with the format YYYYMMDDTHHMMSS.FFFFFFFTZ (where F is fractional seconds and TZ is time zone).
Note that the number of digits in the fractional seconds varies with the number of fractional seconds. It's a maximum of 7 (which would be hnsecs), but only has as many as are necessary to hold the correct value (so no trailing zeroes), and if there are no fractional seconds, then there is no decimal point.
If this [`SysTime`](#SysTime)'s time zone is [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime), then TZ is empty. If its time zone is `UTC`, then it is "Z". Otherwise, it is the offset from UTC (e.g. +0100 or -0700). Note that the offset from UTC is *not* enough to uniquely identify the time zone.
Time zone offsets will be in the form +HHMM or -HHMM.
Warning: Previously, toISOString did the same as [`toISOExtString`](#toISOExtString) and generated +HH:MM or -HH:MM for the time zone when it was not [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) or [`std.datetime.timezone.UTC`](std_datetime_timezone#UTC), which is not in conformance with ISO 8601 for the non-extended string format. This has now been fixed. However, for now, fromISOString will continue to accept the extended format for the time zone so that any code which has been writing out the result of toISOString to read in later will continue to work. The current behavior will be kept until July 2019 at which point, fromISOString will be fixed to be standards compliant.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
import core.time : msecs, hnsecs;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(2010, 7, 4, 7, 6, 12)).toISOString() ==
"20100704T070612");
assert(SysTime(DateTime(1998, 12, 25, 2, 15, 0), msecs(24)).toISOString() ==
"19981225T021500.024");
assert(SysTime(DateTime(0, 1, 5, 23, 9, 59)).toISOString() ==
"00000105T230959");
assert(SysTime(DateTime(-4, 1, 5, 0, 0, 2), hnsecs(520_920)).toISOString() ==
"-00040105T000002.052092");
```
const nothrow scope @safe string **toISOExtString**();
const scope void **toISOExtString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`SysTime`](#SysTime) to a string with the format YYYY-MM-DDTHH:MM:SS.FFFFFFFTZ (where F is fractional seconds and TZ is the time zone).
Note that the number of digits in the fractional seconds varies with the number of fractional seconds. It's a maximum of 7 (which would be hnsecs), but only has as many as are necessary to hold the correct value (so no trailing zeroes), and if there are no fractional seconds, then there is no decimal point.
If this [`SysTime`](#SysTime)'s time zone is [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime), then TZ is empty. If its time zone is `UTC`, then it is "Z". Otherwise, it is the offset from UTC (e.g. +01:00 or -07:00). Note that the offset from UTC is *not* enough to uniquely identify the time zone.
Time zone offsets will be in the form +HH:MM or -HH:MM.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
import core.time : msecs, hnsecs;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(2010, 7, 4, 7, 6, 12)).toISOExtString() ==
"2010-07-04T07:06:12");
assert(SysTime(DateTime(1998, 12, 25, 2, 15, 0), msecs(24)).toISOExtString() ==
"1998-12-25T02:15:00.024");
assert(SysTime(DateTime(0, 1, 5, 23, 9, 59)).toISOExtString() ==
"0000-01-05T23:09:59");
assert(SysTime(DateTime(-4, 1, 5, 0, 0, 2), hnsecs(520_920)).toISOExtString() ==
"-0004-01-05T00:00:02.052092");
```
const nothrow scope @safe string **toSimpleString**();
const scope void **toSimpleString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`SysTime`](#SysTime) to a string with the format YYYY-Mon-DD HH:MM:SS.FFFFFFFTZ (where F is fractional seconds and TZ is the time zone).
Note that the number of digits in the fractional seconds varies with the number of fractional seconds. It's a maximum of 7 (which would be hnsecs), but only has as many as are necessary to hold the correct value (so no trailing zeroes), and if there are no fractional seconds, then there is no decimal point.
If this [`SysTime`](#SysTime)'s time zone is [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime), then TZ is empty. If its time zone is `UTC`, then it is "Z". Otherwise, it is the offset from UTC (e.g. +01:00 or -07:00). Note that the offset from UTC is *not* enough to uniquely identify the time zone.
Time zone offsets will be in the form +HH:MM or -HH:MM.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
import core.time : msecs, hnsecs;
import std.datetime.date : DateTime;
assert(SysTime(DateTime(2010, 7, 4, 7, 6, 12)).toSimpleString() ==
"2010-Jul-04 07:06:12");
assert(SysTime(DateTime(1998, 12, 25, 2, 15, 0), msecs(24)).toSimpleString() ==
"1998-Dec-25 02:15:00.024");
assert(SysTime(DateTime(0, 1, 5, 23, 9, 59)).toSimpleString() ==
"0000-Jan-05 23:09:59");
assert(SysTime(DateTime(-4, 1, 5, 0, 0, 2), hnsecs(520_920)).toSimpleString() ==
"-0004-Jan-05 00:00:02.052092");
```
const nothrow scope @safe string **toString**();
const scope void **toString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`SysTime`](#SysTime) to a string.
This function exists to make it easy to convert a [`SysTime`](#SysTime) to a string for code that does not care what the exact format is - just that it presents the information in a clear manner. It also makes it easy to simply convert a [`SysTime`](#SysTime) to a string when using functions such as `to!string`, `format`, or `writeln` which use toString to convert user-defined types. So, it is unlikely that much code will call toString directly.
The format of the string is purposefully unspecified, and code that cares about the format of the string should use `toISOString`, `toISOExtString`, `toSimpleString`, or some other custom formatting function that explicitly generates the format that the code needs. The reason is that the code is then clear about what format it's using, making it less error-prone to maintain the code and interact with other software that consumes the generated strings. It's for this same reason that [`SysTime`](#SysTime) has no `fromString` function, whereas it does have `fromISOString`, `fromISOExtString`, and `fromSimpleString`.
The format returned by toString may or may not change in the future.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
@safe SysTime **fromISOString**(S)(scope const S isoString, immutable TimeZone tz = null)
Constraints: if (isSomeString!S);
Creates a [`SysTime`](#SysTime) from a string with the format YYYYMMDDTHHMMSS.FFFFFFFTZ (where F is fractional seconds and TZ is the time zone). Whitespace is stripped from the given string.
The exact format is exactly as described in `toISOString` except that trailing zeroes are permitted - including having fractional seconds with all zeroes. However, a decimal point with nothing following it is invalid. Also, while [`toISOString`](#toISOString) will never generate a string with more than 7 digits in the fractional seconds (because that's the limit with hecto-nanosecond precision), it will allow more than 7 digits in order to read strings from other sources that have higher precision (however, any digits beyond 7 will be truncated).
If there is no time zone in the string, then [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) is used. If the time zone is "Z", then `UTC` is used. Otherwise, a [`std.datetime.timezone.SimpleTimeZone`](std_datetime_timezone#SimpleTimeZone) which corresponds to the given offset from UTC is used. To get the returned [`SysTime`](#SysTime) to be a particular time zone, pass in that time zone and the [`SysTime`](#SysTime) to be returned will be converted to that time zone (though it will still be read in as whatever time zone is in its string).
The accepted formats for time zone offsets are +HH, -HH, +HHMM, and -HHMM.
Warning: Previously, [`toISOString`](#toISOString) did the same as [`toISOExtString`](#toISOExtString) and generated +HH:MM or -HH:MM for the time zone when it was not [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) or [`std.datetime.timezone.UTC`](std_datetime_timezone#UTC), which is not in conformance with ISO 8601 for the non-extended string format. This has now been fixed. However, for now, fromISOString will continue to accept the extended format for the time zone so that any code which has been writing out the result of toISOString to read in later will continue to work. The current behavior will be kept until July 2019 at which point, fromISOString will be fixed to be standards compliant.
Parameters:
| | |
| --- | --- |
| S `isoString` | A string formatted in the ISO format for dates and times. |
| TimeZone `tz` | The time zone to convert the given time to (no conversion occurs if null). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`SysTime`](#SysTime) would not be valid.
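A short sketch (strings matching the `toISOString` examples above):
```
import core.time : msecs;
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
assert(SysTime.fromISOString("20100704T070612Z") ==
       SysTime(DateTime(2010, 7, 4, 7, 6, 12), UTC()));
// No time zone in the string, so LocalTime is used.
assert(SysTime.fromISOString("19981225T021500.024") ==
       SysTime(DateTime(1998, 12, 25, 2, 15, 0), msecs(24)));
```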
@safe SysTime **fromISOExtString**(S)(scope const S isoExtString, immutable TimeZone tz = null)
Constraints: if (isSomeString!S);
Creates a [`SysTime`](#SysTime) from a string with the format YYYY-MM-DDTHH:MM:SS.FFFFFFFTZ (where F is fractional seconds and TZ is the time zone). Whitespace is stripped from the given string.
The exact format is exactly as described in `toISOExtString` except that trailing zeroes are permitted - including having fractional seconds with all zeroes. However, a decimal point with nothing following it is invalid. Also, while [`toISOExtString`](#toISOExtString) will never generate a string with more than 7 digits in the fractional seconds (because that's the limit with hecto-nanosecond precision), it will allow more than 7 digits in order to read strings from other sources that have higher precision (however, any digits beyond 7 will be truncated).
If there is no time zone in the string, then [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) is used. If the time zone is "Z", then `UTC` is used. Otherwise, a [`std.datetime.timezone.SimpleTimeZone`](std_datetime_timezone#SimpleTimeZone) which corresponds to the given offset from UTC is used. To get the returned [`SysTime`](#SysTime) to be a particular time zone, pass in that time zone and the [`SysTime`](#SysTime) to be returned will be converted to that time zone (though it will still be read in as whatever time zone is in its string).
The accepted formats for time zone offsets are +HH, -HH, +HH:MM, and -HH:MM.
Parameters:
| | |
| --- | --- |
| S `isoExtString` | A string formatted in the ISO Extended format for dates and times. |
| TimeZone `tz` | The time zone to convert the given time to (no conversion occurs if null). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`SysTime`](#SysTime) would not be valid.
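A short sketch:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
assert(SysTime.fromISOExtString("2010-07-04T07:06:12Z") ==
       SysTime(DateTime(2010, 7, 4, 7, 6, 12), UTC()));
```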
@safe SysTime **fromSimpleString**(S)(scope const S simpleString, immutable TimeZone tz = null)
Constraints: if (isSomeString!S);
Creates a [`SysTime`](#SysTime) from a string with the format YYYY-Mon-DD HH:MM:SS.FFFFFFFTZ (where F is fractional seconds and TZ is the time zone). Whitespace is stripped from the given string.
The exact format is exactly as described in `toSimpleString` except that trailing zeroes are permitted - including having fractional seconds with all zeroes. However, a decimal point with nothing following it is invalid. Also, while [`toSimpleString`](#toSimpleString) will never generate a string with more than 7 digits in the fractional seconds (because that's the limit with hecto-nanosecond precision), it will allow more than 7 digits in order to read strings from other sources that have higher precision (however, any digits beyond 7 will be truncated).
If there is no time zone in the string, then [`std.datetime.timezone.LocalTime`](std_datetime_timezone#LocalTime) is used. If the time zone is "Z", then `UTC` is used. Otherwise, a [`std.datetime.timezone.SimpleTimeZone`](std_datetime_timezone#SimpleTimeZone) which corresponds to the given offset from UTC is used. To get the returned [`SysTime`](#SysTime) to be a particular time zone, pass in that time zone and the [`SysTime`](#SysTime) to be returned will be converted to that time zone (though it will still be read in as whatever time zone is in its string).
The accepted formats for time zone offsets are +HH, -HH, +HH:MM, and -HH:MM.
Parameters:
| | |
| --- | --- |
| S `simpleString` | A string formatted in the way that `toSimpleString` formats dates and times. |
| TimeZone `tz` | The time zone to convert the given time to (no conversion occurs if null). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`SysTime`](#SysTime) would not be valid.
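A short sketch:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
assert(SysTime.fromSimpleString("2010-Jul-04 07:06:12Z") ==
       SysTime(DateTime(2010, 7, 4, 7, 6, 12), UTC()));
```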
static pure nothrow @property @safe SysTime **min**();
Returns the [`SysTime`](#SysTime) farthest in the past which is representable by [`SysTime`](#SysTime).
The [`SysTime`](#SysTime) which is returned is in UTC.
static pure nothrow @property @safe SysTime **max**();
Returns the [`SysTime`](#SysTime) farthest in the future which is representable by [`SysTime`](#SysTime).
The [`SysTime`](#SysTime) which is returned is in UTC.
pure nothrow @nogc @safe long **unixTimeToStdTime**(long unixTime);
Converts from unix time (which uses midnight, January 1st, 1970 UTC as its epoch and seconds as its units) to "std time" (which uses midnight, January 1st, 1 A.D. UTC and hnsecs as its units).
The C standard does not specify the representation of time\_t, so it is implementation defined. On POSIX systems, unix time is equivalent to time\_t, but that's not necessarily true on other systems (e.g. it is not true for the Digital Mars C runtime). So, be careful when using unix time with C functions on non-POSIX systems.
"std time"'s epoch is based on the Proleptic Gregorian Calendar per ISO 8601 and is what [`SysTime`](#SysTime) uses internally. However, holding the time as an integer in hnsecs since that epoch technically isn't actually part of the standard, much as it's based on it, so the name "std time" isn't particularly good, but there isn't an official name for it. C# uses "ticks" for the same thing, but they aren't actually clock ticks, and the term "ticks" *is* used for actual clock ticks for [`core.time.MonoTime`](core_time#MonoTime), so it didn't make sense to use the term ticks here. So, for better or worse, std.datetime uses the term "std time" for this.
Parameters:
| | |
| --- | --- |
| long `unixTime` | The unix time to convert. |
See Also:
SysTime.fromUnixTime
Examples:
```
import std.datetime.date : DateTime;
import std.datetime.timezone : UTC;
// Midnight, January 1st, 1970
writeln(unixTimeToStdTime(0)); // 621_355_968_000_000_000L
assert(SysTime(unixTimeToStdTime(0)) ==
SysTime(DateTime(1970, 1, 1), UTC()));
writeln(unixTimeToStdTime(int.max)); // 642_830_804_470_000_000L
assert(SysTime(unixTimeToStdTime(int.max)) ==
SysTime(DateTime(2038, 1, 19, 3, 14, 07), UTC()));
writeln(unixTimeToStdTime(-127_127)); // 621_354_696_730_000_000L
assert(SysTime(unixTimeToStdTime(-127_127)) ==
SysTime(DateTime(1969, 12, 30, 12, 41, 13), UTC()));
```
pure nothrow @safe T **stdTimeToUnixTime**(T = time\_t)(long stdTime)
Constraints: if (is(T == int) || is(T == long));
Converts std time (which uses midnight, January 1st, 1 A.D. UTC as its epoch and hnsecs as its units) to unix time (which uses midnight, January 1st, 1970 UTC as its epoch and seconds as its units).
The C standard does not specify the representation of time\_t, so it is implementation defined. On POSIX systems, unix time is equivalent to time\_t, but that's not necessarily true on other systems (e.g. it is not true for the Digital Mars C runtime). So, be careful when using unix time with C functions on non-POSIX systems.
"std time"'s epoch is based on the Proleptic Gregorian Calendar per ISO 8601 and is what [`SysTime`](#SysTime) uses internally. However, holding the time as an integer in hnescs since that epoch technically isn't actually part of the standard, much as it's based on it, so the name "std time" isn't particularly good, but there isn't an official name for it. C# uses "ticks" for the same thing, but they aren't actually clock ticks, and the term "ticks" *is* used for actual clock ticks for [`core.time.MonoTime`](core_time#MonoTime), so it didn't make sense to use the term ticks here. So, for better or worse, std.datetime uses the term "std time" for this.
By default, the return type is time\_t (which is normally an alias for int on 32-bit systems and long on 64-bit systems), but if a different size is required, then either int or long can be passed as a template argument to get the desired size.
If the return type is int, and the result can't fit in an int, then the closest value that can be held in 32 bits will be used (so `int.max` if it goes over and `int.min` if it goes under). However, no attempt is made to deal with integer overflow if the return type is long.
Parameters:
| | |
| --- | --- |
| T | The return type (int or long). It defaults to time\_t, which is normally 32 bits on a 32-bit system and 64 bits on a 64-bit system. |
| long `stdTime` | The std time to convert. |
Returns:
A signed integer representing the unix time which is equivalent to the given std time.
See Also:
SysTime.toUnixTime
Examples:
```
// Midnight, January 1st, 1970 UTC
writeln(stdTimeToUnixTime(621_355_968_000_000_000L)); // 0
// 2038-01-19 03:14:07 UTC
writeln(stdTimeToUnixTime(642_830_804_470_000_000L)); // int.max
```
@safe SysTime **SYSTEMTIMEToSysTime**(scope const SYSTEMTIME\* st, immutable TimeZone tz = LocalTime());
This function is Windows-Only.
Converts a `SYSTEMTIME` struct to a [`SysTime`](#SysTime).
Parameters:
| | |
| --- | --- |
| SYSTEMTIME\* `st` | The `SYSTEMTIME` struct to convert. |
| TimeZone `tz` | The time zone that the time in the `SYSTEMTIME` struct is assumed to be (if the `SYSTEMTIME` was supplied by a Windows system call, the `SYSTEMTIME` will either be in local time or UTC, depending on the call). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given `SYSTEMTIME` will not fit in a [`SysTime`](#SysTime), which is highly unlikely to happen given that `SysTime.max` is in 29,228 A.D. and the maximum `SYSTEMTIME` is in 30,827 A.D.
@safe SYSTEMTIME **SysTimeToSYSTEMTIME**(scope SysTime sysTime);
This function is Windows-Only.
Converts a [`SysTime`](#SysTime) to a `SYSTEMTIME` struct.
The `SYSTEMTIME` which is returned will be set using the given [`SysTime`](#SysTime)'s time zone, so to get the `SYSTEMTIME` in UTC, set the [`SysTime`](#SysTime)'s time zone to UTC.
Parameters:
| | |
| --- | --- |
| SysTime `sysTime` | The [`SysTime`](#SysTime) to convert. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given [`SysTime`](#SysTime) will not fit in a `SYSTEMTIME`. This will only happen if the [`SysTime`](#SysTime)'s date is prior to 1601 A.D.
@safe long **FILETIMEToStdTime**(scope const FILETIME\* ft);
This function is Windows-Only.
Converts a `FILETIME` struct to the number of hnsecs since midnight, January 1st, 1 A.D.
Parameters:
| | |
| --- | --- |
| FILETIME\* `ft` | The `FILETIME` struct to convert. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given `FILETIME` cannot be represented as the return value.
@safe SysTime **FILETIMEToSysTime**(scope const FILETIME\* ft, immutable TimeZone tz = LocalTime());
This function is Windows-Only.
Converts a `FILETIME` struct to a [`SysTime`](#SysTime).
Parameters:
| | |
| --- | --- |
| FILETIME\* `ft` | The `FILETIME` struct to convert. |
| TimeZone `tz` | The time zone that the [`SysTime`](#SysTime) will be in (`FILETIME`s are in UTC). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given `FILETIME` will not fit in a [`SysTime`](#SysTime).
@safe FILETIME **stdTimeToFILETIME**(long stdTime);
This function is Windows-Only.
Converts a number of hnsecs since midnight, January 1st, 1 A.D. to a `FILETIME` struct.
Parameters:
| | |
| --- | --- |
| long `stdTime` | The number of hnsecs since midnight, January 1st, 1 A.D. UTC. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given value will not fit in a `FILETIME`.
@safe FILETIME **SysTimeToFILETIME**(scope SysTime sysTime);
This function is Windows-Only.
Converts a [`SysTime`](#SysTime) to a `FILETIME` struct.
`FILETIME`s are always in UTC.
Parameters:
| | |
| --- | --- |
| SysTime `sysTime` | The [`SysTime`](#SysTime) to convert. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given [`SysTime`](#SysTime) will not fit in a `FILETIME`.
alias **DosFileTime** = uint;
Type representing the DOS file date/time format.
@safe SysTime **DosFileTimeToSysTime**(DosFileTime dft, immutable TimeZone tz = LocalTime());
Converts from DOS file date/time to [`SysTime`](#SysTime).
Parameters:
| | |
| --- | --- |
| DosFileTime `dft` | The DOS file time to convert. |
| TimeZone `tz` | The time zone which the DOS file time is assumed to be in. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the `DosFileTime` is invalid.
Examples:
```
import std.datetime.date : DateTime;
// SysTime(DateTime(1980, 1, 1, 0, 0, 0))
writeln(DosFileTimeToSysTime(0b00000000001000010000000000000000));
// SysTime(DateTime(2107, 12, 31, 23, 59, 58))
writeln(DosFileTimeToSysTime(0b11111111100111111011111101111101));
writeln(DosFileTimeToSysTime(0x3E3F8456)); // SysTime(DateTime(2011, 1, 31, 16, 34, 44))
```
@safe DosFileTime **SysTimeToDosFileTime**(scope SysTime sysTime);
Converts from [`SysTime`](#SysTime) to DOS file date/time.
Parameters:
| | |
| --- | --- |
| SysTime `sysTime` | The [`SysTime`](#SysTime) to convert. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given [`SysTime`](#SysTime) cannot be converted to a `DosFileTime`.
Examples:
```
import std.datetime.date : DateTime;
// 0b00000000001000010000000000000000
writeln(SysTimeToDosFileTime(SysTime(DateTime(1980, 1, 1, 0, 0, 0))));
// 0b11111111100111111011111101111101
writeln(SysTimeToDosFileTime(SysTime(DateTime(2107, 12, 31, 23, 59, 58))));
writeln(SysTimeToDosFileTime(SysTime(DateTime(2011, 1, 31, 16, 34, 44)))); // 0x3E3F8456
```
@safe SysTime **parseRFC822DateTime**()(scope const char[] value);
SysTime **parseRFC822DateTime**(R)(scope R value)
Constraints: if (isRandomAccessRange!R && hasSlicing!R && hasLength!R && (is(immutable(ElementType!R) == immutable(char)) || is(immutable(ElementType!R) == immutable(ubyte))));
The given array of `char` or random-access range of `char` or `ubyte` is expected to be in the format specified in [RFC 5322](http://tools.ietf.org/html/rfc5322) section 3.3 with the grammar rule *date-time*. It is the date-time format commonly used in internet messages such as e-mail and HTTP. The corresponding [`SysTime`](#SysTime) will be returned.
RFC 822 was the original spec (hence the function's name), whereas RFC 5322 is the current spec.
The day of the week is ignored beyond verifying that it's a valid day of the week, as the day of the week can be inferred from the date. It is not checked whether the given day of the week matches the actual day of the week of the given date (though it is technically invalid per the spec if the day of the week doesn't match the actual day of the week of the given date).
If the time zone is `"-0000"` (or considered to be equivalent to `"-0000"` by section 4.3 of the spec), a [`std.datetime.timezone.SimpleTimeZone`](std_datetime_timezone#SimpleTimeZone) with a utc offset of `0` is used rather than [`std.datetime.timezone.UTC`](std_datetime_timezone#UTC), whereas `"+0000"` uses [`std.datetime.timezone.UTC`](std_datetime_timezone#UTC).
Note that because [`SysTime`](#SysTime) does not currently support having a second value of 60 (as is sometimes done for leap seconds), if the date-time value does have a value of 60 for the seconds, it is treated as 59.
The one area in which this function violates RFC 5322 is that it accepts `"\n"` in folding whitespace in the place of `"\r\n"`, because the HTTP spec requires it.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string doesn't follow the grammar for a date-time field or if the resulting [`SysTime`](#SysTime) is invalid.
Examples:
```
import core.time : hours;
import std.datetime.date : DateTime, DateTimeException;
import std.datetime.timezone : SimpleTimeZone, UTC;
import std.exception : assertThrown;
auto tz = new immutable SimpleTimeZone(hours(-8));
assert(parseRFC822DateTime("Sat, 6 Jan 1990 12:14:19 -0800") ==
SysTime(DateTime(1990, 1, 6, 12, 14, 19), tz));
assert(parseRFC822DateTime("9 Jul 2002 13:11 +0000") ==
SysTime(DateTime(2002, 7, 9, 13, 11, 0), UTC()));
auto badStr = "29 Feb 2001 12:17:16 +0200";
assertThrown!DateTimeException(parseRFC822DateTime(badStr));
```
d Structs, Unions Structs, Unions
===============
**Contents** 1. [Struct Layout](#struct_layout)
2. [Plain Old Data](#POD)
3. [Opaque Structs and Unions](#opaque_struct_unions)
4. [Default Initialization of Structs](#default_struct_init)
5. [Static Initialization of Structs](#static_struct_init)
6. [Default Initialization of Unions](#default_union_init)
7. [Static Initialization of Unions](#static_union_init)
8. [Dynamic Initialization of Structs](#dynamic_struct_init)
9. [Struct Literals](#struct-literal)
10. [Struct Properties](#struct_properties)
11. [Struct Instance Properties](#struct_instance_properties)
12. [Struct Field Properties](#struct_field_properties)
13. [Const, Immutable and Shared Structs](#const-struct)
14. [Union Constructors](#UnionConstructor)
15. [Struct Constructors](#struct-constructor)
1. [Delegating Constructors](#delegating-constructor)
2. [Struct Instantiation](#struct-instantiation)
3. [Constructor Attributes](#constructor-attributes)
4. [Disabling Default Struct Construction](#disable_default_construction)
5. [Field initialization inside a constructor](#field-init)
6. [Struct Copy Constructors](#struct-copy-constructor)
16. [Struct Postblits](#struct-postblit)
17. [Struct Destructors](#struct-destructor)
18. [Struct Invariants](#StructInvariant)
19. [Identity Assignment Overload](#assign-overload)
20. [Nested Structs](#nested)
21. [Unions and Special Member Functions](#unions_and_special_memb_funct)
Whereas classes are reference types, structs are value types. Structs and unions are simple aggregations of data and their associated operations on that data.
```
AggregateDeclaration:
ClassDeclaration
InterfaceDeclaration
StructDeclaration
UnionDeclaration
StructDeclaration:
struct Identifier ;
struct Identifier AggregateBody
StructTemplateDeclaration
AnonStructDeclaration
AnonStructDeclaration:
struct AggregateBody
UnionDeclaration:
union Identifier ;
union Identifier AggregateBody
UnionTemplateDeclaration
AnonUnionDeclaration
AnonUnionDeclaration:
union AggregateBody
AggregateBody:
{ DeclDefsopt }
```
A struct is defined to not have an identity; that is, the implementation is free to make bit copies of the struct as convenient.
Structs and unions may not contain an instance of themselves; however, they may contain a pointer to the same type.
**Best Practices:** 1. Bit fields are supported with the [bitfields](https://dlang.org/phobos/std_bitmanip.html#bitfields) template.
Struct Layout
-------------
The non-static data members of a struct are called *fields*. Fields are laid out in lexical order. Fields are aligned according to the [Align Attribute](attribute#align) in effect. Unnamed padding is inserted between fields to align fields. There is no padding between the first field and the start of the object.
Structs with no fields of non-zero size (aka *Empty Structs*) have a size of one byte.
Non-static [function-nested D structs](#nested), which access the context of their enclosing scope, have an extra field.
**Implementation Defined:** 1. The default layout of the fields of a struct is an exact match with the *associated C compiler*.
2. g++ and clang++ differ in how empty structs are handled. Both return `1` from `sizeof`, however, clang++ does not push them onto the parameter stack while g++ does. This is a binary incompatibility between g++ and clang++. dmd follows clang++ behavior for OSX and FreeBSD, and g++ behavior for Linux and other Posix platforms.
3. clang and gcc both return `0` from `sizeof` for empty structs. Using `extern "C++"` in clang++ and g++ does not cause them to conform to the behavior of their respective C compilers.
**Undefined Behavior:** 1. The padding data can be accessed, but its contents are undefined.
2. Do not pass or return structs with no fields of non-zero size to `extern (C)` functions. According to C11 6.7.2.1p8 this is undefined behavior.
**Best Practices:** 1. When laying out a struct to match an externally defined layout, use align attributes to describe an exact match. Use a [Static Assert](version#static-assert) to ensure the result is as expected.
2. Although the contents of the padding are often zero, do not rely on that.
3. Avoid using empty structs when interfacing with C and C++ code.
4. Avoid using empty structs as parameters or arguments to variadic functions.
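These rules can be sketched with `.offsetof` checks; the exact padding depends on the target's alignment, so the example below states its assumption in terms of `int.alignof`:

```
struct S
{
    byte b; // offset 0: no padding before the first field
    int i;  // preceded by unnamed padding up to int's alignment
}

static assert(S.b.offsetof == 0);
static assert(S.i.offsetof == int.alignof); // 4 on typical platforms
```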
Plain Old Data
--------------
A struct or union is *Plain Old Data* (POD) if it meets the following criteria:
1. it is not nested
2. it has no postblits, copy constructors, destructors, or assignment operators
3. it has no `ref` fields or fields that are themselves non-POD
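Whether a type satisfies these criteria can be checked at compile time with the `__traits(isPOD, T)` trait:

```
struct Pod { int x; double y; }  // meets all three criteria
struct NonPod { ~this() {} }     // has a destructor, so not POD

static assert(__traits(isPOD, Pod));
static assert(!__traits(isPOD, NonPod));
```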
**Best Practices:** Structs or unions that interface with C code should be POD.
Opaque Structs and Unions
-------------------------
Opaque struct and union declarations do not have a [*AggregateBody*](#AggregateBody):
```
struct S;
union U;
struct V(T);
union W(T);
```
The members are completely hidden from the user, and so the only operations on those types are ones that do not require any knowledge of the contents of those types. For example:
```
struct S;
S.sizeof; // error, size is not known
S s; // error, cannot initialize unknown contents
S* p; // ok, knowledge of members is not necessary
```
**Best Practices:** They can be used to implement the [PIMPL idiom](https://en.wikipedia.org/wiki/Opaque_pointer).
Default Initialization of Structs
---------------------------------
Struct fields are by default initialized to whatever the [*Initializer*](declaration#Initializer) for the field is, and if none is supplied, to the default initializer for the field's type.
```
struct S { int a = 4; int b; }
S x; // x.a is set to 4, x.b to 0
```
The default initializers are evaluated at compile time.
The default initializers may not contain references to mutable data.
Static Initialization of Structs
--------------------------------
If a [*StructInitializer*](declaration#StructInitializer) is supplied, the fields are initialized by the [*StructMemberInitializer*](declaration#StructMemberInitializer) syntax. *StructMemberInitializers* with the *Identifier : NonVoidInitializer* syntax may appear in any order, where *Identifier* is the field identifier. *StructMemberInitializer*s with the [*NonVoidInitializer*](declaration#NonVoidInitializer) syntax appear in the lexical order of the fields in the [*StructDeclaration*](#StructDeclaration).
Fields not specified in the *StructInitializer* are default initialized.
```
struct S { int a, b, c, d = 7; }
S r; // r.a = 0, r.b = 0, r.c = 0, r.d = 7
S s = { a:1, b:2 }; // s.a = 1, s.b = 2, s.c = 0, s.d = 7
S t = { c:4, b:5, a:2, d:5 }; // t.a = 2, t.b = 5, t.c = 4, t.d = 5
S u = { 1, 2 }; // u.a = 1, u.b = 2, u.c = 0, u.d = 7
S v = { 1, d:3 }; // v.a = 1, v.b = 0, v.c = 0, v.d = 3
S w = { b:1, 3 }; // w.a = 0, w.b = 1, w.c = 3, w.d = 7
```
Initializing a field more than once is an error:
```
S x = { 1, a:2 }; // error: duplicate initializer for field `a`
```
Default Initialization of Unions
--------------------------------
Unions are by default initialized to whatever the [*Initializer*](declaration#Initializer) for the first field is, and if none is supplied, to the default initializer for the first field's type.
If the union is larger than the first field, the remaining bits are set to 0.
```
union U { int a = 4; long b; }
U x; // x.a is set to 4, x.b to an implementation-defined value
union V { int a; long b = 4; }
V y; // y.a is set to 0, y.b to an implementation-defined value
```
```
union W { int a = 4; long b = 5; } // error: overlapping default initialization for `a` and `b`
```
The default initializer is evaluated at compile time.
**Implementation Defined:** The values that fields other than the default-initialized field are set to.
Static Initialization of Unions
-------------------------------
Unions are initialized similarly to structs, except that only one initializer is allowed.
```
union U { int a; double b; }
U u = { 2 }; // u.a = 2
U v = { b : 5.0 }; // v.b = 5.0
```
```
U w = { 2, 3 }; // error: overlapping initialization for field `a` and `b`
```
If the union is larger than the initialized field, the remaining bits are set to 0.
**Implementation Defined:** The values that fields other than the initialized field are set to.
Dynamic Initialization of Structs
---------------------------------
The [static initializer syntax](#static_struct_init) can also be used to initialize non-static variables. The initializer need not be evaluable at compile time.
```
struct S { int a, b, c, d = 7; }
void test(int i)
{
S q = { 1, b:i }; // q.a = 1, q.b = i, q.c = 0, q.d = 7
}
```
Structs can be dynamically initialized from another value of the same type:
```
struct S { int a; }
S t; // default initialized
t.a = 3;
S s = t; // s.a is set to 3
```
If `opCall` is overloaded for the struct, and the struct is initialized with a value that is of a different type, then the `opCall` operator is called:
```
struct S
{
int a;
static S opCall(int v)
{
S s;
s.a = v;
return s;
}
static S opCall(S v)
{
assert(0);
}
}
S s = 3; // sets s.a to 3 using S.opCall(int)
S t = s; // sets t.a to 3, S.opCall(S) is not called
```
Struct Literals
---------------
Struct literals consist of the name of the struct followed by a parenthesized argument list:
```
struct S { int x; float y; }
int foo(S s) { return s.x; }
foo( S(1, 2) ); // set field x to 1, field y to 2
```
Struct literals are syntactically like function calls. If a struct has a member function named `opCall`, then struct literals for that struct are not possible; see [opCall operator overloading](operatoroverloading#FunctionCall) for a workaround. It is an error if there are more arguments than fields of the struct. If there are fewer arguments than fields, the remaining fields are initialized with their respective default initializers. If there are anonymous unions in the struct, only the first member of the anonymous union can be initialized with a struct literal, and all subsequent non-overlapping fields are default initialized.
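For example, a literal with fewer arguments than fields falls back to the default initializers for the remaining fields:

```
struct S { int x; float y = 1.5f; int z = 9; }

S a = S(3); // a.x = 3, a.y = 1.5f, a.z = 9
//S b = S(1, 2.0f, 3, 4); // error: more arguments than fields
```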
Struct Properties
-----------------
Struct Properties| **Name** | **Description** |
| `.sizeof` | Size in bytes of struct |
| `.alignof` | Size boundary struct needs to be aligned on |
Struct Instance Properties
--------------------------
Struct Instance Properties| **Name** | **Description** |
| `.tupleof` | An [expression sequence](template#variadic-templates) of all struct fields - see [Class Properties](class#class_properties) for a class-based example. |
Struct Field Properties
-----------------------
Struct Field Properties| **Name** | **Description** |
| `.offsetof` | Offset in bytes of field from beginning of struct |
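A minimal sketch exercising these properties; the concrete `sizeof` and `offsetof` values depend on the target's alignment rules, so the assertions are phrased in terms of `int.alignof`:

```
struct S { byte b; int i; }

static assert(S.alignof == int.alignof);    // largest field alignment
static assert(S.sizeof % S.alignof == 0);   // size is a multiple of the alignment
static assert(S.i.offsetof == int.alignof); // b is padded up to int's alignment
static assert(S.init.tupleof.length == 2);  // one entry per field
```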
Const, Immutable and Shared Structs
-----------------------------------
A struct declaration can have a storage class of `const`, `immutable` or `shared`. It has the same effect as declaring each member of the struct as `const`, `immutable` or `shared`.
```
const struct S { int a; int b = 2; }
void main()
{
S s = S(3); // initializes s.a to 3
S t; // initializes t.a to 0
t = s; // error, t.a and t.b are const, so cannot modify them.
t.a = 4; // error, t.a is const
}
```
Union Constructors
------------------
Unions are constructed in the same way as structs.
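A minimal sketch of a union constructor, using the same `this` syntax as a struct constructor:

```
union U
{
    int i;
    float f;
    this(int n) { i = n; }
}

void main()
{
    U u = U(5); // calls U.this(5)
    assert(u.i == 5);
}
```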
Struct Constructors
-------------------
Struct constructors are used to initialize an instance of a struct when a more complex construction is needed than is allowed by [static initialization](#static_struct_init) or a [struct literal](#struct-literal).
Constructors are defined with a function name of `this` and have no return value. The grammar is the same as for the class [*Constructor*](class#Constructor).
A struct constructor is called by the name of the struct followed by [*Parameters*](function#Parameters).
If the [*ParameterList*](function#ParameterList) is empty, the struct instance is default initialized.
```
struct S
{
int x, y = 4, z = 6;
this(int a, int b)
{
x = a;
y = b;
}
}
void main()
{
S a = S(4, 5); // calls S.this(4, 5): a.x = 4, a.y = 5, a.z = 6
S b = S(); // default initialized: b.x = 0, b.y = 4, b.z = 6
S c = S(1); // error, matching this(int) not found
}
```
A *default constructor* (i.e. one with an empty [*ParameterList*](function#ParameterList)) is not allowed.
```
struct S
{
int x;
this() { } // error, struct default constructor not allowed
}
```
### Delegating Constructors
A constructor can call another constructor for the same struct in order to share common initializations. This is called a *delegating constructor*:
```
struct S
{
int j = 1;
long k = 2;
this(long k)
{
this.k = k;
}
this(int i)
{
// At this point: j=1, k=2
this(6); // delegating constructor call
// At this point: j=1, k=6
j = i;
// At this point: j=i, k=6
}
}
```
The following restrictions apply:
1. If a constructor's code contains a delegating constructor call, all possible execution paths through the constructor must make exactly one delegating constructor call:
```
struct S
{
int a;
this(int i) { }
this(char c)
{
c || this(1); // error, not on all paths
}
this(wchar w)
{
(w) ? this(1) : this('c'); // ok
}
this(byte b)
{
foreach (i; 0 .. b)
{
this(1); // error, inside loop
}
}
}
```
2. It is illegal to refer to `this` implicitly or explicitly prior to making a delegating constructor call.
3. Once the delegating constructor returns, all fields are considered constructed.
4. Delegating constructor calls cannot appear after labels.
See also: [delegating class constructors](class#delegating-constructors).
### Struct Instantiation
When an instance of a struct is created, the following steps happen:
1. The raw data is statically initialized using the values provided in the struct definition. This operation is equivalent to doing a memory copy of a static version of the object onto the newly allocated one.
2. If there is a constructor defined for the struct, the constructor matching the argument list is called.
3. If struct invariant checking is turned on, the struct invariant is called at the end of the constructor.
### Constructor Attributes
A constructor qualifier (`const`, `immutable` or `shared`) constructs the object instance with that specific qualifier.
```
struct S1
{
int[] a;
this(int n) { a = new int[](n); }
}
struct S2
{
int[] a;
this(int n) immutable { a = new int[](n); }
}
void main()
{
// Mutable constructor creates mutable object.
S1 m1 = S1(1);
// Constructed mutable object is implicitly convertible to const.
const S1 c1 = S1(1);
// Constructed mutable object is not implicitly convertible to immutable.
immutable i1 = S1(1); // error
// Mutable constructor cannot construct immutable object.
auto x1 = immutable S1(1); // error
// Immutable constructor creates immutable object.
immutable i2 = immutable S2(1);
// Immutable constructor cannot construct mutable object.
auto x2 = S2(1); // error
// Constructed immutable object is not implicitly convertible to mutable.
S2 m2 = immutable S2(1); // error
// Constructed immutable object is implicitly convertible to const.
const S2 c2 = immutable S2(1);
}
```
Constructors can be overloaded with different attributes.
```
struct S
{
this(int); // non-shared mutable constructor
this(int) shared; // shared mutable constructor
this(int) immutable; // immutable constructor
}
S m = S(1);
shared s = shared S(2);
immutable i = immutable S(3);
```
#### Pure Constructors
If the constructor can create a unique object (i.e. if it is `pure`), the object is implicitly convertible to any qualifier.
```
struct S
{
this(int) pure;
// Based on the definition, this creates a mutable object. But the
// created object cannot contain any mutable global data.
// Therefore the created object is unique.
this(int[] arr) immutable pure;
// Based on the definition, this creates an immutable object. But
// the argument int[] never appears in the created object so it
// isn't implicitly convertible to immutable. Also, it cannot store
// any immutable global data.
// Therefore the created object is unique.
}
immutable i = immutable S(1); // this(int) pure is called
shared s = shared S(1); // this(int) pure is called
S m = S([1,2,3]); // this(int[]) immutable pure is called
```
### Disabling Default Struct Construction
If a struct constructor is annotated with `@disable` and has an empty [*ParameterList*](function#ParameterList), the struct has disabled default construction. The only way it can be constructed is via a call to another constructor with a non-empty *ParameterList*.
A struct with a disabled default constructor, and no other constructors, cannot be instantiated other than via a [*VoidInitializer*](declaration#VoidInitializer).
A disabled default constructor may not have a [*FunctionBody*](function#FunctionBody).
If any fields have disabled default construction, struct default construction is also disabled.
```
struct S
{
int x;
// Disables default construction
@disable this();
this(int v) { x = v; }
}
struct T
{
int y;
S s;
}
void main()
{
S s; // error: default construction is disabled
S t = S(); // error: also disabled
S u = S(1); // constructed by calling `S.this(1)`
S v = void; // not initialized, but allowed
S w = { 1 }; // error: cannot use { } since constructor exists
S[3] a; // error: default construction is disabled
S[3] b = [S(1), S(20), S(-2)]; // ok
T y; // error: default construction is disabled
}
```
**Best Practices:** Disabling default construction is useful when the default value, such as `null`, is not acceptable.
### Field initialization inside a constructor
In a constructor body, if a delegating constructor is called, all field assignments are considered assignments. Otherwise, the first instance of field assignment is its initialization, and assignments of the form `field = expression` are treated as equivalent to `typeof(field)(expression)`. The values of fields may be read before initialization or construction with a delegating constructor.
```
struct S
{
int num;
int ber;
this(int i)
{
num = i + 1; // initialization
num = i + 2; // assignment
ber = ber + 1; // ok to read before initialization
}
this(int i, int j)
{
this(i);
num = i + 1; // assignment
}
}
```
If the field type has an [`opAssign`](operatoroverloading#assignment) method, it will not be used for initialization.
```
struct A
{
this(int n) {}
void opAssign(A rhs) {}
}
struct S
{
A val;
this(int i)
{
val = A(i); // val is initialized to the value of A(i)
val = A(2); // rewritten to val.opAssign(A(2))
}
}
```
If the field type is not mutable, multiple initialization will be rejected.
```
struct S
{
immutable int num;
this(int)
{
num = 1; // OK
num = 2; // Error: assignment to immutable
}
}
```
If the field is initialized on one path, it must be initialized on all paths.
```
struct S
{
immutable int num;
immutable int ber;
this(int i)
{
if (i)
num = 3; // initialization
else
num = 4; // initialization
}
this(long j)
{
j ? (num = 3) : (num = 4); // ok
j || (ber = 3); // Error: initialized on only one path
j && (ber = 3); // Error: initialized on only one path
}
}
```
A field initialization may not appear in a loop or after a label.
```
struct S
{
immutable int num;
immutable string str;
this(int j)
{
foreach (i; 0..j)
{
num = 1; // Error: field initialization not allowed in loops
}
size_t i = 0;
Label:
str = "hello"; // Error: field initialization not allowed after labels
if (i++ < 2)
goto Label;
}
this(int j, int k)
{
switch (j)
{
case 1: ++j; break;
default: break;
}
num = j; // Error: `case` and `default` are also labels
}
}
```
If a field's type has disabled default construction, then it must be initialized in the constructor.
```
struct S { int y; @disable this(); }
struct T
{
S s;
this(S t) { s = t; } // ok
this(int i) { this('c'); } // ok
this(char) { } // Error: s not initialized
}
```
### Struct Copy Constructors
Copy constructors are used to initialize a `struct` instance from another `struct` of the same type.
A constructor declaration is a copy constructor declaration if and only if it is a constructor declaration that takes only one non-default parameter by reference that is of the same type as `typeof(this)`, followed by any number of default parameters:
```
struct A
{
this(ref return scope A rhs) {} // copy constructor
this(ref return scope const A rhs, int b = 7) {} // copy constructor with default parameter
}
```
The copy constructor is type checked as a normal constructor.
If a copy constructor is defined, implicit calls to it will be inserted in the following situations:
1. When a variable is explicitly initialized:
```
struct A
{
this(ref return scope A rhs) {}
}
void main()
{
A a;
A b = a; // copy constructor gets called
}
```
2. When a parameter is passed by value to a function:
```
struct A
{
this(ref return scope A another) {}
}
void fun(A a) {}
void main()
{
A a;
fun(a); // copy constructor gets called
}
```
3. When a parameter is returned by value from a function and Named Return Value Optimization (NRVO) cannot be performed:
```
struct A
{
this(ref return scope A another) {}
}
A fun()
{
A a;
return a; // NRVO, no copy constructor call
}
A a;
A gun()
{
return a; // cannot perform NRVO, rewrite to: return (A __tmp; __tmp.copyCtor(a));
}
void main()
{
A a = fun();
A b = gun();
}
```
When a copy constructor is defined for a `struct` (or marked `@disable`), the compiler no longer implicitly generates default copy/blitting constructors for that `struct`:
```
struct A
{
int[] a;
this(ref return scope A rhs) {}
}
void fun(immutable A) {}
void main()
{
immutable A a;
fun(a); // error: copy constructor cannot be called with types (immutable) immutable
}
```
If a `union S` has fields that define a copy constructor, whenever an object of type `S` is initialized by copy, an error will be issued. The same rule applies to overlapped fields (anonymous unions).
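A sketch of the union rule (the exact diagnostic wording varies by compiler version):

```
struct A
{
    this(ref return scope A rhs) {}
}

union U
{
    A a; // field with a copy constructor
}

void main()
{
    U u1;
    //U u2 = u1; // error: initializing a union by copy when a field defines a copy constructor
}
```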
A `struct` that defines a copy constructor is not a POD.
#### Copy Constructor Attributes
The copy constructor can be overloaded with different qualifiers applied to the parameter (copying from a qualified source) or to the copy constructor itself (copying to a qualified destination):
```
struct A
{
this(ref return scope A another) {} // 1 - mutable source, mutable destination
this(ref return scope immutable A another) {} // 2 - immutable source, mutable destination
this(ref return scope A another) immutable {} // 3 - mutable source, immutable destination
this(ref return scope immutable A another) immutable {} // 4 - immutable source, immutable destination
}
void main()
{
A a;
immutable A ia;
A a2 = a; // calls 1
A a3 = ia; // calls 2
immutable A a4 = a; // calls 3
immutable A a5 = ia; // calls 4
}
```
The `inout` qualifier may be applied to the copy constructor parameter in order to specify that mutable, `const`, or `immutable` types are treated the same:
```
struct A
{
this(ref return scope inout A rhs) immutable {}
}
void main()
{
A r1;
const(A) r2;
immutable(A) r3;
// All call the same copy constructor because `inout` acts like a wildcard
immutable(A) a = r1;
immutable(A) b = r2;
immutable(A) c = r3;
}
```
#### Implicit Copy Constructors
A copy constructor is generated implicitly by the compiler for a `struct S` if all of the following conditions are met:
1. `S` does not explicitly declare any copy constructors;
2. `S` defines at least one direct member that has a copy constructor, and that member is not overlapped (by means of `union`) with any other member.
If the restrictions above are met, the following copy constructor is generated:
```
this(ref return scope inout(S) src) inout
{
foreach (i, ref inout field; src.tupleof)
this.tupleof[i] = field;
}
```
If the generated copy constructor fails to type check, it will receive the `@disable` attribute.
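As a sketch of when generation kicks in: `B` below declares no copy constructor itself, but it has a non-overlapped member with one, so the compiler generates the field-wise copy constructor shown above:

```
struct A
{
    this(ref return scope inout A rhs) inout {}
}

struct B
{
    A a;   // member with a copy constructor, not inside a union
    int i; // copied field-wise by the generated copy constructor
}

void main()
{
    B b1;
    B b2 = b1; // calls B's generated copy constructor, which calls A's
}
```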
Struct Postblits
----------------
```
Postblit:
this ( this ) MemberFunctionAttributesopt ;
this ( this ) MemberFunctionAttributesopt FunctionBody
```
WARNING: The postblit is considered legacy and is not recommended for new code. Code should use [copy constructors](#struct-copy-constructor) defined in the previous section. For backward compatibility reasons, a `struct` that explicitly defines both a copy constructor and a postblit will only use the postblit for implicit copying. However, if the postblit is disabled, the copy constructor will be used. If a struct defines a copy constructor (user-defined or generated) and has fields that define postblits, a deprecation will be issued, informing that the postblit will have priority over the copy constructor.
*Copy construction* is defined as initializing a struct instance from another struct of the same type. Copy construction is divided into two parts:
1. blitting the fields, i.e. copying the bits
2. running *postblit* on the result
The first part is done automatically by the language, the second part is done if a postblit function is defined for the struct. The postblit has access only to the destination struct object, not the source. Its job is to ‘fix up’ the destination as necessary, such as making copies of referenced data, incrementing reference counts, etc. For example:
```
struct S
{
int[] a; // array is privately owned by this instance
this(this)
{
a = a.dup;
}
}
```
Disabling struct postblit makes the object not copyable.
```
struct T
{
@disable this(this); // disabling makes T not copyable
}
struct S
{
T t; // uncopyable member makes S also not copyable
}
void main()
{
S s;
S t = s; // error, S is not copyable
}
```
Depending on the struct layout, the compiler may generate the following internal postblit functions:
1. `void __postblit()`. The compiler assigns this name to the explicitly defined postblit `this(this)` so that it can be treated exactly as a normal function. Note that if a struct defines a postblit, it cannot define a function named `__postblit` - no matter the signature - as this would result in a compilation error due to the name conflict.
2. `void __fieldPostblit()`. If a struct `X` has at least one `struct` member that in turn defines (explicitly or implicitly) a postblit, then a field postblit is generated for `X` that calls all the underlying postblits of the struct fields in declaration order.
3. `void __aggrPostblit()`. If a struct has an explicitly defined postblit and at least 1 struct member that has a postblit (explicit or implicit) an aggregated postblit is generated which calls `__fieldPostblit` first and then `__postblit`.
4. `void __xpostblit()`. The field and aggregated postblits, although generated for a struct, are not actual struct members. In order to be able to call them, the compiler internally creates an alias, called `__xpostblit` which is a member of the struct and which points to the generated postblit that is the most inclusive.
```
// struct with alias __xpostblit = __postblit
struct X
{
this(this) {}
}
// struct with alias __xpostblit = __fieldPostblit
// which contains a call to X.__xpostblit
struct Y
{
X a;
}
// struct with alias __xpostblit = __aggrPostblit which contains
// a call to Y.__xpostblit and a call to Z.__postblit
struct Z
{
Y a;
this(this) {}
}
void main()
{
// X has __postblit and __xpostblit (pointing to __postblit)
static assert(__traits(hasMember, X, "__postblit"));
static assert(__traits(hasMember, X, "__xpostblit"));
// Y does not have __postblit, but has __xpostblit (pointing to __fieldPostblit)
static assert(!__traits(hasMember, Y, "__postblit"));
static assert(__traits(hasMember, Y, "__xpostblit"));
// __fieldPostblit is not a member of the struct
static assert(!__traits(hasMember, Y, "__fieldPostblit"));
// Z has __postblit and __xpostblit (pointing to __aggrPostblit)
static assert(__traits(hasMember, Z, "__postblit"));
static assert(__traits(hasMember, Z, "__xpostblit"));
// __aggrPostblit is not a member of the struct
static assert(!__traits(hasMember, Z, "__aggrPostblit"));
}
```
None of the above postblits are defined for structs that don't define `this(this)` and don't have fields that transitively define it. If a struct does not define a postblit (implicit or explicit) but defines functions that use the same name/signature as the internally generated postblits, the compiler is able to identify that the functions are not actual postblits and does not insert calls to them when the struct is copied. Example:
```
struct X
{}
int a;
struct Y
{
int a;
X b;
void __fieldPostblit()
{
a = 42;
}
}
void main()
{
static assert(!__traits(hasMember, X, "__postblit"));
static assert(!__traits(hasMember, X, "__xpostblit"));
static assert(!__traits(hasMember, Y, "__postblit"));
static assert(!__traits(hasMember, Y, "__xpostblit"));
Y y;
auto y2 = y;
assert(a == 0); // __fieldPostblit does not get called
}
```
Postblits cannot be overloaded. If two or more postblits are defined, even if the signatures differ, the compiler assigns the `__postblit` name to both and later issues a conflicting function name error:
```
struct X
{
this(this) {}
this(this) const {} // error: function X.__postblit conflicts with function X.__postblit
}
```
The following describes the behavior of the qualified postblit definitions:
1. `const`. When a postblit is qualified with `const` as in `this(this) const;` or `const this(this);` then the postblit is successfully called on mutable (unqualified), `const`, and `immutable` objects, but the postblit cannot modify the object because it regards it as `const`; hence `const` postblits are of limited usefulness. Example:
```
struct S
{
int n;
this(this) const
{
import std.stdio : writeln;
writeln("postblit called");
//++n; // error: cannot modify this.n in `const` function
}
}
void main()
{
S s1;
auto s2 = s1;
const S s3;
auto s4 = s3;
immutable S s5;
auto s6 = s5;
}
```
2. `immutable`. When a postblit is qualified with `immutable` as in `this(this) immutable` or `immutable this(this)` the code is ill-formed. The `immutable` postblit passes the compilation phase but cannot be invoked. Example:
```
struct Y
{
// not invoked anywhere, no error is issued
this(this) immutable
{ }
}
struct S
{
this(this) immutable
{ }
}
void main()
{
S s1;
auto s2 = s1; // error: immutable method `__postblit` is not callable using a mutable object
const S s3;
auto s4 = s3; // error: immutable method `__postblit` is not callable using a mutable object
immutable S s5;
auto s6 = s5; // error: immutable method `__postblit` is not callable using a mutable object
}
```
3. `shared`. When a postblit is qualified with `shared` as in `this(this) shared` or `shared this(this)`, solely `shared` objects may invoke the postblit; attempts to postblit unshared objects will result in compile time errors:
```
struct S
{
this(this) shared
{ }
}
void main()
{
S s1;
auto s2 = s1; // error: shared method `__postblit` is not callable using a non-shared object
const S s3;
auto s4 = s3; // error: shared method `__postblit` is not callable using a non-shared object
immutable S s5;
auto s6 = s5; // error: shared method `__postblit` is not callable using a non-shared object
// calling the shared postblit on a shared object is accepted
shared S s7;
auto s8 = s7;
}
```
An unqualified postblit will get called even if the struct is instantiated as `immutable` or `const`, but the compiler issues an error if the struct is instantiated as `shared`:
```
struct S
{
int n;
this(this) { ++n; }
}
void main()
{
immutable S a; // shared S a; => error : non-shared method is not callable using a shared object
auto a2 = a;
import std.stdio: writeln;
writeln(a2.n); // prints 1
}
```
From a postblit perspective, qualifying the struct definition yields the same result as explicitly qualifying the postblit.
The following table lists all the possibilities of grouping qualifiers for a postblit associated with the type of object that needs to be used in order to successfully invoke the postblit:
Qualifier Groups| **object type the postblit can be invoked on** | **`const`** | **`immutable`** | **`shared`** |
| any object type | ✔ | | |
| uncallable | | ✔ | |
| shared object | | | ✔ |
| uncallable | ✔ | ✔ | |
| shared object | ✔ | | ✔ |
| uncallable | | ✔ | ✔ |
| uncallable | ✔ | ✔ | ✔ |
Note that when `const` and `immutable` are used to explicitly qualify a postblit, as in `this(this) const immutable;` or `const immutable this(this);` (the order in which the qualifiers are declared does not matter), the compiler generates a conflicting attribute error. However, declaring the struct as `const`/`immutable` and the postblit as `immutable`/`const` achieves the effect of applying both qualifiers to the postblit. In both cases the postblit is qualified with the more restrictive qualifier, which is `immutable`.
The postblits `__fieldPostblit` and `__aggrPostblit` are generated without any implicit qualifiers and are not considered struct members. This leads to the situation where qualifying an entire struct declaration with `const` or `immutable` does not have any impact on the above-mentioned postblits. However, since `__xpostblit` is a member of the struct and an alias of one of the other postblits, the qualifiers applied to the struct will affect the aliased postblit.
```
struct S
{
this(this)
{ }
}
// `__xpostblit` aliases the aggregated postblit so the `const` applies to it.
// However, the aggregated postblit calls the field postblit which does not have
// any qualifier applied, resulting in a qualifier mismatch error
const struct B
{
S a; // error : mutable method B.__fieldPostblit is not callable using a const object
this(this)
{ }
}
// `__xpostblit` aliases the field postblit; no error
const struct B2
{
S a;
}
// Similar to B
immutable struct C
{
S a; // error : mutable method C.__fieldPostblit is not callable using a immutable object
this(this)
{ }
}
// Similar to B2, compiles
immutable struct C2
{
S a;
}
```
In the above situations the errors do not contain line numbers because they refer to generated code.
Qualifying an entire struct as `shared` correctly propagates the attribute to the generated postblits:
```
shared struct A
{
this(this)
{
import std.stdio : writeln;
writeln("the shared postblit was called");
}
}
struct B
{
A a;
}
void main()
{
shared B b1;
auto b2 = b1;
}
```
Unions may have fields that have postblits. However, a union itself never has a postblit. Copying a union does not result in postblit calls for any fields. If those calls are desired, they must be inserted explicitly by the programmer:
```
struct S
{
int count;
this(this)
{
++count;
}
}
union U
{
S s;
}
void main()
{
U a = U.init;
U b = a;
assert(b.s.count == 0);
b.s.__postblit;
assert(b.s.count == 1);
}
```
Struct Destructors
------------------
Destructors are called when an object goes out of scope. Their purpose is to free up resources owned by the struct object.
Unions may have fields that have destructors. However, a union itself never has a destructor. When a union goes out of scope, destructors for its fields are not called. If those calls are desired, they must be inserted explicitly by the programmer:
```
struct S
{
~this()
{
import std.stdio;
writeln("S is being destructed");
}
}
union U
{
S s;
}
void main()
{
import std.stdio;
{
writeln("entering first scope");
U u = U.init;
scope (exit) writeln("exiting first scope");
}
{
writeln("entering second scope");
U u = U.init;
scope (exit)
{
writeln("exiting second scope");
destroy(u.s);
}
}
}
```
Struct Invariants
-----------------
```
StructInvariant:
invariant ( ) BlockStatement
invariant BlockStatement
invariant ( AssertArguments ) ;
```
*StructInvariant*s specify the relationships among the members of a struct instance. Those relationships must hold for any interactions with the instance from its public interface.
The invariant is in the form of a `const` member function. The invariant is defined to *hold* if all the [*AssertExpression*](expression#AssertExpression)s within the invariant that are executed succeed.
If the invariant does not hold, then the program enters an invalid state.
Any invariants for fields are applied before the struct invariant.
There may be multiple invariants in a struct. They are applied in lexical order.
*StructInvariant*s must hold at the exit of the struct constructor (if any), and at the entry of the struct destructor (if any).
*StructInvariant*s must hold at the entry and exit of all public or exported non-static member functions. The order of application of invariants is:
1. preconditions
2. invariant
3. function body
4. invariant
5. postconditions
The invariant need not hold if the struct instance is implicitly constructed using the default `.init` value.
```
struct Date
{
this(int d, int h)
{
day = d; // days are 1..31
hour = h; // hours are 0..23
}
invariant
{
assert(1 <= day && day <= 31);
assert(0 <= hour && hour < 24);
}
private:
int day;
int hour;
}
```
Public or exported non-static member functions cannot be called from within an invariant.
```
struct Foo
{
public void f() { }
private void g() { }
invariant
{
f(); // error, cannot call public member function from invariant
g(); // ok, g() is not public
}
}
```
**Undefined Behavior:** If the invariant does not hold and execution continues, the behavior is undefined.
**Implementation Defined:** 1. Whether the *StructInvariant* is executed at runtime or not. This is typically controlled with a compiler switch.
2. The behavior when the invariant does not hold is typically the same as for when [*AssertExpression*](expression#AssertExpression)s fail.
**Best Practices:** 1. Do not indirectly call exported or public member functions within a struct invariant, as this can result in infinite recursion.
2. Avoid reliance on side effects in the invariant, as the invariant may or may not be executed.
3. Avoid having mutable public fields of structs with invariants, as then the invariant cannot verify the public interface.
Identity Assignment Overload
----------------------------
While copy construction takes care of initializing an object from another object of the same type, assignment is defined as copying the contents of a source object over those of a destination object, calling the destination object's destructor if it has one in the process:
```
struct S { ... } // S has postblit or destructor
S s; // default construction of s
S t = s; // t is copy-constructed from s
t = s; // t is assigned from s
```
Struct assignment `t=s` is defined to be semantically equivalent to:
```
t.opAssign(s);
```
where `opAssign` is a member function of S:
```
ref S opAssign(ref S s)
{
S tmp = this; // bitcopy this into tmp
this = s; // bitcopy s into this
tmp.__dtor(); // call destructor on tmp
return this;
}
```
An identity assignment overload is required for a struct if one or more of these conditions hold:
* it has a [destructor](#struct-destructor)
* it has a [postblit](#struct-postblit)
* it has a field with an identity assignment overload
If an identity assignment overload is required and does not exist, an identity assignment overload function of the type `ref S opAssign(ref S)` will be automatically generated.
A user-defined one can implement the equivalent semantics, but can be more efficient.
One reason a custom `opAssign` might be more efficient is if the struct has a reference to a local buffer:
```
struct S
{
int[] buf;
int a;
ref S opAssign(ref const S s) return
{
a = s.a;
return this;
}
this(this)
{
buf = buf.dup;
}
}
```
Here, `S` has a temporary workspace `buf[]`. The normal postblit will pointlessly free and reallocate it. The custom `opAssign` will reuse the existing storage.
Nested Structs
--------------
A *nested struct* is a struct that is declared inside the scope of a function or a templated struct that has aliases to local functions as a template argument. Nested structs can have member functions. They have access to the context of their enclosing scope (via an added hidden field).
```
void foo()
{
int i = 7;
struct SS
{
int x,y;
int bar() { return x + i + 1; }
}
SS s;
s.x = 3;
s.bar(); // returns 11
}
```
A struct can be prevented from being nested by using the `static` attribute, but then it will not be able to access variables from its enclosing scope.
```
void foo()
{
int i = 7;
static struct SS
{
int x, y;
int bar()
{
return i; // error, SS is not a nested struct
}
}
}
```
Unions and Special Member Functions
-----------------------------------
Unions may not have postblits, destructors, or invariants.
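Per that rule, each of the following declarations is rejected if uncommented:

```
union U
{
    int i;
    //this(this) { } // error: unions may not have postblits
    //~this() { }    // error: unions may not have destructors
    //invariant { }  // error: unions may not have invariants
}
```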
d Conditional Compilation Conditional Compilation
=======================
**Contents** 1. [Version Condition](#version)
2. [Version Specification](#version-specification)
1. [Predefined Versions](#predefined-versions)
3. [Debug Condition](#debug)
1. [Debug Statement](#DebugStatement)
4. [Debug Specification](#debug_specification)
5. [Static If Condition](#staticif)
6. [Static Foreach](#staticforeach)
1. [`break` and `continue`](#break-continue)
7. [Static Assert](#static-assert)
*Conditional compilation* is the process of selecting which code to compile and which code to not compile.
```
ConditionalDeclaration:
Condition DeclarationBlock
Condition DeclarationBlock else DeclarationBlock
Condition : DeclDefsopt
Condition DeclarationBlock else : DeclDefsopt
ConditionalStatement:
Condition NoScopeNonEmptyStatement
Condition NoScopeNonEmptyStatement else NoScopeNonEmptyStatement
```
If the [*Condition*](#Condition) is satisfied, then the following *DeclarationBlock* or *Statement* is compiled in. If it is not satisfied, the *DeclarationBlock* or *Statement* after the optional `else` is compiled in.
Any *DeclarationBlock* or *Statement* that is not compiled in still must be syntactically correct.
No new scope is introduced, even if the *DeclarationBlock* or *Statement* is enclosed by `{ }`.
*ConditionalDeclaration*s and *ConditionalStatement*s can be nested.
The [*StaticAssert*](#StaticAssert) can be used to issue errors at compilation time for branches of the conditional compilation that are errors.
*Condition* comes in the following forms:
```
Condition:
VersionCondition
DebugCondition
StaticIfCondition
```
Version Condition
-----------------
```
VersionCondition:
version ( IntegerLiteral )
version ( Identifier )
version ( unittest )
version ( assert )
```
Versions enable multiple versions of a module to be implemented with a single source file.
The *VersionCondition* is satisfied if the *IntegerLiteral* is greater than or equal to the current *version level*, or if *Identifier* matches a *version identifier*.
The *version level* and *version identifier* can be set on the command line by the `-version` switch or in the module itself with a [*VersionSpecification*](#VersionSpecification), or they can be predefined by the compiler.
Version identifiers are in their own unique name space; they do not conflict with debug identifiers or other symbols in the module. Version identifiers defined in one module have no influence over other imported modules.
```
int k;
version (Demo) // compile in this code block for the demo version
{
int i;
int k; // error, k already defined
i = 3;
}
x = i; // uses the i declared above
```
```
version (X86)
{
... // implement custom inline assembler version
}
else
{
... // use default, but slow, version
}
```
The `version(unittest)` is satisfied if and only if the code is compiled with unit tests enabled (the [*-unittest*](https://dlang.org/dmd.html#switch-unittest) option on *dmd*).
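This makes it possible to compile in helpers that exist only for testing; a minimal sketch (the `makeTestData` name is illustrative):

```
version (unittest)
{
    // Only compiled when -unittest is passed.
    int[] makeTestData() { return [1, 2, 3]; }
}

unittest
{
    assert(makeTestData() == [1, 2, 3]);
}
```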
Version Specification
---------------------
```
VersionSpecification:
version = Identifier ;
version = IntegerLiteral ;
```
The version specification makes it straightforward to group a set of features under one major version, for example:
```
version (ProfessionalEdition)
{
version = FeatureA;
version = FeatureB;
version = FeatureC;
}
version (HomeEdition)
{
version = FeatureA;
}
...
version (FeatureB)
{
... implement Feature B ...
}
```
Version identifiers or levels may not be forward referenced:
```
version (Foo)
{
int x;
}
version = Foo; // error, Foo already used
```
*VersionSpecification*s may only appear at module scope.
While the debug and version conditions superficially behave the same, they are intended for very different purposes. Debug statements are for adding debug code that is removed for the release version. Version statements are to aid in portability and multiple release versions.
Here's an example of a *full* version as opposed to a *demo* version:
```
class Foo
{
int a, b;
version(full)
{
int extrafunctionality()
{
...
return 1; // extra functionality is supported
}
}
else // demo
{
int extrafunctionality()
{
return 0; // extra functionality is not supported
}
}
}
```
Various different version builds can be built with a parameter to version:
```
version(n) // add in version code if version level is >= n
{
... version code ...
}
version(identifier) // add in version code if version
// keyword is identifier
{
... version code ...
}
```
These are presumably set by the command line as `-version=n` and `-version=identifier`.
### Predefined Versions
Several environmental version identifiers and identifier name spaces are predefined for consistent usage. Version identifiers do not conflict with other identifiers in the code; they are in a separate name space. Predefined version identifiers are global, i.e. they apply to all modules being compiled and imported.
Predefined Version Identifiers| **Version Identifier** | **Description** |
| `DigitalMars` | DMD (Digital Mars D) is the compiler |
| `GNU` | GDC (GNU D Compiler) is the compiler |
| `LDC` | LDC (LLVM D Compiler) is the compiler |
| `SDC` | SDC (Stupid D Compiler) is the compiler |
| `Windows` | Microsoft Windows systems |
| `Win32` | Microsoft 32-bit Windows systems |
| `Win64` | Microsoft 64-bit Windows systems |
| `linux` | All Linux systems |
| `OSX` | macOS |
| `iOS` | iOS |
| `TVOS` | tvOS |
| `WatchOS` | watchOS |
| `FreeBSD` | FreeBSD |
| `OpenBSD` | OpenBSD |
| `NetBSD` | NetBSD |
| `DragonFlyBSD` | DragonFlyBSD |
| `BSD` | All other BSDs |
| `Solaris` | Solaris |
| `Posix` | All POSIX systems (includes Linux, FreeBSD, OS X, Solaris, etc.) |
| `AIX` | IBM Advanced Interactive eXecutive OS |
| `Haiku` | The Haiku operating system |
| `SkyOS` | The SkyOS operating system |
| `SysV3` | System V Release 3 |
| `SysV4` | System V Release 4 |
| `Hurd` | GNU Hurd |
| `Android` | The Android platform |
| `Emscripten` | The Emscripten platform |
| `PlayStation` | The PlayStation platform |
| `PlayStation4` | The PlayStation 4 platform |
| `Cygwin` | The Cygwin environment |
| `MinGW` | The MinGW environment |
| `FreeStanding` | An environment without an operating system (such as Bare-metal targets) |
| `CRuntime_Bionic` | Bionic C runtime |
| `CRuntime_DigitalMars` | DigitalMars C runtime |
| `CRuntime_Glibc` | Glibc C runtime |
| `CRuntime_Microsoft` | Microsoft C runtime |
| `CRuntime_Musl` | musl C runtime |
| `CRuntime_Newlib` | newlib C runtime |
| `CRuntime_UClibc` | uClibc C runtime |
| `CRuntime_WASI` | WASI C runtime |
| `CppRuntime_Clang` | Clang Cpp runtime |
| `CppRuntime_DigitalMars` | DigitalMars Cpp runtime |
| `CppRuntime_Gcc` | Gcc Cpp runtime |
| `CppRuntime_Microsoft` | Microsoft Cpp runtime |
| `CppRuntime_Sun` | Sun Cpp runtime |
| `X86` | Intel and AMD 32-bit processors |
| `X86_64` | Intel and AMD 64-bit processors |
| `ARM` | The ARM architecture (32-bit) (AArch32 et al) |
| `ARM_Thumb` | ARM in any Thumb mode |
| `ARM_SoftFloat` | The ARM `soft` floating point ABI |
| `ARM_SoftFP` | The ARM `softfp` floating point ABI |
| `ARM_HardFloat` | The ARM `hardfp` floating point ABI |
| `AArch64` | The Advanced RISC Machine architecture (64-bit) |
| `AsmJS` | The asm.js intermediate programming language |
| `AVR` | 8-bit Atmel AVR Microcontrollers |
| `Epiphany` | The Epiphany architecture |
| `PPC` | The PowerPC architecture, 32-bit |
| `PPC_SoftFloat` | The PowerPC soft float ABI |
| `PPC_HardFloat` | The PowerPC hard float ABI |
| `PPC64` | The PowerPC architecture, 64-bit |
| `IA64` | The Itanium architecture (64-bit) |
| `MIPS32` | The MIPS architecture, 32-bit |
| `MIPS64` | The MIPS architecture, 64-bit |
| `MIPS_O32` | The MIPS O32 ABI |
| `MIPS_N32` | The MIPS N32 ABI |
| `MIPS_O64` | The MIPS O64 ABI |
| `MIPS_N64` | The MIPS N64 ABI |
| `MIPS_EABI` | The MIPS EABI |
| `MIPS_SoftFloat` | The MIPS `soft-float` ABI |
| `MIPS_HardFloat` | The MIPS `hard-float` ABI |
| `MSP430` | The MSP430 architecture |
| `NVPTX` | The Nvidia Parallel Thread Execution (PTX) architecture, 32-bit |
| `NVPTX64` | The Nvidia Parallel Thread Execution (PTX) architecture, 64-bit |
| `RISCV32` | The RISC-V architecture, 32-bit |
| `RISCV64` | The RISC-V architecture, 64-bit |
| `SPARC` | The SPARC architecture, 32-bit |
| `SPARC_V8Plus` | The SPARC v8+ ABI |
| `SPARC_SoftFloat` | The SPARC soft float ABI |
| `SPARC_HardFloat` | The SPARC hard float ABI |
| `SPARC64` | The SPARC architecture, 64-bit |
| `S390` | The System/390 architecture, 32-bit |
| `SystemZ` | The System Z architecture, 64-bit |
| `HPPA` | The HP PA-RISC architecture, 32-bit |
| `HPPA64` | The HP PA-RISC architecture, 64-bit |
| `SH` | The SuperH architecture, 32-bit |
| `WebAssembly` | The WebAssembly virtual ISA (instruction set architecture), 32-bit |
| `WASI` | The WebAssembly System Interface |
| `Alpha` | The Alpha architecture |
| `Alpha_SoftFloat` | The Alpha soft float ABI |
| `Alpha_HardFloat` | The Alpha hard float ABI |
| `LittleEndian` | Byte order, least significant first |
| `BigEndian` | Byte order, most significant first |
| `ELFv1` | The Executable and Linkable Format v1 |
| `ELFv2` | The Executable and Linkable Format v2 |
| `D_BetterC` | [D as Better C](betterc) code (command line switch [*-betterC*](https://dlang.org/dmd.html#switch-betterC)) is being generated |
| `D_Exceptions` | Exception handling is supported. Evaluates to `false` when compiling with command line switch [*-betterC*](https://dlang.org/dmd.html#switch-betterC) |
| `D_ModuleInfo` | `ModuleInfo` is supported. Evaluates to `false` when compiling with command line switch [*-betterC*](https://dlang.org/dmd.html#switch-betterC) |
| `D_TypeInfo` | Runtime type information (a.k.a `TypeInfo`) is supported. Evaluates to `false` when compiling with command line switch [*-betterC*](https://dlang.org/dmd.html#switch-betterC) |
| `D_Coverage` | [Code coverage analysis](https://dlang.org/code_coverage.html) instrumentation (command line switch [*-cov*](https://dlang.org/dmd.html#switch-cov)) is being generated |
| `D_Ddoc` | [Ddoc](ddoc) documentation (command line switch [*-D*](https://dlang.org/dmd.html#switch-D)) is being generated |
| `D_InlineAsm_X86` | [Inline assembler](iasm) for X86 is implemented |
| `D_InlineAsm_X86_64` | [Inline assembler](iasm) for X86-64 is implemented |
| `D_LP64` | Pointers are 64 bits (command line switch [*-m64*](https://dlang.org/dmd.html#switch-m64)). (Do not confuse this with C's LP64 model) |
| `D_X32` | Pointers are 32 bits, but words are still 64 bits (x32 ABI) (This can be defined in parallel to `X86_64`) |
| `D_HardFloat` | The target hardware has a floating point unit |
| `D_SoftFloat` | The target hardware does not have a floating point unit |
| `D_PIC` | Position Independent Code (command line switch [*-fPIC*](https://dlang.org/dmd-linux.html#switch-fPIC)) is being generated |
| `D_SIMD` | [Vector extensions](simd) (via `__simd`) are supported |
| `D_AVX` | AVX Vector instructions are supported |
| `D_AVX2` | AVX2 Vector instructions are supported |
| `D_Version2` | This is a D version 2 compiler |
| `D_NoBoundsChecks` | Array bounds checks are disabled (command line switch [*-boundscheck=off*](https://dlang.org/dmd.html#switch-boundscheck)) |
| `D_ObjectiveC` | The target supports interfacing with Objective-C |
| `Core` | Defined when building the standard runtime |
| `Std` | Defined when building the standard library |
| `unittest` | [Unit tests](unittest) are enabled (command line switch [*-unittest*](https://dlang.org/dmd.html#switch-unittest)) |
| `assert` | Checks are being emitted for [*AssertExpression*](expression#AssertExpression)s |
| `none` | Never defined; used to just disable a section of code |
| `all` | Always defined; used as the opposite of `none` |
The following identifiers are defined, but are deprecated:
Predefined Version Identifiers (deprecated)| **Version Identifier** | **Description** |
| `darwin` | The Darwin operating system; use `OSX` instead |
| `Thumb` | ARM in Thumb mode; use `ARM_Thumb` instead |
| `S390X` | The System/390X architecture, 64-bit; use `SystemZ` instead |
Others will be added as they make sense and new implementations appear.
It is inevitable that the D language will evolve over time. Therefore, the version identifier namespace beginning with "D\_" is reserved for identifiers indicating D language specification or new feature conformance. Further, all identifiers derived from the ones listed above by appending any character(s) are reserved. This means that e.g. `ARM_foo` and `Windows_bar` are reserved while `foo_ARM` and `bar_Windows` are not.
Furthermore, predefined version identifiers from this list cannot be set from the command line or from version statements. (This prevents things like both `Windows` and `linux` being simultaneously set.)
Compiler vendor specific versions can be predefined if the trademarked vendor identifier prefixes it, as in:
```
version(DigitalMars_funky_extension)
{
...
}
```
It is important to use the right version identifier for the right purpose. For example, use the vendor identifier when using a vendor specific feature. Use the operating system identifier when using an operating system specific feature, etc.
Debug Condition
---------------
```
DebugCondition:
debug
debug ( IntegerLiteral )
debug ( Identifier )
```
Two versions of programs are commonly built, a release build and a debug build. The debug build includes extra error checking code, test harnesses, pretty-printing code, etc. The debug statement conditionally compiles in its statement body. It is D's way of doing what is done in C with `#ifdef DEBUG` / `#endif` pairs.
The `debug` condition is satisfied when the `-debug` switch is passed to the compiler or when the debug level is >= 1.
The `debug (` *IntegerLiteral* `)` condition is satisfied when the debug level is `>=` *IntegerLiteral*.
The `debug (` *Identifier* `)` condition is satisfied when the debug identifier matches *Identifier*.
```
class Foo
{
int a, b;
debug:
int flag;
}
```
### Debug Statement
A [*ConditionalStatement*](#ConditionalStatement) that has a [*DebugCondition*](#DebugCondition) is called a *DebugStatement*. *DebugStatements* have relaxed semantic checks in that `pure`, `@nogc`, `nothrow` and `@safe` checks are not done. Neither do *DebugStatements* influence the inference of `pure`, `@nogc`, `nothrow` and `@safe` attributes.
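For example, a `debug` block may perform I/O inside a `pure` function without affecting the function's attributes:

```
int triple(int x) pure
{
    debug
    {
        // Impure call permitted here; attribute checks are relaxed in debug code.
        import std.stdio : writeln;
        writeln("triple(", x, ")");
    }
    return 3 * x;
}
```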
**Undefined Behavior:** Since these checks are bypassed, it is up to the programmer to ensure the code is correct. For example, throwing an exception in a `nothrow` function is undefined behavior.
**Best Practices:** This enables the easy insertion of code to provide debugging help, by bypassing the otherwise stringent attribute checks. Never ship release code that has *DebugStatements* enabled.
Debug Specification
-------------------
```
DebugSpecification:
debug = Identifier ;
debug = IntegerLiteral ;
```
Debug identifiers and levels are set either by the command line switch `-debug` or by a *DebugSpecification*.
Debug specifications only affect the module they appear in, they do not affect any imported modules. Debug identifiers are in their own namespace, independent from version identifiers and other symbols.
It is illegal to forward reference a debug specification:
```
debug(foo) writeln("Foo");
debug = foo; // error, foo used before set
```
*DebugSpecification*s may only appear at module scope.
Various different debug builds can be built with a parameter to debug:
```
debug(IntegerLiteral) { } // add in debug code if debug level is >= IntegerLiteral
debug(identifier) { } // add in debug code if debug keyword is identifier
```
These are presumably set by the command line as `-debug=`*n* and `-debug=`*identifier*.
Static If Condition
-------------------
```
StaticIfCondition:
static if ( AssignExpression )
```
[*AssignExpression*](expression#AssignExpression) is implicitly converted to a boolean type, and is evaluated at compile time. The condition is satisfied if it evaluates to `true`. It is not satisfied if it evaluates to `false`.
It is an error if [*AssignExpression*](expression#AssignExpression) cannot be implicitly converted to a boolean type or if it cannot be evaluated at compile time.
*StaticIfCondition*s can appear in module, class, template, struct, union, or function scope. In function scope, the symbols referred to in the [*AssignExpression*](expression#AssignExpression) can be any that can normally be referenced by an expression at that point.
```
const int i = 3;
int j = 4;
static if (i == 3) // ok, at module scope
int x;
class C
{
const int k = 5;
static if (i == 3) // ok
int x;
else
long x;
static if (j == 3) // error, j is not a constant
int y;
static if (k == 5) // ok, k is in current scope
int z;
}
```
```
template Int(int i)
{
static if (i == 32)
alias Int = int;
else static if (i == 16)
alias Int = short;
else
static assert(0); // not supported
}
Int!(32) a; // a is an int
Int!(16) b; // b is a short
Int!(17) c; // error, static assert trips
```
A *StaticIfCondition* differs from an *IfStatement* in the following ways:
1. It can be used to conditionally compile declarations, not just statements.
2. It does not introduce a new scope even if `{ }` are used for conditionally compiled statements.
3. For unsatisfied conditions, the conditionally compiled code need only be syntactically correct. It does not have to be semantically correct.
4. It must be evaluatable at compile time.
Static Foreach
--------------
```
StaticForeach:
static AggregateForeach
static RangeForeach
StaticForeachDeclaration:
StaticForeach DeclarationBlock
StaticForeach : DeclDefsopt
StaticForeachStatement:
StaticForeach NoScopeNonEmptyStatement
```
The aggregate/range bounds are evaluated at compile time and turned into a sequence of compile-time entities by evaluating corresponding code with a [*ForeachStatement*](statement#ForeachStatement)/[*ForeachRangeStatement*](statement#ForeachRangeStatement) at compile time. The body of the `static foreach` is then copied a number of times that corresponds to the number of elements of the sequence. Within the i-th copy, the name of the `static foreach` variable is bound to the i-th entry of the sequence, either as an `enum` variable declaration (for constants) or an `alias` declaration (for symbols). (In particular, `static foreach` variables are never runtime variables.)
```
static foreach(i; [0, 1, 2, 3])
{
pragma(msg, i);
}
```
`static foreach` supports multiple variables in cases where the corresponding `foreach` statement supports them. (In this case, `static foreach` generates a compile-time sequence of tuples, and the tuples are subsequently unpacked during iteration).
```
static foreach(i, v; ['a', 'b', 'c', 'd'])
{
static assert(i + 'a' == v);
}
```
Like bodies of [*ConditionalDeclaration*](#ConditionalDeclaration)s, a `static foreach` body does not introduce a new scope. Therefore, it can be used to generate declarations:
```
import std.range : iota;
static foreach(i; iota(0, 3))
{
mixin(`enum x`, i, ` = i;`);
}
pragma(msg, x0, " ", x1," ", x2); // 0 1 2
```
If a new scope is desired for each expansion, use another set of braces:
```
static foreach(s; ["hi", "hey", "hello"])
{{
enum len = s.length; // local to each iteration
static assert(len <= 5);
}}
static assert(!is(typeof(len)));
```
### `break` and `continue`
As `static foreach` is a code generation construct and not a loop, `break` and `continue` cannot be used to change control flow within it. Instead of breaking or continuing a suitable enclosing statement, such a usage yields an error (this is to prevent misunderstandings).
```
int test(int x)
{
int r = -1;
switch(x)
{
static foreach(i; 0 .. 100)
{
case i:
r = i;
break; // error
}
default: break;
}
return r;
}
static foreach(i; 0 .. 200)
{
static assert(test(i) == (i < 100 ? i : -1));
}
```
An explicit `break`/`continue` label can be used to avoid this limitation. (Note that `static foreach` itself cannot be broken nor continued even if it is explicitly labeled.)
```
int test(int x)
{
int r = -1;
Lswitch: switch(x)
{
static foreach(i; 0 .. 100)
{
case i:
r = i;
break Lswitch;
}
default: break;
}
return r;
}
static foreach(i; 0 .. 200)
{
static assert(test(i) == (i<100 ? i : -1));
}
```
Static Assert
-------------
```
StaticAssert:
static assert ( AssertArguments );
```
The first [*AssignExpression*](expression#AssignExpression) is evaluated at compile time, and converted to a boolean value. If the value is true, the static assert is ignored. If the value is false, an error diagnostic is issued and the compile fails.
Unlike [*AssertExpression*](expression#AssertExpression)s, *StaticAssert*s are always checked and evaluated by the compiler unless they appear in an unsatisfied conditional.
```
void foo()
{
if (0)
{
assert(0); // never trips
static assert(0); // always trips
}
version (BAR)
{
}
else
{
static assert(0); // trips when version BAR is not defined
}
}
```
*StaticAssert* is a useful tool for drawing attention to conditional configurations not supported in the code.
The optional second [*AssignExpression*](expression#AssignExpression) can be used to supply additional information, such as a text string, that will be printed out along with the error diagnostic.
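For example, a sketch of guarding unsupported configurations with messages (the conditions are illustrative):
```
static assert(size_t.sizeof == 8, "this code assumes a 64-bit target");

version (LittleEndian) { }
else
    static assert(0, "this module supports only little-endian targets");
```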
d rt.sections_darwin_64 rt.sections\_darwin\_64
=======================
Written in the D programming language. This module provides Darwin 64-bit specific support for sections.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Jacob Carlborg
Source
[rt/sections\_darwin\_64.d](https://github.com/dlang/druntime/blob/master/src/rt/sections_darwin_64.d)
d rt.arrayassign rt.arrayassign
==============
Implementation of array assignment support routines.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Walter Bright, Kenji Hara
Source
[rt/arrayassign.d](https://github.com/dlang/druntime/blob/master/src/rt/arrayassign.d)
void[] **\_d\_arrayassign**(TypeInfo ti, void[] from, void[] to);
Kept for backward binary compatibility. This function can be removed in the future.
void[] **\_d\_arrayassign\_l**(TypeInfo ti, void[] src, void[] dst, void\* ptmp);
Does array assignment (not construction) from another lvalue array of the same element type. Handles overlapping copies.
Input
| | |
| --- | --- |
| ti | TypeInfo of the element type. |
| dst | Points to target memory. Its `.length` is the element count, not the byte length. |
| src | Points to source memory. Its `.length` is the element count, not the byte length. |
| ptmp | Temporary memory for element swapping. |
void[] **\_d\_arrayassign\_r**(TypeInfo ti, void[] src, void[] dst, void\* ptmp);
Does array assignment (not construction) from another rvalue array of the same element type.
Input
| | |
| --- | --- |
| ti | TypeInfo of the element type. |
| dst | Points to target memory. Its `.length` is the element count, not the byte length. |
| src | Points to source memory. Its `.length` is the element count, not the byte length. It is always allocated on the stack and never overlaps with dst. |
| ptmp | Temporary memory for element swapping. |
void[] **\_d\_arrayctor**(TypeInfo ti, void[] from, void[] to);
Does array initialization (not assignment) from another array of the same element type. ti is the element type.
void\* **\_d\_arraysetassign**(void\* p, void\* value, int count, TypeInfo ti);
Does assignment to an array: `p[0 .. count] = value;`.
void\* **\_d\_arraysetctor**(void\* p, void\* value, int count, TypeInfo ti);
Does construction of an array; conceptually `ti[count] p = value;`, i.e. constructs `count` elements of element type `ti` at `p` from `value`.
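These hooks are called from compiler-generated code rather than directly by user code. As a rough sketch of the source-level operations they back (assuming the classic lowering; the exact hook selected is a compiler implementation detail):
```
struct S
{
    int x;
    ~this() {} // a destructor forces the runtime-assisted path
}

void example(S[] a, S[] b, S v)
{
    a[] = b[]; // element-wise array assignment, an _d_arrayassign-style hook
    a[] = v;   // set-assignment, an _d_arraysetassign-style hook
}
```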
d std.algorithm.mutation std.algorithm.mutation
======================
This is a submodule of [`std.algorithm`](std_algorithm). It contains generic mutation algorithms.
Cheat Sheet
| Function Name | Description |
| --- | --- |
| [`bringToFront`](#bringToFront) | If `a = [1, 2, 3]` and `b = [4, 5, 6, 7]`, `bringToFront(a, b)` leaves `a = [4, 5, 6]` and `b = [7, 1, 2, 3]`. |
| [`copy`](#copy) | Copies a range to another. If `a = [1, 2, 3]` and `b = new int[5]`, then `copy(a, b)` leaves `b = [1, 2, 3, 0, 0]` and returns `b[3 .. $]`. |
| [`fill`](#fill) | Fills a range with a pattern, e.g., if `a = new int[3]`, then `fill(a, 4)` leaves `a = [4, 4, 4]` and `fill(a, [3, 4])` leaves `a = [3, 4, 3]`. |
| [`initializeAll`](#initializeAll) | If `a = [1.2, 3.4]`, then `initializeAll(a)` leaves `a = [double.init, double.init]`. |
| [`move`](#move) | `move(a, b)` moves `a` into `b`. `move(a)` reads `a` destructively when necessary. |
| [`moveEmplace`](#moveEmplace) | Similar to `move` but assumes `target` is uninitialized. |
| [`moveAll`](#moveAll) | Moves all elements from one range to another. |
| [`moveEmplaceAll`](#moveEmplaceAll) | Similar to `moveAll` but assumes all elements in `target` are uninitialized. |
| [`moveSome`](#moveSome) | Moves as many elements as possible from one range to another. |
| [`moveEmplaceSome`](#moveEmplaceSome) | Similar to `moveSome` but assumes all elements in `target` are uninitialized. |
| [`remove`](#remove) | Removes elements from a range in-place, and returns the shortened range. |
| [`reverse`](#reverse) | If `a = [1, 2, 3]`, `reverse(a)` changes it to `[3, 2, 1]`. |
| [`strip`](#strip) | Strips all leading and trailing elements equal to a value, or that satisfy a predicate. If `a = [1, 1, 0, 1, 1]`, then `strip(a, 1)` and `strip!(e => e == 1)(a)` return `[0]`. |
| [`stripLeft`](#stripLeft) | Strips all leading elements equal to a value, or that satisfy a predicate. If `a = [1, 1, 0, 1, 1]`, then `stripLeft(a, 1)` and `stripLeft!(e => e == 1)(a)` return `[0, 1, 1]`. |
| [`stripRight`](#stripRight) | Strips all trailing elements equal to a value, or that satisfy a predicate. If `a = [1, 1, 0, 1, 1]`, then `stripRight(a, 1)` and `stripRight!(e => e == 1)(a)` return `[1, 1, 0]`. |
| [`swap`](#swap) | Swaps two values. |
| [`swapAt`](#swapAt) | Swaps two values by indices. |
| [`swapRanges`](#swapRanges) | Swaps all elements of two ranges. |
| [`uninitializedFill`](#uninitializedFill) | Fills a range (assumed uninitialized) with a value. |
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
[Andrei Alexandrescu](http://erdani.com)
Source
[std/algorithm/mutation.d](https://github.com/dlang/phobos/blob/master/std/algorithm/mutation.d)
size\_t **bringToFront**(InputRange, ForwardRange)(InputRange front, ForwardRange back)
Constraints: if (isInputRange!InputRange && isForwardRange!ForwardRange);
`bringToFront` takes two ranges `front` and `back`, which may be of different types. Considering the concatenation of `front` and `back` one unified range, `bringToFront` rotates that unified range such that all elements in `back` are brought to the beginning of the unified range. The relative ordering of elements in `front` and `back`, respectively, remains unchanged.
The `bringToFront` function treats strings at the code unit level and it is not concerned with Unicode character integrity. `bringToFront` is designed as a function for moving elements in ranges, not as a string function.
Performs Ο(`max(front.length, back.length)`) evaluations of `swap`.
The `bringToFront` function can rotate elements in one buffer left or right, swap buffers of equal length, and even move elements across disjoint buffers of different types and different lengths.
Preconditions
Either `front` and `back` are disjoint, or `back` is reachable from `front` and `front` is not reachable from `back`.
Parameters:
| | |
| --- | --- |
| InputRange `front` | an [input range](std_range_primitives#isInputRange) |
| ForwardRange `back` | a [forward range](std_range_primitives#isForwardRange) |
Returns:
The number of elements brought to the front, i.e., the length of `back`.
See Also:
[STL's `rotate`](http://en.cppreference.com/w/cpp/algorithm/rotate)
Examples:
The simplest use of `bringToFront` is for rotating elements in a buffer. For example:
```
auto arr = [4, 5, 6, 7, 1, 2, 3];
auto p = bringToFront(arr[0 .. 4], arr[4 .. $]);
writeln(p); // arr.length - 4
writeln(arr); // [1, 2, 3, 4, 5, 6, 7]
```
Examples:
The `front` range may actually "step over" the `back` range. This is very useful with forward ranges that cannot compute comfortably right-bounded subranges like `arr[0 .. 4]` above. In the example below, `r2` is a right subrange of `r1`.
```
import std.algorithm.comparison : equal;
import std.container : SList;
import std.range.primitives : popFrontN;
auto list = SList!(int)(4, 5, 6, 7, 1, 2, 3);
auto r1 = list[];
auto r2 = list[]; popFrontN(r2, 4);
assert(equal(r2, [ 1, 2, 3 ]));
bringToFront(r1, r2);
assert(equal(list[], [ 1, 2, 3, 4, 5, 6, 7 ]));
```
Examples:
Elements can be swapped across ranges of different types:
```
import std.algorithm.comparison : equal;
import std.container : SList;
auto list = SList!(int)(4, 5, 6, 7);
auto vec = [ 1, 2, 3 ];
bringToFront(list[], vec);
assert(equal(list[], [ 1, 2, 3, 4 ]));
assert(equal(vec, [ 5, 6, 7 ]));
```
Examples:
Unicode integrity is not preserved:
```
import std.string : representation;
auto ar = representation("a".dup);
auto br = representation("ç".dup);
bringToFront(ar, br);
auto a = cast(char[]) ar;
auto b = cast(char[]) br;
// Illegal UTF-8
writeln(a); // "\303"
// Illegal UTF-8
writeln(b); // "\247a"
```
TargetRange **copy**(SourceRange, TargetRange)(SourceRange source, TargetRange target)
Constraints: if (isInputRange!SourceRange && isOutputRange!(TargetRange, ElementType!SourceRange));
Copies the content of `source` into `target` and returns the remaining (unfilled) part of `target`.
Preconditions
`target` shall have enough room to accommodate the entirety of `source`.
Parameters:
| | |
| --- | --- |
| SourceRange `source` | an [input range](std_range_primitives#isInputRange) |
| TargetRange `target` | an output range |
Returns:
The unfilled part of target
Examples:
```
int[] a = [ 1, 5 ];
int[] b = [ 9, 8 ];
int[] buf = new int[](a.length + b.length + 10);
auto rem = a.copy(buf); // copy a into buf
rem = b.copy(rem); // copy b into remainder of buf
writeln(buf[0 .. a.length + b.length]); // [1, 5, 9, 8]
assert(rem.length == 10); // unused slots in buf
```
Examples:
As long as the target range elements support assignment from source range elements, different types of ranges are accepted:
```
float[] src = [ 1.0f, 5 ];
double[] dest = new double[src.length];
src.copy(dest);
```
Examples:
To copy at most `n` elements from a range, you may want to use [`std.range.take`](std_range#take):
```
import std.range;
int[] src = [ 1, 5, 8, 9, 10 ];
auto dest = new int[](3);
src.take(dest.length).copy(dest);
writeln(dest); // [1, 5, 8]
```
Examples:
To copy just those elements from a range that satisfy a predicate, use [`filter`](#filter):
```
import std.algorithm.iteration : filter;
int[] src = [ 1, 5, 8, 9, 10, 1, 2, 0 ];
auto dest = new int[src.length];
auto rem = src
.filter!(a => (a & 1) == 1)
.copy(dest);
writeln(dest[0 .. $ - rem.length]); // [1, 5, 9, 1]
```
Examples:
[`std.range.retro`](std_range#retro) can be used to achieve behavior similar to [STL's `copy_backward`](http://en.cppreference.com/w/cpp/algorithm/copy_backward):
```
import std.algorithm, std.range;
int[] src = [1, 2, 4];
int[] dest = [0, 0, 0, 0, 0];
src.retro.copy(dest.retro);
writeln(dest); // [0, 0, 1, 2, 4]
```
void **fill**(Range, Value)(auto ref Range range, auto ref Value value)
Constraints: if (isInputRange!Range && is(typeof(range.front = value)) || isSomeChar!Value && is(typeof(range[] = value)));
void **fill**(InputRange, ForwardRange)(InputRange range, ForwardRange filler)
Constraints: if (isInputRange!InputRange && (isForwardRange!ForwardRange || isInputRange!ForwardRange && isInfinite!ForwardRange) && is(typeof(InputRange.init.front = ForwardRange.init.front)));
Assigns `value` to each element of input range `range`.
Alternatively, instead of using a single `value` to fill the `range`, a `filler` [forward range](std_range_primitives#isForwardRange) can be provided. The lengths of `filler` and `range` do not need to match, but `filler` must not be empty.
Parameters:
| | |
| --- | --- |
| Range `range` | An [input range](std_range_primitives#isInputRange) that exposes references to its elements and has assignable elements |
| Value `value` | Assigned to each element of range |
| ForwardRange `filler` | A [forward range](std_range_primitives#isForwardRange) representing the fill pattern. |
Throws:
If `filler` is empty.
See Also:
[`uninitializedFill`](#uninitializedFill) [`initializeAll`](#initializeAll)
Examples:
```
int[] a = [ 1, 2, 3, 4 ];
fill(a, 5);
writeln(a); // [5, 5, 5, 5]
```
Examples:
```
int[] a = [ 1, 2, 3, 4, 5 ];
int[] b = [ 8, 9 ];
fill(a, b);
writeln(a); // [8, 9, 8, 9, 8]
```
void **initializeAll**(Range)(Range range)
Constraints: if (isInputRange!Range && hasLvalueElements!Range && hasAssignableElements!Range);
void **initializeAll**(Range)(Range range)
Constraints: if (is(Range == char[]) || is(Range == wchar[]));
Initializes all elements of `range` with their `.init` value. Assumes that the elements of the range are uninitialized.
Parameters:
| | |
| --- | --- |
| Range `range` | An [input range](std_range_primitives#isInputRange) that exposes references to its elements and has assignable elements |
See Also:
[`fill`](#fill) [`uninitializedFill`](#uninitializedFill)
Examples:
```
import core.stdc.stdlib : malloc, free;
struct S
{
int a = 10;
}
auto s = (cast(S*) malloc(5 * S.sizeof))[0 .. 5];
initializeAll(s);
writeln(s); // [S(10), S(10), S(10), S(10), S(10)]
scope(exit) free(s.ptr);
```
void **move**(T)(ref T source, ref T target);
T **move**(T)(return ref scope T source);
Moves `source` into `target`, via a destructive copy when necessary.
If `T` is a struct with a destructor or postblit defined, source is reset to its `.init` value after it is moved into target, otherwise it is left unchanged.
Preconditions
If source has internal pointers that point to itself and doesn't define opPostMove, it cannot be moved, and will trigger an assertion failure.
Parameters:
| | |
| --- | --- |
| T `source` | Data to copy. |
| T `target` | Where to copy into. The destructor, if any, is invoked before the copy is performed. |
Examples:
For non-struct types, `move` just performs `target = source`:
```
Object obj1 = new Object;
Object obj2 = obj1;
Object obj3;
move(obj2, obj3);
assert(obj3 is obj1);
// obj2 unchanged
assert(obj2 is obj1);
```
Examples:
```
// Structs without destructors are simply copied
struct S1
{
int a = 1;
int b = 2;
}
S1 s11 = { 10, 11 };
S1 s12;
move(s11, s12);
writeln(s12); // S1(10, 11)
writeln(s11); // s12
// But structs with destructors or postblits are reset to their .init value
// after copying to the target.
struct S2
{
int a = 1;
int b = 2;
~this() pure nothrow @safe @nogc { }
}
S2 s21 = { 3, 4 };
S2 s22;
move(s21, s22);
writeln(s21); // S2(1, 2)
writeln(s22); // S2(3, 4)
```
Examples:
Non-copyable structs can still be moved:
```
struct S
{
int a = 1;
@disable this(this);
~this() pure nothrow @safe @nogc {}
}
S s1;
s1.a = 2;
S s2 = move(s1);
writeln(s1.a); // 1
writeln(s2.a); // 2
```
Examples:
`opPostMove` will be called if defined:
```
struct S
{
int a;
void opPostMove(const ref S old)
{
writeln(a); // old.a
a++;
}
}
S s1;
s1.a = 41;
S s2 = move(s1);
writeln(s2.a); // 42
```
pure @system void **moveEmplace**(T)(ref T source, ref T target);
Similar to [`move`](#move) but assumes `target` is uninitialized. This is more efficient because `source` can be blitted over `target` without destroying or initializing it first.
Parameters:
| | |
| --- | --- |
| T `source` | value to be moved into target |
| T `target` | uninitialized value to be filled by source |
Examples:
```
static struct Foo
{
pure nothrow @nogc:
this(int* ptr) { _ptr = ptr; }
~this() { if (_ptr) ++*_ptr; }
int* _ptr;
}
int val;
Foo foo1 = void; // uninitialized
auto foo2 = Foo(&val); // initialized
assert(foo2._ptr is &val);
// Using `move(foo2, foo1)` would have an undefined effect because it would destroy
// the uninitialized foo1.
// moveEmplace directly overwrites foo1 without destroying or initializing it first.
moveEmplace(foo2, foo1);
assert(foo1._ptr is &val);
assert(foo2._ptr is null);
writeln(val); // 0
```
InputRange2 **moveAll**(InputRange1, InputRange2)(InputRange1 src, InputRange2 tgt)
Constraints: if (isInputRange!InputRange1 && isInputRange!InputRange2 && is(typeof(move(src.front, tgt.front))));
Calls `move(a, b)` for each element `a` in `src` and the corresponding element `b` in `tgt`, in increasing order.
Preconditions
`walkLength(src) <= walkLength(tgt)`. This precondition will be asserted. If you cannot ensure there is enough room in `tgt` to accommodate all of `src` use [`moveSome`](#moveSome) instead.
Parameters:
| | |
| --- | --- |
| InputRange1 `src` | An [input range](std_range_primitives#isInputRange) with movable elements. |
| InputRange2 `tgt` | An [input range](std_range_primitives#isInputRange) with elements into which the elements from `src` can be moved. |
Returns:
The leftover portion of `tgt` after all elements from `src` have been moved.
Examples:
```
int[3] a = [ 1, 2, 3 ];
int[5] b;
assert(moveAll(a[], b[]) is b[3 .. $]);
writeln(a[]); // b[0 .. 3]
int[3] cmp = [ 1, 2, 3 ];
writeln(a[]); // cmp[]
```
@system InputRange2 **moveEmplaceAll**(InputRange1, InputRange2)(InputRange1 src, InputRange2 tgt)
Constraints: if (isInputRange!InputRange1 && isInputRange!InputRange2 && is(typeof(moveEmplace(src.front, tgt.front))));
Similar to [`moveAll`](#moveAll) but assumes all elements in `tgt` are uninitialized. Uses [`moveEmplace`](#moveEmplace) to move elements from `src` over elements from `tgt`.
Examples:
```
static struct Foo
{
~this() pure nothrow @nogc { if (_ptr) ++*_ptr; }
int* _ptr;
}
int[3] refs = [0, 1, 2];
Foo[3] src = [Foo(&refs[0]), Foo(&refs[1]), Foo(&refs[2])];
Foo[5] dst = void;
auto tail = moveEmplaceAll(src[], dst[]); // move 3 values from src over dst
assert(tail.length == 2); // returns remaining uninitialized values
initializeAll(tail);
import std.algorithm.searching : all;
assert(src[].all!(e => e._ptr is null));
assert(dst[0 .. 3].all!(e => e._ptr !is null));
```
Tuple!(InputRange1, InputRange2) **moveSome**(InputRange1, InputRange2)(InputRange1 src, InputRange2 tgt)
Constraints: if (isInputRange!InputRange1 && isInputRange!InputRange2 && is(typeof(move(src.front, tgt.front))));
Calls `move(a, b)` for each element `a` in `src` and the corresponding element `b` in `tgt`, in increasing order, stopping when either range has been exhausted.
Parameters:
| | |
| --- | --- |
| InputRange1 `src` | An [input range](std_range_primitives#isInputRange) with movable elements. |
| InputRange2 `tgt` | An [input range](std_range_primitives#isInputRange) with elements into which the elements from `src` can be moved. |
Returns:
The leftover portions of the two ranges after one or the other of the ranges have been exhausted.
Examples:
```
int[5] a = [ 1, 2, 3, 4, 5 ];
int[3] b;
assert(moveSome(a[], b[])[0] is a[3 .. $]);
writeln(a[0 .. 3]); // b
writeln(a); // [1, 2, 3, 4, 5]
```
@system Tuple!(InputRange1, InputRange2) **moveEmplaceSome**(InputRange1, InputRange2)(InputRange1 src, InputRange2 tgt)
Constraints: if (isInputRange!InputRange1 && isInputRange!InputRange2 && is(typeof(move(src.front, tgt.front))));
Same as [`moveSome`](#moveSome) but assumes all elements in `tgt` are uninitialized. Uses [`moveEmplace`](#moveEmplace) to move elements from `src` over elements from `tgt`.
Examples:
```
static struct Foo
{
~this() pure nothrow @nogc { if (_ptr) ++*_ptr; }
int* _ptr;
}
int[4] refs = [0, 1, 2, 3];
Foo[4] src = [Foo(&refs[0]), Foo(&refs[1]), Foo(&refs[2]), Foo(&refs[3])];
Foo[3] dst = void;
auto res = moveEmplaceSome(src[], dst[]);
writeln(res.length); // 2
import std.algorithm.searching : all;
assert(src[0 .. 3].all!(e => e._ptr is null));
assert(src[3]._ptr !is null);
assert(dst[].all!(e => e._ptr !is null));
```
enum **SwapStrategy**: int;
Defines the swapping strategy for algorithms that need to swap elements in a range (such as partition and sort). The strategy concerns the swapping of elements that are not the core concern of the algorithm. For example, consider an algorithm that sorts `[ "abc", "b", "aBc" ]` according to `toUpper(a) < toUpper(b)`. That algorithm might choose to swap the two equivalent strings `"abc"` and `"aBc"`. That does not affect the sorting since both `["abc", "aBc", "b" ]` and `[ "aBc", "abc", "b" ]` are valid outcomes.
Some situations require that the algorithm must NOT ever change the relative ordering of equivalent elements (in the example above, only `[ "abc", "aBc", "b" ]` would be the correct result). Such algorithms are called **stable**. If the ordering algorithm may swap equivalent elements discretionarily, the ordering is called **unstable**.
Yet another class of algorithms may choose an intermediate tradeoff by being stable only on a well-defined subrange of the range. There is no established terminology for such behavior; this library calls it **semistable**.
Generally, the `stable` ordering strategy may be more costly in time and/or space than the other two because it imposes additional constraints. Similarly, `semistable` may be costlier than `unstable`. As (semi-)stability is not needed very often, the ordering algorithms in this module parameterized by `SwapStrategy` all choose `SwapStrategy.unstable` as the default.
Examples:
```
int[] a = [0, 1, 2, 3];
writeln(remove!(SwapStrategy.stable)(a, 1)); // [0, 2, 3]
a = [0, 1, 2, 3];
writeln(remove!(SwapStrategy.unstable)(a, 1)); // [0, 3, 2]
```
Examples:
```
import std.algorithm.sorting : partition;
// Put stuff greater than 3 on the left
auto arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
writeln(partition!(a => a > 3, SwapStrategy.stable)(arr)); // [1, 2, 3]
writeln(arr); // [4, 5, 6, 7, 8, 9, 10, 1, 2, 3]
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
writeln(partition!(a => a > 3, SwapStrategy.semistable)(arr)); // [2, 3, 1]
writeln(arr); // [4, 5, 6, 7, 8, 9, 10, 2, 3, 1]
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
writeln(partition!(a => a > 3, SwapStrategy.unstable)(arr)); // [3, 2, 1]
writeln(arr); // [10, 9, 8, 4, 5, 6, 7, 3, 2, 1]
```
**unstable**
Allows free swapping of elements as long as the output satisfies the algorithm's requirements.
**semistable**
In algorithms partitioning ranges in two, preserve relative ordering of elements only to the left of the partition point.
**stable**
Preserve the relative ordering of elements to the largest extent allowed by the algorithm's requirements.
Range **remove**(SwapStrategy s = SwapStrategy.stable, Range, Offset...)(Range range, Offset offset)
Constraints: if (Offset.length >= 1 && allSatisfy!(isValidIntegralTuple, Offset));
Eliminates elements at given offsets from `range` and returns the shortened range.
For example, here is how to remove a single element from an array:
```
string[] a = [ "a", "b", "c", "d" ];
a = a.remove(1); // remove element at offset 1
assert(a == [ "a", "c", "d"]);
```
Note that `remove` does not change the length of the original range directly; instead, it returns the shortened range. If its return value is not assigned to the original range, the original range will retain its original length, though its contents will have changed:
```
int[] a = [ 3, 5, 7, 8 ];
assert(remove(a, 1) == [ 3, 7, 8 ]);
assert(a == [ 3, 7, 8, 8 ]);
```
The element at offset `1` has been removed and the rest of the elements have shifted up to fill its place, however, the original array remains of the same length. This is because all functions in `std.algorithm` only change *content*, not *topology*. The value `8` is repeated because [`move`](#move) was invoked to rearrange elements, and on integers `move` simply copies the source to the destination. To replace `a` with the effect of the removal, simply assign the slice returned by `remove` to it, as shown in the first example.
Multiple indices can be passed into `remove`. In that case, elements at the respective indices are all removed. The indices must be passed in increasing order, otherwise an exception occurs.
```
int[] a = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
assert(remove(a, 1, 3, 5) ==
[ 0, 2, 4, 6, 7, 8, 9, 10 ]);
```
(Note that all indices refer to slots in the *original* array, not in the array as it is being progressively shortened.)
Tuples of two integral offsets can be used to remove a range of indices:
```
int[] a = [ 3, 4, 5, 6, 7];
assert(remove(a, 1, tuple(1, 3), 9) == [ 3, 6, 7 ]);
```
The tuple passes in a range closed to the left and open to the right (consistent with built-in slices), e.g. `tuple(1, 3)` means indices `1` and `2` but not `3`.
Finally, any combination of integral offsets and tuples composed of two integral offsets can be passed in:
```
int[] a = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
assert(remove(a, 1, tuple(3, 5), 9) == [ 0, 2, 5, 6, 7, 8, 10 ]);
```
In this case, the slots at positions 1, 3, 4, and 9 are removed from the array.
If the need is to remove some elements in the range but the order of the remaining elements does not have to be preserved, you may want to pass `SwapStrategy.unstable` to `remove`.
```
int[] a = [ 0, 1, 2, 3 ];
assert(remove!(SwapStrategy.unstable)(a, 1) == [ 0, 3, 2 ]);
```
In the case above, the element at slot `1` is removed, but replaced with the last element of the range. Taking advantage of the relaxation of the stability requirement, `remove` moved elements from the end of the array over the slots to be removed. This way there is less data movement to be done, which improves the execution time of the function.
The function `remove` works on bidirectional ranges that have assignable lvalue elements. The moving strategy is (listed from fastest to slowest):
* If `s == SwapStrategy.unstable && isRandomAccessRange!Range && hasLength!Range && hasLvalueElements!Range`, then elements are moved from the end of the range into the slots to be filled. In this case, the absolute minimum of moves is performed.
* Otherwise, if `s == SwapStrategy.unstable && isBidirectionalRange!Range && hasLength!Range && hasLvalueElements!Range`, then elements are still moved from the end of the range, but time is spent on advancing between slots by repeated calls to `range.popFront`.
* Otherwise, elements are moved incrementally towards the front of `range`; a given element is never moved several times, but more elements are moved than in the previous cases.
Parameters:
| | |
| --- | --- |
| s | a SwapStrategy to determine if the original order needs to be preserved |
| Range `range` | a [bidirectional range](std_range_primitives#isBidirectionalRange) with a length member |
| Offset `offset` | which element(s) to remove |
Returns:
A range containing all of the elements of range with offset removed.
Range **remove**(alias pred, SwapStrategy s = SwapStrategy.stable, Range)(Range range);
Reduces the length of the [bidirectional range](std_range_primitives#isBidirectionalRange) `range` by removing elements that satisfy `pred`. If `s == SwapStrategy.unstable`, elements are moved from the right end of the range over the elements to eliminate. If `s == SwapStrategy.stable` (the default), elements are moved progressively to the front such that their relative order is preserved. Returns the filtered range.
Parameters:
| | |
| --- | --- |
| Range `range` | a bidirectional ranges with lvalue elements or mutable character arrays |
Returns:
the range with all of the elements where `pred` is `true` removed
Examples:
```
static immutable base = [1, 2, 3, 2, 4, 2, 5, 2];
int[] arr = base[].dup;
// using a string-based predicate
writeln(remove!("a == 2")(arr)); // [1, 3, 4, 5]
// The original array contents have been modified,
// so we need to reset it to its original state.
// The length is unmodified however.
arr[] = base[];
// using a lambda predicate
writeln(remove!(a => a == 2)(arr)); // [1, 3, 4, 5]
```
Range **reverse**(Range)(Range r)
Constraints: if (isBidirectionalRange!Range && (hasSwappableElements!Range || hasAssignableElements!Range && hasLength!Range && isRandomAccessRange!Range || isNarrowString!Range && isAssignable!(ElementType!Range)));
Reverses `r` in-place. Performs `r.length / 2` evaluations of `swap`. UTF sequences consisting of multiple code units are preserved properly.
Parameters:
| | |
| --- | --- |
| Range `r` | a [bidirectional range](std_range_primitives#isBidirectionalRange) with either swappable elements, a random access range with a length member, or a narrow string |
Returns:
`r`
Note
When passing a string with Unicode modifiers on characters, such as `\u0301`, this function will not properly keep the position of the modifier. For example, reversing `ba\u0301d` ("bád") will result in `d\u0301ab` ("d́ab") instead of `da\u0301b` ("dáb").
See Also:
[`std.range.retro`](std_range#retro) for a lazy reverse without changing `r`
Examples:
```
int[] arr = [ 1, 2, 3 ];
writeln(arr.reverse); // [3, 2, 1]
```
Examples:
```
char[] arr = "hello\U00010143\u0100\U00010143".dup;
writeln(arr.reverse); // "\U00010143\u0100\U00010143olleh"
```
Range **strip**(Range, E)(Range range, E element)
Constraints: if (isBidirectionalRange!Range && is(typeof(range.front == element) : bool));
Range **strip**(alias pred, Range)(Range range)
Constraints: if (isBidirectionalRange!Range && is(typeof(pred(range.back)) : bool));
Range **stripLeft**(Range, E)(Range range, E element)
Constraints: if (isInputRange!Range && is(typeof(range.front == element) : bool));
Range **stripLeft**(alias pred, Range)(Range range)
Constraints: if (isInputRange!Range && is(typeof(pred(range.front)) : bool));
Range **stripRight**(Range, E)(Range range, E element)
Constraints: if (isBidirectionalRange!Range && is(typeof(range.back == element) : bool));
Range **stripRight**(alias pred, Range)(Range range)
Constraints: if (isBidirectionalRange!Range && is(typeof(pred(range.back)) : bool));
The strip group of functions allow stripping of either leading, trailing, or both leading and trailing elements.
The `stripLeft` function will strip the `front` of the range, the `stripRight` function will strip the `back` of the range, while the `strip` function will strip both the `front` and `back` of the range.
Note that the `strip` and `stripRight` functions require the range to be a [bidirectional range](std_range_primitives#isBidirectionalRange).
All of these functions come in two varieties: one takes a target element, where the range will be stripped as long as this element can be found. The other takes a lambda predicate, where the range will be stripped as long as the predicate returns true.
Parameters:
| | |
| --- | --- |
| Range `range` | a [bidirectional range](std_range_primitives#isBidirectionalRange) or [input range](std_range_primitives#isInputRange) |
| E `element` | the elements to remove |
Returns:
a Range containing all of `range` except for the given elements at the start and/or end
Examples:
Strip leading and trailing elements equal to the target element.
```
writeln(" foobar ".strip(' ')); // "foobar"
writeln("00223.444500".strip('0')); // "223.4445"
writeln("ëëêéüŗōpéêëë".strip('ë')); // "êéüŗōpéê"
writeln([1, 1, 0, 1, 1].strip(1)); // [0]
writeln([0.0, 0.01, 0.01, 0.0].strip(0).length); // 2
```
Examples:
Strip leading and trailing elements while the predicate returns true.
```
writeln(" foobar ".strip!(a => a == ' ')()); // "foobar"
writeln("00223.444500".strip!(a => a == '0')()); // "223.4445"
writeln("ëëêéüŗōpéêëë".strip!(a => a == 'ë')()); // "êéüŗōpéê"
writeln([1, 1, 0, 1, 1].strip!(a => a == 1)()); // [0]
writeln([0.0, 0.01, 0.5, 0.6, 0.01, 0.0].strip!(a => a < 0.4)().length); // 2
```
Examples:
Strip leading elements equal to the target element.
```
writeln(" foobar ".stripLeft(' ')); // "foobar "
writeln("00223.444500".stripLeft('0')); // "223.444500"
writeln("ůůűniçodêéé".stripLeft('ů')); // "űniçodêéé"
writeln([1, 1, 0, 1, 1].stripLeft(1)); // [0, 1, 1]
writeln([0.0, 0.01, 0.01, 0.0].stripLeft(0).length); // 3
```
Examples:
Strip leading elements while the predicate returns true.
```
writeln(" foobar ".stripLeft!(a => a == ' ')()); // "foobar "
writeln("00223.444500".stripLeft!(a => a == '0')()); // "223.444500"
writeln("ůůűniçodêéé".stripLeft!(a => a == 'ů')()); // "űniçodêéé"
writeln([1, 1, 0, 1, 1].stripLeft!(a => a == 1)()); // [0, 1, 1]
writeln([0.0, 0.01, 0.10, 0.5, 0.6].stripLeft!(a => a < 0.4)().length); // 2
```
Examples:
Strip trailing elements equal to the target element.
```
writeln(" foobar ".stripRight(' ')); // " foobar"
writeln("00223.444500".stripRight('0')); // "00223.4445"
writeln("ùniçodêéé".stripRight('é')); // "ùniçodê"
writeln([1, 1, 0, 1, 1].stripRight(1)); // [1, 1, 0]
writeln([0.0, 0.01, 0.01, 0.0].stripRight(0).length); // 3
```
Examples:
Strip trailing elements while the predicate returns true.
```
writeln(" foobar ".stripRight!(a => a == ' ')()); // " foobar"
writeln("00223.444500".stripRight!(a => a == '0')()); // "00223.4445"
writeln("ùniçodêéé".stripRight!(a => a == 'é')()); // "ùniçodê"
writeln([1, 1, 0, 1, 1].stripRight!(a => a == 1)()); // [1, 1, 0]
writeln([0.0, 0.01, 0.10, 0.5, 0.6].stripRight!(a => a > 0.4)().length); // 3
```
pure nothrow @nogc @trusted void **swap**(T)(ref T lhs, ref T rhs)
Constraints: if (isBlitAssignable!T && !is(typeof(lhs.proxySwap(rhs))));
void **swap**(T)(ref T lhs, ref T rhs)
Constraints: if (is(typeof(lhs.proxySwap(rhs))));
Swaps `lhs` and `rhs`. The instances `lhs` and `rhs` are moved in memory, without ever calling `opAssign`, nor any other function. `T` need not be assignable at all to be swapped.
If `lhs` and `rhs` reference the same instance, then nothing is done.
`lhs` and `rhs` must be mutable. If `T` is a struct or union, then its fields must also all be (recursively) mutable.
Parameters:
| | |
| --- | --- |
| T `lhs` | Data to be swapped with `rhs`. |
| T `rhs` | Data to be swapped with `lhs`. |
Examples:
```
// Swapping POD (plain old data) types:
int a = 42, b = 34;
swap(a, b);
assert(a == 34 && b == 42);
// Swapping structs with indirection:
static struct S { int x; char c; int[] y; }
S s1 = { 0, 'z', [ 1, 2 ] };
S s2 = { 42, 'a', [ 4, 6 ] };
swap(s1, s2);
writeln(s1.x); // 42
writeln(s1.c); // 'a'
writeln(s1.y); // [4, 6]
writeln(s2.x); // 0
writeln(s2.c); // 'z'
writeln(s2.y); // [1, 2]
// Immutables cannot be swapped:
immutable int imm1 = 1, imm2 = 2;
static assert(!__traits(compiles, swap(imm1, imm2)));
int c = imm1 + 0;
int d = imm2 + 0;
swap(c, d);
writeln(c); // 2
writeln(d); // 1
```
Examples:
```
// Non-copyable types can still be swapped.
static struct NoCopy
{
this(this) { assert(0); }
int n;
string s;
}
NoCopy nc1, nc2;
nc1.n = 127; nc1.s = "abc";
nc2.n = 513; nc2.s = "uvwxyz";
swap(nc1, nc2);
assert(nc1.n == 513 && nc1.s == "uvwxyz");
assert(nc2.n == 127 && nc2.s == "abc");
swap(nc1, nc1);
swap(nc2, nc2);
assert(nc1.n == 513 && nc1.s == "uvwxyz");
assert(nc2.n == 127 && nc2.s == "abc");
// Types containing non-copyable fields can also be swapped.
static struct NoCopyHolder
{
NoCopy noCopy;
}
NoCopyHolder h1, h2;
h1.noCopy.n = 31; h1.noCopy.s = "abc";
h2.noCopy.n = 65; h2.noCopy.s = null;
swap(h1, h2);
assert(h1.noCopy.n == 65 && h1.noCopy.s == null);
assert(h2.noCopy.n == 31 && h2.noCopy.s == "abc");
swap(h1, h1);
swap(h2, h2);
assert(h1.noCopy.n == 65 && h1.noCopy.s == null);
assert(h2.noCopy.n == 31 && h2.noCopy.s == "abc");
// Const types cannot be swapped.
const NoCopy const1, const2;
assert(const1.n == 0 && const2.n == 0);
static assert(!__traits(compiles, swap(const1, const2)));
```
void **swapAt**(R)(auto ref R r, size\_t i1, size\_t i2);
Swaps two elements in-place of a range `r`, specified by their indices `i1` and `i2`.
Parameters:
| | |
| --- | --- |
| R `r` | a range with swappable elements |
| size\_t `i1` | first index |
| size\_t `i2` | second index |
Examples:
```
import std.algorithm.comparison : equal;
auto a = [1, 2, 3];
a.swapAt(1, 2);
assert(a.equal([1, 3, 2]));
```
Tuple!(InputRange1, InputRange2) **swapRanges**(InputRange1, InputRange2)(InputRange1 r1, InputRange2 r2)
Constraints: if (hasSwappableElements!InputRange1 && hasSwappableElements!InputRange2 && is(ElementType!InputRange1 == ElementType!InputRange2));
Swaps all elements of `r1` with successive elements in `r2`. Returns a tuple containing the remainder portions of `r1` and `r2` that were not swapped (one of them will be empty). The ranges may be of different types but must have the same element type and support swapping.
Parameters:
| | |
| --- | --- |
| InputRange1 `r1` | an [input range](std_range_primitives#isInputRange) with swappable elements |
| InputRange2 `r2` | an [input range](std_range_primitives#isInputRange) with swappable elements |
Returns:
Tuple containing the remainder portions of r1 and r2 that were not swapped
Examples:
```
import std.range : empty;
int[] a = [ 100, 101, 102, 103 ];
int[] b = [ 0, 1, 2, 3 ];
auto c = swapRanges(a[1 .. 3], b[2 .. 4]);
assert(c[0].empty && c[1].empty);
writeln(a); // [100, 2, 3, 103]
writeln(b); // [0, 1, 101, 102]
```
void **uninitializedFill**(Range, Value)(Range range, Value value)
Constraints: if (isInputRange!Range && hasLvalueElements!Range && is(typeof(range.front = value)));
Initializes each element of `range` with `value`. Assumes that the elements of the range are uninitialized. This is of interest for structs that define copy constructors (for all other types, [`fill`](#fill) and uninitializedFill are equivalent).
Parameters:
| | |
| --- | --- |
| Range `range` | An [input range](std_range_primitives#isInputRange) that exposes references to its elements and has assignable elements |
| Value `value` | Assigned to each element of range |
See Also:
[`fill`](#fill) [`initializeAll`](#initializeAll)
Examples:
```
import core.stdc.stdlib : malloc, free;
auto s = (cast(int*) malloc(5 * int.sizeof))[0 .. 5];
uninitializedFill(s, 42);
writeln(s); // [42, 42, 42, 42, 42]
scope(exit) free(s.ptr);
```
d std.datetime.date std.datetime.date
=================
| Category | Functions |
| --- | --- |
| Main date types | [`Date`](#Date) [`DateTime`](#DateTime) |
| Other date types | [`Month`](#Month) [`DayOfWeek`](#DayOfWeek) [`TimeOfDay`](#TimeOfDay) |
| Date checking | [`valid`](#valid) [`validTimeUnits`](#validTimeUnits) [`yearIsLeapYear`](#yearIsLeapYear) [`isTimePoint`](#isTimePoint) [`enforceValid`](#enforceValid) |
| Date conversion | [`daysToDayOfWeek`](#daysToDayOfWeek) [`monthsToMonth`](#monthsToMonth) |
| Time units | [`cmpTimeUnits`](#cmpTimeUnits) [`timeStrings`](#timeStrings) |
| Other | [`AllowDayOverflow`](#AllowDayOverflow) [`DateTimeException`](#DateTimeException) |
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Jonathan M Davis](http://jmdavisprog.com)
Source
[std/datetime/date.d](https://github.com/dlang/phobos/blob/master/std/datetime/date.d)
alias **DateTimeException** = core.time.TimeException;
Exception type used by std.datetime. It's an alias to [`core.time.TimeException`](core_time#TimeException). Either can be caught without concern about which module it came from.
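Since the constructors in this module throw on invalid values, a minimal sketch of catching it looks like this:
```
import std.datetime.date : Date, DateTimeException;
import std.stdio : writeln;

void main()
{
    try
    {
        auto d = Date(2001, 2, 29); // 2001 is not a leap year
    }
    catch (DateTimeException e)
    {
        writeln("invalid date: ", e.msg);
    }
}
```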
enum **Month**: ubyte;
Represents the 12 months of the Gregorian year (January is 1).
Examples:
```
writeln(Date(2018, 10, 1).month); // Month.oct
writeln(DateTime(1, 1, 1).month); // Month.jan
```
**jan** **feb** **mar** **apr** **may** **jun** **jul** **aug** **sep** **oct** **nov** **dec**
enum **DayOfWeek**: ubyte;
Represents the 7 days of the Gregorian week (Sunday is 0).
Examples:
```
writeln(Date(2018, 10, 1).dayOfWeek); // DayOfWeek.mon
writeln(DateTime(5, 5, 5).dayOfWeek); // DayOfWeek.thu
```
**sun** **mon** **tue** **wed** **thu** **fri** **sat**
alias **AllowDayOverflow** = std.typecons.Flag!"allowDayOverflow".Flag;
In some date calculations, adding months or years can cause the date to fall on a day of the month which is not valid (e.g. February 29th 2001 or June 31st 2000). If overflow is allowed (as is the default), then the month will be incremented accordingly (so, February 29th 2001 would become March 1st 2001, and June 31st 2000 would become July 1st 2000). If overflow is not allowed, then the day will be adjusted to the last valid day in that month (so, February 29th 2001 would become February 28th 2001 and June 31st 2000 would become June 30th 2000).
AllowDayOverflow only applies to calculations involving months or years.
If set to `AllowDayOverflow.no`, then day overflow is not allowed.
Otherwise, if set to `AllowDayOverflow.yes`, then day overflow is allowed.
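As a sketch, using `add!"months"` (documented under [`DateTime`](#DateTime) below):
```
auto dt = DateTime(2000, 1, 31, 12, 0, 0);
dt.add!"months"(1); // overflow allowed by default
writeln(dt); // DateTime(2000, 3, 2, 12, 0, 0), Feb 31st overflows into March

auto dt2 = DateTime(2000, 1, 31, 12, 0, 0);
dt2.add!"months"(1, AllowDayOverflow.no);
writeln(dt2); // DateTime(2000, 2, 29, 12, 0, 0), clamped to the last valid day
```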
immutable string[] **timeStrings**;
Array of the strings representing time units, starting with the smallest unit and going to the largest. It does not include `"nsecs"`.
Includes `"hnsecs"` (hecto-nanoseconds (100 ns)), `"usecs"` (microseconds), `"msecs"` (milliseconds), `"seconds"`, `"minutes"`, `"hours"`, `"days"`, `"weeks"`, `"months"`, and `"years"`
struct **DateTime**;
Combines the [`std.datetime.date.Date`](std_datetime_date#Date) and [`std.datetime.date.TimeOfDay`](std_datetime_date#TimeOfDay) structs to give an object which holds both the date and the time. It is optimized for calendar-based operations and has no concept of time zone. For an object which is optimized for time operations based on the system time, use [`std.datetime.systime.SysTime`](std_datetime_systime#SysTime). [`std.datetime.systime.SysTime`](std_datetime_systime#SysTime) has a concept of time zone and has much higher precision (hnsecs). `DateTime` is intended primarily for calendar-based uses rather than precise time operations.
Examples:
```
import core.time : days, seconds;
auto dt = DateTime(2000, 6, 1, 10, 30, 0);
writeln(dt.date); // Date(2000, 6, 1)
writeln(dt.timeOfDay); // TimeOfDay(10, 30, 0)
writeln(dt.dayOfYear); // 153
writeln(dt.dayOfWeek); // DayOfWeek.thu
dt += 10.days + 100.seconds;
writeln(dt); // DateTime(2000, 6, 11, 10, 31, 40)
writeln(dt.toISOExtString()); // "2000-06-11T10:31:40"
writeln(dt.toISOString()); // "20000611T103140"
writeln(dt.toSimpleString()); // "2000-Jun-11 10:31:40"
writeln(DateTime.fromISOExtString("2018-01-01T12:00:00")); // DateTime(2018, 1, 1, 12, 0, 0)
writeln(DateTime.fromISOString("20180101T120000")); // DateTime(2018, 1, 1, 12, 0, 0)
writeln(DateTime.fromSimpleString("2018-Jan-01 12:00:00")); // DateTime(2018, 1, 1, 12, 0, 0)
```
pure nothrow @nogc @safe this(Date date, TimeOfDay tod = TimeOfDay.init);
Parameters:
| | |
| --- | --- |
| Date `date` | The date portion of [`DateTime`](#DateTime). |
| TimeOfDay `tod` | The time portion of [`DateTime`](#DateTime). |
pure @safe this(int year, int month, int day, int hour = 0, int minute = 0, int second = 0);
Parameters:
| | |
| --- | --- |
| int `year` | The year portion of the date. |
| int `month` | The month portion of the date (January is 1). |
| int `day` | The day portion of the date. |
| int `hour` | The hour portion of the time. |
| int `minute` | The minute portion of the time. |
| int `second` | The second portion of the time. |
const pure nothrow @nogc @safe int **opCmp**(DateTime rhs);
Compares this [`DateTime`](#DateTime) with the given [`DateTime`](#DateTime).
Returns:
| | |
| --- | --- |
| this < rhs | < 0 |
| this == rhs | 0 |
| this > rhs | > 0 |
const pure nothrow @nogc @property @safe Date **date**();
The date portion of [`DateTime`](#DateTime).
pure nothrow @nogc @property @safe void **date**(Date **date**);
The date portion of [`DateTime`](#DateTime).
Parameters:
| | |
| --- | --- |
| Date `date` | The Date to set this [`DateTime`](#DateTime)'s date portion to. |
const pure nothrow @nogc @property @safe TimeOfDay **timeOfDay**();
The time portion of [`DateTime`](#DateTime).
pure nothrow @nogc @property @safe void **timeOfDay**(TimeOfDay tod);
The time portion of [`DateTime`](#DateTime).
Parameters:
| | |
| --- | --- |
| TimeOfDay `tod` | The [`std.datetime.date.TimeOfDay`](std_datetime_date#TimeOfDay) to set this [`DateTime`](#DateTime)'s time portion to. |
const pure nothrow @nogc @property @safe short **year**();
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
pure @property @safe void **year**(int **year**);
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year to set this [`DateTime`](#DateTime)'s year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the new year is not a leap year and if the resulting date would be on February 29th.
Examples:
```
writeln(DateTime(Date(1999, 7, 6), TimeOfDay(9, 7, 5)).year); // 1999
writeln(DateTime(Date(2010, 10, 4), TimeOfDay(0, 0, 30)).year); // 2010
writeln(DateTime(Date(-7, 4, 5), TimeOfDay(7, 45, 2)).year); // -7
```
const pure @property @safe short **yearBC**();
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if `isAD` is true.
Examples:
```
writeln(DateTime(Date(0, 1, 1), TimeOfDay(12, 30, 33)).yearBC); // 1
writeln(DateTime(Date(-1, 1, 1), TimeOfDay(10, 7, 2)).yearBC); // 2
writeln(DateTime(Date(-100, 1, 1), TimeOfDay(4, 59, 0)).yearBC); // 101
```
pure @property @safe void **yearBC**(int year);
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year B.C. to set this [`DateTime`](#DateTime)'s year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if a non-positive value is given.
Examples:
```
auto dt = DateTime(Date(2010, 1, 1), TimeOfDay(7, 30, 0));
dt.yearBC = 1;
writeln(dt); // DateTime(Date(0, 1, 1), TimeOfDay(7, 30, 0))
dt.yearBC = 10;
writeln(dt); // DateTime(Date(-9, 1, 1), TimeOfDay(7, 30, 0))
```
const pure nothrow @nogc @property @safe Month **month**();
Month of a Gregorian Year.
Examples:
```
writeln(DateTime(Date(1999, 7, 6), TimeOfDay(9, 7, 5)).month); // 7
writeln(DateTime(Date(2010, 10, 4), TimeOfDay(0, 0, 30)).month); // 10
writeln(DateTime(Date(-7, 4, 5), TimeOfDay(7, 45, 2)).month); // 4
```
pure @property @safe void **month**(Month **month**);
Month of a Gregorian Year.
Parameters:
| | |
| --- | --- |
| Month `month` | The month to set this [`DateTime`](#DateTime)'s month to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given month is not a valid month.
const pure nothrow @nogc @property @safe ubyte **day**();
Day of a Gregorian Month.
Examples:
```
writeln(DateTime(Date(1999, 7, 6), TimeOfDay(9, 7, 5)).day); // 6
writeln(DateTime(Date(2010, 10, 4), TimeOfDay(0, 0, 30)).day); // 4
writeln(DateTime(Date(-7, 4, 5), TimeOfDay(7, 45, 2)).day); // 5
```
pure @property @safe void **day**(int **day**);
Day of a Gregorian Month.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the month to set this [`DateTime`](#DateTime)'s day to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given day is not a valid day of the current month.
const pure nothrow @nogc @property @safe ubyte **hour**();
Hours past midnight.
pure @property @safe void **hour**(int **hour**);
Hours past midnight.
Parameters:
| | |
| --- | --- |
| int `hour` | The hour of the day to set this [`DateTime`](#DateTime)'s hour to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given hour would result in an invalid [`DateTime`](#DateTime).
const pure nothrow @nogc @property @safe ubyte **minute**();
Minutes past the hour.
pure @property @safe void **minute**(int **minute**);
Minutes past the hour.
Parameters:
| | |
| --- | --- |
| int `minute` | The minute to set this [`DateTime`](#DateTime)'s minute to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given minute would result in an invalid [`DateTime`](#DateTime).
const pure nothrow @nogc @property @safe ubyte **second**();
Seconds past the minute.
pure @property @safe void **second**(int **second**);
Seconds past the minute.
Parameters:
| | |
| --- | --- |
| int `second` | The second to set this [`DateTime`](#DateTime)'s second to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given seconds would result in an invalid [`DateTime`](#DateTime).
pure nothrow @nogc ref @safe DateTime **add**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years" || units == "months");
Adds the given number of years or months to this [`DateTime`](#DateTime), mutating it. A negative number will subtract.
Note that if day overflow is allowed, and the date with the adjusted year/month overflows the number of days in the new month, then the month will be incremented by one, and the day set to the number of days overflowed. (e.g. if the day were 31 and the new month were June, then the month would be incremented to July, and the new day would be 1). If day overflow is not allowed, then the day will be set to the last valid day in the month (e.g. June 31st would become June 30th).
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`DateTime`](#DateTime). |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow, causing the month to increment. |
Returns:
A reference to the `DateTime` (`this`).
Examples:
```
auto dt1 = DateTime(2010, 1, 1, 12, 30, 33);
dt1.add!"months"(11);
writeln(dt1); // DateTime(2010, 12, 1, 12, 30, 33)
auto dt2 = DateTime(2010, 1, 1, 12, 30, 33);
dt2.add!"months"(-11);
writeln(dt2); // DateTime(2009, 2, 1, 12, 30, 33)
auto dt3 = DateTime(2000, 2, 29, 12, 30, 33);
dt3.add!"years"(1);
writeln(dt3); // DateTime(2001, 3, 1, 12, 30, 33)
auto dt4 = DateTime(2000, 2, 29, 12, 30, 33);
dt4.add!"years"(1, AllowDayOverflow.no);
writeln(dt4); // DateTime(2001, 2, 28, 12, 30, 33)
```
pure nothrow @nogc ref @safe DateTime **roll**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years" || units == "months");
Adds the given number of years or months to this [`DateTime`](#DateTime), mutating it. A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. Rolling a [`DateTime`](#DateTime) 12 months gets the exact same [`DateTime`](#DateTime). However, the days can still be affected due to the differing number of days in each month.
Because there are no units larger than years, there is no difference between adding and rolling years.
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`DateTime`](#DateTime). |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow, causing the month to increment. |
Returns:
A reference to the `DateTime` (`this`).
Examples:
```
auto dt1 = DateTime(2010, 1, 1, 12, 33, 33);
dt1.roll!"months"(1);
writeln(dt1); // DateTime(2010, 2, 1, 12, 33, 33)
auto dt2 = DateTime(2010, 1, 1, 12, 33, 33);
dt2.roll!"months"(-1);
writeln(dt2); // DateTime(2010, 12, 1, 12, 33, 33)
auto dt3 = DateTime(1999, 1, 29, 12, 33, 33);
dt3.roll!"months"(1);
writeln(dt3); // DateTime(1999, 3, 1, 12, 33, 33)
auto dt4 = DateTime(1999, 1, 29, 12, 33, 33);
dt4.roll!"months"(1, AllowDayOverflow.no);
writeln(dt4); // DateTime(1999, 2, 28, 12, 33, 33)
auto dt5 = DateTime(2000, 2, 29, 12, 30, 33);
dt5.roll!"years"(1);
writeln(dt5); // DateTime(2001, 3, 1, 12, 30, 33)
auto dt6 = DateTime(2000, 2, 29, 12, 30, 33);
dt6.roll!"years"(1, AllowDayOverflow.no);
writeln(dt6); // DateTime(2001, 2, 28, 12, 30, 33)
```
pure nothrow @nogc ref @safe DateTime **roll**(string units)(long value)
Constraints: if (units == "days");
pure nothrow @nogc ref @safe DateTime **roll**(string units)(long value)
Constraints: if (units == "hours" || units == "minutes" || units == "seconds");
Adds the given number of units to this [`DateTime`](#DateTime), mutating it. A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. For instance, rolling a [`DateTime`](#DateTime) one year's worth of days gets the exact same [`DateTime`](#DateTime).
Accepted units are `"days"`, `"hours"`, `"minutes"`, and `"seconds"`.
Parameters:
| | |
| --- | --- |
| units | The units to add. |
| long `value` | The number of units to add to this [`DateTime`](#DateTime). |
Returns:
A reference to the `DateTime` (`this`).
Examples:
```
auto dt1 = DateTime(2010, 1, 1, 11, 23, 12);
dt1.roll!"days"(1);
writeln(dt1); // DateTime(2010, 1, 2, 11, 23, 12)
dt1.roll!"days"(365);
writeln(dt1); // DateTime(2010, 1, 26, 11, 23, 12)
dt1.roll!"days"(-32);
writeln(dt1); // DateTime(2010, 1, 25, 11, 23, 12)
auto dt2 = DateTime(2010, 7, 4, 12, 0, 0);
dt2.roll!"hours"(1);
writeln(dt2); // DateTime(2010, 7, 4, 13, 0, 0)
auto dt3 = DateTime(2010, 1, 1, 0, 0, 0);
dt3.roll!"seconds"(-1);
writeln(dt3); // DateTime(2010, 1, 1, 0, 0, 59)
```
const pure nothrow @nogc @safe DateTime **opBinary**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`DateTime`](#DateTime).
The legal types of arithmetic for [`DateTime`](#DateTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| DateTime | + | Duration | --> | DateTime |
| DateTime | - | Duration | --> | DateTime |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`DateTime`](#DateTime). |
Examples:
```
import core.time : hours, seconds;
assert(DateTime(2015, 12, 31, 23, 59, 59) + seconds(1) ==
DateTime(2016, 1, 1, 0, 0, 0));
assert(DateTime(2015, 12, 31, 23, 59, 59) + hours(1) ==
DateTime(2016, 1, 1, 0, 59, 59));
assert(DateTime(2016, 1, 1, 0, 0, 0) - seconds(1) ==
DateTime(2015, 12, 31, 23, 59, 59));
assert(DateTime(2016, 1, 1, 0, 59, 59) - hours(1) ==
DateTime(2015, 12, 31, 23, 59, 59));
```
pure nothrow @nogc ref @safe DateTime **opOpAssign**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a duration from this [`DateTime`](#DateTime), as well as assigning the result to this [`DateTime`](#DateTime).
The legal types of arithmetic for [`DateTime`](#DateTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| DateTime | + | duration | --> | DateTime |
| DateTime | - | duration | --> | DateTime |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The duration to add to or subtract from this [`DateTime`](#DateTime). |
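A sketch of the op-assign forms, with values chosen to show the rollover:
```
import core.time : hours, seconds;
auto dt = DateTime(2015, 12, 31, 23, 59, 59);
dt += seconds(1);
writeln(dt); // DateTime(2016, 1, 1, 0, 0, 0)
dt -= hours(1);
writeln(dt); // DateTime(2015, 12, 31, 23, 0, 0)
```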
const pure nothrow @nogc @safe Duration **opBinary**(string op)(DateTime rhs)
Constraints: if (op == "-");
Gives the difference between two [`DateTime`](#DateTime)s.
The legal types of arithmetic for [`DateTime`](#DateTime) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| DateTime | - | DateTime | --> | duration |
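A sketch of subtraction yielding a [`core.time.Duration`](core_time#Duration):
```
import core.time : days, hours;
auto a = DateTime(2010, 1, 2, 12, 0, 0);
auto b = DateTime(2010, 1, 1, 0, 0, 0);
assert(a - b == days(1) + hours(12));
assert(b - a == -(days(1) + hours(12)));
```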
const pure nothrow @nogc @safe int **diffMonths**(DateTime rhs);
Returns the difference between the two [`DateTime`](#DateTime)s in months.
To get the difference in years, subtract the year property of two [`DateTime`](#DateTime)s. To get the difference in days or weeks, subtract the [`DateTime`](#DateTime)s themselves and use the [`core.time.Duration`](core_time#Duration) that results. Because converting between months and smaller units requires a specific date (which [`core.time.Duration`](core_time#Duration)s don't have), getting the difference in months requires some math using both the year and month properties, so this is a convenience function for getting the difference in months.
Note that the number of days in the months and how far into the month either date is are irrelevant. The result is the difference in the month property combined with the difference in years \* 12. So, for instance, December 31st and January 1st are one month apart, just as December 1st and January 31st are one month apart.
Parameters:
| | |
| --- | --- |
| DateTime `rhs` | The [`DateTime`](#DateTime) to subtract from this one. |
Examples:
```
assert(DateTime(1999, 2, 1, 12, 2, 3).diffMonths(
DateTime(1999, 1, 31, 23, 59, 59)) == 1);
assert(DateTime(1999, 1, 31, 0, 0, 0).diffMonths(
DateTime(1999, 2, 1, 12, 3, 42)) == -1);
assert(DateTime(1999, 3, 1, 5, 30, 0).diffMonths(
DateTime(1999, 1, 1, 2, 4, 7)) == 2);
assert(DateTime(1999, 1, 1, 7, 2, 4).diffMonths(
DateTime(1999, 3, 31, 0, 30, 58)) == -2);
```
const pure nothrow @nogc @property @safe bool **isLeapYear**();
Whether this [`DateTime`](#DateTime) is in a leap year.
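A brief illustrative sketch (not from the original documentation); recall that centennial years are only leap years when divisible by 400:
```
assert(DateTime(2000, 1, 1, 12, 0, 0).isLeapYear);
assert(!DateTime(1900, 1, 1, 12, 0, 0).isLeapYear);
assert(!DateTime(2010, 12, 31, 0, 0, 0).isLeapYear);
```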
const pure nothrow @nogc @property @safe DayOfWeek **dayOfWeek**();
Day of the week this [`DateTime`](#DateTime) is on.
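A short sketch; the weekday values follow from the `Date` example further below, where 2000-06-01 is a Thursday:
```
writeln(DateTime(Date(2000, 6, 1), TimeOfDay(12, 0, 0)).dayOfWeek); // DayOfWeek.thu
writeln(DateTime(Date(2000, 6, 4), TimeOfDay(0, 0, 0)).dayOfWeek); // DayOfWeek.sun
```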
const pure nothrow @nogc @property @safe ushort **dayOfYear**();
Day of the year this [`DateTime`](#DateTime) is on.
Examples:
```
writeln(DateTime(Date(1999, 1, 1), TimeOfDay(12, 22, 7)).dayOfYear); // 1
writeln(DateTime(Date(1999, 12, 31), TimeOfDay(7, 2, 59)).dayOfYear); // 365
writeln(DateTime(Date(2000, 12, 31), TimeOfDay(21, 20, 0)).dayOfYear); // 366
```
pure @property @safe void **dayOfYear**(int day);
Day of the year.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the year to set this [`DateTime`](#DateTime)'s day of the year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given day is an invalid day of the year.
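A small sketch of the setter (day 32 of a non-leap year is February 1st); the time portion is unaffected:
```
auto dt = DateTime(Date(2010, 1, 1), TimeOfDay(12, 0, 0));
dt.dayOfYear = 32;
writeln(dt); // DateTime(Date(2010, 2, 1), TimeOfDay(12, 0, 0))
```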
const pure nothrow @nogc @property @safe int **dayOfGregorianCal**();
The Xth day of the Gregorian Calendar that this [`DateTime`](#DateTime) is on.
Examples:
```
writeln(DateTime(Date(1, 1, 1), TimeOfDay(0, 0, 0)).dayOfGregorianCal); // 1
writeln(DateTime(Date(1, 12, 31), TimeOfDay(23, 59, 59)).dayOfGregorianCal); // 365
writeln(DateTime(Date(2, 1, 1), TimeOfDay(2, 2, 2)).dayOfGregorianCal); // 366
writeln(DateTime(Date(0, 12, 31), TimeOfDay(7, 7, 7)).dayOfGregorianCal); // 0
writeln(DateTime(Date(0, 1, 1), TimeOfDay(19, 30, 0)).dayOfGregorianCal); // -365
writeln(DateTime(Date(-1, 12, 31), TimeOfDay(4, 7, 0)).dayOfGregorianCal); // -366
writeln(DateTime(Date(2000, 1, 1), TimeOfDay(9, 30, 20)).dayOfGregorianCal); // 730_120
writeln(DateTime(Date(2010, 12, 31), TimeOfDay(15, 45, 50)).dayOfGregorianCal); // 734_137
```
pure nothrow @nogc @property @safe void **dayOfGregorianCal**(int days);
The Xth day of the Gregorian Calendar that this [`DateTime`](#DateTime) is on. Setting this property does not affect the time portion of [`DateTime`](#DateTime).
Parameters:
| | |
| --- | --- |
| int `days` | The day of the Gregorian Calendar to set this [`DateTime`](#DateTime) to. |
Examples:
```
auto dt = DateTime(Date.init, TimeOfDay(12, 0, 0));
dt.dayOfGregorianCal = 1;
writeln(dt); // DateTime(Date(1, 1, 1), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = 365;
writeln(dt); // DateTime(Date(1, 12, 31), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = 366;
writeln(dt); // DateTime(Date(2, 1, 1), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = 0;
writeln(dt); // DateTime(Date(0, 12, 31), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = -365;
writeln(dt); // DateTime(Date(0, 1, 1), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = -366;
writeln(dt); // DateTime(Date(-1, 12, 31), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = 730_120;
writeln(dt); // DateTime(Date(2000, 1, 1), TimeOfDay(12, 0, 0))
dt.dayOfGregorianCal = 734_137;
writeln(dt); // DateTime(Date(2010, 12, 31), TimeOfDay(12, 0, 0))
```
const pure nothrow @property @safe ubyte **isoWeek**();
The ISO 8601 week of the year that this [`DateTime`](#DateTime) is in.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date)
const pure nothrow @property @safe short **isoWeekYear**();
The year of the ISO 8601 week calendar that this [`DateTime`](#DateTime) is in.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date)
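Neither property has an example here; a minimal sketch showing how the ISO week year can differ from the calendar year near a year boundary (2016-01-01 falls in ISO week 53 of 2015):
```
auto dt = DateTime(Date(2016, 1, 1), TimeOfDay(0, 0, 0));
writeln(dt.isoWeek); // 53
writeln(dt.isoWeekYear); // 2015
writeln(dt.year); // 2016
```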
const pure nothrow @property @safe DateTime **endOfMonth**();
[`DateTime`](#DateTime) for the last day in the month that this [`DateTime`](#DateTime) is in. The time portion of endOfMonth is always 23:59:59.
Examples:
```
assert(DateTime(Date(1999, 1, 6), TimeOfDay(0, 0, 0)).endOfMonth ==
DateTime(Date(1999, 1, 31), TimeOfDay(23, 59, 59)));
assert(DateTime(Date(1999, 2, 7), TimeOfDay(19, 30, 0)).endOfMonth ==
DateTime(Date(1999, 2, 28), TimeOfDay(23, 59, 59)));
assert(DateTime(Date(2000, 2, 7), TimeOfDay(5, 12, 27)).endOfMonth ==
DateTime(Date(2000, 2, 29), TimeOfDay(23, 59, 59)));
assert(DateTime(Date(2000, 6, 4), TimeOfDay(12, 22, 9)).endOfMonth ==
DateTime(Date(2000, 6, 30), TimeOfDay(23, 59, 59)));
```
const pure nothrow @nogc @property @safe ubyte **daysInMonth**();
The last day in the month that this [`DateTime`](#DateTime) is in.
Examples:
```
writeln(DateTime(Date(1999, 1, 6), TimeOfDay(0, 0, 0)).daysInMonth); // 31
writeln(DateTime(Date(1999, 2, 7), TimeOfDay(19, 30, 0)).daysInMonth); // 28
writeln(DateTime(Date(2000, 2, 7), TimeOfDay(5, 12, 27)).daysInMonth); // 29
writeln(DateTime(Date(2000, 6, 4), TimeOfDay(12, 22, 9)).daysInMonth); // 30
```
const pure nothrow @nogc @property @safe bool **isAD**();
Whether this [`DateTime`](#DateTime) is in A.D. (i.e. whether its year is positive).
Examples:
```
assert(DateTime(Date(1, 1, 1), TimeOfDay(12, 7, 0)).isAD);
assert(DateTime(Date(2010, 12, 31), TimeOfDay(0, 0, 0)).isAD);
assert(!DateTime(Date(0, 12, 31), TimeOfDay(23, 59, 59)).isAD);
assert(!DateTime(Date(-2010, 1, 1), TimeOfDay(2, 2, 2)).isAD);
```
const pure nothrow @nogc @property @safe long **julianDay**();
The [Julian day](http://en.wikipedia.org/wiki/Julian_day) for this [`DateTime`](#DateTime) at the given time. For example, prior to noon, 1996-03-31 would be the Julian day number 2\_450\_173, so this function returns 2\_450\_173, while from noon onward, the Julian day number would be 2\_450\_174, so this function returns 2\_450\_174.
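Expressed as a sketch of the example in the paragraph above:
```
assert(DateTime(Date(1996, 3, 31), TimeOfDay(0, 0, 0)).julianDay == 2_450_173);
assert(DateTime(Date(1996, 3, 31), TimeOfDay(12, 0, 0)).julianDay == 2_450_174);
```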
const pure nothrow @nogc @property @safe long **modJulianDay**();
The modified [Julian day](http://en.wikipedia.org/wiki/Julian_day) for any time on this date (since the modified Julian day changes at midnight).
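A minimal sketch, using the modified Julian day epoch (1858-11-17), for which the value is 0 at any time of day:
```
assert(DateTime(Date(1858, 11, 17), TimeOfDay(0, 0, 0)).modJulianDay == 0);
assert(DateTime(Date(1858, 11, 17), TimeOfDay(23, 59, 59)).modJulianDay == 0);
```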
const pure nothrow @safe string **toISOString**();
const void **toISOString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`DateTime`](#DateTime) to a string with the format `YYYYMMDDTHHMMSS`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
assert(DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12)).toISOString() ==
"20100704T070612");
assert(DateTime(Date(1998, 12, 25), TimeOfDay(2, 15, 0)).toISOString() ==
"19981225T021500");
assert(DateTime(Date(0, 1, 5), TimeOfDay(23, 9, 59)).toISOString() ==
"00000105T230959");
assert(DateTime(Date(-4, 1, 5), TimeOfDay(0, 0, 2)).toISOString() ==
"-00040105T000002");
```
const pure nothrow @safe string **toISOExtString**();
const void **toISOExtString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`DateTime`](#DateTime) to a string with the format `YYYY-MM-DDTHH:MM:SS`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
assert(DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12)).toISOExtString() ==
"2010-07-04T07:06:12");
assert(DateTime(Date(1998, 12, 25), TimeOfDay(2, 15, 0)).toISOExtString() ==
"1998-12-25T02:15:00");
assert(DateTime(Date(0, 1, 5), TimeOfDay(23, 9, 59)).toISOExtString() ==
"0000-01-05T23:09:59");
assert(DateTime(Date(-4, 1, 5), TimeOfDay(0, 0, 2)).toISOExtString() ==
"-0004-01-05T00:00:02");
```
const pure nothrow @safe string **toSimpleString**();
const void **toSimpleString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`DateTime`](#DateTime) to a string with the format `YYYY-Mon-DD HH:MM:SS`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
assert(DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12)).toSimpleString() ==
"2010-Jul-04 07:06:12");
assert(DateTime(Date(1998, 12, 25), TimeOfDay(2, 15, 0)).toSimpleString() ==
"1998-Dec-25 02:15:00");
assert(DateTime(Date(0, 1, 5), TimeOfDay(23, 9, 59)).toSimpleString() ==
"0000-Jan-05 23:09:59");
assert(DateTime(Date(-4, 1, 5), TimeOfDay(0, 0, 2)).toSimpleString() ==
"-0004-Jan-05 00:00:02");
```
const pure nothrow @safe string **toString**();
const void **toString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`DateTime`](#DateTime) to a string.
This function exists to make it easy to convert a [`DateTime`](#DateTime) to a string for code that does not care what the exact format is - just that it presents the information in a clear manner. It also makes it easy to simply convert a [`DateTime`](#DateTime) to a string when using functions such as `to!string`, `format`, or `writeln` which use toString to convert user-defined types. So, it is unlikely that much code will call toString directly.
The format of the string is purposefully unspecified, and code that cares about the format of the string should use `toISOString`, `toISOExtString`, `toSimpleString`, or some other custom formatting function that explicitly generates the format that the code needs. The reason is that the code is then clear about what format it's using, making it less error-prone to maintain the code and interact with other software that consumes the generated strings. It's for this same reason that [`DateTime`](#DateTime) has no `fromString` function, whereas it does have `fromISOString`, `fromISOExtString`, and `fromSimpleString`.
The format returned by toString may or may not change in the future.
pure @safe DateTime **fromISOString**(S)(scope const S isoString)
Constraints: if (isSomeString!S);
Creates a [`DateTime`](#DateTime) from a string with the format YYYYMMDDTHHMMSS. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoString` | A string formatted in the ISO format for dates and times. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`DateTime`](#DateTime) would not be valid.
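A small round-trip sketch, mirroring the `toISOString` examples above; the second line relies on whitespace being stripped:
```
writeln(DateTime.fromISOString("20100704T070612")); // DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12))
assert(DateTime.fromISOString(" 20100704T070612 ") ==
       DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12)));
```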
pure @safe DateTime **fromISOExtString**(S)(scope const S isoExtString)
Constraints: if (isSomeString!S);
Creates a [`DateTime`](#DateTime) from a string with the format YYYY-MM-DDTHH:MM:SS. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoExtString` | A string formatted in the ISO Extended format for dates and times. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO Extended format or if the resulting [`DateTime`](#DateTime) would not be valid.
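A round-trip sketch based on the `toISOExtString` examples above:
```
writeln(DateTime.fromISOExtString("2010-07-04T07:06:12")); // DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12))
```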
pure @safe DateTime **fromSimpleString**(S)(scope const S simpleString)
Constraints: if (isSomeString!S);
Creates a [`DateTime`](#DateTime) from a string with the format YYYY-Mon-DD HH:MM:SS. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `simpleString` | A string formatted in the way that toSimpleString formats dates and times. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the correct format or if the resulting [`DateTime`](#DateTime) would not be valid.
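A round-trip sketch based on the `toSimpleString` examples above:
```
writeln(DateTime.fromSimpleString("2010-Jul-04 07:06:12")); // DateTime(Date(2010, 7, 4), TimeOfDay(7, 6, 12))
```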
static pure nothrow @nogc @property @safe DateTime **min**();
Returns the [`DateTime`](#DateTime) farthest in the past which is representable by [`DateTime`](#DateTime).
static pure nothrow @nogc @property @safe DateTime **max**();
Returns the [`DateTime`](#DateTime) farthest in the future which is representable by [`DateTime`](#DateTime).
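A small sketch; the exact boundary values follow from the range of `Date`, so only relative properties are asserted here:
```
assert(DateTime.min < DateTime.max);
assert(DateTime.min.year < 0); // min is in B.C.
assert(DateTime.max.year > 0); // max is in A.D.
```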
struct **Date**;
Represents a date in the [Proleptic Gregorian Calendar](http://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) ranging from 32,768 B.C. to 32,767 A.D. Positive years are A.D. Non-positive years are B.C.
Year, month, and day are kept separately internally so that `Date` is optimized for calendar-based operations.
`Date` uses the Proleptic Gregorian Calendar, so it assumes the Gregorian leap year calculations for its entire length. As per [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601), it treats 1 B.C. as year 0, i.e. 1 B.C. is 0, 2 B.C. is -1, etc. Use [`yearBC`](#yearBC) to use B.C. as a positive integer with 1 B.C. being the year prior to 1 A.D.
Year 0 is a leap year.
Examples:
```
import core.time : days;
auto d = Date(2000, 6, 1);
writeln(d.dayOfYear); // 153
writeln(d.dayOfWeek); // DayOfWeek.thu
d += 10.days;
writeln(d); // Date(2000, 6, 11)
writeln(d.toISOExtString()); // "2000-06-11"
writeln(d.toISOString()); // "20000611"
writeln(d.toSimpleString()); // "2000-Jun-11"
writeln(Date.fromISOExtString("2018-01-01")); // Date(2018, 1, 1)
writeln(Date.fromISOString("20180101")); // Date(2018, 1, 1)
writeln(Date.fromSimpleString("2018-Jan-01")); // Date(2018, 1, 1)
```
pure @safe this(int year, int month, int day);
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the resulting [`Date`](#Date) would not be valid.
Parameters:
| | |
| --- | --- |
| int `year` | Year of the Gregorian Calendar. Positive values are A.D. Non-positive values are B.C. with year 0 being the year prior to 1 A.D. |
| int `month` | Month of the year (January is 1). |
| int `day` | Day of the month. |
pure nothrow @nogc @safe this(int day);
Parameters:
| | |
| --- | --- |
| int `day` | The Xth day of the Gregorian Calendar that the constructed [`Date`](#Date) will be for. |
const pure nothrow @nogc @safe int **opCmp**(Date rhs);
Compares this [`Date`](#Date) with the given [`Date`](#Date).
Returns:
| | |
| --- | --- |
| this < rhs | < 0 |
| this == rhs | 0 |
| this > rhs | > 0 |
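A short sketch using the comparison operators, which are implemented in terms of `opCmp`:
```
assert(Date(1999, 1, 1) < Date(1999, 7, 6));
assert(Date(1999, 7, 6) == Date(1999, 7, 6));
assert(Date(2010, 10, 4) > Date(1999, 7, 6));
```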
const pure nothrow @nogc @property @safe short **year**();
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
Examples:
```
writeln(Date(1999, 7, 6).year); // 1999
writeln(Date(2010, 10, 4).year); // 2010
writeln(Date(-7, 4, 5).year); // -7
```
pure @property @safe void **year**(int **year**);
Year of the Gregorian Calendar. Positive numbers are A.D. Non-positive are B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year to set this Date's year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the new year is not a leap year and the resulting date would be on February 29th.
Examples:
```
writeln(Date(1999, 7, 6).year); // 1999
writeln(Date(2010, 10, 4).year); // 2010
writeln(Date(-7, 4, 5).year); // -7
```
const pure @property @safe ushort **yearBC**();
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if `isAD` is true.
Examples:
```
writeln(Date(0, 1, 1).yearBC); // 1
writeln(Date(-1, 1, 1).yearBC); // 2
writeln(Date(-100, 1, 1).yearBC); // 101
```
pure @property @safe void **yearBC**(int year);
Year B.C. of the Gregorian Calendar counting year 0 as 1 B.C.
Parameters:
| | |
| --- | --- |
| int `year` | The year B.C. to set this [`Date`](#Date)'s year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if a non-positive value is given.
Examples:
```
auto date = Date(2010, 1, 1);
date.yearBC = 1;
writeln(date); // Date(0, 1, 1)
date.yearBC = 10;
writeln(date); // Date(-9, 1, 1)
```
const pure nothrow @nogc @property @safe Month **month**();
Month of a Gregorian Year.
Examples:
```
writeln(Date(1999, 7, 6).month); // 7
writeln(Date(2010, 10, 4).month); // 10
writeln(Date(-7, 4, 5).month); // 4
```
pure @property @safe void **month**(Month **month**);
Month of a Gregorian Year.
Parameters:
| | |
| --- | --- |
| Month `month` | The month to set this [`Date`](#Date)'s month to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given month is not a valid month or if the current day would not be valid in the given month.
const pure nothrow @nogc @property @safe ubyte **day**();
Day of a Gregorian Month.
Examples:
```
writeln(Date(1999, 7, 6).day); // 6
writeln(Date(2010, 10, 4).day); // 4
writeln(Date(-7, 4, 5).day); // 5
```
pure @property @safe void **day**(int **day**);
Day of a Gregorian Month.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the month to set this [`Date`](#Date)'s day to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given day is not a valid day of the current month.
pure nothrow @nogc ref @safe Date **add**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years");
Adds the given number of years or months to this [`Date`](#Date), mutating it. A negative number will subtract.
Note that if day overflow is allowed, and the date with the adjusted year/month overflows the number of days in the new month, then the month will be incremented by one, and the day set to the number of days overflowed. (e.g. if the day were 31 and the new month were June, then the month would be incremented to July, and the new day would be 1). If day overflow is not allowed, then the day will be set to the last valid day in the month (e.g. June 31st would become June 30th).
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`Date`](#Date). |
| AllowDayOverflow `allowOverflow` | Whether the day should be allowed to overflow, causing the month to increment. |
Returns:
A reference to the `Date` (`this`).
Examples:
```
auto d1 = Date(2010, 1, 1);
d1.add!"months"(11);
writeln(d1); // Date(2010, 12, 1)
auto d2 = Date(2010, 1, 1);
d2.add!"months"(-11);
writeln(d2); // Date(2009, 2, 1)
auto d3 = Date(2000, 2, 29);
d3.add!"years"(1);
writeln(d3); // Date(2001, 3, 1)
auto d4 = Date(2000, 2, 29);
d4.add!"years"(1, AllowDayOverflow.no);
writeln(d4); // Date(2001, 2, 28)
```
pure nothrow @nogc ref @safe Date **roll**(string units)(long value, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (units == "years");
Adds the given number of years or months to this [`Date`](#Date), mutating it. A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. Rolling a [`Date`](#Date) 12 months gets the exact same [`Date`](#Date). However, the days can still be affected due to the differing number of days in each month.
Because there are no units larger than years, there is no difference between adding and rolling years.
Parameters:
| | |
| --- | --- |
| units | The type of units to add ("years" or "months"). |
| long `value` | The number of months or years to add to this [`Date`](#Date). |
| AllowDayOverflow `allowOverflow` | Whether the day should be allowed to overflow, causing the month to increment. |
Returns:
A reference to the `Date` (`this`).
Examples:
```
auto d1 = Date(2010, 1, 1);
d1.roll!"months"(1);
writeln(d1); // Date(2010, 2, 1)
auto d2 = Date(2010, 1, 1);
d2.roll!"months"(-1);
writeln(d2); // Date(2010, 12, 1)
auto d3 = Date(1999, 1, 29);
d3.roll!"months"(1);
writeln(d3); // Date(1999, 3, 1)
auto d4 = Date(1999, 1, 29);
d4.roll!"months"(1, AllowDayOverflow.no);
writeln(d4); // Date(1999, 2, 28)
auto d5 = Date(2000, 2, 29);
d5.roll!"years"(1);
writeln(d5); // Date(2001, 3, 1)
auto d6 = Date(2000, 2, 29);
d6.roll!"years"(1, AllowDayOverflow.no);
writeln(d6); // Date(2001, 2, 28)
```
pure nothrow @nogc ref @safe Date **roll**(string units)(long days)
Constraints: if (units == "days");
Adds the given number of units to this [`Date`](#Date), mutating it. A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. For instance, rolling a [`Date`](#Date) one year's worth of days gets the exact same [`Date`](#Date).
The only accepted value for `units` is `"days"`.
Parameters:
| | |
| --- | --- |
| units | The units to add. Must be `"days"`. |
| long `days` | The number of days to add to this [`Date`](#Date). |
Returns:
A reference to the `Date` (`this`).
Examples:
```
auto d = Date(2010, 1, 1);
d.roll!"days"(1);
writeln(d); // Date(2010, 1, 2)
d.roll!"days"(365);
writeln(d); // Date(2010, 1, 26)
d.roll!"days"(-32);
writeln(d); // Date(2010, 1, 25)
```
const pure nothrow @nogc @safe Date **opBinary**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`Date`](#Date).
The legal types of arithmetic for [`Date`](#Date) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| Date | + | Duration | --> | Date |
| Date | - | Duration | --> | Date |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`Date`](#Date). |
Examples:
```
import core.time : days;
writeln(Date(2015, 12, 31) + days(1)); // Date(2016, 1, 1)
writeln(Date(2004, 2, 26) + days(4)); // Date(2004, 3, 1)
writeln(Date(2016, 1, 1) - days(1)); // Date(2015, 12, 31)
writeln(Date(2004, 3, 1) - days(4)); // Date(2004, 2, 26)
```
pure nothrow @nogc ref @safe Date **opOpAssign**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`Date`](#Date), as well as assigning the result to this [`Date`](#Date).
The legal types of arithmetic for [`Date`](#Date) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| Date | + | Duration | --> | Date |
| Date | - | Duration | --> | Date |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`Date`](#Date). |
const pure nothrow @nogc @safe Duration **opBinary**(string op)(Date rhs)
Constraints: if (op == "-");
Gives the difference between two [`Date`](#Date)s.
The legal types of arithmetic for [`Date`](#Date) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| Date | - | Date | --> | Duration |
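No example accompanies this entry; a minimal sketch:
```
import core.time : days;

assert(Date(2016, 1, 1) - Date(2015, 12, 31) == days(1));
assert(Date(2015, 12, 31) - Date(2016, 1, 1) == days(-1));
```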
const pure nothrow @nogc @safe int **diffMonths**(Date rhs);
Returns the difference between the two [`Date`](#Date)s in months.
To get the difference in years, subtract the year property of two [`Date`](#Date)s. To get the difference in days or weeks, subtract the [`Date`](#Date)s themselves and use the [`core.time.Duration`](core_time#Duration) that results. Because converting between months and smaller units requires a specific date (which [`core.time.Duration`](core_time#Duration)s don't have), getting the difference in months requires some math using both the year and month properties, so this is a convenience function for getting the difference in months.
Note that neither the number of days in the months nor how far into the month either [`Date`](#Date) is matters. The result is the difference in the month property combined with the difference in years \* 12. So, for instance, December 31st and January 1st are one month apart, just as December 1st and January 31st are one month apart.
Parameters:
| | |
| --- | --- |
| Date `rhs` | The [`Date`](#Date) to subtract from this one. |
Examples:
```
writeln(Date(1999, 2, 1).diffMonths(Date(1999, 1, 31))); // 1
writeln(Date(1999, 1, 31).diffMonths(Date(1999, 2, 1))); // -1
writeln(Date(1999, 3, 1).diffMonths(Date(1999, 1, 1))); // 2
writeln(Date(1999, 1, 1).diffMonths(Date(1999, 3, 31))); // -2
```
const pure nothrow @nogc @property @safe bool **isLeapYear**();
Whether this [`Date`](#Date) is in a leap year.
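A brief sketch (1900 is divisible by 100 but not 400, so it is not a leap year):
```
assert(Date(2000, 2, 29).isLeapYear);
assert(!Date(1900, 1, 1).isLeapYear);
assert(!Date(2010, 12, 31).isLeapYear);
```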
const pure nothrow @nogc @property @safe DayOfWeek **dayOfWeek**();
Day of the week this [`Date`](#Date) is on.
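A small sketch; 2000-06-01 being a Thursday matches the struct-level example above:
```
writeln(Date(2000, 6, 1).dayOfWeek); // DayOfWeek.thu
writeln(Date(2000, 6, 4).dayOfWeek); // DayOfWeek.sun
```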
const pure nothrow @nogc @property @safe ushort **dayOfYear**();
Day of the year this [`Date`](#Date) is on.
Examples:
```
writeln(Date(1999, 1, 1).dayOfYear); // 1
writeln(Date(1999, 12, 31).dayOfYear); // 365
writeln(Date(2000, 12, 31).dayOfYear); // 366
```
pure @property @safe void **dayOfYear**(int day);
Day of the year.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the year to set this [`Date`](#Date)'s day of the year to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given day is an invalid day of the year.
const pure nothrow @nogc @property @safe int **dayOfGregorianCal**();
The Xth day of the Gregorian Calendar that this [`Date`](#Date) is on.
Examples:
```
writeln(Date(1, 1, 1).dayOfGregorianCal); // 1
writeln(Date(1, 12, 31).dayOfGregorianCal); // 365
writeln(Date(2, 1, 1).dayOfGregorianCal); // 366
writeln(Date(0, 12, 31).dayOfGregorianCal); // 0
writeln(Date(0, 1, 1).dayOfGregorianCal); // -365
writeln(Date(-1, 12, 31).dayOfGregorianCal); // -366
writeln(Date(2000, 1, 1).dayOfGregorianCal); // 730_120
writeln(Date(2010, 12, 31).dayOfGregorianCal); // 734_137
```
pure nothrow @nogc @property @safe void **dayOfGregorianCal**(int day);
The Xth day of the Gregorian Calendar that this [`Date`](#Date) is on.
Parameters:
| | |
| --- | --- |
| int `day` | The day of the Gregorian Calendar to set this [`Date`](#Date) to. |
Examples:
```
auto date = Date.init;
date.dayOfGregorianCal = 1;
writeln(date); // Date(1, 1, 1)
date.dayOfGregorianCal = 365;
writeln(date); // Date(1, 12, 31)
date.dayOfGregorianCal = 366;
writeln(date); // Date(2, 1, 1)
date.dayOfGregorianCal = 0;
writeln(date); // Date(0, 12, 31)
date.dayOfGregorianCal = -365;
writeln(date); // Date(0, 1, 1)
date.dayOfGregorianCal = -366;
writeln(date); // Date(-1, 12, 31)
date.dayOfGregorianCal = 730_120;
writeln(date); // Date(2000, 1, 1)
date.dayOfGregorianCal = 734_137;
writeln(date); // Date(2010, 12, 31)
```
const pure nothrow @property @safe auto **isoWeekAndYear**();
The ISO 8601 week and week year that this [`Date`](#Date) is in.
Returns:
An anonymous struct with the members `isoWeekYear` for the resulting year and `isoWeek` for the resulting ISO week.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date)
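A minimal sketch of unpacking the anonymous struct (2016-01-01 falls in ISO week 53 of 2015):
```
auto result = Date(2016, 1, 1).isoWeekAndYear;
writeln(result.isoWeek); // 53
writeln(result.isoWeekYear); // 2015
```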
const pure nothrow @property @safe ubyte **isoWeek**();
The ISO 8601 week of the year that this [`Date`](#Date) is in.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date)
const pure nothrow @property @safe short **isoWeekYear**();
The year of the ISO 8601 week calendar that this [`Date`](#Date) is in.
May differ from [`year`](#year) between 28 December and 4 January.
See Also:
[ISO Week Date](http://en.wikipedia.org/wiki/ISO_week_date)
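A sketch of the year-boundary behaviour noted above (2012-12-31 is a Monday and falls in ISO week 1 of 2013):
```
writeln(Date(2012, 12, 31).year); // 2012
writeln(Date(2012, 12, 31).isoWeekYear); // 2013
```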
const pure nothrow @property @safe Date **endOfMonth**();
[`Date`](#Date) for the last day in the month that this [`Date`](#Date) is in.
Examples:
```
writeln(Date(1999, 1, 6).endOfMonth); // Date(1999, 1, 31)
writeln(Date(1999, 2, 7).endOfMonth); // Date(1999, 2, 28)
writeln(Date(2000, 2, 7).endOfMonth); // Date(2000, 2, 29)
writeln(Date(2000, 6, 4).endOfMonth); // Date(2000, 6, 30)
```
const pure nothrow @nogc @property @safe ubyte **daysInMonth**();
The last day in the month that this [`Date`](#Date) is in.
Examples:
```
writeln(Date(1999, 1, 6).daysInMonth); // 31
writeln(Date(1999, 2, 7).daysInMonth); // 28
writeln(Date(2000, 2, 7).daysInMonth); // 29
writeln(Date(2000, 6, 4).daysInMonth); // 30
```
const pure nothrow @nogc @property @safe bool **isAD**();
Whether this [`Date`](#Date) is in A.D. (i.e. whether its year is positive).
Examples:
```
assert(Date(1, 1, 1).isAD);
assert(Date(2010, 12, 31).isAD);
assert(!Date(0, 12, 31).isAD);
assert(!Date(-2010, 1, 1).isAD);
```
const pure nothrow @nogc @property @safe long **julianDay**();
The [Julian day](http://en.wikipedia.org/wiki/Julian_day) for this [`Date`](#Date) at noon (since the Julian day changes at noon).
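A small sketch; the 1996-03-31 value matches the noon-onward value quoted for `DateTime.julianDay` above, and 1858-11-17 is the modified Julian day epoch:
```
assert(Date(1996, 3, 31).julianDay == 2_450_174);
assert(Date(1858, 11, 17).julianDay == 2_400_001);
```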
const pure nothrow @nogc @property @safe long **modJulianDay**();
The modified [Julian day](http://en.wikipedia.org/wiki/Julian_day) for any time on this date (since the modified Julian day changes at midnight).
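A minimal sketch, anchored at the modified Julian day epoch:
```
assert(Date(1858, 11, 17).modJulianDay == 0); // the MJD epoch
assert(Date(2010, 8, 24).modJulianDay == 55_432);
```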
const pure nothrow @safe string **toISOString**();
const void **toISOString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`Date`](#Date) to a string with the format `YYYYMMDD`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
writeln(Date(2010, 7, 4).toISOString()); // "20100704"
writeln(Date(1998, 12, 25).toISOString()); // "19981225"
writeln(Date(0, 1, 5).toISOString()); // "00000105"
writeln(Date(-4, 1, 5).toISOString()); // "-00040105"
```
const pure nothrow @safe string **toISOExtString**();
const void **toISOExtString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`Date`](#Date) to a string with the format `YYYY-MM-DD`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
writeln(Date(2010, 7, 4).toISOExtString()); // "2010-07-04"
writeln(Date(1998, 12, 25).toISOExtString()); // "1998-12-25"
writeln(Date(0, 1, 5).toISOExtString()); // "0000-01-05"
writeln(Date(-4, 1, 5).toISOExtString()); // "-0004-01-05"
```
const pure nothrow @safe string **toSimpleString**();
const void **toSimpleString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`Date`](#Date) to a string with the format `YYYY-Mon-DD`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
writeln(Date(2010, 7, 4).toSimpleString()); // "2010-Jul-04"
writeln(Date(1998, 12, 25).toSimpleString()); // "1998-Dec-25"
writeln(Date(0, 1, 5).toSimpleString()); // "0000-Jan-05"
writeln(Date(-4, 1, 5).toSimpleString()); // "-0004-Jan-05"
```
const pure nothrow @safe string **toString**();
const void **toString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`Date`](#Date) to a string.
This function exists to make it easy to convert a [`Date`](#Date) to a string for code that does not care what the exact format is - just that it presents the information in a clear manner. It also makes it easy to simply convert a [`Date`](#Date) to a string when using functions such as `to!string`, `format`, or `writeln` which use toString to convert user-defined types. So, it is unlikely that much code will call toString directly.
The format of the string is purposefully unspecified, and code that cares about the format of the string should use `toISOString`, `toISOExtString`, `toSimpleString`, or some other custom formatting function that explicitly generates the format that the code needs. The reason is that the code is then clear about what format it's using, making it less error-prone to maintain the code and interact with other software that consumes the generated strings. It's for this same reason [`Date`](#Date) has no `fromString` function, whereas it does have `fromISOString`, `fromISOExtString`, and `fromSimpleString`.
The format returned by toString may or may not change in the future.
pure @safe Date **fromISOString**(S)(scope const S isoString)
Constraints: if (isSomeString!S);
Creates a [`Date`](#Date) from a string with the format YYYYMMDD. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoString` | A string formatted in the ISO format for dates. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`Date`](#Date) would not be valid.
pure @safe Date **fromISOExtString**(S)(scope const S isoExtString)
Constraints: if (isSomeString!S);
Creates a [`Date`](#Date) from a string with the format YYYY-MM-DD. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoExtString` | A string formatted in the ISO Extended format for dates. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO Extended format or if the resulting [`Date`](#Date) would not be valid.
pure @safe Date **fromSimpleString**(S)(scope const S simpleString)
Constraints: if (isSomeString!S);
Creates a [`Date`](#Date) from a string with the format YYYY-Mon-DD. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `simpleString` | A string formatted in the way that toSimpleString formats dates. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the correct format or if the resulting [`Date`](#Date) would not be valid.
static pure nothrow @nogc @property @safe Date **min**();
Returns the [`Date`](#Date) farthest in the past which is representable by [`Date`](#Date).
static pure nothrow @nogc @property @safe Date **max**();
Returns the [`Date`](#Date) farthest in the future which is representable by [`Date`](#Date).
struct **TimeOfDay**;
Represents a time of day with hours, minutes, and seconds. It uses 24-hour time.
Examples:
```
import core.time : minutes, seconds;
auto t = TimeOfDay(12, 30, 0);
t += 10.minutes + 100.seconds;
writeln(t); // TimeOfDay(12, 41, 40)
writeln(t.toISOExtString()); // "12:41:40"
writeln(t.toISOString()); // "124140"
writeln(TimeOfDay.fromISOExtString("15:00:00")); // TimeOfDay(15, 0, 0)
writeln(TimeOfDay.fromISOString("015000")); // TimeOfDay(1, 50, 0)
```
pure @safe this(int hour, int minute, int second = 0);
Parameters:
| | |
| --- | --- |
| int `hour` | Hour of the day [0 - 24). |
| int `minute` | Minute of the hour [0 - 60). |
| int `second` | Second of the minute [0 - 60). |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the resulting [`TimeOfDay`](#TimeOfDay) would not be valid.
const pure nothrow @nogc @safe int **opCmp**(TimeOfDay rhs);
Compares this [`TimeOfDay`](#TimeOfDay) with the given [`TimeOfDay`](#TimeOfDay).
Returns:
| | |
| --- | --- |
| this < rhs | < 0 |
| this == rhs | 0 |
| this > rhs | > 0 |
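A short sketch using the comparison operators, which are implemented in terms of `opCmp`:
```
assert(TimeOfDay(0, 0, 0) < TimeOfDay(12, 30, 33));
assert(TimeOfDay(12, 30, 33) > TimeOfDay(12, 30, 0));
assert(TimeOfDay(12, 30, 33) == TimeOfDay(12, 30, 33));
```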
const pure nothrow @nogc @property @safe ubyte **hour**();
Hours past midnight.
pure @property @safe void **hour**(int **hour**);
Hours past midnight.
Parameters:
| | |
| --- | --- |
| int `hour` | The hour of the day to set this [`TimeOfDay`](#TimeOfDay)'s hour to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given hour would result in an invalid [`TimeOfDay`](#TimeOfDay).
const pure nothrow @nogc @property @safe ubyte **minute**();
Minutes past the hour.
pure @property @safe void **minute**(int **minute**);
Minutes past the hour.
Parameters:
| | |
| --- | --- |
| int `minute` | The minute to set this [`TimeOfDay`](#TimeOfDay)'s minute to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given minute would result in an invalid [`TimeOfDay`](#TimeOfDay).
const pure nothrow @nogc @property @safe ubyte **second**();
Seconds past the minute.
pure @property @safe void **second**(int **second**);
Seconds past the minute.
Parameters:
| | |
| --- | --- |
| int `second` | The second to set this [`TimeOfDay`](#TimeOfDay)'s second to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given second would result in an invalid [`TimeOfDay`](#TimeOfDay).
pure nothrow @nogc ref @safe TimeOfDay **roll**(string units)(long value)
Constraints: if (units == "hours");
pure nothrow @nogc ref @safe TimeOfDay **roll**(string units)(long value)
Constraints: if (units == "minutes" || units == "seconds");
Adds the given number of units to this [`TimeOfDay`](#TimeOfDay), mutating it. A negative number will subtract.
The difference between rolling and adding is that rolling does not affect larger units. For instance, rolling a [`TimeOfDay`](#TimeOfDay) one hour's worth of minutes gets the exact same [`TimeOfDay`](#TimeOfDay).
Accepted units are `"hours"`, `"minutes"`, and `"seconds"`.
Parameters:
| | |
| --- | --- |
| units | The units to add. |
| long `value` | The number of units to add to this [`TimeOfDay`](#TimeOfDay). |
Returns:
A reference to the `TimeOfDay` (`this`).
Examples:
```
auto tod1 = TimeOfDay(7, 12, 0);
tod1.roll!"hours"(1);
writeln(tod1); // TimeOfDay(8, 12, 0)
auto tod2 = TimeOfDay(7, 12, 0);
tod2.roll!"hours"(-1);
writeln(tod2); // TimeOfDay(6, 12, 0)
auto tod3 = TimeOfDay(23, 59, 0);
tod3.roll!"minutes"(1);
writeln(tod3); // TimeOfDay(23, 0, 0)
auto tod4 = TimeOfDay(0, 0, 0);
tod4.roll!"minutes"(-1);
writeln(tod4); // TimeOfDay(0, 59, 0)
auto tod5 = TimeOfDay(23, 59, 59);
tod5.roll!"seconds"(1);
writeln(tod5); // TimeOfDay(23, 59, 0)
auto tod6 = TimeOfDay(0, 0, 0);
tod6.roll!"seconds"(-1);
writeln(tod6); // TimeOfDay(0, 0, 59)
```
const pure nothrow @nogc @safe TimeOfDay **opBinary**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`TimeOfDay`](#TimeOfDay).
The legal types of arithmetic for [`TimeOfDay`](#TimeOfDay) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| TimeOfDay | + | Duration | --> | TimeOfDay |
| TimeOfDay | - | Duration | --> | TimeOfDay |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`TimeOfDay`](#TimeOfDay). |
Examples:
```
import core.time : hours, minutes, seconds;
writeln(TimeOfDay(12, 12, 12) + seconds(1)); // TimeOfDay(12, 12, 13)
writeln(TimeOfDay(12, 12, 12) + minutes(1)); // TimeOfDay(12, 13, 12)
writeln(TimeOfDay(12, 12, 12) + hours(1)); // TimeOfDay(13, 12, 12)
writeln(TimeOfDay(23, 59, 59) + seconds(1)); // TimeOfDay(0, 0, 0)
writeln(TimeOfDay(12, 12, 12) - seconds(1)); // TimeOfDay(12, 12, 11)
writeln(TimeOfDay(12, 12, 12) - minutes(1)); // TimeOfDay(12, 11, 12)
writeln(TimeOfDay(12, 12, 12) - hours(1)); // TimeOfDay(11, 12, 12)
writeln(TimeOfDay(0, 0, 0) - seconds(1)); // TimeOfDay(23, 59, 59)
```
pure nothrow @nogc ref @safe TimeOfDay **opOpAssign**(string op)(Duration duration)
Constraints: if (op == "+" || op == "-");
Gives the result of adding or subtracting a [`core.time.Duration`](core_time#Duration) from this [`TimeOfDay`](#TimeOfDay), as well as assigning the result to this [`TimeOfDay`](#TimeOfDay).
The legal types of arithmetic for [`TimeOfDay`](#TimeOfDay) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| TimeOfDay | + | Duration | --> | TimeOfDay |
| TimeOfDay | - | Duration | --> | TimeOfDay |
Parameters:
| | |
| --- | --- |
| Duration `duration` | The [`core.time.Duration`](core_time#Duration) to add to or subtract from this [`TimeOfDay`](#TimeOfDay). |
const pure nothrow @nogc @safe Duration **opBinary**(string op)(TimeOfDay rhs)
Constraints: if (op == "-");
Gives the difference between two [`TimeOfDay`](#TimeOfDay)s.
The legal types of arithmetic for [`TimeOfDay`](#TimeOfDay) using this operator are
| | | | | |
| --- | --- | --- | --- | --- |
| TimeOfDay | - | TimeOfDay | --> | Duration |
Parameters:
| | |
| --- | --- |
| TimeOfDay `rhs` | The [`TimeOfDay`](#TimeOfDay) to subtract from this one. |
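No example accompanies this entry; a minimal sketch:
```
import core.time : hours, seconds;

assert(TimeOfDay(12, 30, 33) - TimeOfDay(12, 30, 30) == seconds(3));
assert(TimeOfDay(12, 30, 33) - TimeOfDay(11, 30, 33) == hours(1));
assert(TimeOfDay(11, 30, 33) - TimeOfDay(12, 30, 33) == hours(-1));
```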
const pure nothrow @safe string **toISOString**();
const void **toISOString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`TimeOfDay`](#TimeOfDay) to a string with the format `HHMMSS`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
writeln(TimeOfDay(0, 0, 0).toISOString()); // "000000"
writeln(TimeOfDay(12, 30, 33).toISOString()); // "123033"
```
const pure nothrow @safe string **toISOExtString**();
const void **toISOExtString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`TimeOfDay`](#TimeOfDay) to a string with the format `HH:MM:SS`. If `writer` is set, the resulting string will be written directly to it.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
Examples:
```
writeln(TimeOfDay(0, 0, 0).toISOExtString()); // "00:00:00"
writeln(TimeOfDay(12, 30, 33).toISOExtString()); // "12:30:33"
```
const pure nothrow @safe string **toString**();
const void **toString**(W)(ref W writer)
Constraints: if (isOutputRange!(W, char));
Converts this [`TimeOfDay`](#TimeOfDay) to a string.
This function exists to make it easy to convert a [`TimeOfDay`](#TimeOfDay) to a string for code that does not care what the exact format is - just that it presents the information in a clear manner. It also makes it easy to simply convert a [`TimeOfDay`](#TimeOfDay) to a string when using functions such as `to!string`, `format`, or `writeln` which use toString to convert user-defined types. So, it is unlikely that much code will call toString directly.
The format of the string is purposefully unspecified, and code that cares about the format of the string should use `toISOString`, `toISOExtString`, or some other custom formatting function that explicitly generates the format that the code needs. The reason is that the code is then clear about what format it's using, making it less error-prone to maintain the code and interact with other software that consumes the generated strings. It's for this same reason that [`TimeOfDay`](#TimeOfDay) has no `fromString` function, whereas it does have `fromISOString` and `fromISOExtString`.
The format returned by toString may or may not change in the future.
Parameters:
| | |
| --- | --- |
| W `writer` | A `char` accepting [output range](std_range_primitives#isOutputRange) |
Returns:
A `string` when not using an output range; `void` otherwise.
pure @safe TimeOfDay **fromISOString**(S)(scope const S isoString)
Constraints: if (isSomeString!S);
Creates a [`TimeOfDay`](#TimeOfDay) from a string with the format HHMMSS. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoString` | A string formatted in the ISO format for times. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO format or if the resulting [`TimeOfDay`](#TimeOfDay) would not be valid.
pure @safe TimeOfDay **fromISOExtString**(S)(scope const S isoExtString)
Constraints: if (isSomeString!S);
Creates a [`TimeOfDay`](#TimeOfDay) from a string with the format HH:MM:SS. Whitespace is stripped from the given string.
Parameters:
| | |
| --- | --- |
| S `isoExtString` | A string formatted in the ISO Extended format for times. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given string is not in the ISO Extended format or if the resulting [`TimeOfDay`](#TimeOfDay) would not be valid.
static pure nothrow @nogc @property @safe TimeOfDay **min**();
Returns midnight.
static pure nothrow @nogc @property @safe TimeOfDay **max**();
Returns one second short of midnight.
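Expressed as a sketch of the two descriptions above:
```
assert(TimeOfDay.min == TimeOfDay(0, 0, 0));
assert(TimeOfDay.max == TimeOfDay(23, 59, 59));
```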
pure nothrow @nogc @safe bool **valid**(string units)(int value)
Constraints: if (units == "months" || units == "hours" || units == "minutes" || units == "seconds");
Returns whether the given value is valid for the given unit type when in a time point. Naturally, a duration is not held to a particular range, but the values in a time point are (e.g. a month must be in the range of 1 - 12 inclusive).
Parameters:
| | |
| --- | --- |
| units | The units of time to validate. |
| int `value` | The number to validate. |
Examples:
```
assert(valid!"hours"(12));
assert(!valid!"hours"(32));
assert(valid!"months"(12));
assert(!valid!"months"(13));
```
pure nothrow @nogc @safe bool **valid**(string units)(int year, int month, int day)
Constraints: if (units == "days");
Returns whether the given day is valid for the given year and month.
Parameters:
| | |
| --- | --- |
| units | The units of time to validate. |
| int `year` | The year of the day to validate. |
| int `month` | The month of the day to validate (January is 1). |
| int `day` | The day to validate. |
Examples:
```
assert(valid!"days"(2016, 2, 29));
assert(!valid!"days"(2016, 2, 30));
assert(valid!"days"(2017, 2, 20));
assert(!valid!"days"(2017, 2, 29));
```
pure @safe void **enforceValid**(string units)(int value, string file = \_\_FILE\_\_, size\_t line = \_\_LINE\_\_)
Constraints: if (units == "months" || units == "hours" || units == "minutes" || units == "seconds");
Parameters:
| | |
| --- | --- |
| units | The units of time to validate. |
| int `value` | The number to validate. |
| string `file` | The file that the [`DateTimeException`](#DateTimeException) will list if thrown. |
| size\_t `line` | The line number that the [`DateTimeException`](#DateTimeException) will list if thrown. |
Throws:
[`DateTimeException`](#DateTimeException) if `valid!units(value)` is false.
Examples:
```
import std.exception : assertThrown, assertNotThrown;
assertNotThrown(enforceValid!"months"(10));
assertNotThrown(enforceValid!"seconds"(40));
assertThrown!DateTimeException(enforceValid!"months"(0));
assertThrown!DateTimeException(enforceValid!"hours"(24));
assertThrown!DateTimeException(enforceValid!"minutes"(60));
assertThrown!DateTimeException(enforceValid!"seconds"(60));
```
pure @safe void **enforceValid**(string units)(int year, Month month, int day, string file = \_\_FILE\_\_, size\_t line = \_\_LINE\_\_)
Constraints: if (units == "days");
Because the validity of the day number depends on both the year and the month in which the day occurs, this function takes all three to validate the day.
Parameters:
| | |
| --- | --- |
| units | The units of time to validate. |
| int `year` | The year of the day to validate. |
| Month `month` | The month of the day to validate. |
| int `day` | The day to validate. |
| string `file` | The file that the [`DateTimeException`](#DateTimeException) will list if thrown. |
| size\_t `line` | The line number that the [`DateTimeException`](#DateTimeException) will list if thrown. |
Throws:
[`DateTimeException`](#DateTimeException) if `valid!"days"(year, month, day)` is false.
Examples:
```
import std.exception : assertThrown, assertNotThrown;
assertNotThrown(enforceValid!"days"(2000, Month.jan, 1));
// leap year
assertNotThrown(enforceValid!"days"(2000, Month.feb, 29));
assertThrown!DateTimeException(enforceValid!"days"(2001, Month.feb, 29));
assertThrown!DateTimeException(enforceValid!"days"(2000, Month.jan, 32));
assertThrown!DateTimeException(enforceValid!"days"(2000, Month.apr, 31));
```
pure nothrow @nogc @safe int **daysToDayOfWeek**(DayOfWeek currDoW, DayOfWeek dow);
Returns the number of days from the current day of the week to the given day of the week. If they are the same, then the result is 0.
Parameters:
| | |
| --- | --- |
| DayOfWeek `currDoW` | The current day of the week. |
| DayOfWeek `dow` | The day of the week to get the number of days to. |
Examples:
```
writeln(daysToDayOfWeek(DayOfWeek.mon, DayOfWeek.mon)); // 0
writeln(daysToDayOfWeek(DayOfWeek.mon, DayOfWeek.sun)); // 6
writeln(daysToDayOfWeek(DayOfWeek.mon, DayOfWeek.wed)); // 2
```
pure @safe int **monthsToMonth**(int currMonth, int month);
Returns the number of months from the current month of the year to the given month of the year. If they are the same, then the result is 0.
Parameters:
| | |
| --- | --- |
| int `currMonth` | The current month of the year. |
| int `month` | The month of the year to get the number of months to. |
Examples:
```
writeln(monthsToMonth(Month.jan, Month.jan)); // 0
writeln(monthsToMonth(Month.jan, Month.dec)); // 11
writeln(monthsToMonth(Month.jul, Month.oct)); // 3
```
pure nothrow @nogc @safe bool **yearIsLeapYear**(int year);
Whether the given Gregorian Year is a leap year.
Parameters:
| | |
| --- | --- |
| int `year` | The year to be tested. |
Examples:
```
foreach (year; [1, 2, 100, 2001, 2002, 2003, 2005, 2006, 2007, 2009, 2010])
{
assert(!yearIsLeapYear(year));
assert(!yearIsLeapYear(-year));
}
foreach (year; [0, 4, 8, 400, 800, 1600, 1996, 2000, 2004, 2008, 2012])
{
assert(yearIsLeapYear(year));
assert(yearIsLeapYear(-year));
}
```
enum auto **isTimePoint**(T);
Whether the given type defines all of the necessary functions for it to function as a time point.
1. `T` must define a static property named `min` which is the smallest value of `T` as `Unqual!T`.
2. `T` must define a static property named `max` which is the largest value of `T` as `Unqual!T`.
3. `T` must define an `opBinary` for addition and subtraction that accepts [`core.time.Duration`](core_time#Duration) and returns `Unqual!T`.
4. `T` must define an `opOpAssign` for addition and subtraction that accepts [`core.time.Duration`](core_time#Duration) and returns `ref Unqual!T`.
5. `T` must define an `opBinary` for subtraction which accepts `T` and returns [`core.time.Duration`](core_time#Duration).
Examples:
```
import core.time : Duration;
import std.datetime.interval : Interval;
import std.datetime.systime : SysTime;
static assert(isTimePoint!Date);
static assert(isTimePoint!DateTime);
static assert(isTimePoint!SysTime);
static assert(isTimePoint!TimeOfDay);
static assert(!isTimePoint!int);
static assert(!isTimePoint!Duration);
static assert(!isTimePoint!(Interval!SysTime));
```
pure nothrow @nogc @safe bool **validTimeUnits**(string[] units...);
Whether all of the given strings are valid units of time.
`"nsecs"` is not considered a valid unit of time. Nothing in std.datetime can handle precision greater than hnsecs, and the few functions in core.time which deal with "nsecs" deal with it explicitly.
Examples:
```
assert(validTimeUnits("msecs", "seconds", "minutes"));
assert(validTimeUnits("days", "weeks", "months"));
assert(!validTimeUnits("ms", "seconds", "minutes"));
```
pure @safe int **cmpTimeUnits**(string lhs, string rhs);
Compares two time unit strings. `"years"` are the largest units and `"hnsecs"` are the smallest.
Returns:
| | |
| --- | --- |
| lhs < rhs | < 0 |
| lhs == rhs | 0 |
| lhs > rhs | > 0 |
Throws:
[`DateTimeException`](#DateTimeException) if either of the given strings is not a valid time unit string.
Examples:
```
import std.exception : assertThrown;
writeln(cmpTimeUnits("hours", "hours")); // 0
assert(cmpTimeUnits("hours", "weeks") < 0);
assert(cmpTimeUnits("months", "seconds") > 0);
assertThrown!DateTimeException(cmpTimeUnits("month", "second"));
```
template **CmpTimeUnits**(string lhs, string rhs) if (validTimeUnits(lhs, rhs))
Compares two time unit strings at compile time. `"years"` are the largest units and `"hnsecs"` are the smallest.
This template is used instead of `cmpTimeUnits` because exceptions can't be thrown at compile time and `cmpTimeUnits` must enforce that the strings it's given are valid time unit strings. This template uses a template constraint instead.
Returns:
| | |
| --- | --- |
| lhs < rhs | < 0 |
| lhs == rhs | 0 |
| lhs > rhs | > 0 |
Examples:
```
static assert(CmpTimeUnits!("years", "weeks") > 0);
static assert(CmpTimeUnits!("days", "days") == 0);
static assert(CmpTimeUnits!("seconds", "hours") < 0);
```
std.datetime.interval
=====================
| Category | Functions |
| --- | --- |
| Main types | [`Interval`](#Interval) [`Direction`](#Direction) |
| Special intervals | [`everyDayOfWeek`](#everyDayOfWeek) [`everyMonth`](#everyMonth) [`everyDuration`](#everyDuration) |
| Special intervals | [`NegInfInterval`](#NegInfInterval) [`PosInfInterval`](#PosInfInterval) |
| Underlying ranges | [`IntervalRange`](#IntervalRange) [`NegInfIntervalRange`](#NegInfIntervalRange) [`PosInfIntervalRange`](#PosInfIntervalRange) |
| Flags | [`PopFirst`](#PopFirst) |
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Jonathan M Davis](http://jmdavisprog.com)
Source
[std/datetime/interval.d](https://github.com/dlang/phobos/blob/master/std/datetime/interval.d)
enum **Direction**: int;
Indicates a direction in time. One example of its use is [`Interval`](#Interval)'s [`expand`](#expand) function which uses it to indicate whether the interval should be expanded backwards (into the past), forwards (into the future), or both.
**bwd**
Backward.
**fwd**
Forward.
**both**
Both backward and forward.
alias **PopFirst** = std.typecons.Flag!"popFirst".Flag;
Used to indicate whether `popFront` should be called immediately upon creating a range. The idea is that for some functions used to generate a range for an interval, `front` is not necessarily a time point which would ever be generated by the range (e.g. if the range were every Sunday within an interval, but the interval started on a Monday), so there needs to be a way to deal with that. To have the first time point in the range match what the function generates, use `PopFirst.yes` to indicate that the range should have `popFront` called on it before the range is returned, so that `front` is a time point which the function would generate. To let the first time point not match the generator function, use `PopFirst.no`.
For instance, if the function used to generate a range of time points generated successive Easters (i.e. you're iterating over all of the Easters within the interval), the initial date probably isn't an Easter. Using `PopFirst.yes` would tell the function which returned the range that `popFront` was to be called so that front would then be an Easter - the next one generated by the function (which when iterating forward would be the Easter following the original `front`, while when iterating backward, it would be the Easter prior to the original `front`). If `PopFirst.no` were used, then `front` would remain the original time point and it would not necessarily be a time point which would be generated by the range-generating function (which in many cases is exactly what is desired - e.g. if iterating over every day starting at the beginning of the interval).
If set to `PopFirst.no`, then popFront is not called before returning the range.
Otherwise, if set to `PopFirst.yes`, then popFront is called before returning the range.
struct **Interval**(TP);
Represents an interval of time.
An `Interval` has a starting point and an end point. The interval of time is therefore the time starting at the starting point up to, but not including, the end point. e.g.
| |
| --- |
| [January 5th, 2010 - March 10th, 2010) |
| [05:00:30 - 12:00:00) |
| [1982-01-04T08:59:00 - 2010-07-04T12:00:00) |
A range can be obtained from an `Interval`, allowing iteration over that interval, with the exact time points which are iterated over depending on the function which generates the range.
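A minimal sketch of constructing and querying an interval, assembled from the per-member examples below (note that `end` is excluded):
```
import core.time : dur;

auto interval = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
assert(interval.contains(Date(2000, 1, 5)));
assert(!interval.contains(Date(2012, 3, 1))); // end point is excluded
assert(interval.length == dur!"days"(5903));
```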
pure this(U)(scope const TP begin, scope const U end)
Constraints: if (is(immutable(TP) == immutable(U)));
Parameters:
| | |
| --- | --- |
| TP `begin` | The time point which begins the interval. |
| U `end` | The time point which ends (but is not included in) the interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if end is before begin.
Example
```
Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
```
pure this(D)(scope const TP begin, scope const D duration)
Constraints: if (\_\_traits(compiles, begin + duration));
Parameters:
| | |
| --- | --- |
| TP `begin` | The time point which begins the interval. |
| D `duration` | The duration from the starting point to the end point. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the resulting `end` is before `begin`.
Example
```
assert(Interval!Date(Date(1996, 1, 2), dur!"days"(3)) ==
Interval!Date(Date(1996, 1, 2), Date(1996, 1, 5)));
```
pure nothrow ref Interval **opAssign**(ref const Interval rhs);
Parameters:
| | |
| --- | --- |
| Interval `rhs` | The [`Interval`](#Interval) to assign to this one. |
pure nothrow ref Interval **opAssign**(Interval rhs);
Parameters:
| | |
| --- | --- |
| Interval `rhs` | The [`Interval`](#Interval) to assign to this one. |
const pure nothrow @property TP **begin**();
The starting point of the interval. It is included in the interval.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).begin ==
Date(1996, 1, 2));
```
pure @property void **begin**(TP timePoint);
The starting point of the interval. It is included in the interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to set `begin` to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the resulting interval would be invalid.
const pure nothrow @property TP **end**();
The end point of the interval. It is excluded from the interval.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).end ==
Date(2012, 3, 1));
```
pure @property void **end**(TP timePoint);
The end point of the interval. It is excluded from the interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to set `end` to. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the resulting interval would be invalid.
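A minimal sketch of both setters (the new `begin` must not be after `end`, and vice versa, or a `DateTimeException` is thrown):
```
auto interval = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
interval.begin = Date(1995, 1, 1); // still not after end, so valid
interval.end = Date(2013, 1, 1);   // still not before begin, so valid
assert(interval == Interval!Date(Date(1995, 1, 1), Date(2013, 1, 1)));
```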
const pure nothrow @property auto **length**();
Returns the duration between `begin` and `end`.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).length ==
dur!"days"(5903));
```
const pure nothrow @property bool **empty**();
Whether the interval's length is 0, that is, whether `begin == end`.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(1996, 1, 2)).empty);
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).empty);
```
const pure bool **contains**(scope const TP timePoint);
Whether the given time point is within this interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Date(1994, 12, 24)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Date(2000, 1, 5)));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Date(2012, 3, 1)));
```
const pure bool **contains**(scope const Interval interval);
Whether the given interval is completely within this interval.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
Interval!Date(Date(1998, 2, 28), Date(2013, 5, 1))));
```
const pure bool **contains**(scope const PosInfInterval!TP interval);
Whether the given interval is completely within this interval.
Always returns false, because an interval going to positive infinity can never be contained in a finite interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
PosInfInterval!Date(Date(1999, 5, 4))));
```
const pure bool **contains**(scope const NegInfInterval!TP interval);
Whether the given interval is completely within this interval.
Always returns false, because an interval beginning at negative infinity can never be contained in a finite interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).contains(
NegInfInterval!Date(Date(1996, 5, 4))));
```
const pure bool **isBefore**(scope const TP timePoint);
Whether this interval is before the given time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is before it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Date(1994, 12, 24)));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Date(2000, 1, 5)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Date(2012, 3, 1)));
```
const pure bool **isBefore**(scope const Interval interval);
Whether this interval is before the given interval and does not intersect with it.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
Interval!Date(Date(2012, 3, 1), Date(2013, 5, 1))));
```
const pure bool **isBefore**(scope const PosInfInterval!TP interval);
Whether this interval is before the given interval and does not intersect with it.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
PosInfInterval!Date(Date(2013, 3, 7))));
```
const pure bool **isBefore**(scope const NegInfInterval!TP interval);
Whether this interval is before the given interval and does not intersect with it.
Always returns false, because a finite interval can never be before an interval beginning at negative infinity.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isBefore(
NegInfInterval!Date(Date(1996, 5, 4))));
```
const pure bool **isAfter**(scope const TP timePoint);
Whether this interval is after the given time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is after it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Date(1994, 12, 24)));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Date(2000, 1, 5)));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Date(2012, 3, 1)));
```
const pure bool **isAfter**(scope const Interval interval);
Whether this interval is after the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
Interval!Date(Date(1989, 3, 1), Date(1996, 1, 2))));
```
const pure bool **isAfter**(scope const PosInfInterval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Always returns false, because a finite interval can never be after an interval going to positive infinity.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
PosInfInterval!Date(Date(1999, 5, 4))));
```
const pure bool **isAfter**(scope const NegInfInterval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAfter(
NegInfInterval!Date(Date(1996, 1, 2))));
```
const pure bool **intersects**(scope const Interval interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to check for intersection with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
Interval!Date(Date(1989, 3, 1), Date(1996, 1, 2))));
```
const pure bool **intersects**(scope const PosInfInterval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check for intersection with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure bool **intersects**(scope const NegInfInterval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check for intersection with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
NegInfInterval!Date(Date(1996, 1, 2))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersects(
NegInfInterval!Date(Date(2000, 1, 2))));
```
const Interval **intersection**(scope const Interval interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect or if either interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
Interval!Date(Date(1996, 1, 2), Date(2000, 8, 2)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))) ==
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17)));
```
const Interval **intersection**(scope const PosInfInterval!TP interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect or if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
PosInfInterval!Date(Date(1990, 7, 6))) ==
Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
PosInfInterval!Date(Date(1999, 1, 12))) ==
Interval!Date(Date(1999, 1, 12), Date(2012, 3, 1)));
```
const Interval **intersection**(scope const NegInfInterval!TP interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect or if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
NegInfInterval!Date(Date(1999, 7, 6))) ==
Interval!Date(Date(1996, 1, 2), Date(1999, 7, 6)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).intersection(
NegInfInterval!Date(Date(2013, 1, 12))) ==
Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)));
```
const pure bool **isAdjacent**(scope const Interval interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to check whether it's adjacent to this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(1990, 7, 6), Date(1996, 1, 2))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(2012, 3, 1), Date(2013, 9, 17))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(1989, 3, 1), Date(2012, 3, 1))));
```
const pure bool **isAdjacent**(scope const PosInfInterval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check whether it's adjacent to this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure bool **isAdjacent**(scope const NegInfInterval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check whether it's adjacent to this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
NegInfInterval!Date(Date(1996, 1, 2))));
assert(!Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).isAdjacent(
NegInfInterval!Date(Date(2000, 1, 2))));
```
const Interval **merge**(scope const Interval interval);
Returns the union of two intervals.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to merge with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect and are not adjacent or if either interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
Interval!Date(Date(1990, 7, 6), Date(2012, 3, 1)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
Interval!Date(Date(2012, 3, 1), Date(2013, 5, 7))) ==
Interval!Date(Date(1996, 1, 2), Date(2013, 5, 7)));
```
const PosInfInterval!TP **merge**(scope const PosInfInterval!TP interval);
Returns the union of two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to merge with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect and are not adjacent or if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
PosInfInterval!Date(Date(1990, 7, 6))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
PosInfInterval!Date(Date(2012, 3, 1))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
const NegInfInterval!TP **merge**(scope const NegInfInterval!TP interval);
Returns the union of two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to merge with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect and are not adjacent or if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
NegInfInterval!Date(Date(1996, 1, 2))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).merge(
NegInfInterval!Date(Date(2013, 1, 12))) ==
NegInfInterval!Date(Date(2013, 1, 12)));
```
const pure Interval **span**(scope const Interval interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| Interval `interval` | The interval to create a span together with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if either interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
Interval!Date(Date(1990, 7, 6), Date(1991, 1, 8))) ==
Interval!Date(Date(1990, 7, 6), Date(2012, 3, 1)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
Interval!Date(Date(2012, 3, 1), Date(2013, 5, 7))) ==
Interval!Date(Date(1996, 1, 2), Date(2013, 5, 7)));
```
const pure PosInfInterval!TP **span**(scope const PosInfInterval!TP interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to create a span together with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
PosInfInterval!Date(Date(1990, 7, 6))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
PosInfInterval!Date(Date(2050, 1, 1))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
const pure NegInfInterval!TP **span**(scope const NegInfInterval!TP interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to create a span together with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Example
```
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
NegInfInterval!Date(Date(1602, 5, 21))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1)).span(
NegInfInterval!Date(Date(2013, 1, 12))) ==
NegInfInterval!Date(Date(2013, 1, 12)));
```
pure void **shift**(D)(D duration)
Constraints: if (\_\_traits(compiles, begin + duration));
Shifts the interval forward or backward in time by the given duration (a positive duration shifts the interval forward; a negative duration shifts it backward). Effectively, it does `begin += duration` and `end += duration`.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to shift the interval by. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = Interval!Date(Date(1996, 1, 2), Date(2012, 4, 5));
auto interval2 = Interval!Date(Date(1996, 1, 2), Date(2012, 4, 5));
interval1.shift(dur!"days"(50));
assert(interval1 == Interval!Date(Date(1996, 2, 21), Date(2012, 5, 25)));
interval2.shift(dur!"days"(-50));
assert(interval2 == Interval!Date(Date(1995, 11, 13), Date(2012, 2, 15)));
```
void **shift**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (isIntegral!T);
Shifts the interval forward or backward in time by the given number of years and/or months (a positive number of years and months shifts the interval forward; a negative number shifts it backward). It adds the given years and months to both `begin` and `end`, effectively calling `add!"years"()` and then `add!"months"()` on `begin` and `end` with the given number of years and months.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to shift the interval by. |
| T `months` | The number of months to shift the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `begin` and `end`, causing their month to increment. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
auto interval2 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
interval1.shift(2);
assert(interval1 == Interval!Date(Date(1998, 1, 2), Date(2014, 3, 1)));
interval2.shift(-2);
assert(interval2 == Interval!Date(Date(1994, 1, 2), Date(2010, 3, 1)));
```
pure void **expand**(D)(D duration, Direction dir = Direction.both)
Constraints: if (\_\_traits(compiles, begin + duration));
Expands the interval forwards and/or backwards in time. Effectively, it does `begin -= duration` and/or `end += duration`. Whether it expands forwards and/or backwards in time is determined by dir.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to expand the interval by. |
| Direction `dir` | The direction in time to expand the interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
auto interval2 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
interval1.expand(dur!"days"(2));
assert(interval1 == Interval!Date(Date(1995, 12, 31), Date(2012, 3, 3)));
interval2.expand(dur!"days"(-2));
assert(interval2 == Interval!Date(Date(1996, 1, 4), Date(2012, 2, 28)));
```
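And a sketch of one-directional expansion, assuming `Direction.fwd` only moves `end` (per the description above):
```
auto interval = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
interval.expand(dur!"days"(2), Direction.fwd); // begin is left untouched
assert(interval == Interval!Date(Date(1996, 1, 2), Date(2012, 3, 3)));
```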
void **expand**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes, Direction dir = Direction.both)
Constraints: if (isIntegral!T);
Expands the interval forwards and/or backwards in time. Effectively, it subtracts the given number of months/years from `begin` and adds them to `end`. Whether it expands forwards and/or backwards in time is determined by dir.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to expand the interval by. |
| T `months` | The number of months to expand the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `begin` and `end`, causing their month to increment. |
| Direction `dir` | The direction in time to expand the interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
auto interval2 = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
interval1.expand(2);
assert(interval1 == Interval!Date(Date(1994, 1, 2), Date(2014, 3, 1)));
interval2.expand(-2);
assert(interval2 == Interval!Date(Date(1998, 1, 2), Date(2010, 3, 1)));
```
const IntervalRange!(TP, Direction.fwd) **fwdRange**(TP delegate(scope const TP) func, PopFirst popFirst = PopFirst.no);
Returns a range which iterates forward over the interval, starting at `begin`, using func to generate each successive time point.
The range's `front` is the interval's `begin`. func is used to generate the next `front` when `popFront` is called. If popFirst is `PopFirst.yes`, then `popFront` is called before the range is returned (so that `front` is a time point which func would generate).
If func ever generates a time point less than or equal to the current `front` of the range, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. The range will be empty and iteration complete when func generates a time point equal to or beyond the `end` of the interval.
There are helper functions in this module which generate common delegates to pass to `fwdRange`. Their documentation starts with "Range-generating function," making them easily searchable.
Parameters:
| | |
| --- | --- |
| TP delegate(scope const TP) `func` | The function used to generate the time points of the range over the interval. |
| PopFirst `popFirst` | Whether `popFront` should be called on the range before returning it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Warning
func must be logically pure. Ideally, func would be a function pointer to a pure function, but forcing func to be pure is far too restrictive to be useful, and in order to keep the convenience of functions which generate delegates to pass to `fwdRange`, func must be a delegate.
If func retains state which changes as it is called, then some algorithms will not work correctly, because the range's `save` will not have truly saved the range's state. To avoid such bugs, don't pass a delegate which is not logically pure to `fwdRange`. If func is given the same time point in two different calls, it must return the same result both times. Of course, none of the functions in this module have this problem, so it's only relevant when creating a custom delegate.
Example
```
auto interval = Interval!Date(Date(2010, 9, 1), Date(2010, 9, 9));
auto func = delegate (scope const Date date) // For iterating over even-numbered days.
{
if ((date.day & 1) == 0)
return date + dur!"days"(2);
return date + dur!"days"(1);
};
auto range = interval.fwdRange(func);
// An odd day. Using PopFirst.yes would have made this Date(2010, 9, 2).
assert(range.front == Date(2010, 9, 1));
range.popFront();
assert(range.front == Date(2010, 9, 2));
range.popFront();
assert(range.front == Date(2010, 9, 4));
range.popFront();
assert(range.front == Date(2010, 9, 6));
range.popFront();
assert(range.front == Date(2010, 9, 8));
range.popFront();
assert(range.empty);
```
const IntervalRange!(TP, Direction.bwd) **bwdRange**(TP delegate(scope const TP) func, PopFirst popFirst = PopFirst.no);
Returns a range which iterates backwards over the interval, starting at `end`, using func to generate each successive time point.
The range's `front` is the interval's `end`. func is used to generate the next `front` when `popFront` is called. If popFirst is `PopFirst.yes`, then `popFront` is called before the range is returned (so that `front` is a time point which func would generate).
If func ever generates a time point greater than or equal to the current `front` of the range, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. The range will be empty and iteration complete when func generates a time point equal to or less than the `begin` of the interval.
There are helper functions in this module which generate common delegates to pass to `bwdRange`. Their documentation starts with "Range-generating function," making them easily searchable.
Parameters:
| | |
| --- | --- |
| TP delegate(scope const TP) `func` | The function used to generate the time points of the range over the interval. |
| PopFirst `popFirst` | Whether `popFront` should be called on the range before returning it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Warning
func must be logically pure. Ideally, func would be a function pointer to a pure function, but forcing func to be pure is far too restrictive to be useful, and in order to keep the convenience of functions which generate delegates to pass to `bwdRange`, func must be a delegate.
If func retains state which changes as it is called, then some algorithms will not work correctly, because the range's `save` will not have truly saved the range's state. To avoid such bugs, don't pass a delegate which is not logically pure to `bwdRange`. If func is given the same time point in two different calls, it must return the same result both times. Of course, none of the functions in this module have this problem, so it's only relevant for custom delegates.
Example
```
auto interval = Interval!Date(Date(2010, 9, 1), Date(2010, 9, 9));
auto func = delegate (scope const Date date) // For iterating over even-numbered days.
{
if ((date.day & 1) == 0)
return date - dur!"days"(2);
return date - dur!"days"(1);
};
auto range = interval.bwdRange(func);
// An odd day. Using PopFirst.yes would have made this Date(2010, 9, 8).
assert(range.front == Date(2010, 9, 9));
range.popFront();
assert(range.front == Date(2010, 9, 8));
range.popFront();
assert(range.front == Date(2010, 9, 6));
range.popFront();
assert(range.front == Date(2010, 9, 4));
range.popFront();
assert(range.front == Date(2010, 9, 2));
range.popFront();
assert(range.empty);
```
const nothrow @safe string **toString**();
const void **toString**(Writer)(ref Writer w)
Constraints: if (isOutputRange!(Writer, char));
Converts this interval to a string.
Parameters:
| | |
| --- | --- |
| Writer `w` | A `char`-accepting [output range](std_range_primitives#isOutputRange). |
Returns:
A `string` when not using an output range; `void` otherwise.
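A minimal sketch of both forms, assuming `Date`'s default rendering (e.g. `1996-Jan-02`):
```
import std.array : appender;

auto interval = Interval!Date(Date(1996, 1, 2), Date(2012, 3, 1));
assert(interval.toString() == "[1996-Jan-02 - 2012-Mar-01)");

// The output-range overload writes the same text without allocating a string.
auto w = appender!string();
interval.toString(w);
assert(w.data == interval.toString());
```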
struct **PosInfInterval**(TP);
Represents an interval of time which has positive infinity as its end point.
Any ranges which iterate over a `PosInfInterval` are infinite. So, the main purpose of using `PosInfInterval` is to create an infinite range which starts at a fixed point in time and goes to positive infinity.
pure nothrow this(scope const TP begin);
Parameters:
| | |
| --- | --- |
| TP `begin` | The time point which begins the interval. |
Example
```
auto interval = PosInfInterval!Date(Date(1996, 1, 2));
```
pure nothrow ref PosInfInterval **opAssign**(ref const PosInfInterval rhs);
Parameters:
| | |
| --- | --- |
| PosInfInterval `rhs` | The `PosInfInterval` to assign to this one. |
pure nothrow ref PosInfInterval **opAssign**(PosInfInterval rhs);
Parameters:
| | |
| --- | --- |
| PosInfInterval `rhs` | The `PosInfInterval` to assign to this one. |
const pure nothrow @property TP **begin**();
The starting point of the interval. It is included in the interval.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).begin == Date(1996, 1, 2));
```
pure nothrow @property void **begin**(TP timePoint);
The starting point of the interval. It is included in the interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to set `begin` to. |
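A minimal usage sketch for the setter:
```
auto interval = PosInfInterval!Date(Date(1996, 1, 2));
interval.begin = Date(2000, 1, 1);
assert(interval.begin == Date(2000, 1, 1));
```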
enum bool **empty**;
Whether the interval's length is 0. Always `false`, since the interval is infinite.
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).empty);
```
const pure nothrow bool **contains**(TP timePoint);
Whether the given time point is within this interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check for inclusion in this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).contains(Date(1994, 12, 24)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).contains(Date(2000, 1, 5)));
```
const pure bool **contains**(scope const Interval!TP interval);
Whether the given interval is completely within this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).contains(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).contains(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).contains(
Interval!Date(Date(1998, 2, 28), Date(2013, 5, 1))));
```
const pure nothrow bool **contains**(scope const PosInfInterval interval);
Whether the given interval is completely within this interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to check for inclusion in this interval. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).contains(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).contains(
PosInfInterval!Date(Date(1995, 7, 2))));
```
const pure nothrow bool **contains**(scope const NegInfInterval!TP interval);
Whether the given interval is completely within this interval.
Always returns false because an interval going to positive infinity can never contain an interval beginning at negative infinity.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check for inclusion in this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).contains(
NegInfInterval!Date(Date(1996, 5, 4))));
```
const pure nothrow bool **isBefore**(scope const TP timePoint);
Whether this interval is before the given time point.
Always returns false because an interval going to positive infinity can never be before any time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is before it. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(Date(1994, 12, 24)));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(Date(2000, 1, 5)));
```
const pure bool **isBefore**(scope const Interval!TP interval);
Whether this interval is before the given interval and does not intersect it.
Always returns false, because an interval going to positive infinity can never be before any other interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
```
const pure nothrow bool **isBefore**(scope const PosInfInterval interval);
Whether this interval is before the given interval and does not intersect it.
Always returns false because an interval going to positive infinity can never be before any other interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to check against this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(
PosInfInterval!Date(Date(1992, 5, 4))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(
PosInfInterval!Date(Date(2013, 3, 7))));
```
const pure nothrow bool **isBefore**(scope const NegInfInterval!TP interval);
Whether this interval is before the given interval and does not intersect it.
Always returns false because an interval going to positive infinity can never be before any other interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check against this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isBefore(
NegInfInterval!Date(Date(1996, 5, 4))));
```
const pure nothrow bool **isAfter**(scope const TP timePoint);
Whether this interval is after the given time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is after it. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).isAfter(Date(1994, 12, 24)));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(Date(2000, 1, 5)));
```
const pure bool **isAfter**(scope const Interval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
Interval!Date(Date(1989, 3, 1), Date(1996, 1, 2))));
```
const pure nothrow bool **isAfter**(scope const PosInfInterval interval);
Whether this interval is after the given interval and does not intersect it.
Always returns false because an interval going to positive infinity can never be after another interval going to positive infinity.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to check against this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
PosInfInterval!Date(Date(1990, 1, 7))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
PosInfInterval!Date(Date(1999, 5, 4))));
```
const pure nothrow bool **isAfter**(scope const NegInfInterval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check against this interval. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
NegInfInterval!Date(Date(1996, 1, 2))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAfter(
NegInfInterval!Date(Date(2000, 7, 1))));
```
const pure bool **intersects**(scope const Interval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check for intersection with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersects(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersects(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).intersects(
Interval!Date(Date(1989, 3, 1), Date(1996, 1, 2))));
```
const pure nothrow bool **intersects**(scope const PosInfInterval interval);
Whether the given interval overlaps this interval.
Always returns true because two intervals going to positive infinity always overlap.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to check for intersection with this interval. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersects(
PosInfInterval!Date(Date(1990, 1, 7))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersects(
PosInfInterval!Date(Date(1999, 5, 4))));
```
const pure nothrow bool **intersects**(scope const NegInfInterval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check for intersection with this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).intersects(
NegInfInterval!Date(Date(1996, 1, 2))));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersects(
NegInfInterval!Date(Date(2000, 7, 1))));
```
const Interval!TP **intersection**(scope const Interval!TP interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect or if the given interval is empty.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
Interval!Date(Date(1996, 1, 2), Date(2000, 8, 2)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))) ==
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17)));
```
const pure nothrow PosInfInterval **intersection**(scope const PosInfInterval interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to intersect with this interval. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
PosInfInterval!Date(Date(1990, 7, 6))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
PosInfInterval!Date(Date(1999, 1, 12))) ==
PosInfInterval!Date(Date(1999, 1, 12)));
```
const Interval!TP **intersection**(scope const NegInfInterval!TP interval);
Returns the intersection of two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
NegInfInterval!Date(Date(1999, 7, 6))) ==
Interval!Date(Date(1996, 1, 2), Date(1999, 7, 6)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).intersection(
NegInfInterval!Date(Date(2013, 1, 12))) ==
Interval!Date(Date(1996, 1, 2), Date(2013, 1, 12)));
```
const pure bool **isAdjacent**(scope const Interval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check whether it's adjacent to this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).isAdjacent(
Interval!Date(Date(1989, 3, 1), Date(1996, 1, 2))));
assert(!PosInfInterval!Date(Date(1999, 1, 12)).isAdjacent(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
```
const pure nothrow bool **isAdjacent**(scope const PosInfInterval interval);
Whether the given interval is adjacent to this interval.
Always returns false because two intervals going to positive infinity can never be adjacent to one another.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to check whether it's adjacent to this interval. |
Example
```
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAdjacent(
PosInfInterval!Date(Date(1990, 1, 7))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAdjacent(
PosInfInterval!Date(Date(1996, 1, 2))));
```
const pure nothrow bool **isAdjacent**(scope const NegInfInterval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check whether it's adjacent to this interval. |
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).isAdjacent(
NegInfInterval!Date(Date(1996, 1, 2))));
assert(!PosInfInterval!Date(Date(1996, 1, 2)).isAdjacent(
NegInfInterval!Date(Date(2000, 7, 1))));
```
const PosInfInterval **merge**(scope const Interval!TP interval);
Returns the union of two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to merge with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect and are not adjacent or if the given interval is empty.
Note
There is no overload for `merge` which takes a `NegInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).merge(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).merge(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
const pure nothrow PosInfInterval **merge**(scope const PosInfInterval interval);
Returns the union of two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to merge with this interval. |
Note
There is no overload for `merge` which takes a `NegInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).merge(
PosInfInterval!Date(Date(1990, 7, 6))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).merge(
PosInfInterval!Date(Date(1999, 1, 12))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
const pure PosInfInterval **span**(scope const Interval!TP interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to create a span together with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Note
There is no overload for `span` which takes a `NegInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).span(
Interval!Date(Date(500, 8, 9), Date(1602, 1, 31))) ==
PosInfInterval!Date(Date(500, 8, 9)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).span(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).span(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
const pure nothrow PosInfInterval **span**(scope const PosInfInterval interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval `interval` | The interval to create a span together with this interval. |
Note
There is no overload for `span` which takes a `NegInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(PosInfInterval!Date(Date(1996, 1, 2)).span(
PosInfInterval!Date(Date(1990, 7, 6))) ==
PosInfInterval!Date(Date(1990, 7, 6)));
assert(PosInfInterval!Date(Date(1996, 1, 2)).span(
PosInfInterval!Date(Date(1999, 1, 12))) ==
PosInfInterval!Date(Date(1996, 1, 2)));
```
pure nothrow void **shift**(D)(D duration)
Constraints: if (\_\_traits(compiles, begin + duration));
Shifts the `begin` of this interval forward or backward in time by the given duration (a positive duration shifts the interval forward; a negative duration shifts it backward). Effectively, it does `begin += duration`.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to shift the interval by. |
Example
```
auto interval1 = PosInfInterval!Date(Date(1996, 1, 2));
auto interval2 = PosInfInterval!Date(Date(1996, 1, 2));
interval1.shift(dur!"days"(50));
assert(interval1 == PosInfInterval!Date(Date(1996, 2, 21)));
interval2.shift(dur!"days"(-50));
assert(interval2 == PosInfInterval!Date(Date(1995, 11, 13)));
```
void **shift**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (isIntegral!T);
Shifts the `begin` of this interval forward or backward in time by the given number of years and/or months (a positive number of years and months shifts the interval forward; a negative number shifts it backward). It adds the given years and months to `begin`, effectively calling `add!"years"()` and then `add!"months"()` on `begin` with the given number of years and months.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to shift the interval by. |
| T `months` | The number of months to shift the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `begin`, causing its month to increment. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = PosInfInterval!Date(Date(1996, 1, 2));
auto interval2 = PosInfInterval!Date(Date(1996, 1, 2));
interval1.shift(dur!"days"(50));
assert(interval1 == PosInfInterval!Date(Date(1996, 2, 21)));
interval2.shift(dur!"days"(-50));
assert(interval2 == PosInfInterval!Date(Date(1995, 11, 13)));
```
pure nothrow void **expand**(D)(D duration)
Constraints: if (\_\_traits(compiles, begin + duration));
Expands the interval backwards in time. Effectively, it does `begin -= duration`.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to expand the interval by. |
Example
```
auto interval1 = PosInfInterval!Date(Date(1996, 1, 2));
auto interval2 = PosInfInterval!Date(Date(1996, 1, 2));
interval1.expand(dur!"days"(2));
assert(interval1 == PosInfInterval!Date(Date(1995, 12, 31)));
interval2.expand(dur!"days"(-2));
assert(interval2 == PosInfInterval!Date(Date(1996, 1, 4)));
```
void **expand**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (isIntegral!T);
Expands the interval backwards in time. Effectively, it subtracts the given number of months/years from `begin`.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to expand the interval by. |
| T `months` | The number of months to expand the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `begin`, causing its month to increment. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty or if the resulting interval would be invalid.
Example
```
auto interval1 = PosInfInterval!Date(Date(1996, 1, 2));
auto interval2 = PosInfInterval!Date(Date(1996, 1, 2));
interval1.expand(2);
assert(interval1 == PosInfInterval!Date(Date(1994, 1, 2)));
interval2.expand(-2);
assert(interval2 == PosInfInterval!Date(Date(1998, 1, 2)));
```
const PosInfIntervalRange!TP **fwdRange**(TP delegate(scope const TP) func, PopFirst popFirst = PopFirst.no);
Returns a range which iterates forward over the interval, starting at `begin`, using func to generate each successive time point.
The range's `front` is the interval's `begin`. func is used to generate the next `front` when `popFront` is called. If popFirst is `PopFirst.yes`, then `popFront` is called before the range is returned (so that `front` is a time point which func would generate).
If func ever generates a time point less than or equal to the current `front` of the range, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown.
There are helper functions in this module which generate common delegates to pass to `fwdRange`. Their documentation starts with "Range-generating function," to make them easily searchable.
Parameters:
| | |
| --- | --- |
| TP delegate(scope const TP) `func` | The function used to generate the time points of the range over the interval. |
| PopFirst `popFirst` | Whether `popFront` should be called on the range before returning it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Warning
func must be logically pure. Ideally, func would be a function pointer to a pure function, but forcing func to be pure is far too restrictive to be useful, and in order to keep the convenience of functions which generate delegates to pass to `fwdRange`, func must be a delegate.
If func retains state which changes as it is called, then some algorithms will not work correctly, because the range's `save` will not have truly saved the range's state. To avoid such bugs, don't pass a delegate which is not logically pure to `fwdRange`. If func is given the same time point in two different calls, it must return the same result both times. Of course, none of the functions in this module have this problem, so it's only relevant for custom delegates.
Example
```
auto interval = PosInfInterval!Date(Date(2010, 9, 1));
auto func = delegate (scope const Date date) //For iterating over even-numbered days.
{
if ((date.day & 1) == 0)
return date + dur!"days"(2);
return date + dur!"days"(1);
};
auto range = interval.fwdRange(func);
//An odd day. Using PopFirst.yes would have made this Date(2010, 9, 2).
assert(range.front == Date(2010, 9, 1));
range.popFront();
assert(range.front == Date(2010, 9, 2));
range.popFront();
assert(range.front == Date(2010, 9, 4));
range.popFront();
assert(range.front == Date(2010, 9, 6));
range.popFront();
assert(range.front == Date(2010, 9, 8));
range.popFront();
assert(!range.empty);
```
const nothrow string **toString**();
Converts this interval to a string.
struct **NegInfInterval**(TP);
Represents an interval of time which has negative infinity as its starting point.
Any ranges which iterate over a `NegInfInterval` are infinite. So, the main purpose of using `NegInfInterval` is to create an infinite range which starts at negative infinity and goes to a fixed end point. Iterate over it in reverse.
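For instance, a minimal sketch with a hypothetical one-day-step delegate, iterating backward from the end point via `bwdRange` (documented below):
```
auto interval = NegInfInterval!Date(Date(2010, 9, 9));
auto daily = delegate (scope const Date date) { return date - dur!"days"(1); };

// front starts at end; PopFirst.yes pops once so front is a generated point.
auto range = interval.bwdRange(daily, PopFirst.yes);
assert(range.front == Date(2010, 9, 8));
range.popFront();
assert(range.front == Date(2010, 9, 7));
```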
pure nothrow this(scope const TP end);
Parameters:
| | |
| --- | --- |
| TP `end` | The time point which ends the interval. |
Example
```
auto interval = NegInfInterval!Date(Date(1996, 1, 2));
```
pure nothrow ref NegInfInterval **opAssign**(ref const NegInfInterval rhs);
Parameters:
| | |
| --- | --- |
| NegInfInterval `rhs` | The `NegInfInterval` to assign to this one. |
pure nothrow ref NegInfInterval **opAssign**(NegInfInterval rhs);
Parameters:
| | |
| --- | --- |
| NegInfInterval `rhs` | The `NegInfInterval` to assign to this one. |
const pure nothrow @property TP **end**();
The end point of the interval. It is excluded from the interval.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).end == Date(2012, 3, 1));
```
pure nothrow @property void **end**(TP timePoint);
The end point of the interval. It is excluded from the interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to set `end` to. |
enum bool **empty**;
Whether the interval's length is 0. Always `false`, since the interval is infinite.
Example
```
assert(!NegInfInterval!Date(Date(1996, 1, 2)).empty);
```
const pure nothrow bool **contains**(TP timePoint);
Whether the given time point is within this interval.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check for inclusion in this interval. |
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).contains(Date(1994, 12, 24)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).contains(Date(2000, 1, 5)));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).contains(Date(2012, 3, 1)));
```
const pure bool **contains**(scope const Interval!TP interval);
Whether the given interval is completely within this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check for inclusion in this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).contains(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).contains(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).contains(
Interval!Date(Date(1998, 2, 28), Date(2013, 5, 1))));
```
const pure nothrow bool **contains**(scope const PosInfInterval!TP interval);
Whether the given interval is completely within this interval.
Always returns false because an interval beginning at negative infinity can never contain an interval going to positive infinity.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check for inclusion in this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).contains(
PosInfInterval!Date(Date(1999, 5, 4))));
```
const pure nothrow bool **contains**(scope const NegInfInterval interval);
Whether the given interval is completely within this interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to check for inclusion in this interval. |
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).contains(
NegInfInterval!Date(Date(1996, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).contains(
NegInfInterval!Date(Date(2013, 7, 9))));
```
const pure nothrow bool **isBefore**(scope const TP timePoint);
Whether this interval is before the given time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is before it. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(Date(1994, 12, 24)));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(Date(2000, 1, 5)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).isBefore(Date(2012, 3, 1)));
```
const pure bool **isBefore**(scope const Interval!TP interval);
Whether this interval is before the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
Interval!Date(Date(2022, 10, 19), Date(2027, 6, 3))));
```
const pure nothrow bool **isBefore**(scope const PosInfInterval!TP interval);
Whether this interval is before the given interval and does not intersect it.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check against this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure nothrow bool **isBefore**(scope const NegInfInterval interval);
Whether this interval is before the given interval and does not intersect it.
Always returns false because an interval beginning at negative infinity can never be before another interval beginning at negative infinity.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to check against this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
NegInfInterval!Date(Date(1996, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isBefore(
NegInfInterval!Date(Date(2013, 7, 9))));
```
const pure nothrow bool **isAfter**(scope const TP timePoint);
Whether this interval is after the given time point.
Always returns false because an interval beginning at negative infinity can never be after any time point.
Parameters:
| | |
| --- | --- |
| TP `timePoint` | The time point to check whether this interval is after it. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(Date(1994, 12, 24)));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(Date(2000, 1, 5)));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(Date(2012, 3, 1)));
```
const pure bool **isAfter**(scope const Interval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Always returns false because an interval beginning at negative infinity can never be after any other interval. (If the given interval is empty, a `DateTimeException` is thrown instead.)
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check against this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
Interval!Date(Date(2022, 10, 19), Date(2027, 6, 3))));
```
const pure nothrow bool **isAfter**(scope const PosInfInterval!TP interval);
Whether this interval is after the given interval and does not intersect it.
Always returns false because an interval beginning at negative infinity can never be after any other interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check against this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure nothrow bool **isAfter**(scope const NegInfInterval interval);
Whether this interval is after the given interval and does not intersect it.
Always returns false because an interval beginning at negative infinity can never be after any other interval.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to check against this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
NegInfInterval!Date(Date(1996, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAfter(
NegInfInterval!Date(Date(2013, 7, 9))));
```
const pure bool **intersects**(scope const Interval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check for intersection with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersects(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersects(
Interval!Date(Date(1999, 1, 12), Date(2011, 9, 17))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).intersects(
Interval!Date(Date(2022, 10, 19), Date(2027, 6, 3))));
```
const pure nothrow bool **intersects**(scope const PosInfInterval!TP interval);
Whether the given interval overlaps this interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check for intersection with this interval. |
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersects(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).intersects(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure nothrow bool **intersects**(scope const NegInfInterval!TP interval);
Whether the given interval overlaps this interval.
Always returns true because two intervals beginning at negative infinity always overlap.
Parameters:
| | |
| --- | --- |
| NegInfInterval!TP `interval` | The interval to check for intersection with this interval. |
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersects(
NegInfInterval!Date(Date(1996, 5, 4))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersects(
NegInfInterval!Date(Date(2013, 7, 9))));
```
const Interval!TP **intersection**(scope const Interval!TP interval);
Returns the intersection of the two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect or if the given interval is empty.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
Interval!Date(Date(1999, 1, 12), Date(2015, 9, 2))) ==
Interval!Date(Date(1999, 1, 12), Date(2012, 3, 1)));
```
const Interval!TP **intersection**(scope const PosInfInterval!TP interval);
Returns the intersection of the two intervals.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to intersect with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
PosInfInterval!Date(Date(1990, 7, 6))) ==
Interval!Date(Date(1990, 7, 6), Date(2012, 3, 1)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
PosInfInterval!Date(Date(1999, 1, 12))) ==
Interval!Date(Date(1999, 1, 12), Date(2012, 3, 1)));
```
const nothrow NegInfInterval **intersection**(scope const NegInfInterval interval);
Returns the intersection of the two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to intersect with this interval. |
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
NegInfInterval!Date(Date(1999, 7, 6))) ==
NegInfInterval!Date(Date(1999, 7, 6)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).intersection(
NegInfInterval!Date(Date(2013, 1, 12))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
```
const pure bool **isAdjacent**(scope const Interval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to check for adjacency to this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(1999, 1, 12), Date(2012, 3, 1))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(2012, 3, 1), Date(2019, 2, 2))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
Interval!Date(Date(2022, 10, 19), Date(2027, 6, 3))));
```
const pure nothrow bool **isAdjacent**(scope const PosInfInterval!TP interval);
Whether the given interval is adjacent to this interval.
Parameters:
| | |
| --- | --- |
| PosInfInterval!TP `interval` | The interval to check for adjacency to this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
PosInfInterval!Date(Date(1999, 5, 4))));
assert(NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
PosInfInterval!Date(Date(2012, 3, 1))));
```
const pure nothrow bool **isAdjacent**(scope const NegInfInterval interval);
Whether the given interval is adjacent to this interval.
Always returns false because two intervals beginning at negative infinity can never be adjacent to one another.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to check for adjacency to this interval. |
Example
```
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
NegInfInterval!Date(Date(1996, 5, 4))));
assert(!NegInfInterval!Date(Date(2012, 3, 1)).isAdjacent(
NegInfInterval!Date(Date(2012, 3, 1))));
```
const NegInfInterval **merge**(scope const Interval!TP interval);
Returns the union of the two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to merge with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the two intervals do not intersect and are not adjacent or if the given interval is empty.
Note
There is no overload for `merge` which takes a `PosInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).merge(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).merge(
Interval!Date(Date(1999, 1, 12), Date(2015, 9, 2))) ==
NegInfInterval!Date(Date(2015, 9, 2)));
```
const pure nothrow NegInfInterval **merge**(scope const NegInfInterval interval);
Returns the union of the two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to merge with this interval. |
Note
There is no overload for `merge` which takes a `PosInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).merge(
NegInfInterval!Date(Date(1999, 7, 6))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).merge(
NegInfInterval!Date(Date(2013, 1, 12))) ==
NegInfInterval!Date(Date(2013, 1, 12)));
```
const pure NegInfInterval **span**(scope const Interval!TP interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| Interval!TP `interval` | The interval to create a span together with this interval. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the given interval is empty.
Note
There is no overload for `span` which takes a `PosInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).span(
Interval!Date(Date(1990, 7, 6), Date(2000, 8, 2))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).span(
Interval!Date(Date(1999, 1, 12), Date(2015, 9, 2))) ==
NegInfInterval!Date(Date(2015, 9, 2)));
assert(NegInfInterval!Date(Date(1600, 1, 7)).span(
Interval!Date(Date(2012, 3, 11), Date(2017, 7, 1))) ==
NegInfInterval!Date(Date(2017, 7, 1)));
```
const pure nothrow NegInfInterval **span**(scope const NegInfInterval interval);
Returns an interval that covers from the earliest time point of the two intervals up to (but not including) the latest time point of the two intervals.
Parameters:
| | |
| --- | --- |
| NegInfInterval `interval` | The interval to create a span together with this interval. |
Note
There is no overload for `span` which takes a `PosInfInterval`, because an interval going from negative infinity to positive infinity is not possible.
Example
```
assert(NegInfInterval!Date(Date(2012, 3, 1)).span(
NegInfInterval!Date(Date(1999, 7, 6))) ==
NegInfInterval!Date(Date(2012, 3, 1)));
assert(NegInfInterval!Date(Date(2012, 3, 1)).span(
NegInfInterval!Date(Date(2013, 1, 12))) ==
NegInfInterval!Date(Date(2013, 1, 12)));
```
pure nothrow void **shift**(D)(D duration)
Constraints: if (\_\_traits(compiles, end + duration));
Shifts the `end` of this interval forward or backwards in time by the given duration (a positive duration shifts the interval forward; a negative duration shifts it backward). Effectively, it does `end += duration`.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to shift the interval by. |
Example
```
auto interval1 = NegInfInterval!Date(Date(2012, 4, 5));
auto interval2 = NegInfInterval!Date(Date(2012, 4, 5));
interval1.shift(dur!"days"(50));
assert(interval1 == NegInfInterval!Date(Date(2012, 5, 25)));
interval2.shift(dur!"days"(-50));
assert(interval2 == NegInfInterval!Date(Date(2012, 2, 15)));
```
void **shift**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (isIntegral!T);
Shifts the `end` of this interval forward or backwards in time by the given number of years and/or months (a positive number of years and months shifts the interval forward; a negative number shifts it backward). It adds the given years and months to `end`, effectively calling `add!"years"()` and then `add!"months"()` on `end` with the given numbers.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to shift the interval by. |
| T `months` | The number of months to shift the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `end`, causing its month to increment. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if empty is true or if the resulting interval would be invalid.
Example
```
auto interval1 = NegInfInterval!Date(Date(2012, 3, 1));
auto interval2 = NegInfInterval!Date(Date(2012, 3, 1));
interval1.shift(2);
assert(interval1 == NegInfInterval!Date(Date(2014, 3, 1)));
interval2.shift(-2);
assert(interval2 == NegInfInterval!Date(Date(2010, 3, 1)));
```
pure nothrow void **expand**(D)(D duration)
Constraints: if (\_\_traits(compiles, end + duration));
Expands the interval forwards in time. Effectively, it does `end += duration`.
Parameters:
| | |
| --- | --- |
| D `duration` | The duration to expand the interval by. |
Example
```
auto interval1 = NegInfInterval!Date(Date(2012, 3, 1));
auto interval2 = NegInfInterval!Date(Date(2012, 3, 1));
interval1.expand(dur!"days"(2));
assert(interval1 == NegInfInterval!Date(Date(2012, 3, 3)));
interval2.expand(dur!"days"(-2));
assert(interval2 == NegInfInterval!Date(Date(2012, 2, 28)));
```
void **expand**(T)(T years, T months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes)
Constraints: if (isIntegral!T);
Expands the interval forwards and/or backwards in time. Effectively, it adds the given number of months/years to end.
Parameters:
| | |
| --- | --- |
| T `years` | The number of years to expand the interval by. |
| T `months` | The number of months to expand the interval by. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `end`, causing its month to increment. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if empty is true or if the resulting interval would be invalid.
Example
```
auto interval1 = NegInfInterval!Date(Date(2012, 3, 1));
auto interval2 = NegInfInterval!Date(Date(2012, 3, 1));
interval1.expand(2);
assert(interval1 == NegInfInterval!Date(Date(2014, 3, 1)));
interval2.expand(-2);
assert(interval2 == NegInfInterval!Date(Date(2010, 3, 1)));
```
const NegInfIntervalRange!TP **bwdRange**(TP delegate(scope const TP) func, PopFirst popFirst = PopFirst.no);
Returns a range which iterates backwards over the interval, starting at `end`, using func to generate each successive time point.
The range's `front` is the interval's `end`. func is used to generate the next `front` when `popFront` is called. If popFirst is `PopFirst.yes`, then `popFront` is called before the range is returned (so that `front` is a time point which func would generate).
If func ever generates a time point greater than or equal to the current `front` of the range, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown.
There are helper functions in this module which generate common delegates to pass to `bwdRange`. Their documentation starts with "Range-generating function," to make them easily searchable.
Parameters:
| | |
| --- | --- |
| TP delegate(scope const TP) `func` | The function used to generate the time points of the range over the interval. |
| PopFirst `popFirst` | Whether `popFront` should be called on the range before returning it. |
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if this interval is empty.
Warning
func must be logically pure. Ideally, func would be a function pointer to a pure function, but forcing func to be pure is far too restrictive to be useful, and in order to have the ease of use of having functions which generate functions to pass to `bwdRange`, func must be a delegate.
If func retains state which changes as it is called, then some algorithms will not work correctly, because the range's `save` will not have truly saved the range's state. To avoid such bugs, don't pass a delegate which is not logically pure to `bwdRange`. If func is given the same time point in two different calls, it must return the same result both times. Of course, none of the functions in this module have this problem, so it's only relevant for custom delegates.
Example
```
auto interval = NegInfInterval!Date(Date(2010, 9, 9));
auto func = delegate (scope const Date date) //For iterating over even-numbered days.
{
if ((date.day & 1) == 0)
return date - dur!"days"(2);
return date - dur!"days"(1);
};
auto range = interval.bwdRange(func);
assert(range.front == Date(2010, 9, 9)); //An odd day. Using PopFirst.yes would have made this Date(2010, 9, 8).
range.popFront();
assert(range.front == Date(2010, 9, 8));
range.popFront();
assert(range.front == Date(2010, 9, 6));
range.popFront();
assert(range.front == Date(2010, 9, 4));
range.popFront();
assert(range.front == Date(2010, 9, 2));
range.popFront();
assert(!range.empty);
```
const nothrow string **toString**();
Converts this interval to a string.
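For instance (the exact rendering in the comment is an assumption for illustration; the string represents a half-open range from negative infinity up to `end`):
```
auto interval = NegInfInterval!Date(Date(2012, 3, 1));
// The trailing ')' reflects that end is excluded from the interval.
writeln(interval.toString()); // e.g. [-∞ - 2012-Mar-01)
```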
nothrow TP delegate(scope const TP) **everyDayOfWeek**(TP, Direction dir = Direction.fwd)(DayOfWeek dayOfWeek)
Constraints: if (isTimePoint!TP && (dir == Direction.fwd || dir == Direction.bwd) && \_\_traits(hasMember, TP, "dayOfWeek") && !\_\_traits(isStaticFunction, TP.dayOfWeek) && is(typeof(TP.dayOfWeek) == DayOfWeek));
Range-generating function.
Returns a delegate which returns the next time point with the given `DayOfWeek` in a range.
Using this delegate allows iteration over successive time points which are all the same day of the week. e.g. passing `DayOfWeek.mon` to `everyDayOfWeek` would result in a delegate which could be used to iterate over all of the Mondays in a range.
Parameters:
| | |
| --- | --- |
| dir | The direction to iterate in. If passing the return value to `fwdRange`, use `Direction.fwd`. If passing it to `bwdRange`, use `Direction.bwd`. |
| DayOfWeek `dayOfWeek` | The day of the week that each time point in the range will be. |
Examples:
```
import std.datetime.date : Date, DayOfWeek;
auto interval = Interval!Date(Date(2010, 9, 2), Date(2010, 9, 27));
auto func = everyDayOfWeek!Date(DayOfWeek.mon);
auto range = interval.fwdRange(func);
// A Thursday. Using PopFirst.yes would have made this Date(2010, 9, 6).
writeln(range.front); // Date(2010, 9, 2)
range.popFront();
writeln(range.front); // Date(2010, 9, 6)
range.popFront();
writeln(range.front); // Date(2010, 9, 13)
range.popFront();
writeln(range.front); // Date(2010, 9, 20)
range.popFront();
assert(range.empty);
```
TP delegate(scope const TP) **everyMonth**(TP, Direction dir = Direction.fwd)(int month)
Constraints: if (isTimePoint!TP && (dir == Direction.fwd || dir == Direction.bwd) && \_\_traits(hasMember, TP, "month") && !\_\_traits(isStaticFunction, TP.month) && is(typeof(TP.month) == Month));
Range-generating function.
Returns a delegate which returns the next time point with the given month which would be reached by adding months to the given time point.
So, using this delegate allows iteration over successive time points which are in the same month but in different years. For example, to iterate over each successive December 25th in an interval, start with a date whose day is the 25th and pass `Month.dec` to `everyMonth` to create the delegate.
Since it would not make sense to iterate over a specific month and end up with some of the time points in the succeeding month, or two years after the previous time point, `AllowDayOverflow.no` is always used when calculating the next time point.
Parameters:
| | |
| --- | --- |
| dir | The direction to iterate in. If passing the return value to `fwdRange`, use `Direction.fwd`. If passing it to `bwdRange`, use `Direction.bwd`. |
| int `month` | The month that each time point in the range will be in (January is 1). |
Examples:
```
import std.datetime.date : Date, Month;
auto interval = Interval!Date(Date(2000, 1, 30), Date(2004, 8, 5));
auto func = everyMonth!Date(Month.feb);
auto range = interval.fwdRange(func);
// Using PopFirst.yes would have made this Date(2000, 2, 29).
writeln(range.front); // Date(2000, 1, 30)
range.popFront();
writeln(range.front); // Date(2000, 2, 29)
range.popFront();
writeln(range.front); // Date(2001, 2, 28)
range.popFront();
writeln(range.front); // Date(2002, 2, 28)
range.popFront();
writeln(range.front); // Date(2003, 2, 28)
range.popFront();
writeln(range.front); // Date(2004, 2, 28)
range.popFront();
assert(range.empty);
```
nothrow TP delegate(scope const TP) **everyDuration**(TP, Direction dir = Direction.fwd, D)(D duration)
Constraints: if (isTimePoint!TP && \_\_traits(compiles, TP.init + duration) && (dir == Direction.fwd || dir == Direction.bwd));
Range-generating function.
Returns a delegate which returns the next time point which is the given duration later.
Using this delegate allows iteration over successive time points which are apart by the given duration e.g. passing `dur!"days"(3)` to `everyDuration` would result in a delegate which could be used to iterate over a range of days which are each 3 days apart.
Parameters:
| | |
| --- | --- |
| dir | The direction to iterate in. If passing the return value to `fwdRange`, use `Direction.fwd`. If passing it to `bwdRange`, use `Direction.bwd`. |
| D `duration` | The duration which separates each successive time point in the range. |
Examples:
```
import core.time : dur;
import std.datetime.date : Date;
auto interval = Interval!Date(Date(2010, 9, 2), Date(2010, 9, 27));
auto func = everyDuration!Date(dur!"days"(8));
auto range = interval.fwdRange(func);
// Using PopFirst.yes would have made this Date(2010, 9, 10).
writeln(range.front); // Date(2010, 9, 2)
range.popFront();
writeln(range.front); // Date(2010, 9, 10)
range.popFront();
writeln(range.front); // Date(2010, 9, 18)
range.popFront();
writeln(range.front); // Date(2010, 9, 26)
range.popFront();
assert(range.empty);
```
nothrow TP delegate(scope const TP) **everyDuration**(TP, Direction dir = Direction.fwd, D)(int years, int months = 0, AllowDayOverflow allowOverflow = AllowDayOverflow.yes, D duration = dur!"days"(0))
Constraints: if (isTimePoint!TP && \_\_traits(compiles, TP.init + duration) && \_\_traits(compiles, TP.init.add!"years"(years)) && \_\_traits(compiles, TP.init.add!"months"(months)) && (dir == Direction.fwd || dir == Direction.bwd));
Range-generating function.
Returns a delegate which returns the next time point which is the given number of years, months, and duration later.
The difference between this version of `everyDuration` and the version which just takes a [`core.time.Duration`](core_time#Duration) is that this one also takes the number of years and months (along with an `AllowDayOverflow` to indicate whether adding years and months should allow the days to overflow).
Note that if iterating forward, `add!"years"()` is called on the given time point, then `add!"months"()`, and finally the duration is added to it. However, if iterating backwards, the duration is added first, then `add!"months"()` is called, and finally `add!"years"()` is called. That way, going backwards generates close to the same time points that iterating forward does, but since adding years and months is not entirely reversible (due to possible day overflow, regardless of whether `AllowDayOverflow.yes` or `AllowDayOverflow.no` is used), it can't be guaranteed that iterating backwards will give the same time points as iterating forward would have (even assuming that the end of the range is a time point which would be returned by the delegate when iterating forward from `begin`).
Parameters:
| | |
| --- | --- |
| dir | The direction to iterate in. If passing the return value to `fwdRange`, use `Direction.fwd`. If passing it to `bwdRange`, use `Direction.bwd`. |
| int `years` | The number of years to add to the time point passed to the delegate. |
| int `months` | The number of months to add to the time point passed to the delegate. |
| AllowDayOverflow `allowOverflow` | Whether the days should be allowed to overflow on `begin` and `end`, causing their month to increment. |
| D `duration` | The duration to add to the time point passed to the delegate. |
Examples:
```
import core.time : dur;
import std.datetime.date : AllowDayOverflow, Date;
auto interval = Interval!Date(Date(2010, 9, 2), Date(2025, 9, 27));
auto func = everyDuration!Date(4, 1, AllowDayOverflow.yes, dur!"days"(2));
auto range = interval.fwdRange(func);
// Using PopFirst.yes would have made this Date(2014, 10, 12).
writeln(range.front); // Date(2010, 9, 2)
range.popFront();
writeln(range.front); // Date(2014, 10, 4)
range.popFront();
writeln(range.front); // Date(2018, 11, 6)
range.popFront();
writeln(range.front); // Date(2022, 12, 8)
range.popFront();
assert(range.empty);
```
struct **IntervalRange**(TP, Direction dir) if (isTimePoint!TP && (dir != Direction.both));
A range over an [`Interval`](#Interval).
`IntervalRange` is only ever constructed by [`Interval`](#Interval). However, when it is constructed, it is given a function, `func`, which is used to generate the time points which are iterated over. `func` takes a time point and returns a time point of the same type. For instance, to iterate over all of the days in the interval `Interval!Date`, pass a function to [`Interval`](#Interval)'s `fwdRange` where that function took a [`std.datetime.date.Date`](std_datetime_date#Date) and returned a [`std.datetime.date.Date`](std_datetime_date#Date) which was one day later. That function would then be used by `IntervalRange`'s `popFront` to iterate over the [`std.datetime.date.Date`](std_datetime_date#Date)s in the interval.
If `dir == Direction.fwd`, then a range iterates forward in time, whereas if `dir == Direction.bwd`, then it iterates backwards in time. So, if `dir == Direction.fwd` then `front == interval.begin`, whereas if `dir == Direction.bwd` then `front == interval.end`. `func` must generate a time point going in the proper direction of iteration, or a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. So, to iterate forward in time, the time point that `func` generates must be later in time than the one passed to it. If it's either identical or earlier in time, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. To iterate backwards, then the generated time point must be before the time point which was passed in.
If the generated time point ever goes past the edge of the range in the proper direction, then the edge of that range will be used instead. So, if iterating forward, and the generated time point is past the interval's `end`, then `front` becomes `end`. If iterating backwards, and the generated time point is before `begin`, then `front` becomes `begin`. In either case, the range is then empty.
Also note that while normally the `begin` of an interval is included in it and its `end` is excluded from it, if `dir == Direction.bwd`, then `begin` is treated as excluded and `end` is treated as included. This allows for the same behavior in both directions. This works because none of [`Interval`](#Interval)'s functions which care about whether `begin` or `end` is included or excluded are ever called by `IntervalRange`. `interval` returns a normal interval, regardless of whether `dir == Direction.fwd` or if `dir == Direction.bwd`, so any [`Interval`](#Interval) functions which are called on it which care about whether `begin` or `end` are included or excluded will treat `begin` as included and `end` as excluded.
pure nothrow ref IntervalRange **opAssign**(ref IntervalRange rhs);
pure nothrow ref IntervalRange **opAssign**(IntervalRange rhs);
Parameters:
| | |
| --- | --- |
| IntervalRange `rhs` | The `IntervalRange` to assign to this one. |
const pure nothrow @property bool **empty**();
Whether this `IntervalRange` is empty.
const pure @property TP **front**();
The first time point in the range.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the range is empty.
void **popFront**();
Pops `front` from the range, using `func` to generate the next time point in the range. If the generated time point is beyond the edge of the range, then `front` is set to that edge, and the range is then empty. So, if iterating forwards, and the generated time point is greater than the interval's `end`, then `front` is set to `end`. If iterating backwards, and the generated time point is less than the interval's `begin`, then `front` is set to `begin`.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the range is empty or if the generated time point is in the wrong direction (i.e. if iterating forward and the generated time point is before `front`, or if iterating backwards and the generated time point is after `front`).
pure nothrow @property IntervalRange **save**();
Returns a copy of `this`.
const pure nothrow @property Interval!TP **interval**();
The interval that this `IntervalRange` currently covers.
pure nothrow @property TP delegate(scope const TP) **func**();
The function used to generate the next time point in the range.
const pure nothrow @property Direction **direction**();
The `Direction` that this range iterates in.
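As a minimal sketch of how these members fit together (using `everyDayOfWeek` from this module to build `func`):
```
import std.datetime.date : Date, DayOfWeek;

auto interval = Interval!Date(Date(2010, 9, 1), Date(2010, 9, 30));
auto range = interval.fwdRange(everyDayOfWeek!Date(DayOfWeek.wed));

auto copy = range.save;            // an independent copy of the iteration state
range.popFront();
assert(range.front != copy.front); // the copy was unaffected by popFront
assert(range.direction == Direction.fwd);
assert(range.interval.end == Date(2010, 9, 30));
```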
struct **PosInfIntervalRange**(TP) if (isTimePoint!TP);
A range over a `PosInfInterval`. It is an infinite range.
`PosInfIntervalRange` is only ever constructed by `PosInfInterval`. However, when it is constructed, it is given a function, `func`, which is used to generate the time points which are iterated over. `func` takes a time point and returns a time point of the same type. For instance, to iterate over all of the days in the interval `PosInfInterval!Date`, pass a function to `PosInfInterval`'s `fwdRange` where that function took a [`std.datetime.date.Date`](std_datetime_date#Date) and returned a [`std.datetime.date.Date`](std_datetime_date#Date) which was one day later. That function would then be used by `PosInfIntervalRange`'s `popFront` to iterate over the [`std.datetime.date.Date`](std_datetime_date#Date)s in the interval - though obviously, since the range is infinite, use a function such as `std.range.take` with it rather than iterating over *all* of the dates.
As the interval goes to positive infinity, the range is always iterated over forwards, never backwards. `func` must generate a time point going in the proper direction of iteration, or a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. So, the time points that `func` generates must be later in time than the one passed to it. If it's either identical or earlier in time, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown.
pure nothrow ref PosInfIntervalRange **opAssign**(ref PosInfIntervalRange rhs);
pure nothrow ref PosInfIntervalRange **opAssign**(PosInfIntervalRange rhs);
Parameters:
| | |
| --- | --- |
| PosInfIntervalRange `rhs` | The `PosInfIntervalRange` to assign to this one. |
enum bool **empty**;
This is an infinite range, so it is never empty.
const pure nothrow @property TP **front**();
The first time point in the range.
void **popFront**();
Pops `front` from the range, using `func` to generate the next time point in the range.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the generated time point is less than `front`.
pure nothrow @property PosInfIntervalRange **save**();
Returns a copy of `this`.
const pure nothrow @property PosInfInterval!TP **interval**();
The interval that this range currently covers.
pure nothrow @property TP delegate(scope const TP) **func**();
The function used to generate the next time point in the range.
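A minimal usage sketch, limiting the infinite range with `std.range.take`:
```
import core.time : dur;
import std.datetime.date : Date;
import std.range : take;
import std.stdio : writeln;

auto posInf = PosInfInterval!Date(Date(2010, 9, 1));
auto range = posInf.fwdRange(everyDuration!Date(dur!"days"(7)));

// The range never empties, so cap the iteration explicitly.
foreach (date; range.take(3))
    writeln(date); // 2010-Sep-01, then 2010-Sep-08, then 2010-Sep-15
```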
struct **NegInfIntervalRange**(TP) if (isTimePoint!TP);
A range over a `NegInfInterval`. It is an infinite range.
`NegInfIntervalRange` is only ever constructed by `NegInfInterval`. However, when it is constructed, it is given a function, `func`, which is used to generate the time points which are iterated over. `func` takes a time point and returns a time point of the same type. For instance, to iterate over all of the days in the interval `NegInfInterval!Date`, pass a function to `NegInfInterval`'s `bwdRange` where that function took a [`std.datetime.date.Date`](std_datetime_date#Date) and returned a [`std.datetime.date.Date`](std_datetime_date#Date) which was one day earlier. That function would then be used by `NegInfIntervalRange`'s `popFront` to iterate over the [`std.datetime.date.Date`](std_datetime_date#Date)s in the interval - though obviously, since the range is infinite, use a function such as `std.range.take` with it rather than iterating over *all* of the dates.
As the interval goes to negative infinity, the range is always iterated over backwards, never forwards. `func` must generate a time point going in the proper direction of iteration, or a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown. So, the time points that `func` generates must be earlier in time than the one passed to it. If it's either identical or later in time, then a [`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) will be thrown.
Also note that while normally the `end` of an interval is excluded from it, `NegInfIntervalRange` treats it as if it were included. This allows for the same behavior as with `PosInfIntervalRange`. This works because none of `NegInfInterval`'s functions which care about whether `end` is included or excluded are ever called by `NegInfIntervalRange`. `interval` returns a normal interval, so any `NegInfInterval` functions which are called on it which care about whether `end` is included or excluded will treat `end` as excluded.
pure nothrow ref NegInfIntervalRange **opAssign**(ref NegInfIntervalRange rhs);
pure nothrow ref NegInfIntervalRange **opAssign**(NegInfIntervalRange rhs);
Parameters:
| | |
| --- | --- |
| NegInfIntervalRange `rhs` | The `NegInfIntervalRange` to assign to this one. |
enum bool **empty**;
This is an infinite range, so it is never empty.
const pure nothrow @property TP **front**();
The first time point in the range.
void **popFront**();
Pops `front` from the range, using `func` to generate the next time point in the range.
Throws:
[`std.datetime.date.DateTimeException`](std_datetime_date#DateTimeException) if the generated time point is greater than `front`.
pure nothrow @property NegInfIntervalRange **save**();
Returns a copy of `this`.
const pure nothrow @property NegInfInterval!TP **interval**();
The interval that this range currently covers.
pure nothrow @property TP delegate(scope const TP) **func**();
The function used to generate the next time point in the range.
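The mirror-image sketch for the negative-infinity case, where `func` must move backwards in time:
```
import core.time : dur;
import std.datetime.date : Date;
import std.range : take;
import std.stdio : writeln;

auto negInf = NegInfInterval!Date(Date(2010, 9, 1));
auto func = delegate (scope const Date date) { return date - dur!"days"(7); };

// front starts at end, and each popFront moves one week earlier.
foreach (date; negInf.bwdRange(func).take(3))
    writeln(date); // 2010-Sep-01, then 2010-Aug-25, then 2010-Aug-18
```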
d rt.deh rt.deh
======
Entry point for exception handling support routines.
There are three styles of exception handling supported by DMD: DWARF, Win32, and Win64. The Win64 code also supports POSIX. Support for those schemes is in `rt.dwarfeh`, `rt.deh_win32`, and `rt.deh_win64_posix`, respectively, all publicly imported here.
When an exception is thrown by the user, the compiler translates code like `throw e;` into either `_d_throwdwarf` (for DWARF exceptions) or `_d_throwc` (Win32 / Win64), with the `Exception` object as argument.
During those functions' handling, they eventually call `_d_createTrace`, which stores in the `Exception` object the value returned by `_d_traceContext`, an object implementing `Throwable.TraceInfo`. `_d_traceContext` is a configurable hook; by default it calls `core.runtime : defaultTraceHandler`, which in turn calls `backtrace` or something similar to store an array of stack frames (`void*` pointers) in the object it returns. Note that `defaultTraceHandler` returns a GC-allocated instance, hence a GC allocation can happen in the middle of throwing an `Exception`.
The `Throwable.TraceInfo` implementation should not resolve function names, files, and line numbers until its `opApply` function is called, avoiding the overhead of reading the debug info until the user calls `toString`. If the user only calls `Throwable.message` (or uses `Throwable.msg` directly), only the overhead of `backtrace` is paid, which is minimal enough.
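A small sketch of what this laziness means in practice (assuming the default trace handler is installed):
```
import std.stdio : writeln;

void main()
{
    try
        throw new Exception("boom");
    catch (Exception e)
    {
        writeln(e.msg);        // cheap: just the message, no symbol resolution
        writeln(e.toString()); // includes the trace; debug info is read only now
    }
}
```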
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Walter Bright
Source
[rt/deh.d](https://github.com/dlang/druntime/blob/master/src/rt/deh.d)
d core.bitop core.bitop
==========
This module contains a collection of bit-level operations.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Don Clugston, Sean Kelly, Walter Bright, Alex Rønne Petersen, Thomas Stuart Bockman
Source
[core/bitop.d](https://github.com/dlang/druntime/blob/master/src/core/bitop.d)
pure nothrow @nogc @safe int **bsf**(uint v);
pure nothrow @nogc @safe int **bsf**(ulong v);
Scans the bits in v starting with bit 0, looking for the first set bit.
Returns:
The bit number of the first bit set. The return value is undefined if v is zero.
Examples:
```
assert(bsf(0x21) == 0);
assert(bsf(ulong.max << 39) == 39);
```
pure nothrow @nogc @safe int **bsr**(uint v);
pure nothrow @nogc @safe int **bsr**(ulong v);
Scans the bits in v from the most significant bit to the least significant bit, looking for the first set bit.
Returns:
The bit number of the first bit set. The return value is undefined if v is zero.
Examples:
```
assert(bsr(0x21) == 5);
assert(bsr((ulong.max >> 15) - 1) == 48);
```
pure nothrow @nogc @system int **bt**(scope const size\_t\* p, size\_t bitnum);
Tests the bit. (No longer an intrinsic - the compiler recognizes the patterns in the body.)
Examples:
```
size_t[2] array;
array[0] = 2;
array[1] = 0x100;
assert(bt(array.ptr, 1));
assert(array[0] == 2);
assert(array[1] == 0x100);
```
pure nothrow @nogc @system int **btc**(size\_t\* p, size\_t bitnum);
Tests and complements the bit.
pure nothrow @nogc @system int **btr**(size\_t\* p, size\_t bitnum);
Tests and resets (sets to 0) the bit.
pure nothrow @nogc @system int **bts**(size\_t\* p, size\_t bitnum);
Tests and sets the bit.
Parameters:
| | |
| --- | --- |
| size\_t\* `p` | a non-NULL pointer to an array of size\_ts. |
| size\_t `bitnum` | a bit number, starting with bit 0 of p[0], and progressing. It addresses bits like the expression:
```
p[index / (size_t.sizeof*8)] & (1 << (index & ((size_t.sizeof*8) - 1)))
```
|
Returns:
A non-zero value if the bit was set, and a zero if it was clear.
Examples:
```
size_t[2] array;
array[0] = 2;
array[1] = 0x100;
assert(btc(array.ptr, 35) == 0);
if (size_t.sizeof == 8)
{
assert(array[0] == 0x8_0000_0002);
assert(array[1] == 0x100);
}
else
{
assert(array[0] == 2);
assert(array[1] == 0x108);
}
assert(btc(array.ptr, 35));
assert(array[0] == 2);
assert(array[1] == 0x100);
assert(bts(array.ptr, 35) == 0);
if (size_t.sizeof == 8)
{
assert(array[0] == 0x8_0000_0002);
assert(array[1] == 0x100);
}
else
{
assert(array[0] == 2);
assert(array[1] == 0x108);
}
assert(btr(array.ptr, 35));
assert(array[0] == 2);
assert(array[1] == 0x100);
```
struct **BitRange**;
Range over bit set. Each element is the bit number that is set.
This is more efficient than testing each bit in a sparsely populated bit set. Note that the first bit in the bit set would be bit 0.
Examples:
```
import core.stdc.stdlib : malloc, free;
import core.stdc.string : memset;
// initialize a bit array
enum nBytes = (100 + BitRange.bitsPerWord - 1) / 8;
size_t *bitArr = cast(size_t *)malloc(nBytes);
scope(exit) free(bitArr);
memset(bitArr, 0, nBytes);
// set some bits
bts(bitArr, 48);
bts(bitArr, 24);
bts(bitArr, 95);
bts(bitArr, 78);
enum sum = 48 + 24 + 95 + 78;
// iterate
size_t testSum;
size_t nBits;
foreach (b; BitRange(bitArr, 100))
{
testSum += b;
++nBits;
}
assert(testSum == sum);
assert(nBits == 4);
```
enum ulong **bitsPerWord**;
Number of bits in each size\_t
pure nothrow @nogc @system this(const(size\_t)\* bitarr, size\_t numBits);
Construct a BitRange.
Parameters:
| | |
| --- | --- |
| const(size\_t)\* `bitarr` | The array of bits to iterate over |
| size\_t `numBits` | The total number of valid bits in the given bit array |
pure nothrow @nogc @safe size\_t **front**();
const pure nothrow @nogc @safe bool **empty**();
pure nothrow @nogc @system void **popFront**();
Range functions
pure nothrow @nogc @safe ushort **byteswap**(ushort x);
Swaps bytes in a 2 byte ushort.
Parameters:
| | |
| --- | --- |
| ushort `x` | value |
Returns:
`x` with bytes swapped
Examples:
```
assert(byteswap(cast(ushort)0xF234) == 0x34F2);
static ushort xx = 0xF234;
assert(byteswap(xx) == 0x34F2);
```
pure nothrow @nogc @safe uint **bswap**(uint v);
Swaps bytes in a 4 byte uint end-to-end, i.e. byte 0 becomes byte 3, byte 1 becomes byte 2, byte 2 becomes byte 1, byte 3 becomes byte 0.
Examples:
```
assert(bswap(0x01020304u) == 0x04030201u);
static uint xx = 0x10203040u;
assert(bswap(xx) == 0x40302010u);
```
pure nothrow @nogc @safe ulong **bswap**(ulong v);
Swaps bytes in an 8 byte ulong end-to-end, i.e. byte 0 becomes byte 7, byte 1 becomes byte 6, etc. This is meant to be recognized by the compiler as an intrinsic.
Examples:
```
assert(bswap(0x01020304_05060708uL) == 0x08070605_04030201uL);
static ulong xx = 0x10203040_50607080uL;
assert(bswap(xx) == 0x80706050_40302010uL);
```
nothrow @nogc @system ubyte **inp**(uint port\_address);
nothrow @nogc @system ushort **inpw**(uint port\_address);
nothrow @nogc @system uint **inpl**(uint port\_address);
Reads I/O port at port\_address.
nothrow @nogc @system ubyte **outp**(uint port\_address, ubyte value);
nothrow @nogc @system ushort **outpw**(uint port\_address, ushort value);
nothrow @nogc @system uint **outpl**(uint port\_address, uint value);
Writes value to the I/O port at port\_address and returns value.
pure nothrow @nogc @safe int **popcnt**(uint x);
pure nothrow @nogc @safe int **popcnt**(ulong x);
Calculates the number of set bits in an integer.
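For instance:
```
assert(popcnt(0) == 0);
assert(popcnt(0b1011) == 3);       // three bits set
assert(popcnt(uint.max) == 32);
assert(popcnt(ulong.max) == 64);
```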
pure nothrow @nogc @safe ushort **\_popcnt**(ushort x);
pure nothrow @nogc @safe int **\_popcnt**(uint x);
pure nothrow @nogc @safe int **\_popcnt**(ulong x);
Calculates the number of set bits in an integer using the X86 SSE4 POPCNT instruction. POPCNT is not available on all X86 CPUs.
pure nothrow @nogc @safe uint **bitswap**(uint x);
Reverses the order of bits in a 32-bit integer.
pure nothrow @nogc @safe ulong **bitswap**(ulong x);
Reverses the order of bits in a 64-bit integer.
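For instance:
```
assert(bitswap(0x80000000u) == 1u);            // bit 31 moves to bit 0
assert(bitswap(1u) == 0x80000000u);            // and vice versa
assert(bitswap(1UL) == 0x80000000_00000000UL); // 64-bit: bit 0 moves to bit 63
```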
pure T **rol**(T)(const T value, const uint count)
Constraints: if (\_\_traits(isIntegral, T) && \_\_traits(isUnsigned, T));
pure T **ror**(T)(const T value, const uint count)
Constraints: if (\_\_traits(isIntegral, T) && \_\_traits(isUnsigned, T));
pure T **rol**(uint count, T)(const T value)
Constraints: if (\_\_traits(isIntegral, T) && \_\_traits(isUnsigned, T));
pure T **ror**(uint count, T)(const T value)
Constraints: if (\_\_traits(isIntegral, T) && \_\_traits(isUnsigned, T));
Bitwise rotate `value` left (`rol`) or right (`ror`) by `count` bit positions.
Examples:
```
ubyte a = 0b11110000U;
ulong b = ~1UL;
assert(rol(a, 1) == 0b11100001);
assert(ror(a, 1) == 0b01111000);
assert(rol(a, 3) == 0b10000111);
assert(ror(a, 3) == 0b00011110);
assert(rol(a, 0) == a);
assert(ror(a, 0) == a);
assert(rol(b, 63) == ~(1UL << 63));
assert(ror(b, 63) == ~2UL);
assert(rol!3(a) == 0b10000111);
assert(ror!3(a) == 0b00011110);
```
d std.stdint std.stdint
==========
D constrains integral types to specific sizes. But efficiency of different sizes varies from machine to machine, pointer sizes vary, and the maximum integer size varies. **stdint** offers a portable way of trading off size vs efficiency, in a manner compatible with the stdint.h definitions in C.
In the table below, the **exact alias**es are types of exactly the specified number of bits. The **at least alias**es are at least the specified number of bits large, and can be larger. The **fast alias**es are the fastest integral type supported by the processor that is at least as wide as the specified number of bits.
The aliases are:
| Exact Alias | Description | At Least Alias | Description | Fast Alias | Description |
| --- | --- | --- | --- | --- | --- |
| int8\_t | exactly 8 bits signed | int\_least8\_t | at least 8 bits signed | int\_fast8\_t | fast 8 bits signed |
| uint8\_t | exactly 8 bits unsigned | uint\_least8\_t | at least 8 bits unsigned | uint\_fast8\_t | fast 8 bits unsigned |
| int16\_t | exactly 16 bits signed | int\_least16\_t | at least 16 bits signed | int\_fast16\_t | fast 16 bits signed |
| uint16\_t | exactly 16 bits unsigned | uint\_least16\_t | at least 16 bits unsigned | uint\_fast16\_t | fast 16 bits unsigned |
| int32\_t | exactly 32 bits signed | int\_least32\_t | at least 32 bits signed | int\_fast32\_t | fast 32 bits signed |
| uint32\_t | exactly 32 bits unsigned | uint\_least32\_t | at least 32 bits unsigned | uint\_fast32\_t | fast 32 bits unsigned |
| int64\_t | exactly 64 bits signed | int\_least64\_t | at least 64 bits signed | int\_fast64\_t | fast 64 bits signed |
| uint64\_t | exactly 64 bits unsigned | uint\_least64\_t | at least 64 bits unsigned | uint\_fast64\_t | fast 64 bits unsigned |
The ptr aliases are integral types guaranteed to be large enough to hold a pointer without losing bits:
| Alias | Description |
| --- | --- |
| intptr\_t | signed integral type large enough to hold a pointer |
| uintptr\_t | unsigned integral type large enough to hold a pointer |
The max aliases are the largest integral types:
| Alias | Description |
| --- | --- |
| intmax\_t | the largest signed integral type |
| uintmax\_t | the largest unsigned integral type |
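For example, these `static assert`s hold by definition on any supported platform:
```
import std.stdint;

static assert(int32_t.sizeof == 4);                // exact: always 32 bits
static assert(int_least16_t.sizeof >= 2);          // at least 16 bits
static assert(uint_fast8_t.sizeof >= 1);           // fast, but no narrower than 8 bits
static assert(uintptr_t.sizeof >= (void*).sizeof); // can hold a pointer
```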
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com)
Source
[std/stdint.d](https://github.com/dlang/phobos/blob/master/std/stdint.d)
d Functions Functions
=========
**Contents** 1. [Grammar](#grammar)
2. [Function Contracts](#contracts)
1. [In, Out and Inheritance](#in_out_inheritance)
3. [Function Return Values](#function-return-values)
4. [Functions Without Bodies](#function-declarations)
5. [Pure Functions](#pure-functions)
6. [Nothrow Functions](#nothrow-functions)
7. [Ref Functions](#ref-functions)
8. [Auto Functions](#auto-functions)
9. [Auto Ref Functions](#auto-ref-functions)
10. [Inout Functions](#inout-functions)
1. [Matching an `inout` Parameter](#matching-an-inout-parameter)
11. [Optional Parentheses](#optional-parenthesis)
12. [Property Functions](#property-functions)
13. [Virtual Functions](#virtual-functions)
1. [Function Inheritance and Overriding](#function-inheritance)
14. [Inline Functions](#inline-functions)
15. [Function Overloading](#function-overloading)
1. [Overload Sets](#overload-sets)
16. [Function Parameters](#parameters)
1. [Parameter Storage Classes](#param-storage)
2. [In Parameters](#in-params)
3. [Ref and Out Parameters](#ref-params)
4. [Lazy Parameters](#lazy-params)
5. [Function Default Arguments](#function-default-args)
6. [Return Ref Parameters](#return-ref-parameters)
7. [Scope Parameters](#scope-parameters)
8. [Return Scope Parameters](#return-scope-parameters)
9. [Ref Return Scope Parameters](#ref-return-scope-parameters)
10. [User-Defined Attributes for Parameters](#udas-parameters)
11. [Variadic Functions](#variadic)
17. [Local Variables](#local-variables)
1. [Local Static Variables](#local-static-variables)
18. [Nested Functions](#nested)
19. [Delegates, Function Pointers, and Closures](#closures)
1. [Anonymous Functions and Anonymous Delegates](#anonymous)
20. [`main()` Function](#main)
1. [BetterC `main()` Function](#betterc-main)
21. [Function Templates](#function-templates)
22. [Compile Time Function Execution (CTFE)](#interpretation)
1. [String Mixins and Compile Time Function Execution](#string-mixins)
23. [No-GC Functions](#nogc-functions)
24. [Function Safety](#function-safety)
1. [Safe Functions](#safe-functions)
2. [Trusted Functions](#trusted-functions)
3. [System Functions](#system-functions)
4. [Safe Interfaces](#safe-interfaces)
5. [Safe Values](#safe-values)
6. [Safe Aliasing](#safe-aliasing)
25. [Function Attribute Inference](#function-attribute-inference)
26. [Uniform Function Call Syntax (UFCS)](#pseudo-member)
Grammar
-------
### Function declaration
```
FuncDeclaration:
StorageClassesopt BasicType FuncDeclarator FunctionBody
AutoFuncDeclaration
AutoFuncDeclaration:
StorageClasses Identifier FuncDeclaratorSuffix FunctionBody
FuncDeclarator:
TypeSuffixesopt Identifier FuncDeclaratorSuffix
FuncDeclaratorSuffix:
Parameters MemberFunctionAttributesopt
TemplateParameters Parameters MemberFunctionAttributesopt Constraintopt
```
### Parameters
```
Parameters:
( ParameterListopt )
ParameterList:
Parameter
Parameter , ParameterList
VariadicArgumentsAttributes ...
Parameter:
ParameterAttributesopt BasicType Declarator
ParameterAttributesopt BasicType Declarator ...
ParameterAttributesopt BasicType Declarator = AssignExpression
ParameterAttributesopt Type
ParameterAttributesopt Type ...
ParameterAttributes:
InOut
UserDefinedAttribute
ParameterAttributes InOut
ParameterAttributes UserDefinedAttribute
ParameterAttributes
InOut:
auto
TypeCtor
final
in
lazy
out
ref
return ref
scope
VariadicArgumentsAttributes:
VariadicArgumentsAttribute
VariadicArgumentsAttribute VariadicArgumentsAttributes
VariadicArgumentsAttribute:
const
immutable
return
scope
shared
```
### Function attributes
```
FunctionAttributes:
FunctionAttribute
FunctionAttribute FunctionAttributes
FunctionAttribute:
FunctionAttributeKwd
Property
MemberFunctionAttributes:
MemberFunctionAttribute
MemberFunctionAttribute MemberFunctionAttributes
MemberFunctionAttribute:
const
immutable
inout
return
shared
FunctionAttribute
```
### Function body
```
FunctionBody:
SpecifiedFunctionBody
MissingFunctionBody
ShortenedFunctionBody
FunctionLiteralBody:
SpecifiedFunctionBody
SpecifiedFunctionBody:
doopt BlockStatement
FunctionContractsopt InOutContractExpression doopt BlockStatement
FunctionContractsopt InOutStatement do BlockStatement
MissingFunctionBody:
;
FunctionContractsopt InOutContractExpression ;
FunctionContractsopt InOutStatement
ShortenedFunctionBody:
=> AssignExpression ;
```
Examples:
```
int hasSpecifiedBody() { return 1; }
int hasMissingBody();
int hasShortenedBody() => 1;
```
**Note:** The `ShortenedFunctionBody` form requires the `-preview=shortenedMethods` command-line switch, which is available starting in v2.096.0.
Function Contracts
------------------
```
FunctionContracts:
FunctionContract
FunctionContract FunctionContracts
FunctionContract:
InOutContractExpression
InOutStatement
InOutContractExpression:
InContractExpression
OutContractExpression
InOutStatement:
InStatement
OutStatement
InContractExpression:
in ( AssertArguments )
OutContractExpression:
out ( ; AssertArguments )
out ( Identifier ; AssertArguments )
InStatement:
in BlockStatement
OutStatement:
out BlockStatement
out ( Identifier ) BlockStatement
```
Function Contracts specify the preconditions and postconditions of a function. They are used in [Contract Programming](contracts).
Preconditions and postconditions do not affect the type of the function.
### Preconditions
An [*InContractExpression*](#InContractExpression) is a precondition.
The first [*AssignExpression*](expression#AssignExpression) of the [*AssertArguments*](expression#AssertArguments) must evaluate to true. If it does not, the precondition has failed.
The second *AssignExpression*, if present, must be implicitly convertible to type `const(char)[]`.
An [*InStatement*](#InStatement) is also a precondition. Any [*AssertExpression*](expression#AssertExpression) appearing in an *InStatement* will be an *InContractExpression*.
Preconditions must semantically be satisfied before the function starts executing. If they are not, the program enters an *Invalid State*.
**Implementation Defined:** Whether the preconditions are actually run or not is implementation defined. This is usually selectable with a compiler switch. Its behavior upon precondition failure is also usually selectable with a compiler switch. One option is to throw an `AssertError` with a message consisting of the optional second *AssignExpression*.
**Best Practices:** Use preconditions to validate that input arguments have values that are expected by the function.
**Best Practices:** Since preconditions may or may not be actually checked at runtime, avoid using preconditions that have side effects.
The expression form is:
```
in (expression)
in (expression, "failure string")
{
...function body...
}
```
The block statement form is:
```
in
{
...contract preconditions...
}
do
{
...function body...
}
```
### Postconditions
An [*OutContractExpression*](#OutContractExpression) is a postcondition.
The first [*AssignExpression*](expression#AssignExpression) of the [*AssertArguments*](expression#AssertArguments) must evaluate to true. If it does not, the postcondition has failed.
The second *AssignExpression*, if present, must be implicitly convertible to type `const(char)[]`.
An [*OutStatement*](#OutStatement) is also a postcondition. Any [*AssertExpression*](expression#AssertExpression) appearing in an *OutStatement* will be an *OutContractExpression*.
Postconditions must semantically be satisfied after the function finishes executing. If they are not, the program enters an *Invalid State*.
**Implementation Defined:** Whether the postconditions are actually run or not is implementation defined. This is usually selectable with a compiler switch. Its behavior upon postcondition failure is also usually selectable with a compiler switch. One option is to throw an `AssertError` with a message consisting of the optional second *AssignExpression*.
**Best Practices:** Use postconditions to validate that the input arguments and return value have values that are expected by the function.
**Best Practices:** Since postconditions may or may not be actually checked at runtime, avoid using postconditions that have side effects.
The expression form is:
```
out (identifier; expression)
out (identifier; expression, "failure string")
out (; expression)
out (; expression, "failure string")
{
...function body...
}
```
The block statement form is:
```
out
{
...contract postconditions...
}
out (identifier)
{
...contract postconditions...
}
do
{
...function body...
}
```
The optional identifier in either type of postcondition is set to the return value of the function, and can be accessed from within the postcondition.
### Example
```
int fun(ref int a, int b)
in (a > 0)
in (b >= 0, "b cannot be negative!")
out (r; r > 0, "return must be positive")
out (; a != 0)
{
// function body
}
```
```
int fun(ref int a, int b)
in
{
assert(a > 0);
assert(b >= 0, "b cannot be negative!");
}
out (r)
{
assert(r > 0, "return must be positive");
assert(a != 0);
}
do
{
// function body
}
```
The two functions are identical semantically.
### In, Out and Inheritance
If a function in a derived class overrides a function from its super class, then only one of the preconditions of the function and its overridden functions must be satisfied. Overriding functions then becomes a process of *loosening* the preconditions.
A function without preconditions means its precondition is always satisfied. Therefore if any function in an inheritance hierarchy has no preconditions, then any preconditions on functions overriding it have no meaningful effect.
Conversely, all of the postconditions of the function and its overridden functions must be satisfied. Adding overriding functions then becomes a process of *tightening* the postconditions.
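For illustration, here is a hedged sketch (hypothetical classes) of how overriding interacts with contracts; whether the contracts are actually checked depends on the compiler switches mentioned above:
```
class Base
{
    // requires a positive argument; guarantees an even result
    int twice(int x)
    in (x > 0)
    out (r; r % 2 == 0)
    {
        return x * 2;
    }
}

class Derived : Base
{
    // loosened precondition: zero is now accepted as well.
    // Base's postcondition must still hold for the override.
    override int twice(int x)
    in (x >= 0)
    {
        return x * 2;
    }
}

void main()
{
    Base b = new Derived;
    b.twice(0); // ok: Derived's (loosened) precondition is satisfied
}
```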
Function Return Values
----------------------
Function return values not marked as `ref` are considered to be rvalues. This means they cannot be passed by reference to other functions.
Functions Without Bodies
------------------------
Functions without bodies:
```
int foo();
```
that are not declared as `abstract` are expected to have their implementations elsewhere, and that implementation will be provided at the link step. This enables an implementation of a function to be completely hidden from the user of it, and the implementation may be in another language such as C, assembler, etc.
Pure Functions
--------------
Pure functions are annotated with the `pure` attribute.
Pure functions cannot directly access global or static mutable state.
Pure functions can only call pure functions.
A pure function can override an impure function, but cannot be overridden by an impure function. I.e. it is covariant with an impure function.
A *weakly pure function* has parameters with mutable indirections. Program state can be modified transitively through the matching argument.
```
pure size_t foo(int[] arr) { arr[] += 1; return arr.length; }
int[] a = [1, 2, 3];
foo(a);
assert(a == [2, 3, 4]);
```
A *strongly pure function* has no parameters with mutable indirections and cannot modify any program state external to the function.
```
struct S { double x; }
pure size_t foo(immutable(int)[] arr, int num, S val)
{
//arr[num] = 1; // compile error
num = 2; // has no side effect to the caller side
val.x = 3.14; // ditto
return arr.length;
}
```
A strongly pure function can call a weakly pure function.
Pure functions can modify the local state of the function.
A pure function can:
* read and write the floating point exception flags
* read and write the floating point mode flags, as long as those flags are restored to their initial state upon function exit
**Undefined Behavior:** occurs if these flags are not restored to their initial state upon function exit. It is the programmer's responsibility to ensure this.
A pure function can perform impure operations in statements that are in a [*ConditionalStatement*](version#ConditionalStatement) controlled by a [*DebugCondition*](version#DebugCondition).
**Best Practices:** this relaxation of purity checks in DebugConditions is intended solely to make debugging programs easier.
A pure function can throw exceptions.
```
import std.stdio;
int x;
immutable int y;
const int* pz;
pure int foo(int i,
char* p,
const char* q,
immutable int* s)
{
debug writeln("in foo()"); // ok, impure code allowed in debug statement
x = i; // error, modifying global state
i = x; // error, reading mutable global state
i = y; // ok, reading immutable global state
i = *pz; // error, reading const global state
return i;
}
```
[Nested functions](#variadicnested) inside a pure function are implicitly marked as pure.
```
pure int foo(int x, immutable int y)
{
int bar()
// implicitly marked as pure, to be "weakly pure"
// since hidden context pointer to foo stack context is mutable
{
x = 10; // can access states in enclosing scope
// through the mutable context pointer
return x;
}
pragma(msg, typeof(&bar)); // int delegate() pure
int baz() immutable
// qualifies hidden context pointer with immutable,
// and has no other parameters, therefore "strongly pure"
{
//return x; // error, cannot access mutable data
// through the immutable context pointer
return y; // ok
}
// can call pure nested functions
return bar() + baz();
}
```
A *pure factory function* is a strongly pure function that returns a result that has mutable indirections. None of the mutable memory returned by the call may be referenced by any other part of the program, i.e. it is newly allocated by the function. Nor may the mutable references of the result refer to any object that existed before the function call. For example:
```
struct List { int payload; List* next; }
pure List* make(int a, int b)
{
auto result = new List(a, null);
result.next = new List(b, result);
return result;
}
```
All references in `make`'s result refer to other `List` objects created by `make`, and no other part of the program refers to any of these objects. This restriction does not apply to any Exception or Error thrown from the function.
Pure destructors do not benefit from special elision.
**Implementation Defined:** An implementation may assume that a strongly pure function that returns a result without mutable indirections will have the same effect for all invocations with equivalent arguments. It is allowed to memoize the result of the function under the assumption that equivalent parameters always produce equivalent results. A strongly pure function may still have behavior inconsistent with memoization, e.g. by using `cast`s or by changing behavior depending on the address of its parameters. An implementation is currently not required to enforce validity of memoization in all cases. If a strongly pure function throws an Exception or an Error, the assumptions related to memoization do not carry over to the thrown exception.
Nothrow Functions
-----------------
Nothrow functions can only throw exceptions derived from [`class Error`](https://dlang.org/phobos/object.html#Error).
Nothrow functions are covariant with throwing ones.
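A minimal sketch (hypothetical function names): an `Exception` may not escape a `nothrow` function, but an assert failure, which throws `AssertError`, is still permitted:
```
int parseDigit(char c) nothrow
{
    // throw new Exception("not a digit"); // error: may not leave nothrow function
    assert(c >= '0' && c <= '9'); // ok, AssertError is derived from Error
    return c - '0';
}

void main() nothrow
{
    assert(parseDigit('7') == 7);
}
```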
Ref Functions
-------------
Ref functions allow functions to return by reference, meaning that the return value must be an lvalue, and the lvalue is returned, not the rvalue.
```
ref int foo()
{
auto p = new int(2);
return *p;
}
...
int i = foo(); // i is set to 2
foo() = 3; // reference returns can be lvalues
```
Returning a reference to an expired function context is not allowed. This includes local variables, temporaries and parameters that are part of an expired function context.
```
ref int sun()
{
int i;
return i; // error, escaping a reference to local variable i
}
```
A `ref` parameter may not be returned by `ref`, unless it is marked with the `return` attribute (see [Return Ref Parameters](#return-ref-parameters)).
```
ref int moon(ref int i)
{
return i; // error
}
```
Auto Functions
--------------
Auto functions have their return type inferred from any [*ReturnStatement*](statement#ReturnStatement)s in the function body.
An auto function is declared without a return type. If the declaration does not already have another storage class, the `auto` storage class must be used.
If there are multiple *ReturnStatement*s, the types of them must be implicitly convertible to a common type. If there are no *ReturnStatement*s, the return type is inferred to be void.
```
auto foo(int x) { return x + 3; } // inferred to be int
auto bar(int x) { return x; return 2.5; } // inferred to be double
```
Auto Ref Functions
------------------
Auto ref functions infer their return type just as [auto functions](#auto-functions) do. In addition, they become [ref functions](#ref-functions) if all return expressions are lvalues and none of them is a reference to a local variable or a by-value parameter.
```
auto ref f1(int x) { return x; } // value return
auto ref f2() { return 3; } // value return
auto ref f3(ref int x) { return x; } // ref return
auto ref f4(out int x) { return x; } // ref return
auto ref f5() { static int x; return x; } // ref return
```
The ref-ness of a function is determined from all [*ReturnStatement*](statement#ReturnStatement)s in the function body:
```
auto ref f1(ref int x) { return 3; return x; } // ok, value return
auto ref f2(ref int x) { return x; return 3; } // ok, value return
auto ref f3(ref int x, ref double y)
{
return x; return y;
// The return type is deduced as double, but cast(double)x is not
// an lvalue, so the function has a value return.
}
```
An auto ref function can have an explicit return type.
```
auto ref int foo(ref int x) { return x; } // ok, ref return
auto ref int foo(double x) { return x; } // error, cannot convert double to int
```
Inout Functions
---------------
Functions that differ only in whether the parameters are mutable, `const` or `immutable`, and have corresponding mutable, `const` or `immutable` return types, can be combined into one function using the `inout` type constructor. Consider the following overload set:
```
int[] slice(int[] a, int x, int y) { return a[x .. y]; }
const(int)[] slice(const(int)[] a, int x, int y) { return a[x .. y]; }
immutable(int)[] slice(immutable(int)[] a, int x, int y) { return a[x .. y]; }
```
The code generated by each of these functions is identical. The inout type constructor can combine them into one function:
```
inout(int)[] slice(inout(int)[] a, int x, int y) { return a[x .. y]; }
```
The inout keyword forms a wildcard that stands in for mutable, `const`, `immutable`, `inout`, or `inout const`. When calling the function, the `inout` state of the return type is changed to match that of the argument type passed to the `inout` parameter.
`inout` can also be used as a type constructor inside a function that has a parameter declared with `inout`. The `inout` state of a type declared with `inout` is changed to match that of the argument type passed to the `inout` parameter:
```
inout(int)[] asymmetric(inout(int)[] input_data)
{
inout(int)[] r = input_data;
while (r.length > 1 && r[0] == r[$-1])
r = r[1..$-1];
return r;
}
```
Inout types can be implicitly converted to `const` or `inout const`, but to nothing else. Other types cannot be implicitly converted to `inout`. Casting to or from `inout` is not allowed in `@safe` functions.
```
void f(inout int* ptr)
{
const int* p = ptr;
int* q = ptr; // error
immutable int* r = ptr; // error
}
```
### Matching an `inout` Parameter
A set of arguments to a function with `inout` parameters is considered a match if any `inout` argument types match exactly, or:
1. No argument types are composed of `inout` types.
2. A mutable, `const` or `immutable` argument type can be matched against each corresponding parameter `inout` type.
If such a match occurs, `inout` is considered the common qualifier of the matched qualifiers. If more than two parameters exist, the common qualifier calculation is recursively applied.
Common qualifier of the two type qualifiers:

| | *mutable* | `const` | `immutable` | `inout` | `inout const` |
|---|---|---|---|---|---|
| *mutable* (= m) | m | c | c | c | c |
| `const` (= c) | c | c | c | c | c |
| `immutable` (= i) | c | c | i | wc | wc |
| `inout` (= w) | c | c | wc | w | wc |
| `inout const` (= wc) | c | c | wc | wc | wc |
The `inout` in the return type is then rewritten to match the `inout` qualifiers:
```
int[] ma;
const(int)[] ca;
immutable(int)[] ia;
inout(int)[] foo(inout(int)[] a) { return a; }
void test1()
{
// inout matches to mutable, so inout(int)[] is
// rewritten to int[]
int[] x = foo(ma);
// inout matches to const, so inout(int)[] is
// rewritten to const(int)[]
const(int)[] y = foo(ca);
// inout matches to immutable, so inout(int)[] is
// rewritten to immutable(int)[]
immutable(int)[] z = foo(ia);
}
inout(const(int))[] bar(inout(int)[] a) { return a; }
void test2()
{
// inout matches to mutable, so inout(const(int))[] is
// rewritten to const(int)[]
const(int)[] x = bar(ma);
// inout matches to const, so inout(const(int))[] is
// rewritten to const(int)[]
const(int)[] y = bar(ca);
// inout matches to immutable, so inout(const(int))[] is
// rewritten to immutable(int)[]
immutable(int)[] z = bar(ia);
}
```
**Note:** Shared types cannot be matched with `inout`.
Optional Parentheses
--------------------
If a function call passes no explicit argument, i.e. it would syntactically use `()`, then these parentheses may be omitted, similar to a getter invocation of a [property function](#property-functions).
```
void foo() {} // no arguments
void fun(int x = 10) { }
void bar(int[] arr) {} // for UFCS
void main()
{
foo(); // OK
foo; // also OK
fun; // OK
int[] arr;
arr.bar(); // OK
arr.bar; // also OK
}
```
Optional parentheses are not applied to delegates or function pointers.
```
void main()
{
int function() fp;
assert(fp == 6); // Error, incompatible types int function() and int
assert(*fp == 6); // Error, incompatible types int() and int
int delegate() dg;
assert(dg == 6); // Error, incompatible types int delegate() and int
}
```
If a function returns a delegate or function pointer, the parentheses are required if the returned value is to be called.
```
struct S {
int function() callfp() { return &numfp; }
int delegate() calldg() return { return &numdg; }
int numdg() { return 6; }
}
int numfp() { return 6; }
void main()
{
S s;
int function() fp;
fp = s.callfp;
assert(fp() == 6);
fp = s.callfp();
assert(fp() == 6);
int x = s.callfp()();
assert(x == 6);
int delegate() dg;
dg = s.calldg;
assert(dg() == 6);
dg = s.calldg();
assert(dg() == 6);
int y = s.calldg()();
assert(y == 6);
}
```
Property Functions
------------------
WARNING: The definition and usefulness of property functions is being reviewed, and the implementation is currently incomplete. Using property functions is not recommended until the definition is more certain and implementation more mature.
Properties are functions that can be syntactically treated as if they were fields or variables. Properties can be read from or written to. A property is read by calling a method or function with no arguments; a property is written by calling a method or function with its argument being the value it is set to.
Simple getter and setter properties can be written using [UFCS](#pseudo-member). These can be enhanced with the addition of the `@property` attribute to the function, which adds the following behaviors:
* `@property` functions cannot be overloaded with non-`@property` functions with the same name.
* `@property` functions can only have zero, one or two parameters.
* `@property` functions cannot have variadic parameters.
* For the expression `typeof(exp)` where `exp` is an `@property` function, the type is the return type of the function, rather than the type of the function.
* For the expression `__traits(compiles, exp)` where `exp` is an `@property` function, a further check is made to see if the function can be called.
* `@property` functions are mangled differently, meaning that `@property` must be used consistently across different compilation units.
* The ObjectiveC interface recognizes `@property` setter functions as special and modifies them accordingly.
A simple property would be:
```
struct Foo
{
@property int data() { return m_data; } // read property
@property int data(int value) { return m_data = value; } // write property
private:
int m_data;
}
```
To use it:
```
int test()
{
Foo f;
f.data = 3; // same as f.data(3);
return f.data + 3; // same as return f.data() + 3;
}
```
The absence of a read method means that the property is write-only. The absence of a write method means that the property is read-only. Multiple write methods can exist; the correct one is selected using the usual function overloading rules.
In all the other respects, these methods are like any other methods. They can be static, have different linkages, have their address taken, etc.
The built-in properties `.sizeof`, `.alignof`, and `.mangleof` may not be declared as fields or methods in structs, unions, classes or enums.
If a property function has no parameters, it works as a getter. If it has exactly one parameter, it works as a setter.
Virtual Functions
-----------------
Virtual functions are class member functions that are called indirectly through a function pointer table, called a `vtbl[]`, rather than directly.
Member functions that are virtual can be overridden, unless they are `final`.
Struct and union member functions are not virtual.
Static member functions are not virtual.
Member functions which are `private` or `package` are not virtual.
Member template functions are not virtual.
Member functions with `Objective-C` linkage are virtual even if marked with `final` or `static`.
```
class A
{
int def() { ... }
final int foo() { ... }
final private int bar() { ... }
private int abc() { ... }
}
class B : A
{
override int def() { ... } // ok, overrides A.def
override int foo() { ... } // error, A.foo is final
int bar() { ... } // ok, A.bar is final private, but not virtual
int abc() { ... } // ok, A.abc is not virtual, B.abc is virtual
}
void test(A a)
{
a.def(); // calls B.def
a.foo(); // calls A.foo
a.bar(); // calls A.bar
a.abc(); // calls A.abc
}
void func()
{
B b = new B();
test(b);
}
```
The overriding function may be covariant with the overridden function. A covariant function has a type that is implicitly convertible to the type of the overridden function.
```
class A { }
class B : A { }
class Foo
{
A test() { return null; }
}
class Bar : Foo
{
// overrides and is covariant with Foo.test()
override B test() { return null; }
}
```
Virtual functions all have a hidden parameter called the *this* reference, which refers to the class object instance for which the function is called.
Functions with `Objective-C` linkage have an additional hidden, unnamed, parameter which is the selector it was called with.
To directly call a base member function, insert base class name before the member function name. For example:
```
class B
{
int foo() { return 1; }
}
class C : B
{
override int foo() { return 2; }
void test()
{
assert(B.foo() == 1); // translated to this.B.foo(), and
// calls B.foo statically.
assert(C.foo() == 2); // calls C.foo statically, even if
// the actual instance of 'this' is D.
}
}
class D : C
{
override int foo() { return 3; }
}
void main()
{
auto d = new D();
assert(d.foo() == 3); // calls D.foo
assert(d.B.foo() == 1); // calls B.foo
assert(d.C.foo() == 2); // calls C.foo
d.test();
}
```
**Implementation Defined:** Normally calling a virtual function implies getting the address of the function at runtime by indexing into the class's `vtbl[]`. If the implementation can determine that the called virtual function will be statically known, such as if it is `final`, it can use a direct call instead.
### Function Inheritance and Overriding
A function in a derived class with the same name and covariant with a function in a base class overrides that function:
```
class A
{
int foo(int x) { ... }
}
class B : A
{
override int foo(int x) { ... }
}
void test()
{
B b = new B();
bar(b);
}
void bar(A a)
{
a.foo(1); // calls B.foo(int)
}
```
When doing overload resolution, the functions in the base class are not considered, as they are not in the same [Overload Set](#overload-sets):
```
class A
{
int foo(int x) { ... }
int foo(long y) { ... }
}
class B : A
{
override int foo(long x) { ... }
}
void test()
{
B b = new B();
b.foo(1); // calls B.foo(long), since A.foo(int) not considered
A a = b;
a.foo(1); // issues runtime error (instead of calling A.foo(int))
}
```
To include the base class's functions in the overload resolution process, use an [*AliasDeclaration*](declaration#AliasDeclaration):
```
class A
{
int foo(int x) { ... }
int foo(long y) { ... }
}
class B : A
{
alias foo = A.foo;
override int foo(long x) { ... }
}
void test()
{
B b = new B();
bar(b);
}
void bar(A a)
{
a.foo(1); // calls A.foo(int)
B b = new B();
b.foo(1); // calls A.foo(int)
}
```
If such an *AliasDeclaration* is not used, the derived class's functions completely override all the functions of the same name in the base class, even if the types of the parameters in the base class functions are different. It is illegal if, through implicit conversions to the base class, those other functions do get called:
```
class A
{
void set(long i) { }
void set(int i) { }
}
class B : A
{
void set(long i) { }
}
void foo(A a)
{
a.set(3); // error, use of A.set(int) is hidden by B
          // use 'alias set = A.set;' to introduce base class overload set
}
void main()
{
foo(new B);
}
```
A function parameter's default value is not inherited:
```
class A
{
void foo(int x = 5) { ... }
}
class B : A
{
void foo(int x = 7) { ... }
}
class C : B
{
void foo(int x) { ... }
}
void test()
{
A a = new A();
a.foo(); // calls A.foo(5)
B b = new B();
b.foo(); // calls B.foo(7)
C c = new C();
c.foo(); // error, need an argument for C.foo
}
```
An overriding function inherits any unspecified [*FunctionAttributes*](#FunctionAttributes) from the attributes of the overridden function.
```
class B
{
void foo() pure nothrow @safe {}
}
class D : B
{
override void foo() {}
}
void main()
{
auto d = new D();
pragma(msg, typeof(&d.foo));
// prints "void delegate() pure nothrow @safe" in compile time
}
```
The attributes [`@disable`](attribute#disable) and [`deprecated`](attribute#deprecated) are not allowed on overriding functions.
**Rationale:** To stop the compilation or to output the deprecation message, the implementation must be able to determine the target of the call, which can't be guaranteed when it is virtual.
```
class B
{
void foo() {}
}
class D : B
{
@disable override void foo() {} // error, can't apply @disable to overriding function
}
```
Static functions with `Objective-C` linkage are overridable.
Inline Functions
----------------
The compiler makes the decision whether to inline a function or not. This decision may be controlled by [`pragma(inline)`](pragma#inline).
**Implementation Defined:** Whether a function is inlined or not is implementation defined, though any [*FunctionLiteral*](expression#FunctionLiteral) should be inlined when used in its declaration scope.
Function Overloading
--------------------
*Function overloading* occurs when two or more functions in the same scope have the same name. The function selected is the one that is the *best match* to the arguments. The matching levels are:
1. no match
2. match with implicit conversions
3. match with qualifier conversion (if the argument type is [qualifier-convertible](http://dlang.org/glossary.html#qualifier-convertible) to the parameter type)
4. exact match
Each argument (including any `this` reference) is compared against the function's corresponding parameter to determine the match level for that argument. The match level for a function is the *worst* match level of each of its arguments.
Literals do not match `ref` or `out` parameters.
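For example (a minimal sketch with a hypothetical function name), a literal cannot bind to a `ref` parameter because it is not an lvalue:
```
void inc10(ref int x) { x += 10; }

void main()
{
    int v = 1;
    inc10(v);   // ok, v is an lvalue
    //inc10(5); // error: the literal 5 does not match the ref parameter
    assert(v == 11);
}
```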
If two or more functions have the same match level, then *partial ordering* is used to disambiguate to find the best match. Partial ordering finds the most specialized function. If neither function is more specialized than the other, then it is an ambiguity error. Partial ordering is determined for functions *f* and *g* by taking the parameter types of *f*, constructing a list of arguments by taking the default values of those types, and attempting to match them against *g*. If it succeeds, then *g* is at least as specialized as *f*. For example:
```
class A { }
class B : A { }
class C : B { }
void foo(A);
void foo(B);
void test()
{
C c;
/* Both foo(A) and foo(B) match with implicit conversions (level 2).
* Applying partial ordering rules,
* foo(B) cannot be called with an A, and foo(A) can be called
* with a B. Therefore, foo(B) is more specialized, and is selected.
*/
foo(c); // calls foo(B)
}
```
A function with a variadic argument is considered less specialized than a function without.
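A hedged sketch (hypothetical names) of that rule: when both a variadic and a non-variadic overload match, partial ordering selects the non-variadic one:
```
void h(int x) { }        // more specialized
void h(int x, ...) { }   // variadic, less specialized

void main()
{
    h(1);      // calls h(int)
    h(1, 2.5); // calls h(int, ...), the only match
}
```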
### Overload Sets
Functions declared at the same scope overload against each other, and are called an *Overload Set*. An example of an overload set are functions defined at module level:
```
module A;
void foo() { }
void foo(long i) { }
```
`A.foo()` and `A.foo(long)` form an overload set. A different module can also define another overload set of functions with the same name:
```
module B;
class C { }
void foo(C) { }
void foo(int i) { }
```
and A and B can be imported by a third module, C. Both overload sets, the `A.foo` overload set and the `B.foo` overload set, are found when searching for symbol `foo`. An instance of `foo` is selected based on it matching in exactly one overload set:
```
import A;
import B;
void bar(C c, long i)
{
foo(); // calls A.foo()
foo(i); // calls A.foo(long)
foo(c); // calls B.foo(C)
foo(1,2); // error, does not match any foo
foo(1); // error, matches A.foo(long) and B.foo(int)
A.foo(1); // calls A.foo(long)
}
```
Even though `B.foo(int)` is a better match than `A.foo(long)` for `foo(1)`, it is an error because the two matches are in different overload sets.
Overload sets can be merged with an alias declaration:
```
import A;
import B;
alias foo = A.foo;
alias foo = B.foo;
void bar(C c)
{
foo(); // calls A.foo()
foo(1L); // calls A.foo(long)
foo(c); // calls B.foo(C)
foo(1,2); // error, does not match any foo
foo(1); // calls B.foo(int)
A.foo(1); // calls A.foo(long)
}
```
Function Parameters
-------------------
### Parameter Storage Classes
Parameter storage classes are `in`, `out`, `ref`, `lazy`, `return` and `scope`. Parameters can also take the type constructors `const`, `immutable`, `shared` and `inout`.
`in`, `out`, `ref` and `lazy` are mutually exclusive. The first three are used to denote input, output, and input/output parameters, respectively. For example:
```
int read(in char[] input, ref size_t count, out int errno);
void main()
{
size_t a = 42;
int b;
int r = read("Hello World", a, b);
}
```
`read` has three parameters. `input` will only be read and no reference to it will be retained. `count` may be read and written to, and `errno` will be set to a value from within the function.
The argument `"Hello World"` gets bound to parameter `input`, `a` gets bound to `count` and `b` to `errno`.
Parameter Storage Class and Type Constructor Overview:

| **Storage Class** | **Description** |
|---|---|
| *none* | The parameter will be a mutable copy of its argument. |
| `in` | The parameter is an input to the function. |
| `out` | The argument must be an lvalue, which will be passed by reference and initialized upon function entry with the default value (`T.init`) of its type. |
| `ref` | The parameter is an *input/output* parameter, passed by reference. |
| `scope` | The parameter must not escape the function call (e.g. by being assigned to a global variable). Ignored for any parameter that is not a reference type. |
| `return` | The parameter may be returned or copied to the first parameter, but otherwise does not escape from the function. Such copies are required not to outlive the argument(s) they were derived from. Ignored for parameters with no references. See [Scope Parameters](memory-safe-d#scope-return-params). |
| `lazy` | The argument is evaluated by the called function and not by the caller. |

| **Type Constructor** | **Description** |
|---|---|
| `const` | The argument is implicitly converted to a const type. |
| `immutable` | The argument is implicitly converted to an immutable type. |
| `shared` | The argument is implicitly converted to a shared type. |
| `inout` | The argument is implicitly converted to an inout type. |
### In Parameters
**Note: The following requires the `-preview=in` switch, available in [v2.094.0](https://dlang.org/changelog/2.094.0.html#preview-in) or higher. When not in use, `in` is equivalent to `const`.**
The parameter is an input to the function. Input parameters behave as if they have the `const scope` storage classes. Input parameters may also be passed by reference by the compiler.
Unlike `ref` parameters, `in` parameters can bind to both lvalues and rvalues (such as literals).
Types that would trigger a side effect if passed by value (such as types with copy constructor, postblit, or destructor), and types which cannot be copied (e.g. if their copy constructor is marked as `@disable`) will always be passed by reference. Dynamic arrays, classes, associative arrays, function pointers, and delegates will always be passed by value.
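As a rough sketch (hypothetical `sum` function; written for `-preview=in`, though it also compiles when `in` simply means `const`), `in` parameters accept both lvalue and rvalue arguments:
```
int sum(in int[] arr)
{
    int s;
    foreach (x; arr)
        s += x;
    return s;
}

void main()
{
    int[] a = [1, 2, 3];
    assert(sum(a) == 6);          // lvalue argument
    assert(sum([4, 5, 6]) == 15); // rvalue argument also binds
}
```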
**Implementation Defined:** If the type of the parameter does not fall in one of those categories, whether or not it is passed by reference is implementation defined, and the backend is free to choose the method that will best fit the ABI of the platform.
### Ref and Out Parameters
By default, parameters take rvalue arguments. A `ref` parameter takes an lvalue argument, so changes to its value will operate on the caller's argument.
```
void inc(ref int x)
{
x += 1;
}
void seattle()
{
int z = 3;
inc(z);
assert(z == 4);
}
```
A `ref` parameter can also be returned by reference, see [Return Ref Parameters.](#return-ref-parameters)
An `out` parameter is similar to a `ref` parameter, except it is initialized with its type's default value (`T.init`) upon function invocation.
```
void zero(out int x)
{
assert(x == 0);
}
void two(out int x)
{
x = 2;
}
void tacoma()
{
int a = 3;
zero(a);
assert(a == 0);
int y = 3;
two(y);
assert(y == 2);
}
```
For dynamic array and class object parameters, which are always passed by reference, `out` and `ref` apply only to the reference and not the contents.
### Lazy Parameters
An argument to a `lazy` parameter is not evaluated before the function is called. The argument is only evaluated if/when the parameter is evaluated within the function. Hence, a `lazy` argument can be executed 0 or more times.
```
import std.stdio : writeln;
void main()
{
int x;
3.times(writeln(x++));
writeln("-");
writeln(x);
}
void times(int n, lazy void exp)
{
while (n--)
exp();
}
```
prints to the console:
```
0
1
2
-
3
```
A `lazy` parameter cannot be an lvalue.
The underlying delegate of the `lazy` parameter may be extracted by using the `&` operator:
```
void test(lazy int dg)
{
int delegate() dg_ = &dg;
assert(dg_() == 7);
assert(dg == dg_());
}
void main()
{
int a = 7;
test(a);
}
```
A `lazy` parameter of type `void` can accept an argument of any type.
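A minimal sketch (hypothetical `log` function) using `lazy void` to accept, and conditionally evaluate, an argument of any type:
```
import std.stdio : writeln;

void log(bool enabled, lazy void msg)
{
    if (enabled)
        msg(); // the argument is evaluated only here
}

void main()
{
    log(false, writeln("never evaluated"));
    log(true, writeln("evaluated once"));
}
```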
See Also: [Lazy Variadic Functions](#lazy_variadic_functions)
### Function Default Arguments
Function parameter declarations can have default values:
```
void foo(int x, int y = 3)
{
...
}
...
foo(4); // same as foo(4, 3);
```
Default parameters are resolved and semantically checked in the context of the function declaration.
```
module m;
private immutable int b;
pure void g(int a=b){}
```
```
import m;
int b;
pure void f()
{
g(); // ok, uses m.b
}
```
The attributes of the [*AssignExpression*](expression#AssignExpression) are applied where the default expression is used.
```
module m;
int b;
pure void g(int a=b){}
```
```
import m;
enum int b = 3;
pure void f()
{
g(); // error, cannot access mutable global `m.b` in pure function
}
```
If the default value for a parameter is given, all following parameters must also have default values.
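For example (a minimal sketch):
```
void f(int a, int b = 2, int c = 3) { } // ok
//void g(int a = 1, int b) { }          // error, b also needs a default value

void test()
{
    f(1);    // same as f(1, 2, 3)
    f(1, 5); // same as f(1, 5, 3)
}
```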
### Return Ref Parameters
Return ref parameters are used with [ref functions](#ref-functions) to ensure that the returned reference will not outlive the matching argument's lifetime.
```
ref int identity(return ref int x) {
return x; // pass-through function that does nothing
}
ref int fun() {
int x;
return identity(x); // Error: escaping reference to local variable x
}
ref int gun(return ref int x) {
return identity(x); // OK
}
```
Struct non-static methods marked with the `return` attribute ensure the returned reference will not outlive the struct instance.
```
struct S
{
private int x;
ref int get() return { return x; }
}
ref int escape()
{
S s;
return s.get(); // Error: escaping reference to local variable s
}
```
Returning the address of a `ref` variable is also checked.
```
int* pluto(ref int i)
{
return &i; // error: returning &i escapes a reference to parameter i
}
int* mars(return ref int i)
{
return &i; // ok
}
```
If the function returns `void`, and the first parameter is `ref` or `out`, then all subsequent `return ref` parameters are considered as being assigned to the first parameter for lifetime checking. The `this` reference parameter to a struct non-static member function is considered the first parameter.
If there are multiple `return ref` parameters, the lifetime of the return value is the smallest lifetime of the corresponding arguments.
Neither the type of the `return ref` parameter(s) nor the type of the return value is considered when determining the lifetime of the return value.
It is not an error if the return type does not contain any indirections.
```
int mercury(return ref int i)
{
return i; // ok
}
```
Template functions, auto functions, nested functions and [lambdas](expression#function_literals) can deduce the `return` attribute.
```
ref int templateFunction()(ref int i)
{
return i; // ok
}
ref auto autoFunction(ref int i)
{
return i; // ok
}
void uranus()
{
ref int nestedFunction(ref int i)
{
return i; // ok
}
}
void venus()
{
auto lambdaFunction =
(ref int i)
{
return &i; // ok
};
}
```
`inout ref` parameters imply the `return` attribute.
```
inout(int)* neptune(inout ref int i)
{
return &i; // ok
}
```
### Scope Parameters
A `scope` parameter of reference type must not escape the function call (e.g. by being assigned to a global variable). It has no effect for non-reference types. `scope` escape analysis is only done for `@safe` functions. For other functions `scope` semantics must be manually enforced.
**Note:** `scope` escape analysis is currently only done by the `dmd` compiler when the `-dip1000` switch is passed.
```
@safe:
int* gp;
void thorin(scope int*);
void gloin(int*);
int* balin(scope int* q, int* r)
{
gp = q; // error, q escapes to global gp
gp = r; // ok
thorin(q); // ok, q does not escape thorin()
thorin(r); // ok
gloin(q); // error, gloin() escapes q
gloin(r); // ok that gloin() escapes r
return q; // error, cannot return 'scope' q
return r; // ok
}
```
As a `scope` parameter must not escape, the compiler can potentially avoid heap-allocating a unique argument to a `scope` parameter. Due to this, passing an array literal, delegate literal or a [*NewExpression*](expression#NewExpression) to a scope parameter may be allowed in a `@nogc` context, depending on the compiler implementation.
### Return Scope Parameters
Parameters marked as `return scope` that contain indirections can only escape those indirections via the function's return value.
```
@safe:
int* gp;
void thorin(scope int*);
void gloin(int*);
int* balin(return scope int* p)
{
gp = p; // error, p escapes to global gp
thorin(p); // ok, p does not escape thorin()
gloin(p); // error, gloin() escapes p
return p; // ok
}
```
Class references are considered pointers that are subject to `scope`.
```
@safe:
class C { }
C gp;
void thorin(scope C);
void gloin(C);
C balin(return scope C p, scope C q, C r)
{
gp = p; // error, p escapes to global gp
gp = q; // error, q escapes to global gp
gp = r; // ok
thorin(p); // ok, p does not escape thorin()
thorin(q); // ok
thorin(r); // ok
gloin(p); // error, gloin() escapes p
gloin(q); // error, gloin() escapes q
gloin(r); // ok that gloin() escapes r
return p; // ok
return q; // error, cannot return 'scope' q
return r; // ok
}
```
`return scope` can be applied to the `this` of class and interface member functions.
```
class C
{
C bofur() return scope { return this; }
}
```
Template functions, auto functions, nested functions and [lambdas](expression#function_literals) can deduce the `return scope` attribute.
### Ref Return Scope Parameters
Parameters marked as `ref return scope` come in two forms:
```
U xerxes(ref return scope V v); // (1) ref and return scope
ref U sargon(ref return scope V v); // (2) return ref and scope
```
The first form attaches the `return` to the `scope`, and has [return scope parameter](#return-scope-parameters) semantics for the value of the `ref` parameter.
The second form attaches the `return` to the `ref`, and has [return ref parameter](#return-ref-parameters) semantics with additional [scope parameter](memory-safe-d#scope-return-params) semantics.
Although a struct constructor returns a reference to the instance being constructed, it is treated as form (1).
The lexical order of the attributes `ref`, `return`, and `scope` is not significant.
It is not possible to have both `return ref` and `return scope` semantics for the same parameter.
```
@safe:
struct S
{
this(return scope ref int* p) { ptr = p; }
int val;
int* ptr;
}
int* foo1(ref return scope S s);
int foo2(ref return scope S s);
ref int* foo3(ref return scope S s);
ref int foo4(ref return scope S s);
int* test1(scope S s)
{
return foo1(s); // Error: scope variable `s` may not be returned
return foo3(s); // Error: scope variable `s` may not be returned
}
int test2(S s)
{
return foo2(s);
return foo4(s);
}
ref int* test3(S s)
{
return foo3(s); // Error: returning `foo3(s)` escapes a reference to parameter `s`
}
ref int test4(S s)
{
return foo4(s); // Error: returning `foo4(s)` escapes a reference to parameter `s`
}
S test5(ref scope int* p)
{
return S(p); // Error: scope variable `p` may not be returned
}
S test6(ref return scope int* p)
{
return S(p);
}
```
### User-Defined Attributes for Parameters
See also: [*User-Defined Attributes*](attribute#UserDefinedAttribute)
### Variadic Functions
*Variadic Functions* take a variable number of arguments. There are three forms:
1. [C-style variadic functions](#c_style_variadic_functions)
2. [Variadic functions with type info](#d_style_variadic_functions)
3. [Typesafe variadic functions](#typesafe-variadic_functions)
#### C-style Variadic Functions
A C-style variadic function is declared with a parameter `...` as the last function parameter. It has non-D linkage, such as `extern (C)`.
To access the variadic arguments, import the standard library module [`core.stdc.stdarg`](https://dlang.org/phobos/core_stdc_stdarg.html).
```
import core.stdc.stdarg;
extern (C) void dry(int x, int y, ...); // C-style Variadic Function
void spin()
{
dry(3, 4); // ok, no variadic arguments
dry(3, 4, 6.8); // ok, one variadic argument
dry(2); // error, no argument for parameter y
}
```
There must be at least one non-variadic parameter declared.
```
extern (C) int def(...); // error, must have at least one parameter
```
C-style variadic functions match the C calling convention for variadic functions, and can call C Standard library functions like `printf`.
```
extern (C) int printf(const(char)*, ...);
void main()
{
printf("hello world\n");
}
```
C-style variadic functions cannot be marked as `@safe`.
```
void wash()
{
rinse(3, 4, 5); // first variadic argument is 5
}
import core.stdc.stdarg;
extern (C) void rinse(int x, int y, ...)
{
va_list args;
va_start(args, y); // y is the last named parameter
int z;
va_arg(args, z); // z is set to 5
va_end(args);
}
```
#### D-style Variadic Functions
D-style variadic functions have D linkage and `...` as the last parameter.
`...` can be the only parameter.
If there are parameters preceding the `...` parameter, there must be a comma separating them from the `...`.
**Note:** If the comma is omitted, it is a [TypeSafe Variadic Function](#variadic).
```
int abc(char c, ...); // one required parameter: c
int def(...); // no required parameters
int ghi(int i ...); // a typesafe variadic function
//int boo(, ...); // error
```
Two hidden arguments are passed to the function:
* `void* _argptr`
* `TypeInfo[] _arguments`
`_argptr` is a reference to the first of the variadic arguments. To access the variadic arguments, import [`core.vararg`](https://dlang.org/phobos/core_vararg.html). Use `_argptr` in conjunction with `core.vararg.va_arg`:
```
import core.vararg;
void test()
{
foo(3, 4, 5); // first variadic argument is 5
}
@system void foo(int x, int y, ...)
{
int z = va_arg!int(_argptr); // z is set to 5 and _argptr is advanced
// to the next argument
}
```
`_arguments` gives the number of arguments and the `typeid` of each, enabling type safety to be checked at run time.
```
import std.stdio;
void main()
{
Foo f = new Foo();
Bar b = new Bar();
writefln("%s", f);
printargs(1, 2, 3L, 4.5, f, b);
}
class Foo { int x = 3; }
class Bar { long y = 4; }
import core.vararg;
@system void printargs(int x, ...)
{
writefln("%d arguments", _arguments.length);
for (int i = 0; i < _arguments.length; i++)
{
writeln(_arguments[i]);
if (_arguments[i] == typeid(int))
{
int j = va_arg!(int)(_argptr);
writefln("\t%d", j);
}
else if (_arguments[i] == typeid(long))
{
long j = va_arg!(long)(_argptr);
writefln("\t%d", j);
}
else if (_arguments[i] == typeid(double))
{
double d = va_arg!(double)(_argptr);
writefln("\t%g", d);
}
else if (_arguments[i] == typeid(Foo))
{
Foo f = va_arg!(Foo)(_argptr);
writefln("\t%s", f);
}
else if (_arguments[i] == typeid(Bar))
{
Bar b = va_arg!(Bar)(_argptr);
writefln("\t%s", b);
}
else
assert(0);
}
}
```
which prints:
```
0x00870FE0
5 arguments
int
2
long
3
double
4.5
Foo
0x00870FE0
Bar
0x00870FD0
```
D-style variadic functions cannot be marked as `@safe`.
#### Typesafe Variadic Functions
A typesafe variadic function has D linkage and a variadic parameter declared as either an array or a class. The array or class is constructed from the arguments, and is passed as an array or class object.
For dynamic arrays:
```
int sum(int[] ar ...) // typesafe variadic function
{
int s;
foreach (int x; ar)
s += x;
return s;
}
import std.stdio;
void main()
{
writeln(stan()); // 6
writeln(ollie()); // 15
}
int stan()
{
return sum(1, 2, 3) + sum(); // returns 6+0
}
int ollie()
{
int[3] ii = [4, 5, 6];
return sum(ii); // returns 15
}
```
For static arrays, the number of arguments must match the array dimension.
```
int sum(int[3] ar ...) // typesafe variadic function
{
int s;
foreach (int x; ar)
s += x;
return s;
}
int frank()
{
return sum(2, 3); // error, need 3 values for array
return sum(1, 2, 3); // returns 6
}
int dave()
{
int[3] ii = [4, 5, 6];
int[] jj = ii;
return sum(ii); // returns 15
return sum(jj); // error, type mismatch
}
```
For class objects:
```
int tesla(int x, C c ...)
{
return x + c.x;
}
class C
{
int x;
string s;
this(int x, string s)
{
this.x = x;
this.s = s;
}
}
void edison()
{
C c = new C(3, "abc");
tesla(1, c); // ok, since c is an instance of C
tesla(1, 4, "def"); // ok
tesla(1, 5); // error, no matching constructor for C
}
```
The lifetime of the variadic class object or array instance ends at the end of the function.
```
C orville(C c ...)
{
return c; // error, c instance contents invalid after return
}
int[] wilbur(int[] a ...)
{
return a; // error, array contents invalid after return
return a[0..1]; // error, array contents invalid after return
return a.dup; // ok, since copy is made
}
```
**Implementation Defined:** the variadic object or array instance may be constructed on the stack.
For other types, the argument is passed by value.
```
int neil(int i ...)
{
return i;
}
void buzz()
{
neil(3); // returns 3
neil(3, 4); // error, too many arguments
int[] x;
neil(x); // error, type mismatch
}
```
#### Lazy Variadic Functions
If the variadic parameter of a function is an array of delegates with no parameters, then each of the arguments whose type does not match that of the delegate is converted to a delegate of that type.
```
void hal(scope int delegate()[] dgs ...);
void dave()
{
int x;
int delegate() dg;
hal(1, 3+x, dg, cast(int delegate())null); // (1)
hal( { return 1; }, { return 3+x; }, dg, null ); // same as (1)
}
```
The variadic delegate array differs from using a lazy variadic array. With the latter, each access to an element of the array evaluates every array element again. With the former, only the element being accessed is evaluated.
```
import std.stdio;
void main()
{
int x;
ming(++x, ++x);
int y;
flash(++y, ++y);
}
// lazy variadic array
void ming(lazy int[] arr...)
{
writeln(arr[0]); // 1
writeln(arr[1]); // 4
}
// variadic delegate array
void flash(scope int delegate()[] arr ...)
{
writeln(arr[0]()); // 1
writeln(arr[1]()); // 2
}
```
**Best Practices:** Use `scope` when declaring the array of delegates parameter. This will prevent a closure being generated for the delegate, as `scope` means the delegate will not escape the function.
Local Variables
---------------
Local variables are declared within the scope of a function. Function parameters are included.
A local variable cannot be read without first assigning it a value.
**Implementation Defined:** The implementation may not always be able to detect these cases.
The address of or reference to a local non-static variable cannot be returned from the function.
A local variable and a label in the same function cannot have the same name.
A local variable cannot hide another local variable in the same function.
**Rationale:** whenever this is done it often is a bug or at least looks like a bug.
```
ref double func(int x)
{
int x; // error, hides previous definition of x
double y;
{
char y; // error, hides previous definition of y
int z;
}
{
wchar z; // Ok, previous z is out of scope
}
z: // error, z is a local variable and a label
return y; // error, returning ref to local
}
```
### Local Static Variables
Local variables in functions declared as `static`, `shared static` or `__gshared` are statically allocated rather than being allocated on the stack. The lifetime of `__gshared` and `shared static` variables begins when the function is first executed and ends when the program ends. The lifetime of `static` variables begins when the function is first executed within the thread and ends when that thread terminates.
```
void foo()
{
static int n;
if (++n == 100)
writeln("called 100 times");
}
```
The initializer for a static variable must be evaluatable at compile time. There are no static constructors or static destructors for static local variables.
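A small sketch (hypothetical `counter` function): the initializer must be a compile-time value, while mutation at run time is unrestricted:
```
int counter()
{
    static int calls = 0;         // ok, 0 is evaluatable at compile time
    //static int bad = counter(); // error: not evaluatable at compile time
    return ++calls;
}

void main()
{
    assert(counter() == 1);
    assert(counter() == 2); // the value persists across calls
}
```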
Although static variable name visibility follows the usual scoping rules, their names must be unique within a particular function.
```
void main()
{
{ static int x; }
{ static int x; } // error
{ int i; }
{ int i; } // ok
}
```
Nested Functions
----------------
Functions may be nested within other functions:
```
int bar(int a)
{
int foo(int b)
{
int abc() { return 1; }
return b + abc();
}
return foo(a);
}
void test()
{
int i = bar(3); // i is assigned 4
}
```
Nested functions can be accessed only if the name is in scope.
```
void foo()
{
void A()
{
B(); // error, B() is forward referenced
C(); // error, C undefined
}
void B()
{
A(); // ok, in scope
void C()
{
void D()
{
A(); // ok
B(); // ok
C(); // ok
D(); // ok
}
}
}
A(); // ok
B(); // ok
C(); // error, C undefined
}
```
and:
```
int bar(int a)
{
int foo(int b) { return b + 1; }
int abc(int b) { return foo(b); } // ok
return foo(a);
}
void test()
{
int i = bar(3); // ok
int j = bar.foo(3); // error, bar.foo not visible
}
```
Nested functions have access to the variables and other symbols defined by the lexically enclosing function. This access includes both the ability to read and write them.
```
int bar(int a)
{
int c = 3;
int foo(int b)
{
b += c; // 4 is added to b
c++; // bar.c is now 5
return b + c; // 12 is returned
}
c = 4;
int i = foo(a); // i is set to 12
return i + c; // returns 17
}
void test()
{
int i = bar(3); // i is assigned 17
}
```
This access can span multiple nesting levels:
```
int bar(int a)
{
int c = 3;
int foo(int b)
{
int abc()
{
return c; // access bar.c
}
return b + c + abc();
}
return foo(3);
}
```
Static nested functions cannot access any stack variables of any lexically enclosing function, but can access static variables. This is analogous to how static member functions behave.
```
int bar(int a)
{
int c;
static int d;
static int foo(int b)
{
b = d; // ok
b = c; // error, foo() cannot access frame of bar()
return b + 1;
}
return foo(a);
}
```
Functions can be nested within member functions:
```
struct Foo
{
int a;
int bar()
{
int c;
int foo()
{
return c + a;
}
return 0;
}
}
```
Nested functions always have the D function linkage type.
Unlike module level declarations, declarations within function scope are processed in order. This means that two nested functions cannot mutually call each other:
```
void test()
{
void foo() { bar(); } // error, bar not defined
void bar() { foo(); } // ok
}
```
There are several workarounds for this limitation:
* Declare the functions to be static members of a nested struct:
```
void test()
{
static struct S
{
static void foo() { bar(); } // ok
static void bar() { foo(); } // ok
}
S.foo(); // compiles (but note the infinite runtime loop)
}
```
- Declare one or more of the functions to be function templates even if they take no specific template arguments:
```
void test()
{
void foo()() { bar(); } // ok (foo is a function template)
void bar() { foo(); } // ok
}
```
- Declare the functions inside of a mixin template:
```
mixin template T()
{
void foo() { bar(); } // ok
void bar() { foo(); } // ok
}
void main()
{
mixin T!();
}
```
- Use a delegate:
```
void test()
{
void delegate() fp;
void foo() { fp(); }
void bar() { foo(); }
fp = &bar;
}
```
Nested functions cannot be overloaded.
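For example (a minimal sketch):
```
void test()
{
    void f(int x) { }
    //void f(string s) { } // error: nested functions cannot be overloaded
    f(1);
}
```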
Delegates, Function Pointers, and Closures
------------------------------------------
A function pointer can point to a static nested function:
```
int function() fp; // fp is a pointer to a function returning an int
void test()
{
static int a = 7;
static int foo() { return a + 3; }
fp = &foo;
}
void bar()
{
test();
int i = fp(); // i is set to 10
}
```
**Implementation Defined:** Two functions with identical bodies, or two functions that compile to identical assembly code, are not guaranteed to have distinct function pointer values. The implementation may merge function bodies into one if they compile to identical code.
```
int abc(int x) { return x + 1; }
uint def(uint y) { return y + 1; }
int function(int) fp1 = &abc;
uint function(uint) fp2 = &def;
// Do not rely on fp1 and fp2 being different values; the compiler may merge
// them.
```
A delegate can be set to a non-static nested function:
```
int delegate() dg;
void test()
{
int a = 7;
int foo() { return a + 3; }
dg = &foo;
int i = dg(); // i is set to 10
}
```
The stack variables referenced by a nested function are still valid even after the function exits (NOTE this is different from D 1.0.) This is called a *closure*.
Those referenced stack variables that make up the closure are allocated on the GC heap. Closures are not allowed for `@nogc` functions.
**Note:** Returning addresses of stack variables, however, is not a closure and is an error.
```
void bar()
{
int b;
test();
int i = dg(); // ok, test.a is in a closure and still exists
}
```
Delegates to non-static nested functions contain two pieces of data: the pointer to the stack frame of the lexically enclosing function (called the *context pointer*) and the address of the function. This is analogous to struct/class non-static member function delegates consisting of a *this* pointer and the address of the member function. Both forms of delegates are indistinguishable, and are the same type.
```
struct Foo
{
int a = 7;
int bar() { return a; }
}
int foo(int delegate() dg)
{
return dg() + 1;
}
void test()
{
int x = 27;
int abc() { return x; }
Foo f;
int i;
i = foo(&abc); // i is set to 28
i = foo(&f.bar); // i is set to 8
}
```
This combining of the environment and the function is called a *dynamic closure*.
The `.ptr` property of a delegate will return the *context pointer* value as a `void*`.
The `.funcptr` property of a delegate will return the *function pointer* value as a function type.
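As a hedged sketch (plain `@system` code), the two properties can be used to take a delegate apart and reassemble it by hand:
```
void main()
{
    int x = 5;
    int get() { return x; }

    int delegate() dg = &get;
    void* ctx = dg.ptr;    // context pointer (stack frame or closure)
    auto fn = dg.funcptr;  // raw function pointer

    int delegate() dg2;
    dg2.ptr = ctx;         // rebuild an equivalent delegate
    dg2.funcptr = fn;
    assert(dg2() == 5);
}
```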
Function pointers are zero-initialized by default. They can be initialized to the address of any function (including a function literal). Initialization with the address of a function that requires a context pointer is not allowed in @safe functions.
```
struct S
{
static int sfunc();
int member(); // has hidden `this` reference parameter
}
@safe void sun()
{
int function() fp = &S.sfunc;
fp(); // Ok
fp = &S.member; // error
}
@system void moon()
{
int function() fp = &S.member; // Ok because @system
fp(); // undefined behavior
}
```
Delegates are zero-initialized by default. They can be initialized by taking the address of a non-static member function, but a context pointer must be supplied. They can be initialized by taking the address of a non-static nested function or function literal, where the context pointer will be set to point to the stack frame, closure, or `null`.
Delegates cannot be initialized by taking the address of a global function, a static member function, or a static nested function.
```
struct S
{
static int sfunc();
int member() { return 1; }
}
void main()
{
S s;
int delegate() dg = &s.member; // Ok, s supplies context pointer
assert(dg() == 1);
dg = &S.sfunc; // error
dg = &S.member; // error
int moon() { return 2; }
dg = &moon; // Ok
assert(dg() == 2);
static int mars() { return 3; }
dg = &mars; // error
dg = () { return 4; }; // Ok
assert(dg() == 4);
}
```
**Note:** Function pointers can be passed to functions taking a delegate argument by passing them through the [`std.functional.toDelegate`](https://dlang.org/phobos/std_functional.html#toDelegate) template, which converts any callable to a delegate with a `null` context pointer.
### Anonymous Functions and Anonymous Delegates
See [*FunctionLiteral*](expression#FunctionLiteral)s.
`main()` Function
------------------
For console programs, `main()` serves as the entry point. It gets called after all the module initializers are run, and after any unittests are run. After it returns, all the module destructors are run. `main()` must be declared using one of the following forms:
* `void main() { ... }`
* `void main(string[] args) { ... }`
* `int main() { ... }`
* `int main(string[] args) { ... }`
The main function must have D linkage.
Attributes may be added as needed, e.g. `@safe`, `@nogc`, `nothrow`, etc.
### BetterC `main()` Function
For **BetterC** programs, the main function is declared using one of the following forms:
* `extern (C) int main() { ... }`
* `extern (C) int main(int argc, char** argv) { ... }`
This takes the place of the C main function and serves the identical purpose.
Module constructors, module destructors, and unittests are not run.
**Implementation Defined:** Other system-specific entry points may exist, such as `WinMain` and `DllMain` on Windows systems.
Function Templates
------------------
Functions can have compile time arguments in the form of a template. See [function templates](template#function-templates).
Compile Time Function Execution (CTFE)
--------------------------------------
In contexts where a compile time value is required, functions can be used to compute those values. This is called *Compile Time Function Execution*, or *CTFE*.
These contexts are:
* initialization of a static variable or a [manifest constant](enum#manifest_constants)
* static initializers of struct/class members
* dimension of a [static array](arrays#static-arrays)
* argument for a [template value parameter](template#template_value_parameter)
* [`static if`](version#staticif)
* [`static foreach`](version#staticforeach)
* [`static assert`](version#static-assert)
* [`mixin` statement](statement#mixin-statement)
* [`pragma` argument](pragma)
* [`__traits` argument](traits)
```
enum eval(Args...) = Args[0];
int square(int i)
{
return i * i;
}
void foo()
{
import std.stdio : writeln;
static j = square(3); // CTFE
writeln(j);
assert(square(4)); // run time
static assert(square(3) == 9); // CTFE
writeln(eval!(square(5))); // CTFE
}
```
The function must have a [*SpecifiedFunctionBody*](#SpecifiedFunctionBody).
CTFE is subject to the following restrictions:
1. Expressions may not reference any global or local static variables.
2. [AsmStatements](https://dlang.org/iasm.html#asmstatements) are not permitted
3. Non-portable casts (eg, from `int[]` to `float[]`), including casts which depend on endianness, are not permitted. Casts between signed and unsigned types are permitted.
4. Reinterpretation of overlapped fields in a union is not permitted.
Pointers are permitted in CTFE, provided they are used safely:
* Pointer arithmetic is permitted only on pointers which point to static or dynamic array elements. A pointer may also point to the first element past the array, although such pointers cannot be dereferenced. Pointer arithmetic on pointers which are null, or which point to a non-array, is not allowed.
* Ordered comparison (`<`, `<=`, `>`, `>=`) between two pointers is permitted when both pointers point to the same array, or when at least one pointer is `null`.
* Pointer comparisons between discontiguous memory blocks are illegal, unless two such comparisons are combined using `&&` or `||` to yield a result which is independent of the ordering of memory blocks. Each comparison must consist of two pointer expressions compared with `<`, `<=`, `>`, or `>=`, and may optionally be negated with `!`. For example, the expression `(p1 > q1 && p2 <= q2)` is permitted when `p1`, `p2` are expressions yielding pointers to memory block *P*, and `q1`, `q2` are expressions yielding pointers to memory block *Q*, even when *P* and *Q* are unrelated memory blocks. It returns true if `[p1..p2]` lies inside `[q1..q2]`, and false otherwise. Similarly, the expression `(p1 < q1 || p2 > q2)` is true if `[p1..p2]` lies outside `[q1..q2]`, and false otherwise.
* Equality comparisons (`==`, `!=`, `is`, `!is`) are permitted between all pointers, without restriction.
* Any pointer may be cast to `void*` and from `void*` back to its original type. Casting between pointer and non-pointer types is illegal.
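As a sketch, the pointer rules above permit the following during CTFE:

```
int sum(const int[] a)
{
    int total;
    // Pointer arithmetic is permitted because p points to array elements;
    // a.ptr + a.length is the one-past-the-end pointer and is never dereferenced.
    for (const(int)* p = a.ptr; p != a.ptr + a.length; ++p)
        total += *p;
    return total;
}

static assert(sum([1, 2, 3]) == 6); // evaluated via CTFE
```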
The above restrictions apply only to expressions which are actually executed. For example:
```
static int y = 0;
int countTen(int x)
{
if (x > 10)
++y; // access static variable
return x;
}
static assert(countTen(6) == 6); // OK
static assert(countTen(12) == 12); // invalid, modifies y.
```
The `__ctfe` boolean pseudo-variable evaluates to true during CTFE but false otherwise.
**Note:** `__ctfe` can be used to provide an alternative execution path to avoid operations which are forbidden in CTFE. Every usage of `__ctfe` is statically evaluated and has no run-time cost. Non-recoverable errors (such as assert failures) are illegal in CTFE.
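For example, `__ctfe` might be used like this, as a sketch (the `printf` diagnostic merely stands in for any run-time-only operation):

```
size_t simpleHash(string s)
{
    size_t h;
    foreach (c; s)
        h = h * 31 + c;
    if (!__ctfe)
    {
        // run-time-only diagnostics; calling C functions is forbidden in CTFE,
        // but this branch is skipped during compile time evaluation
        import core.stdc.stdio : printf;
        printf("hashed %zu bytes\n", s.length);
    }
    return h;
}

enum hashed = simpleHash("abc"); // CTFE; the printf branch is never taken
```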
**Implementation Defined:** Executing functions via CTFE can take considerably longer than executing them at run time. If the function goes into an infinite loop, it may cause the compiler to hang.
**Implementation Defined:** Functions executed via CTFE can give different results from run time when implementation-defined or undefined behavior occurs.
### String Mixins and Compile Time Function Execution
All functions that execute in CTFE must also be executable at run time. The compile time evaluation of a function does the equivalent of running the function at run time. The semantics of a function cannot depend on the compile-time values of its arguments. For example:
```
int foo(string s)
{
return mixin(s);
}
const int x = foo("1");
```
is illegal, because the runtime code for `foo` cannot be generated.
**Best Practices:** A function template, where `s` is a template argument, would be the appropriate method to implement this sort of thing; see the sketch below.
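As a sketch of that suggestion, the string becomes a compile-time template argument, so no run-time body depends on it:

```
int foo(string s)()
{
    return mixin(s);
}

const int x = foo!"1"(); // OK: `s` is a compile-time value
```

No-GC Functions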
---------------
No-GC functions are functions marked with the `@nogc` attribute. Those functions do not allocate memory on the GC heap. These operations are not allowed in No-GC functions:
1. [constructing an array](https://dlang.org/expression.html#ArrayLiteral) on the heap
2. resizing an array by writing to its `.length` property
3. [array concatenation](https://dlang.org/expression.html#CatExpression)
4. [array appending](https://dlang.org/expression.html#simple_assignment_expressions)
5. [constructing an associative array](https://dlang.org/expression.html#AssocArrayLiteral)
6. [indexing](https://dlang.org/expression.html#IndexExpression) an associative array. **Note:** this is disallowed because it may throw `RangeError` if the specified key is not present.
7. [allocating an object with `new`](https://dlang.org/expression.html#NewExpression) on the heap
8. calling functions that are not `@nogc`, unless the call is in a [*ConditionalStatement*](version#ConditionalStatement) controlled by a [*DebugCondition*](version#DebugCondition)
```
@nogc void foo()
{
auto a = ['a']; // (1) error, allocates
a.length = 1; // (2) error, array resizing allocates
a = a ~ a; // (3) error, array concatenation allocates
a ~= 'c'; // (4) error, appending to arrays allocates
auto aa = ["x":1]; // (5) error, allocates
aa["abc"]; // (6) error, indexing may allocate and throws
auto p = new int; // (7) error, operator new allocates
bar(); // (8) error, bar() may allocate
debug bar(); // (8) Ok
}
void bar() { }
```
No-GC functions cannot be closures.
```
@nogc int delegate() foo()
{
int n; // error, variable n cannot be allocated on heap
return (){ return n; }; // since `n` escapes `foo()`, a closure is required
}
```
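One possible workaround, sketched below with a hypothetical `Counter` type, is to keep the state in a caller-owned struct and return a delegate bound to it, so no GC closure is needed:

```
struct Counter
{
    int n;
    int get() @nogc { return n; }
}

// no closure allocation: the delegate's context is the caller-owned struct
int delegate() @nogc bind(ref Counter c) @nogc
{
    return &c.get;
}

void test() @nogc
{
    auto c = Counter(5);
    auto dg = bind(c);
    assert(dg() == 5);
}
```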
`@nogc` affects the type of the function. A `@nogc` function is covariant with a non-`@nogc` function.
```
void function() fp;
void function() @nogc gp; // pointer to @nogc function
void foo();
@nogc void bar();
void test()
{
fp = &foo; // ok
fp = &bar; // ok, it's covariant
gp = &foo; // error, not covariant
gp = &bar; // ok
}
```
Function Safety
---------------
### Safe Functions
Safe functions are marked with the `@safe` attribute. `@safe` can be inferred, see [Function Attribute Inference](#function-attribute-inference).
Safe functions have [safe interfaces](#safe-interfaces). An implementation must enforce this by restricting the function's body to operations that are known safe.
The following operations are not allowed in safe functions:
* No casting from a pointer type to any type with pointers other than `void*`.
* No casting from any non-pointer type to a pointer type.
* No pointer arithmetic (including pointer indexing).
* Cannot access unions that have pointers or references overlapping with other types.
* Calling any [System Functions](#system-functions).
* No catching of exceptions that are not derived from [`class Exception`](https://dlang.org/phobos/object.html#Exception).
* No inline assembler.
* No explicit casting of mutable objects to immutable.
* No explicit casting of immutable objects to mutable.
* No explicit casting of thread local objects to shared.
* No explicit casting of shared objects to thread local.
* Cannot access `__gshared` variables.
* Cannot use `void` initializers for pointers.
* Cannot use `void` initializers for class or interface references.
When indexing or slicing an array, an out of bounds access will cause a runtime error.
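As a sketch, a few of these restrictions look like this in practice (the `version (none)` lines would be rejected if compiled):

```
@safe void f()
{
    int* p = new int;
    version (none) p += 1;                 // error: pointer arithmetic
    version (none) auto q = cast(int*) 1;  // error: non-pointer to pointer cast
    int[] a = [1, 2, 3];
    auto v = a[1]; // OK: indexing is bounds-checked at run time
}
```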
Functions nested inside safe functions default to being safe functions.
Safe functions are covariant with trusted or system functions.
**Best Practices:** Mark as many functions `@safe` as practical.
#### Safe External Functions
External functions don't have a function body visible to the compiler:
```
@safe extern (C) void play();
```
and so safety cannot be verified automatically. **Best Practices:** Explicitly set an attribute for external functions rather than relying on default settings.
### Trusted Functions
Trusted functions are marked with the `@trusted` attribute.
Like [safe functions](#safe-functions), trusted functions have [safe interfaces](#safe-interfaces). Unlike safe functions, this is not enforced by restrictions on the function body. Instead, it is the responsibility of the programmer to ensure that the interface of a trusted function is safe.
Example:
```
immutable(int)* f(int* p) @trusted
{
version (none) p[2] = 13;
// Invalid. p[2] is out of bounds. This line would exhibit undefined
// behavior.
version (none) p[1] = 13;
// Invalid. In this program, p[1] happens to be in-bounds, so the
// line would not exhibit undefined behavior, but a trusted function
// is not allowed to rely on this.
version (none) return cast(immutable) p;
// Invalid. @safe code still has mutable access and could trigger
// undefined behavior by overwriting the value later on.
int* p2 = new int;
*p2 = 42;
return cast(immutable) p2;
// Valid. After f returns, no mutable aliases of p2 can exist.
}
void main() @safe
{
int[2] a = [10, 20];
int* mp = &a[0];
immutable(int)* ip = f(mp);
assert(a[1] == 20); // Guaranteed. f cannot access a[1].
assert(ip !is mp); // Guaranteed. f cannot introduce unsafe aliasing.
}
```
Trusted functions may call safe, trusted, or system functions.
Trusted functions are covariant with safe or system functions.
**Best Practices:** Trusted functions should be kept small so that they are easier to manually verify.
### System Functions
System functions are functions not marked with `@safe` or `@trusted` and are not nested inside `@safe` functions. System functions may be marked with the `@system` attribute. A function being system does not mean it actually is unsafe; it just means that its safety must be manually verified.
System functions are **not** covariant with trusted or safe functions.
System functions can call safe and trusted functions.
**Best Practices:** When in doubt, mark `extern (C)` and `extern (C++)` functions as `@system` when their implementations are not in D, as the D compiler will be unable to check them. Most of them are `@safe`, but they will need to be manually checked.
**Best Practices:** The number and size of system functions should be minimized. This minimizes the work necessary to manually check for safety.
### Safe Interfaces
Provided it is only called with [safe values](#safe-values) and [safe aliasing](#safe-aliasing), a function has a safe interface when:
1. it cannot exhibit [undefined behavior](https://dlang.org/glossary.html#undefined_behavior), and
2. it cannot create unsafe values that are accessible from other parts of the program (e.g., via return values, global variables, or `ref` parameters), and
3. it cannot introduce unsafe aliasing that is accessible from other parts of the program.
Functions that meet these requirements may be [`@safe`](#safe-functions) or [`@trusted`](#trusted-functions). Functions that do not meet these requirements can only be [`@system`](#system-functions).
Examples:
* C's `free` does not have a safe interface:
```
extern (C) @system void free(void* ptr);
```
because `free(p)` invalidates `p`, making its value unsafe. `free` can only be `@system`.
* C's `strlen` and `memcpy` do not have safe interfaces:
```
extern (C) @system size_t strlen(char* s);
extern (C) @system void* memcpy(void* dst, void* src, size_t nbytes);
```
because they iterate pointers based on unverified assumptions (`strlen` assumes that `s` is zero-terminated; `memcpy` assumes that the memory objects pointed to by `dst` and `src` are at least `nbytes` big). Any function that traverses a C string passed as an argument can only be `@system`. Any function that trusts a separate parameter for array bounds can only be `@system`.
* C's `malloc` does have a safe interface:
```
extern (C) @trusted void* malloc(size_t sz);
```
It does not exhibit undefined behavior for any input. It returns either a valid pointer, which is safe, or `null` which is also safe. It returns a pointer to a fresh allocation, so it cannot introduce any unsafe aliasing. **Note:** The implementation of `malloc` is most likely @system code.
* A D version of `memcpy` can have a safe interface:
```
@safe void memcpy(E)(E[] src, E[] dst)
{
import std.algorithm.comparison : min;
foreach (i; 0 .. min(src.length, dst.length))
{
dst[i] = src[i];
}
}
```
because the rules for [safe values](#safe-values) ensure that the lengths of the arrays are correct.
### Safe Values
For [basic data types](type#basic-data-types), all possible bit patterns are safe.
A pointer is a safe value when it is one of:
1. `null`
2. it points to a memory object that is live and the pointed-to value in that memory object is safe.
Examples:
```
int* n = null; /* n is safe because dereferencing null is a well-defined
crash. */
int* x = cast(int*) 0xDEADBEEF; /* x is (most likely) unsafe because it
is not a valid pointer and cannot be dereferenced. */
import core.stdc.stdlib: malloc, free;
int* p1 = cast(int*) malloc(int.sizeof); /* p1 is safe because the
pointer is valid and *p1 is safe regardless of its actual value. */
free(p1); /* This makes p1 unsafe. */
int** p2 = &p1; /* While it can be dereferenced, p2 is unsafe because p1
is unsafe. */
p1 = null; /* This makes p1 and p2 safe. */
```
A dynamic array is safe when:
1. its pointer is safe, and
2. its length is in-bounds with the corresponding memory object, and
3. all its elements are safe.
Examples:
```
int[] f() @system
{
int[3] a;
int[] d1 = a[0 .. 2]; /* d1 is safe. */
int[] d2 = a.ptr[0 .. 3]; /* d2 is unsafe because it goes beyond a's
bounds. */
int*[] d3 = [cast(int*) 0xDEADBEEF]; /* d3 is unsafe because the
element is unsafe. */
return d1; /* Up to here, d1 was safe, but its pointer becomes
invalid when the function returns, so the returned dynamic array
is unsafe. */
}
```
A static array is safe when all its elements are safe. Regardless of the element type, a static array with length zero is always safe.
An associative array is safe when all its keys and elements are safe.
A struct/union instance is safe when:
1. the values of its accessible fields are safe, and
2. it does not introduce [unsafe aliasing](#safe-aliasing) with unions.
Examples:
```
struct S { int* p; }
S s1 = S(new int); /* s1 is safe. */
S s2 = S(cast(int*) 0xDEADBEEF); /* s2 is unsafe, because s2.p is
unsafe. */
union U { int* p; size_t x; }
U u = U(new int); /* Even though both u.p and u.x are safe, u is unsafe
because of unsafe aliasing. */
```
A class reference is safe when it is `null` or:
1. it refers to a valid class instance of the class type or a type derived from the class type, and
2. the values of the instance's accessible fields are safe, and
3. it does not introduce unsafe aliasing with unions.
A function pointer is safe when it is `null` or it refers to a valid function that has the same or a covariant signature.
A `delegate` is safe when:
1. its `.funcptr` property is `null` or refers to a function that matches or is covariant with the delegate type, and
2. its `.ptr` property is `null` or refers to a memory object that is in a form expected by the function.
### Safe Aliasing
When one memory location is accessible with two different types, that aliasing is considered safe if:
1. both types are `const` or `immutable`; or
2. one of the types is mutable while the other is a `const`-qualified [basic data type](type#basic-data-types); or
3. both types are mutable basic data types; or
4. one of the types is a static array type with length zero; or
5. one of the types is a static array type with non-zero length, and aliasing of the array's element type and the other type is safe; or
6. both types are pointer types, and aliasing of the target types is safe, and the target types have the same size.
All other cases of aliasing are considered unsafe.
**Note:** Safe aliasing may be exposed to functions with [safe interfaces](#safe-interfaces) without affecting their guaranteed safety. Unsafe aliasing voids that guarantee. **Note:** Safe aliasing does not imply that all aliased views of the data have [safe values](#safe-values). Those must be examined separately for safety.
Examples:
```
void f1(ref ubyte x, ref float y) @safe { x = 0; y = float.init; }
union U1 { ubyte x; float y; } // safe aliasing
U1 u1;
f1(u1.x, u1.y); // Ok
void f2(ref int* x, ref int y) @trusted { x = new int; y = 0xDEADBEEF; }
union U2 { int* x; int y; } // unsafe aliasing
U2 u2;
version (none) f2(u2.x, u2.y); // not safe
```
Function Attribute Inference
----------------------------
[*FunctionLiteral*](expression#FunctionLiteral)s, [Auto Functions](#auto-functions), [Auto Ref Functions](#auto-ref-functions), [nested functions](#nested) and [function templates](template#function-templates), since their function bodies are always present, infer the [`pure`](#pure-functions), [`nothrow`](#nothrow-functions), [`@safe`](#safe-functions), [`@nogc`](#nogc-functions), [return ref parameters](#return-ref-parameters), [scope parameters](#scope-parameters), [return scope parameters](#return-scope-parameters) and [ref return scope parameters](#ref-return-scope-parameters) attributes unless specifically overridden.
Attribute inference is not done for other functions, even if the function body is present.
The inference is done by determining if the function body follows the rules of the particular attribute.
Cyclic functions (i.e. functions that wind up directly or indirectly calling themselves) are inferred as being impure, throwing, and `@system`.
If a function attempts to test itself for those attributes, then the function is inferred as not having those attributes.
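For example, the following sketch shows inference at work on a function template; the `static assert` spells out the inferred function type:

```
T twice(T)(T x) { return x + x; }

// the instantiated body is pure, nothrow, @safe and @nogc, so those are inferred
static assert(is(typeof(&twice!int) == int function(int) pure nothrow @nogc @safe));
```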
**Rationale:** Function attribute inference greatly reduces the need for the user to add attributes to functions, especially for templates.
Uniform Function Call Syntax (UFCS)
-----------------------------------
A free function can be called with a syntax equivalent to that of a member function of its first parameter type. The free function is called a *UFCS function*.
```
struct T { }
void sun(T, int);
void moon(T t)
{
t.sun(1);
// If `T` does not have member function `sun`,
// `t.sun(1)` is interpreted as if it were written `sun(t, 1)`
}
```
**Rationale:** This provides a way to add external functions to a class as if they were public final member functions. This enables minimizing the number of functions in a class to only the essentials that are needed to take care of the object's private state, without the temptation to add a kitchen-sink's worth of member functions. It also enables [function chaining and component programming](http://www.drdobbs.com/architecture-and-design/component-programming-in-d/240008321). A more complex example:
```
stdin.byLine(KeepTerminator.yes)
.map!(a => a.idup)
.array
.sort
.copy(stdout.lockingTextWriter());
```
is the equivalent of:
```
copy(sort(array(map!(a => a.idup)(byLine(stdin, KeepTerminator.yes)))), lockingTextWriter(stdout));
```
UFCS works with `@property` functions:
```
@property prop(X thisObj);
@property prop(X thisObj, int value);
X obj;
obj.prop; // if X does not have member prop, reinterpret as prop(obj);
obj.prop = 1; // similarly, reinterpret as prop(obj, 1);
```
Functions declared in a local scope are not found when searching for a matching UFCS function.
Member functions are not found when searching for a matching UFCS function.
Otherwise, UFCS function lookup proceeds normally.
```
module a;
void foo(X);
alias boo = foo;
void main()
{
void bar(X); // bar declared in local scope
import b : baz; // void baz(X);
X obj;
obj.foo(); // OK, calls a.foo;
//obj.bar(); // NG, UFCS does not see nested functions
obj.baz(); // OK, calls b.baz, because it is declared at the
// top level scope of module b
import b : boo = baz;
obj.boo(); // OK, calls aliased b.baz instead of a.boo (== a.foo),
// because the declared alias name 'boo' in local scope
// overrides module scope name
}
class C
{
void mfoo(X); // member function
static void sbar(X); // static member function
import b : ibaz = baz; // void baz(X);
void test()
{
X obj;
//obj.mfoo(); // NG, UFCS does not see member functions
//obj.sbar(); // NG, UFCS does not see static member functions
obj.ibaz(); // OK, ibaz is an alias of baz which declared at
// the top level scope of module b
}
}
```
**Rationale:** Local function symbols are not considered by UFCS to avoid unexpected name conflicts. See the problematic examples below.
```
int front(int[] arr) { return arr[0]; }
void main()
{
int[] a = [1,2,3];
auto x = a.front(); // call .front by UFCS
auto front = x; // front is now a variable
auto y = a.front(); // Error, front is not a function
}
class C
{
int[] arr;
int front()
{
return arr.front(); // Error, C.front is not callable
// using argument types (int[])
}
}
```
d std.regex std.regex
=========
[Regular expressions](https://en.wikipedia.org/wiki/Regular_expression) are a commonly used method of pattern matching on strings, with *regex* being a catchy word for a pattern in this domain specific language. Typical problems usually solved by regular expressions include validation of user input and the ubiquitous find & replace in text processing utilities.
| Category | Functions |
| --- | --- |
| Matching | [`bmatch`](#bmatch) [`match`](#match) [`matchAll`](#matchAll) [`matchFirst`](#matchFirst) |
| Building | [`ctRegex`](#ctRegex) [`escaper`](#escaper) [`regex`](#regex) |
| Replace | [`replace`](#replace) [`replaceAll`](#replaceAll) [`replaceAllInto`](#replaceAllInto) [`replaceFirst`](#replaceFirst) [`replaceFirstInto`](#replaceFirstInto) |
| Split | [`split`](#split) [`splitter`](#splitter) |
| Objects | [`Captures`](#Captures) [`Regex`](#Regex) [`RegexException`](#RegexException) [`RegexMatch`](#RegexMatch) [`Splitter`](#Splitter) [`StaticRegex`](#StaticRegex) |
### [Synopsis](#Synopsis)
```
import std.regex;
import std.stdio;
void main()
{
// Print out all possible dd/mm/yy(yy) dates found in user input.
auto r = regex(r"\b[0-9][0-9]?/[0-9][0-9]?/[0-9][0-9](?:[0-9][0-9])?\b");
foreach (line; stdin.byLine)
{
// matchAll() returns a range that can be iterated
// to get all subsequent matches.
foreach (c; matchAll(line, r))
writeln(c.hit);
}
}
...
// Create a static regex at compile-time, which contains fast native code.
auto ctr = ctRegex!(`^.*/([^/]+)/?$`);
// It works just like a normal regex:
auto c2 = matchFirst("foo/bar", ctr); // First match found here, if any
assert(!c2.empty); // Be sure to check if there is a match before examining contents!
assert(c2[1] == "bar"); // Captures is a range of submatches: 0 = full match.
...
// multi-pattern regex
auto multi = regex([`\d+,\d+`,`([a-z]+):(\d+)`]);
auto m = "abc:43 12,34".matchAll(multi);
assert(m.front.whichPattern == 2);
assert(m.front[1] == "abc");
assert(m.front[2] == "43");
m.popFront();
assert(m.front.whichPattern == 1);
assert(m.front[1] == "12");
...
// The result of the `matchAll/matchFirst` is directly testable with if/assert/while.
// e.g. test if a string consists of letters:
assert(matchFirst("Letter", `^\p{L}+$`));
```
### [Syntax and general information](#Syntax%20and%20general%20information)
The general usage guideline is to keep regex complexity on the side of simplicity, as its capabilities reside in purely character-level manipulation. As such it's ill-suited for tasks involving higher level invariants like matching an integer number *bounded* in an [a,b] interval. Checks of this sort are better addressed by additional post-processing.
The basic syntax shouldn't surprise experienced users of regular expressions. For an introduction to `std.regex` see a [short tour](http://dlang.org/regular-expression.html) of the module API and its abilities.
There are other web resources on regular expressions to help newcomers, and a good [reference with tutorial](http://www.regular-expressions.info) can easily be found.
This library uses a remarkably common ECMAScript syntax flavor with the following extensions:
* Named subexpressions, with Python syntax.
* Unicode properties such as Scripts, Blocks and common binary properties e.g. Alphabetic, White\_Space, Hex\_Digit etc.
* Arbitrary length and complexity lookbehind, including lookahead in lookbehind and vice versa.
### Pattern syntax
*std.regex operates on the codepoint level; 'character' in this table denotes a single Unicode codepoint.*
| Pattern element | Semantics |
| --- | --- |
| Atoms | Match single characters |
| *any character except [{|\*+?()^$* | Matches the character itself. |
| *.* | In single line mode matches any character. Otherwise it matches any character except '\n' and '\r'. |
| *[class]* | Matches a single character that belongs to this character class. |
| *[^class]* | Matches a single character that does *not* belong to this character class. |
| *\cC* | Matches the control character corresponding to letter C |
| *\xXX* | Matches a character with hexadecimal value of XX. |
| *\uXXXX* | Matches a character with hexadecimal value of XXXX. |
| *\U00YYYYYY* | Matches a character with hexadecimal value of YYYYYY. |
| *\f* | Matches a formfeed character. |
| *\n* | Matches a linefeed character. |
| *\r* | Matches a carriage return character. |
| *\t* | Matches a tab character. |
| *\v* | Matches a vertical tab character. |
| *\d* | Matches any Unicode digit. |
| *\D* | Matches any character except Unicode digits. |
| *\w* | Matches any word character (note: this includes numbers). |
| *\W* | Matches any non-word character. |
| *\s* | Matches whitespace, same as \p{White\_Space}. |
| *\S* | Matches any character except those recognized as *\s* . |
| *\\* | Matches \ character. |
| *\c where c is one of [|\*+?()* | Matches the character c itself. |
| *\p{PropertyName}* | Matches a character that belongs to the Unicode PropertyName set. Single letter abbreviations can be used without surrounding {,}. |
| *\P{PropertyName}* | Matches a character that does not belong to the Unicode PropertyName set. Single letter abbreviations can be used without surrounding {,}. |
| *\p{InBasicLatin}* | Matches any character that is part of the BasicLatin Unicode *block*. |
| *\P{InBasicLatin}* | Matches any character except ones in the BasicLatin Unicode *block*. |
| *\p{Cyrillic}* | Matches any character that is part of Cyrillic *script*. |
| *\P{Cyrillic}* | Matches any character except ones in Cyrillic *script*. |
| Quantifiers | Specify repetition of other elements |
| *\** | Matches previous character/subexpression 0 or more times. Greedy version - tries as many times as possible. |
| *\*?* | Matches previous character/subexpression 0 or more times. Lazy version - stops as early as possible. |
| *+* | Matches previous character/subexpression 1 or more times. Greedy version - tries as many times as possible. |
| *+?* | Matches previous character/subexpression 1 or more times. Lazy version - stops as early as possible. |
| *{n}* | Matches previous character/subexpression exactly n times. |
| *{n,}* | Matches previous character/subexpression n times or more. Greedy version - tries as many times as possible. |
| *{n,}?* | Matches previous character/subexpression n times or more. Lazy version - stops as early as possible. |
| *{n,m}* | Matches previous character/subexpression n to m times. Greedy version - tries as many times as possible, but no more than m times. |
| *{n,m}?* | Matches previous character/subexpression n to m times. Lazy version - stops as early as possible, but no less than n times. |
| Other | Subexpressions & alternations |
| *(regex)* | Matches subexpression regex, saving matched portion of text for later retrieval. |
| *(?#comment)* | An inline comment that is ignored while matching. |
| *(?:regex)* | Matches subexpression regex, *not* saving matched portion of text. Useful to speed up matching. |
| *A|B* | Matches subexpression A, or failing that, matches B. |
| *(?P<name>regex)* | Matches named subexpression regex labeling it with name 'name'. When referring to a matched portion of text, names work like aliases in addition to direct numbers. |
| Assertions | Match position rather than character |
| *^* | Matches at the beginning of input or line (in multiline mode). |
| *$* | Matches at the end of input or line (in multiline mode). |
| *\b* | Matches at word boundary. |
| *\B* | Matches when *not* at word boundary. |
| *(?=regex)* | Zero-width lookahead assertion. Matches at a point where the subexpression regex could be matched starting from the current position. |
| *(?!regex)* | Zero-width negative lookahead assertion. Matches at a point where the subexpression regex could *not* be matched starting from the current position. |
| *(?<=regex)* | Zero-width lookbehind assertion. Matches at a point where the subexpression regex could be matched ending at the current position (matching goes backwards). |
| *(?<!regex)* | Zero-width negative lookbehind assertion. Matches at a point where the subexpression regex could *not* be matched ending at the current position (matching goes backwards). |
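A sketch exercising the zero-width assertions above:

```
import std.regex;

void main()
{
    assert(matchFirst("foobar", `foo(?=bar)`).hit == "foo");  // lookahead
    assert(!matchFirst("foobaz", `foo(?=bar)`));              // negative case
    assert(matchFirst("foobar", `(?<=foo)bar`).hit == "bar"); // lookbehind
}
```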
### Character classes
| Pattern element | Semantics |
| --- | --- |
| *Any atom* | Has the same meaning as outside of a character class, except for ] which must be written as \] |
| *a-z* | Includes characters a, b, c, ..., z. |
| *[a||b], [a--b], [a~~b], [a&&b]* | Where a, b are arbitrary classes, means union, set difference, symmetric set difference, and intersection respectively. *Any sequence of character class elements implicitly forms a union.* |
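As a minimal sketch, the set operations above can be combined with ordinary classes:

```
import std.regex;

void main()
{
    // set difference: one or more lowercase letters that are not vowels
    auto consonants = regex(`^[[a-z]--[aeiou]]+$`);
    assert(matchFirst("rhythm", consonants));
    assert(!matchFirst("audio", consonants));
}
```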
### Regex flags
| Flag | Semantics |
| --- | --- |
| *g* | Global regex, repeat over the whole input. |
| *i* | Case insensitive matching. |
| *m* | Multi-line mode, match ^, $ on start and end line separators as well as start and end of input. |
| *s* | Single-line mode, makes . match '\n' and '\r' as well. |
| *x* | Free-form syntax, ignores whitespace in pattern, useful for formatting complex regular expressions. |
### [Unicode support](#Unicode%20support)
This library provides full Level 1 support\* according to [UTS 18](http://unicode.org/reports/tr18/). Specifically:
* 1.1 Hex notation via any of \uxxxx, \U00YYYYYY, \xZZ.
* 1.2 Unicode properties.
* 1.3 Character classes with set operations.
* 1.4 Word boundaries use the full set of "word" characters.
* 1.5 Using simple casefolding to match case insensitively across the full range of codepoints.
* 1.6 Respecting line breaks as any of \u000A | \u000B | \u000C | \u000D | \u0085 | \u2028 | \u2029 | \u000D\u000A.
* 1.7 Operating on codepoint level.
\*With the exception of point 1.1.1; as of yet, normalization of input is expected to be enforced by the user.
### [Replace format string](#Replace%20format%20string)
A set of functions in this module that do the substitution rely on a simple format to guide the process. In particular the table below applies to the `format` argument of [`replaceFirst`](#replaceFirst) and [`replaceAll`](#replaceAll).
The format string can reference parts of match using the following notation.
| Format specifier | Replaced by |
| --- | --- |
| *$&* | the whole match. |
| *$`* | part of input *preceding* the match. |
| *$'* | part of input *following* the match. |
| *$$* | '$' character. |
| *\c , where c is any character* | the character c itself. |
| *\\* | '\' character. |
| *$1 .. $99* | submatch number 1 to 99 respectively. |
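A sketch using these specifiers with [`replaceFirst`](#replaceFirst) and [`replaceAll`](#replaceAll):

```
import std.regex;

void main()
{
    assert(replaceFirst("user@host", regex(`(\w+)@(\w+)`), "$2:$1") == "host:user");
    assert(replaceAll("a@b c@d", regex(`\w@\w`), "[$&]") == "[a@b] [c@d]");
}
```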
### [Slicing and zero memory allocations orientation](#Slicing%20and%20zero%20memory%20allocations%20orientation)
All matches returned by pattern matching functionality in this library are slices of the original input. The notable exception is the `replace` family of functions that generate a new string from the input.
In cases where producing the replacement is the ultimate goal, [`replaceFirstInto`](#replaceFirstInto) and [`replaceAllInto`](#replaceAllInto) could come in handy as functions that avoid allocations even for the replacement.
License:
[Boost License 1.0](http://boost.org/LICENSE_1_0.txt).
Authors:
Dmitry Olshansky, API and utility constructs are modeled after the original `std.regex` by Walter Bright and Andrei Alexandrescu.
Source
[std/regex/package.d](https://github.com/dlang/phobos/blob/master/std/regex/package.d)
template **Regex**(Char)
`Regex` object holds regular expression pattern in compiled form.
Instances of this object are constructed via calls to `regex`. This is an intended form for caching and storage of frequently used regular expressions.
Example
Test if this object doesn't contain any compiled pattern.
```
Regex!char r;
assert(r.empty);
r = regex(""); // Note: "" is a valid regex pattern.
assert(!r.empty);
```
Getting a range of all the named captures in the regex.
```
import std.range;
import std.algorithm;
auto re = regex(`(?P<name>\w+) = (?P<var>\d+)`);
auto nc = re.namedCaptures;
static assert(isRandomAccessRange!(typeof(nc)));
assert(!nc.empty);
assert(nc.length == 2);
assert(nc.equal(["name", "var"]));
assert(nc[0] == "name");
assert(nc[1..$].equal(["var"]));
```
alias **StaticRegex**(Char) = Regex!Char;
A `StaticRegex` is a `Regex` object that contains D code specially generated at compile-time to speed up matching.
No longer used, kept as alias to Regex for backwards compatibility.
@trusted auto **regex**(S : C[], C)(const S[] patterns, const(char)[] flags = "")
Constraints: if (isSomeString!S);
@trusted auto **regex**(S)(S pattern, const(char)[] flags = "")
Constraints: if (isSomeString!S);
Compile a regular expression pattern for later execution.
Returns:
`Regex` object that works on inputs having the same character width as `pattern`.
Parameters:
| | |
| --- | --- |
| S `pattern` | A single regular expression to match. |
| S[] `patterns` | An array of regular expression strings. The resulting `Regex` object will match any expression; use [`whichPattern`](#whichPattern) to know which. |
| const(char)[] `flags` | The attributes (g, i, m, s and x accepted) |
Throws:
`RegexException` if there were any errors during compilation.
Examples:
```
void test(S)()
{
// multi-pattern regex example
S[] arr = [`([a-z]+):(\d+)`, `(\d+),\d+`];
auto multi = regex(arr); // multi regex
S str = "abc:43 12,34";
auto m = str.matchAll(multi);
writeln(m.front.whichPattern); // 1
writeln(m.front[1]); // "abc"
writeln(m.front[2]); // "43"
m.popFront();
writeln(m.front.whichPattern); // 2
writeln(m.front[1]); // "12"
}
import std.meta : AliasSeq;
static foreach (C; AliasSeq!(string, wstring, dstring))
// Test with const array of patterns - see https://issues.dlang.org/show_bug.cgi?id=20301
static foreach (S; AliasSeq!(C, const C, immutable C))
test!S();
```
enum auto **ctRegex**(alias pattern, alias flags = []);
Compile regular expression using CTFE and generate optimized native machine code for matching it.
Returns:
StaticRegex object for faster matching.
Parameters:
| | |
| --- | --- |
| pattern | Regular expression |
| flags | The attributes (g, i, m, s and x accepted) |
struct **Captures**(R) if (isSomeString!R);
`Captures` object contains submatches captured during a call to `match` or iteration over `RegexMatch` range.
First element of range is the whole match.
Examples:
```
import std.range.primitives : popFrontN;
auto c = matchFirst("@abc#", regex(`(\w)(\w)(\w)`));
assert(c.pre == "@"); // Part of input preceding match
assert(c.post == "#"); // Immediately after match
assert(c.hit == c[0] && c.hit == "abc"); // The whole match
writeln(c[2]); // "b"
writeln(c.front); // "abc"
c.popFront();
writeln(c.front); // "a"
writeln(c.back); // "c"
c.popBack();
writeln(c.back); // "b"
popFrontN(c, 2);
assert(c.empty);
assert(!matchFirst("nothing", "something"));
// Captures that are not matched will be null.
c = matchFirst("ac", regex(`a(b)?c`));
assert(c);
assert(!c[1]);
```
@property R **pre**();
Slice of input prior to the match.
@property R **post**();
Slice of input immediately after the match.
@property R **hit**();
Slice of matched portion of input.
@property R **front**();
@property R **back**();
void **popFront**();
void **popBack**();
const @property bool **empty**();
inout inout(R) **opIndex**()(size\_t i);
Range interface.
const nothrow @safe bool **opCast**(T : bool)();
Explicit cast to bool. Useful as a shorthand for !(x.empty) in if and assert statements.
```
import std.regex;
assert(!matchFirst("nothing", "something"));
```
const nothrow @property @safe int **whichPattern**();
Number of the pattern that matched, counting from 1 for the first pattern. Returns 0 on no match.
Examples:
```
import std.regex;
writeln(matchFirst("abc", "[0-9]+", "[a-z]+").whichPattern); // 2
```
R **opIndex**(String)(String i)
Constraints: if (isSomeString!String);
Lookup named submatch.
```
import std.regex;
import std.range;
auto c = matchFirst("a = 42;", regex(`(?P<var>\w+)\s*=\s*(?P<value>\d+);`));
assert(c["var"] == "a");
assert(c["value"] == "42");
popFrontN(c, 2);
//named groups are unaffected by range primitives
assert(c["var"] =="a");
assert(c.front == "42");
```
const @property size\_t **length**();
Number of matches in this object.
@property ref auto **captures**();
A hook for compatibility with original std.regex.
struct **RegexMatch**(R) if (isSomeString!R);
A regex engine state, as returned by `match` family of functions.
Effectively it's a forward range of Captures!R, produced by lazily searching for matches in a given input.
@property R **pre**();
@property R **post**();
@property R **hit**();
Shorthands for front.pre, front.post, front.hit.
inout @property inout(Captures!R) **front**();
void **popFront**();
auto **save**();
Functionality for processing subsequent matches of global regexes via range interface:
```
import std.regex;
auto m = matchAll("Hello, world!", regex(`\w+`));
assert(m.front.hit == "Hello");
m.popFront();
assert(m.front.hit == "world");
m.popFront();
assert(m.empty);
```
const @property bool **empty**();
Test if this match object is empty.
T **opCast**(T : bool)();
Same as !(x.empty), provided for convenience in conditional statements.
inout @property inout(Captures!R) **captures**();
Same as .front, provided for compatibility with original std.regex.
auto **match**(R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
auto **match**(R, String)(R input, String re)
Constraints: if (isSomeString!R && isSomeString!String);
Start matching `input` to regex pattern `re`, using Thompson NFA matching scheme.
The use of this function is discouraged - use either of [`matchAll`](#matchAll) or [`matchFirst`](#matchFirst).
Delegating the kind of operation to the "g" flag is soon to be phased out along with the ability to choose the exact matching scheme. The choice of matching scheme to use depends highly on the pattern kind and can be done automatically on a case-by-case basis.
Returns:
a `RegexMatch` object holding engine state after first match.
auto **matchFirst**(R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
auto **matchFirst**(R, String)(R input, String re)
Constraints: if (isSomeString!R && isSomeString!String);
auto **matchFirst**(R, String)(R input, String[] re...)
Constraints: if (isSomeString!R && isSomeString!String);
Find the first (leftmost) slice of the `input` that matches the pattern `re`. This function picks the most suitable regular expression engine depending on the pattern properties.
`re` parameter can be one of three types:
* Plain string(s), in which case it's compiled to bytecode before matching.
* Regex!char (wchar/dchar) that contains a pattern in the form of compiled bytecode.
* StaticRegex!char (wchar/dchar) that contains a pattern in the form of compiled native machine code.
Returns:
[`Captures`](#Captures) containing the extent of a match together with all submatches if there was a match, otherwise an empty [`Captures`](#Captures) object.
auto **matchAll**(R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
auto **matchAll**(R, String)(R input, String re)
Constraints: if (isSomeString!R && isSomeString!String);
auto **matchAll**(R, String)(R input, String[] re...)
Constraints: if (isSomeString!R && isSomeString!String);
Initiate a search for all non-overlapping matches to the pattern `re` in the given `input`. The result is a lazy range of matches generated as they are encountered in the input going left to right.
This function picks the most suitable regular expression engine depending on the pattern properties.
`re` parameter can be one of three types:
* Plain string(s), in which case it's compiled to bytecode before matching.
* Regex!char (wchar/dchar) that contains a pattern in the form of compiled bytecode.
* StaticRegex!char (wchar/dchar) that contains a pattern in the form of compiled native machine code.
Returns:
[`RegexMatch`](#RegexMatch) object that represents matcher state after the first match was found or an empty one if not present.
auto **bmatch**(R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
auto **bmatch**(R, String)(R input, String re)
Constraints: if (isSomeString!R && isSomeString!String);
Start matching of `input` to regex pattern `re`, using traditional [backtracking](https://en.wikipedia.org/wiki/Backtracking) matching scheme.
The use of this function is discouraged - use either of [`matchAll`](#matchAll) or [`matchFirst`](#matchFirst).
Delegating the kind of operation to the "g" flag is soon to be phased out along with the ability to choose the exact matching scheme. The choice of matching scheme to use depends highly on the pattern kind and can be done automatically on a case-by-case basis.
Returns:
a `RegexMatch` object holding engine state after first match.
R **replaceFirst**(R, C, RegEx)(R input, RegEx re, const(C)[] format)
Constraints: if (isSomeString!R && is(C : dchar) && isRegexFor!(RegEx, R));
Construct a new string from `input` by replacing the first match with a string generated from it according to the `format` specifier.
To replace all matches use [`replaceAll`](#replaceAll).
Parameters:
| | |
| --- | --- |
| R `input` | string to search |
| RegEx `re` | compiled regular expression to use |
| const(C)[] `format` | format string to generate replacements from, see [the format string](#Replace%20format%20string). |
Returns:
A string of the same type with the first match (if any) replaced. If no match is found returns the input string itself.
Examples:
```
writeln(replaceFirst("noon", regex("n"), "[$&]")); // "[n]oon"
```
R **replaceFirst**(alias fun, R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
This is a general replacement tool that constructs a new string by replacing matches of pattern `re` in the `input`. Unlike the other overload there is no format string; instead, captures are passed to a user-defined functor `fun` that returns a new string to use as the replacement.
This version replaces the first match in `input`; see [`replaceAll`](#replaceAll) to replace all of the matches.
Returns:
A new string of the same type as `input` with all matches replaced by return values of `fun`. If no matches found returns the `input` itself.
Examples:
```
import std.conv : to;
string list = "#21 out of 46";
string newList = replaceFirst!(cap => to!string(to!int(cap.hit)+1))
(list, regex(`[0-9]+`));
writeln(newList); // "#22 out of 46"
```
@trusted void **replaceFirstInto**(Sink, R, C, RegEx)(ref Sink sink, R input, RegEx re, const(C)[] format)
Constraints: if (isOutputRange!(Sink, dchar) && isSomeString!R && is(C : dchar) && isRegexFor!(RegEx, R));
@trusted void **replaceFirstInto**(alias fun, Sink, R, RegEx)(Sink sink, R input, RegEx re)
Constraints: if (isOutputRange!(Sink, dchar) && isSomeString!R && isRegexFor!(RegEx, R));
A variation on [`replaceFirst`](#replaceFirst) that instead of allocating a new string on each call outputs the result piece-wise to the `sink`. In particular this enables efficient construction of a final output incrementally.
Like in [`replaceFirst`](#replaceFirst) family of functions there is an overload for the substitution guided by the `format` string and the one with the user defined callback.
Examples:
```
import std.array;
string m1 = "first message\n";
string m2 = "second message\n";
auto result = appender!string();
replaceFirstInto(result, m1, regex(`([a-z]+) message`), "$1");
//equivalent of the above with user-defined callback
replaceFirstInto!(cap=>cap[1])(result, m2, regex(`([a-z]+) message`));
writeln(result.data); // "first\nsecond\n"
```
@trusted R **replaceAll**(R, C, RegEx)(R input, RegEx re, const(C)[] format)
Constraints: if (isSomeString!R && is(C : dchar) && isRegexFor!(RegEx, R));
Construct a new string from `input` by replacing all of the fragments that match a pattern `re` with a string generated from the match according to the `format` specifier.
To replace only the first match use [`replaceFirst`](#replaceFirst).
Parameters:
| | |
| --- | --- |
| R `input` | string to search |
| RegEx `re` | compiled regular expression to use |
| const(C)[] `format` | format string to generate replacements from, see [the format string](#Replace%20format%20string). |
Returns:
A string of the same type as `input` with all of the matches (if any) replaced. If no match is found returns the input string itself.
Examples:
```
// insert comma as thousands delimiter
auto re = regex(r"(?<=\d)(?=(\d\d\d)+\b)","g");
writeln(replaceAll("12000 + 42100 = 54100", re, ",")); // "12,000 + 42,100 = 54,100"
```
@trusted R **replaceAll**(alias fun, R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
This is a general replacement tool that constructs a new string by replacing matches of pattern `re` in the `input`. Unlike the other overload there is no format string; instead, captures are passed to a user-defined functor `fun` that returns a new string to use as the replacement.
This version replaces all of the matches found in `input`; see [`replaceFirst`](#replaceFirst) to replace the first match only.
Returns:
A new string of the same type as `input` with all matches replaced by return values of `fun`. If no matches found returns the `input` itself.
Parameters:
| | |
| --- | --- |
| R `input` | string to search |
| RegEx `re` | compiled regular expression |
| fun | delegate to use |
Examples:
```
string baz(Captures!(string) m)
{
import std.string : toUpper;
return toUpper(m.hit);
}
// Capitalize the letters 'a' and 'r':
auto s = replaceAll!(baz)("Strap a rocket engine on a chicken.",
regex("[ar]"));
writeln(s); // "StRAp A Rocket engine on A chicken."
```
@trusted void **replaceAllInto**(Sink, R, C, RegEx)(Sink sink, R input, RegEx re, const(C)[] format)
Constraints: if (isOutputRange!(Sink, dchar) && isSomeString!R && is(C : dchar) && isRegexFor!(RegEx, R));
@trusted void **replaceAllInto**(alias fun, Sink, R, RegEx)(Sink sink, R input, RegEx re)
Constraints: if (isOutputRange!(Sink, dchar) && isSomeString!R && isRegexFor!(RegEx, R));
A variation on [`replaceAll`](#replaceAll) that instead of allocating a new string on each call outputs the result piece-wise to the `sink`. In particular this enables efficient construction of a final output incrementally.
As with [`replaceAll`](#replaceAll) there are 2 overloads - one with a format string, the other one with a user defined functor.
Examples:
```
// insert comma as thousands delimiter in fifty randomly produced big numbers
import std.array, std.conv, std.random, std.range;
static re = regex(`(?<=\d)(?=(\d\d\d)+\b)`, "g");
auto sink = appender!(char [])();
enum ulong min = 10UL ^^ 10, max = 10UL ^^ 19;
foreach (i; 0 .. 50)
{
sink.clear();
replaceAllInto(sink, text(uniform(min, max)), re, ",");
foreach (pos; iota(sink.data.length - 4, 0, -4))
writeln(sink.data[pos]); // ','
}
```
R **replace**(alias scheme = match, R, C, RegEx)(R input, RegEx re, const(C)[] format)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
R **replace**(alias fun, R, RegEx)(R input, RegEx re)
Constraints: if (isSomeString!R && isRegexFor!(RegEx, R));
Old API for replacement, operation depends on flags of pattern `re`. With "g" flag it performs the equivalent of [`replaceAll`](#replaceAll) otherwise it works the same as [`replaceFirst`](#replaceFirst).
The use of this function is discouraged, please use [`replaceAll`](#replaceAll) or [`replaceFirst`](#replaceFirst) explicitly.
struct **Splitter**(Flag!"keepSeparators" keepSeparators = No.keepSeparators, Range, alias RegEx = Regex) if (isSomeString!Range && isRegexFor!(RegEx, Range));
Splitter!(keepSeparators, Range, RegEx) **splitter**(Flag!"keepSeparators" keepSeparators = No.keepSeparators, Range, RegEx)(Range r, RegEx pat)
Constraints: if (is(BasicElementOf!Range : dchar) && isRegexFor!(RegEx, Range));
Splits a string `r` using a regular expression `pat` as a separator.
Parameters:
| | |
| --- | --- |
| keepSeparators | flag to specify if the matches should be in the resulting range |
| Range `r` | the string to split |
| RegEx `pat` | the pattern to split on |
Returns:
A lazy range of strings
Examples:
```
import std.algorithm.comparison : equal;
auto s1 = ", abc, de, fg, hi, ";
assert(equal(splitter(s1, regex(", *")),
["", "abc", "de", "fg", "hi", ""]));
```
Examples:
Split on a pattern, but keep the matches in the resulting range
```
import std.algorithm.comparison : equal;
import std.typecons : Yes;
auto pattern = regex(`([\.,])`);
assert("2003.04.05"
.splitter!(Yes.keepSeparators)(pattern)
.equal(["2003", ".", "04", ".", "05"]));
assert(",1,2,3"
.splitter!(Yes.keepSeparators)(pattern)
.equal([",", "1", ",", "2", ",", "3"]));
```
@property Range **front**();
@property bool **empty**();
void **popFront**();
@property auto **save**();
Forward range primitives.
@trusted String[] **split**(String, RegEx)(String input, RegEx rx)
Constraints: if (isSomeString!String && isRegexFor!(RegEx, String));
An eager version of `splitter` that creates an array of the split slices of `input`.
alias **RegexException** = std.regex.internal.ir.**RegexException**;
Exception object thrown in case of errors during regex compilation.
auto **escaper**(Range)(Range r);
A range that lazily produces a string output escaped to be used inside of a regular expression.
Examples:
```
import std.algorithm.comparison;
import std.regex;
string s = `This is {unfriendly} to *regex*`;
assert(s.escaper.equal(`This is \{unfriendly\} to \*regex\*`));
```
d std.experimental.allocator.building_blocks.affix_allocator std.experimental.allocator.building\_blocks.affix\_allocator
============================================================
Source
[std/experimental/allocator/building\_blocks/affix\_allocator.d](https://github.com/dlang/phobos/blob/master/std/experimental/allocator/building_blocks/affix_allocator.d)
struct **AffixAllocator**(Allocator, Prefix, Suffix = void);
Allocator that adds some extra data before (of type `Prefix`) and/or after (of type `Suffix`) any allocation made with its parent allocator. This is useful for uses where additional allocation-related information is needed, such as mutexes, reference counts, or walls for debugging memory corruption errors.
If `Prefix` is not `void`, `Allocator` must guarantee an alignment at least as large as `Prefix.alignof`.
Suffixes are slower to get at because of alignment rounding, so prefixes should be preferred. However, small prefixes blunt the alignment so if a large alignment with a small affix is needed, suffixes should be chosen.
The following methods are defined if `Allocator` defines them, and forward to it: `deallocateAll`, `empty`, `owns`.
Examples:
```
import std.experimental.allocator.mallocator : Mallocator;
// One word before and after each allocation.
alias A = AffixAllocator!(Mallocator, size_t, size_t);
auto b = A.instance.allocate(11);
A.instance.prefix(b) = 0xCAFE_BABE;
A.instance.suffix(b) = 0xDEAD_BEEF;
assert(A.instance.prefix(b) == 0xCAFE_BABE
&& A.instance.suffix(b) == 0xDEAD_BEEF);
```
enum uint **alignment**;
If `Prefix` is `void`, the alignment is that of the parent. Otherwise, the alignment is the same as the `Prefix`'s alignment.
Allocator **\_parent**;
If the parent allocator `Allocator` is stateful, an instance of it is stored as a member. Otherwise, `AffixAllocator` uses `Allocator.instance`. In either case, the name `_parent` is uniformly used for accessing the parent allocator.
pure nothrow @nogc @safe Allocator **parent**();
If the parent allocator `Allocator` is stateful, an instance of it is stored as a member. Otherwise, `AffixAllocator` uses `Allocator.instance`. In either case, the name `_parent` is uniformly used for accessing the parent allocator.
size\_t **goodAllocSize**(size\_t);
void[] **allocate**(size\_t);
Ternary **owns**(void[]);
bool **expand**(ref void[] b, size\_t delta);
bool **reallocate**(ref void[] b, size\_t s);
bool **deallocate**(void[] b);
bool **deallocateAll**();
Ternary **empty**();
Standard allocator methods. Each is defined if and only if the parent allocator defines the homonym method (except for `goodAllocSize`, which may use the global default). Also, the methods will be `shared` if the parent allocator defines them as such.
static AffixAllocator **instance**;
The `instance` singleton is defined if and only if the parent allocator has no state and defines its own `it` object.
ref auto **prefix**(T)(T[] b);
ref auto **suffix**(T)(T[] b);
Affix access functions offering references to the affixes of a block `b` previously allocated with this allocator. `b` may not be null. They are defined if and only if the corresponding affix is not `void`.
The qualifiers of the affix are not always the same as the qualifiers of the argument. This is because the affixes are not part of the data itself, but instead are just *associated* with the data and known to the allocator. The table below documents the type of `prefix(b)` and `suffix(b)` depending on the type of `b`.
Result of `prefix`/`suffix` depending on argument (`U` is any unqualified type, `Affix` is `Prefix` or `Suffix`):
| Argument Type | Return | Comments |
| --- | --- | --- |
| `shared(U)[]` | `ref shared Affix` | Data is shared across threads and the affix follows suit. |
| `immutable(U)[]` | `ref shared Affix` | Although the data is immutable, the allocator "knows" the underlying memory is mutable, so `immutable` is elided for the affix which is independent from the data itself. However, the result is `shared` because `immutable` is implicitly shareable so multiple threads may access and manipulate the affix for the same data. |
| `const(shared(U))[]` | `ref shared Affix` | The data is always shareable across threads. Even if the data is `const`, the affix is modifiable by the same reasoning as for `immutable`. |
| `const(U)[]` | `ref const Affix` | The input may have originated from `U[]` or `immutable(U)[]`, so it may be actually shared or not. Returning an unqualified affix may result in race conditions, whereas returning a `shared` affix may result in inadvertent sharing of mutable thread-local data across multiple threads. So the returned type is conservatively `ref const`. |
| `U[]` | `ref Affix` | Unqualified data has unqualified affixes. |
Precondition
`b !is null` and `b` must have been allocated with this allocator.
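As a sketch of these rules (reusing the `Mallocator`-based setup from the example above):

```
import std.experimental.allocator.mallocator : Mallocator;

alias A = AffixAllocator!(Mallocator, size_t);

void example()
{
    auto b = A.instance.allocate(16);
    A.instance.prefix(b) = 1; // ref size_t: unqualified data, unqualified affix
    const(void)[] c = b;
    // A.instance.prefix(c) would yield ref const size_t, per the table
    A.instance.deallocate(b);
}
```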
d std.csv std.csv
=======
Implements functionality to read Comma Separated Values and its variants from an [input range](std_range_primitives#isInputRange) of `dchar`.
Comma Separated Values provide a simple means to transfer and store tabular data. It has been common for programs to use their own variant of the CSV format. This parser will loosely follow the [RFC-4180](http://tools.ietf.org/html/rfc4180). CSV input should adhere to the following criteria (differences from RFC-4180 in parentheses):
* A record is separated by a new line (CRLF,LF,CR)
* A final record may end with a new line
* A header may be provided as the first record in input
* A record has fields separated by a comma (customizable)
* A field containing new lines, commas, or double quotes should be enclosed in double quotes (customizable)
* Double quotes in a field are escaped with a double quote
* Each record should contain the same number of fields
Example
```
import std.algorithm;
import std.array;
import std.csv;
import std.stdio;
import std.typecons;
void main()
{
auto text = "Joe,Carpenter,300000\nFred,Blacksmith,400000\r\n";
foreach (record; csvReader!(Tuple!(string, string, int))(text))
{
writefln("%s works as a %s and earns $%d per year",
record[0], record[1], record[2]);
}
// To read the same string from the file "filename.csv":
auto file = File("filename.csv", "r");
foreach (record;
file.byLine.joiner("\n").csvReader!(Tuple!(string, string, int)))
{
writefln("%s works as a %s and earns $%d per year",
record[0], record[1], record[2]);
}
}
```
When the input contains a header, `Contents` can be specified as an associative array. Pass `null` to signify that a header is present.
```
auto text = "Name,Occupation,Salary\r"
"Joe,Carpenter,300000\nFred,Blacksmith,400000\r\n";
foreach (record; csvReader!(string[string])
(text, null))
{
writefln("%s works as a %s and earns $%s per year.",
record["Name"], record["Occupation"],
record["Salary"]);
}
// To read the same string from the file "filename.csv":
auto file = File("filename.csv", "r");
foreach (record; csvReader!(string[string])
(file.byLine.joiner("\n"), null))
{
writefln("%s works as a %s and earns $%s per year.",
record["Name"], record["Occupation"],
record["Salary"]);
}
```
This module allows content to be iterated by record stored in a struct, class, associative array, or as a range of fields. Upon detection of an error a `CSVException` is thrown (this can be disabled). `csvNextToken` has been made public to allow for attempted recovery. Disabling exceptions will lift many restrictions specified above: a quote can appear in a field if the field was not quoted; if in a quoted field, any quote by itself that is not at the end of a field will end processing for that field; and the field is ended when there is no input, even if the quote was not closed.
See Also:
[Wikipedia Comma-separated values](http://en.wikipedia.org/wiki/Comma-separated_values)
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
Jesse Phillips
Source
[std/csv.d](https://github.com/dlang/phobos/blob/master/std/csv.d)
class **CSVException**: object.Exception;
Exception containing the row and column for when an exception was thrown.
Numbering of both row and col start at one and corresponds to the location in the file rather than any specified header. Special consideration should be made when there is a failure to match the header; see [`HeaderMismatchException`](#HeaderMismatchException) for details.
When performing type conversions, [`std.conv.ConvException`](std_conv#ConvException) is stored in the `next` field.
Examples:
```
import std.exception : collectException;
import std.algorithm.searching : count;
string text = "a,b,c\nHello,65";
auto ex = collectException!CSVException(csvReader(text).count);
// "(Row: 0, Col: 0) Row 2's length 2 does not match previous length of 3."
writeln(ex.toString);
```
Examples:
```
import std.exception : collectException;
import std.algorithm.searching : count;
import std.typecons : Tuple;
string text = "a,b\nHello,65";
auto ex = collectException!CSVException(csvReader!(Tuple!(string,int))(text).count);
// "(Row: 1, Col: 2) Unexpected 'b' when converting from type string to type int"
writeln(ex.toString);
```
size\_t **row**; size\_t **col**;
class **IncompleteCellException**: std.csv.CSVException;
Exception thrown when a token is identified as incomplete: a quote is found in an unquoted field, data continues after a closing quote, or the quoted field was not closed before the input was empty.
Examples:
```
import std.exception : assertThrown;
string text = "a,\"b,c\nHello,65,2.5";
assertThrown!IncompleteCellException(text.csvReader(["a","b","c"]));
```
dstring **partialData**;
Data pulled from the input before finding a problem.
This field is populated when using [`csvReader`](#csvReader) but not by [`csvNextToken`](#csvNextToken) as this data will have already been fed to the output range.
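For illustration (not part of the original documentation), a minimal sketch of inspecting `partialData` after a failed parse; the input string is made up for the example:
```
import std.csv;
import std.stdio;

void main()
{
    // The quoted field starting at "b is never closed.
    string text = "a,\"b,c\nHello,65,2.5";
    try
    {
        foreach (record; csvReader(text)) {}
    }
    catch (IncompleteCellException e)
    {
        // partialData holds what was pulled from the input before the failure.
        writeln("Partial field: ", e.partialData);
    }
}
```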
class **HeaderMismatchException**: std.csv.CSVException;
Exception thrown under different conditions based on the type of `Contents`.
Structure, Class, and Associative Array:
* When a header is provided but a matching column is not found
Other:
* When a header is provided but a matching column is not found
* The order did not match that found in the input
Since a row and column are not meaningful when a column specified by the header is not found in the data, both row and col will be zero. Otherwise row is always one and col is the one-based position in the header of the first column that appears out of order relative to the input.
Examples:
```
import std.exception : assertThrown;
string text = "a,b,c\nHello,65,2.5";
assertThrown!HeaderMismatchException(text.csvReader(["b","c","invalid"]));
```
enum **Malformed**: int;
Determines the behavior for when an error is detected.
Disabling exceptions will follow these rules:
* A quote can appear in a field if the field was not quoted.
* Within a quoted field, any quote by itself that is not at the end of the field will end processing for that field.
* The field is ended when there is no input, even if the quote was not closed.
* If the given header does not match the order in the input, the content will return as it is found in the input.
* If the given header contains columns not found in the input they will be ignored.
Examples:
```
import std.algorithm.comparison : equal;
import std.algorithm.searching : count;
import std.exception : assertThrown;
string text = "a,b,c\nHello,65,\"2.5";
assertThrown!IncompleteCellException(text.csvReader.count);
// ignore the exceptions and try to handle invalid CSV
auto firstLine = text.csvReader!(string, Malformed.ignore)(null).front;
assert(firstLine.equal(["Hello", "65", "2.5"]));
```
**ignore**
No exceptions are thrown due to incorrect CSV.
**throwException**
Use exceptions when input has incorrect CSV.
auto **csvReader**(Contents = string, Malformed ErrorLevel = Malformed.throwException, Range, Separator = char)(Range input, Separator delimiter = ',', Separator quote = '"')
Constraints: if (isInputRange!Range && is(immutable(ElementType!Range) == immutable(dchar)) && isSomeChar!Separator && !is(Contents T : T[U], U : string));
auto **csvReader**(Contents = string, Malformed ErrorLevel = Malformed.throwException, Range, Header, Separator = char)(Range input, Header header, Separator delimiter = ',', Separator quote = '"')
Constraints: if (isInputRange!Range && is(immutable(ElementType!Range) == immutable(dchar)) && isSomeChar!Separator && isForwardRange!Header && isSomeString!(ElementType!Header));
auto **csvReader**(Contents = string, Malformed ErrorLevel = Malformed.throwException, Range, Header, Separator = char)(Range input, Header header, Separator delimiter = ',', Separator quote = '"')
Constraints: if (isInputRange!Range && is(immutable(ElementType!Range) == immutable(dchar)) && isSomeChar!Separator && is(Header : typeof(null)));
Returns an [input range](std_range_primitives#isInputRange) for iterating over records found in `input`.
An optional `header` can be provided. The first record will be read in as the header. If `Contents` is a struct then the header provided is expected to correspond to the fields in the struct. When `Contents` is not a type which can contain the entire record, the `header` must be provided in the same order as the input or an exception is thrown.
Returns:
An input range R as defined by [`std.range.primitives.isInputRange`](std_range_primitives#isInputRange). When `Contents` is a struct, class, or an associative array, the element type of R is `Contents`, otherwise the element type of R is itself a range with element type `Contents`. If a `header` argument is provided, the returned range provides a `header` field for accessing the header from the input in array form.
Throws:
[`CSVException`](#CSVException) When a quote is found in an unquoted field, data continues after a closing quote, the quoted field was not closed before the input was empty, a conversion failed, or when the row's length does not match the previous length. [`HeaderMismatchException`](#HeaderMismatchException) when a header is provided but a matching column is not found or the order did not match that found in the input. Read the exception documentation for specific details of when the exception is thrown for different types of `Contents`.
Examples:
The `Contents` of the input can be provided if all the records are the same type such as all integer data:
```
import std.algorithm.comparison : equal;
string text = "76,26,22";
auto records = text.csvReader!int;
assert(records.equal!equal([
[76, 26, 22],
]));
```
Examples:
Using a struct with modified delimiter:
```
import std.algorithm.comparison : equal;
string text = "Hello;65;2.5\nWorld;123;7.5";
struct Layout
{
string name;
int value;
double other;
}
auto records = text.csvReader!Layout(';');
assert(records.equal([
Layout("Hello", 65, 2.5),
Layout("World", 123, 7.5),
]));
```
Examples:
Specifying `ErrorLevel` as [`Malformed.ignore`](#Malformed.ignore) will lift restrictions on the format. This example shows that an exception is not thrown when finding a quote in a field not quoted.
```
string text = "A \" is now part of the data";
auto records = text.csvReader!(string, Malformed.ignore);
auto record = records.front;
writeln(record.front); // text
```
Examples:
Read only column "b"
```
import std.algorithm.comparison : equal;
string text = "a,b,c\nHello,65,63.63\nWorld,123,3673.562";
auto records = text.csvReader!int(["b"]);
assert(records.equal!equal([
[65],
[123],
]));
```
Examples:
Read while rearranging the columns by specifying a header with a different order:
```
import std.algorithm.comparison : equal;
string text = "a,b,c\nHello,65,2.5\nWorld,123,7.5";
struct Layout
{
int value;
double other;
string name;
}
auto records = text.csvReader!Layout(["b","c","a"]);
assert(records.equal([
Layout(65, 2.5, "Hello"),
Layout(123, 7.5, "World")
]));
```
Examples:
The header can also be left empty if the input contains a header row and all columns should be iterated. The header from the input can always be accessed from the `header` field.
```
string text = "a,b,c\nHello,65,63.63";
auto records = text.csvReader(null);
writeln(records.header); // ["a", "b", "c"]
```
void **csvNextToken**(Range, Malformed ErrorLevel = Malformed.throwException, Separator, Output)(ref Range input, ref Output ans, Separator sep, Separator quote, bool startQuoted = false)
Constraints: if (isSomeChar!Separator && isInputRange!Range && is(immutable(ElementType!Range) == immutable(dchar)) && isOutputRange!(Output, dchar));
Lower level control over parsing CSV
This function consumes the input. After each call the input will start with either a delimiter or record break (\n, \r\n, \r) which must be removed for subsequent calls.
Parameters:
| | |
| --- | --- |
| Range `input` | Any CSV input |
| Output `ans` | The first field in the input |
| Separator `sep` | The character to represent a comma in the specification |
| Separator `quote` | The character to represent a quote in the specification |
| bool `startQuoted` | Whether the input should be considered to already be in quotes |
Throws:
[`IncompleteCellException`](#IncompleteCellException) When a quote is found in an unquoted field, data continues after a closing quote, or the quoted field was not closed before the input was empty.
Examples:
```
import std.array : appender;
import std.range.primitives : popFront;
string str = "65,63\n123,3673";
auto a = appender!(char[])();
csvNextToken(str,a,',','"');
writeln(a.data); // "65"
writeln(str); // ",63\n123,3673"
str.popFront();
a.shrinkTo(0);
csvNextToken(str,a,',','"');
writeln(a.data); // "63"
writeln(str); // "\n123,3673"
str.popFront();
a.shrinkTo(0);
csvNextToken(str,a,',','"');
writeln(a.data); // "123"
writeln(str); // ",3673"
```
core.atomic
===========
The atomic module provides basic support for lock-free concurrent programming.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt)
Authors:
Sean Kelly, Alex Rønne Petersen, Manu Evans
Source
[core/atomic.d](https://github.com/dlang/druntime/blob/master/src/core/atomic.d)
enum **MemoryOrder**: int;
Specifies the memory ordering semantics of an atomic operation.
See Also:
[en.cppreference.com/w/cpp/atomic/memory\_order](http://en.cppreference.com/w/cpp/atomic/memory_order)
**raw**
Not sequenced. Corresponds to [LLVM AtomicOrdering.Monotonic](https://llvm.org/docs/Atomics.html#monotonic) and C++11/C11 `memory_order_relaxed`.
**acq**
Hoist-load + hoist-store barrier. Corresponds to [LLVM AtomicOrdering.Acquire](https://llvm.org/docs/Atomics.html#acquire) and C++11/C11 `memory_order_acquire`.
**rel**
Sink-load + sink-store barrier. Corresponds to [LLVM AtomicOrdering.Release](https://llvm.org/docs/Atomics.html#release) and C++11/C11 `memory_order_release`.
**acq\_rel**
Acquire + release barrier. Corresponds to [LLVM AtomicOrdering.AcquireRelease](https://llvm.org/docs/Atomics.html#acquirerelease) and C++11/C11 `memory_order_acq_rel`.
**seq**
Fully sequenced (acquire + release). Corresponds to [LLVM AtomicOrdering.SequentiallyConsistent](https://llvm.org/docs/Atomics.html#sequentiallyconsistent) and C++11/C11 `memory_order_seq_cst`.
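For illustration, a minimal sketch of release/acquire pairing using these orderings; the `data` and `ready` variables are hypothetical:
```
import core.atomic;

shared int data;
shared bool ready;

// Producer: write the payload, then release-store the flag.
void produce()
{
    atomicStore!(MemoryOrder.raw)(data, 42);
    atomicStore!(MemoryOrder.rel)(ready, true);
}

// Consumer: acquire-load the flag; if it is set, the payload write is visible.
void consume()
{
    if (atomicLoad!(MemoryOrder.acq)(ready))
        assert(atomicLoad!(MemoryOrder.raw)(data) == 42);
}
```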
pure nothrow @nogc @trusted T **atomicLoad**(MemoryOrder ms = MemoryOrder.seq, T)(ref const T val)
Constraints: if (!is(T == shared(U), U) && !is(T == shared(inout(U)), U) && !is(T == shared(const(U)), U));
pure nothrow @nogc @trusted T **atomicLoad**(MemoryOrder ms = MemoryOrder.seq, T)(ref const shared T val)
Constraints: if (!hasUnsharedIndirections!T);
pure nothrow @nogc @trusted TailShared!T **atomicLoad**(MemoryOrder ms = MemoryOrder.seq, T)(ref const shared T val)
Constraints: if (hasUnsharedIndirections!T);
Loads 'val' from memory and returns it. The memory barrier specified by 'ms' is applied to the operation, which is fully sequenced by default. Valid memory orders are MemoryOrder.raw, MemoryOrder.acq, and MemoryOrder.seq.
Parameters:
| | |
| --- | --- |
| T `val` | The target variable. |
Returns:
The value of 'val'.
pure nothrow @nogc @trusted void **atomicStore**(MemoryOrder ms = MemoryOrder.seq, T, V)(ref T val, V newval)
Constraints: if (!is(T == shared) && !is(V == shared));
pure nothrow @nogc @trusted void **atomicStore**(MemoryOrder ms = MemoryOrder.seq, T, V)(ref shared T val, V newval)
Constraints: if (!is(T == class));
pure nothrow @nogc @trusted void **atomicStore**(MemoryOrder ms = MemoryOrder.seq, T, V)(ref shared T val, shared V newval)
Constraints: if (is(T == class));
Writes 'newval' into 'val'. The memory barrier specified by 'ms' is applied to the operation, which is fully sequenced by default. Valid memory orders are MemoryOrder.raw, MemoryOrder.rel, and MemoryOrder.seq.
Parameters:
| | |
| --- | --- |
| T `val` | The target variable. |
| V `newval` | The value to store. |
pure nothrow @nogc @trusted T **atomicFetchAdd**(MemoryOrder ms = MemoryOrder.seq, T)(ref T val, size\_t mod)
Constraints: if ((\_\_traits(isIntegral, T) || is(T == U\*, U)) && !is(T == shared));
pure nothrow @nogc @trusted T **atomicFetchAdd**(MemoryOrder ms = MemoryOrder.seq, T)(ref shared T val, size\_t mod)
Constraints: if (\_\_traits(isIntegral, T) || is(T == U\*, U));
Atomically adds `mod` to the value referenced by `val` and returns the value `val` held previously. This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T `val` | Reference to the value to modify. |
| size\_t `mod` | The value to add. |
Returns:
The value held previously by `val`.
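A minimal illustrative sketch of a shared counter built on `atomicFetchAdd`; `counter` is hypothetical:
```
import core.atomic;

shared int counter;

// Each call increments the counter atomically and observes the prior value.
void hit()
{
    auto prev = atomicFetchAdd(counter, 1);
    assert(prev >= 0);
}
```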
pure nothrow @nogc @trusted T **atomicFetchSub**(MemoryOrder ms = MemoryOrder.seq, T)(ref T val, size\_t mod)
Constraints: if ((\_\_traits(isIntegral, T) || is(T == U\*, U)) && !is(T == shared));
pure nothrow @nogc @trusted T **atomicFetchSub**(MemoryOrder ms = MemoryOrder.seq, T)(ref shared T val, size\_t mod)
Constraints: if (\_\_traits(isIntegral, T) || is(T == U\*, U));
Atomically subtracts `mod` from the value referenced by `val` and returns the value `val` held previously. This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T `val` | Reference to the value to modify. |
| size\_t `mod` | The value to subtract. |
Returns:
The value held previously by `val`.
pure nothrow @nogc @trusted T **atomicExchange**(MemoryOrder ms = MemoryOrder.seq, T, V)(T\* here, V exchangeWith)
Constraints: if (!is(T == shared) && !is(V == shared));
pure nothrow @nogc @trusted TailShared!T **atomicExchange**(MemoryOrder ms = MemoryOrder.seq, T, V)(shared(T)\* here, V exchangeWith)
Constraints: if (!is(T == class) && !is(T == interface));
pure nothrow @nogc @trusted shared(T) **atomicExchange**(MemoryOrder ms = MemoryOrder.seq, T, V)(shared(T)\* here, shared(V) exchangeWith)
Constraints: if (is(T == class) || is(T == interface));
Exchange `exchangeWith` with the memory referenced by `here`. This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T\* `here` | The address of the destination variable. |
| V `exchangeWith` | The value to exchange. |
Returns:
The value held previously by `here`.
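A minimal illustrative sketch; `state` is a hypothetical variable:
```
import core.atomic;

shared int state;

void swapIn()
{
    // Stores 5 and returns whatever was there before, in one atomic step.
    auto old = atomicExchange(&state, 5);
    assert(old == 0 || old == 5);
}
```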
template **cas**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq)
Performs either compare-and-set or compare-and-swap (or exchange).
There are two categories of overloads in this template: The first category does a simple compare-and-set. The comparison value (`ifThis`) is treated as an rvalue.
The second category does a compare-and-swap (a.k.a. compare-and-exchange), and expects `ifThis` to be a pointer type, where the previous value of `here` will be written.
This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T\* here | The address of the destination variable. |
| V2 writeThis | The value to store. |
| V1 ifThis | The comparison value. |
Returns:
true if the store occurred, false if not.
pure nothrow @nogc @trusted bool **cas**(T, V1, V2)(T\* here, V1 ifThis, V2 writeThis)
Constraints: if (!is(T == shared) && is(T : V1));
Compare-and-set for non-shared values
pure nothrow @nogc @trusted bool **cas**(T, V1, V2)(shared(T)\* here, V1 ifThis, V2 writeThis)
Constraints: if (!is(T == class) && (is(T : V1) || is(shared(T) : V1)));
Compare-and-set for shared value type
pure nothrow @nogc @trusted bool **cas**(T, V1, V2)(shared(T)\* here, shared(V1) ifThis, shared(V2) writeThis)
Constraints: if (is(T == class));
Compare-and-set for `shared` reference type (`class`)
pure nothrow @nogc @trusted bool **cas**(T, V)(T\* here, T\* ifThis, V writeThis)
Constraints: if (!is(T == shared) && !is(V == shared));
Compare-and-exchange for non-`shared` types
pure nothrow @nogc @trusted bool **cas**(T, V1, V2)(shared(T)\* here, V1\* ifThis, V2 writeThis)
Constraints: if (!is(T == class) && (is(T : V1) || is(shared(T) : V1)));
Compare and exchange for mixed-`shared`ness types
pure nothrow @nogc @trusted bool **cas**(T, V)(shared(T)\* here, shared(T)\* ifThis, shared(V) writeThis)
Constraints: if (is(T == class));
Compare-and-exchange for `class`
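For illustration, a sketch showing both categories of `cas` overloads; `value` is hypothetical and assumed to start at zero:
```
import core.atomic;

shared int value;

void casExample()
{
    // Compare-and-set: write 1 only if value is still 0.
    bool ok = cas(&value, 0, 1);
    assert(ok && atomicLoad(value) == 1);

    // Compare-and-exchange: on failure, *expected receives the current value.
    int expected = 0;
    ok = cas(&value, &expected, 2);
    assert(!ok && expected == 1);
}
```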
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(T\* here, V1 ifThis, V2 writeThis)
Constraints: if (!is(T == shared) && is(T : V1));
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T)\* here, V1 ifThis, V2 writeThis)
Constraints: if (!is(T == class) && (is(T : V1) || is(shared(T) : V1)));
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T)\* here, shared(V1) ifThis, shared(V2) writeThis)
Constraints: if (is(T == class));
Stores 'writeThis' to the memory referenced by 'here' if the value referenced by 'here' is equal to 'ifThis'. The 'weak' version of cas may spuriously fail. It is recommended to use `casWeak` only when `cas` would be used in a loop. This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T\* `here` | The address of the destination variable. |
| V2 `writeThis` | The value to store. |
| V1 `ifThis` | The comparison value. |
Returns:
true if the store occurred, false if not.
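A sketch of the retry-loop pattern the recommendation refers to; `total` is hypothetical. Spurious failures simply cause another iteration:
```
import core.atomic;

shared int total;

// Typical casWeak loop: recompute the new value and retry on failure.
void add(int amount)
{
    int old, next;
    do
    {
        old = atomicLoad(total);
        next = old + amount;
    } while (!casWeak(&total, old, next));
}
```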
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V)(T\* here, T\* ifThis, V writeThis)
Constraints: if (!is(T == shared(S), S) && !is(V == shared(U), U));
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V1, V2)(shared(T)\* here, V1\* ifThis, V2 writeThis)
Constraints: if (!is(T == class) && (is(T : V1) || is(shared(T) : V1)));
pure nothrow @nogc @trusted bool **casWeak**(MemoryOrder succ = MemoryOrder.seq, MemoryOrder fail = MemoryOrder.seq, T, V)(shared(T)\* here, shared(T)\* ifThis, shared(V) writeThis)
Constraints: if (is(T == class));
Stores 'writeThis' to the memory referenced by 'here' if the value referenced by 'here' is equal to the value referenced by 'ifThis'. The prior value referenced by 'here' is written to `ifThis` and returned to the user. The 'weak' version of cas may spuriously fail. It is recommended to use `casWeak` only when `cas` would be used in a loop. This operation is both lock-free and atomic.
Parameters:
| | |
| --- | --- |
| T\* `here` | The address of the destination variable. |
| V `writeThis` | The value to store. |
| T\* `ifThis` | The address of the value to compare, and receives the prior value of `here` as output. |
Returns:
true if the store occurred, false if not.
pure nothrow @nogc @safe void **atomicFence**(MemoryOrder order = MemoryOrder.seq)();
Inserts a full load/store memory fence (on platforms that need it). This ensures that all loads and stores before a call to this function are executed before any loads and stores after the call.
pure nothrow @nogc @safe void **pause**();
Gives a hint to the processor that the calling thread is in a 'spin-wait' loop, allowing it to allocate resources more efficiently.
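For illustration, a toy spinlock sketch that uses `pause` in its spin-wait loop (not a production lock; `locked` is hypothetical):
```
import core.atomic;

shared bool locked;

// Spin until the flag is acquired, hinting to the processor while waiting.
void lock()
{
    while (!cas(&locked, false, true))
        pause();
}

void unlock()
{
    atomicStore!(MemoryOrder.rel)(locked, false);
}
```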
pure nothrow @nogc @safe TailShared!T **atomicOp**(string op, T, V1)(ref shared T val, V1 mod)
Constraints: if (\_\_traits(compiles, mixin("\*cast(T\*)&val" ~ op ~ "mod")));
Performs the binary operation 'op' on val using 'mod' as the modifier.
Parameters:
| | |
| --- | --- |
| T `val` | The target variable. |
| V1 `mod` | The modifier to apply. |
Returns:
The result of the operation.
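A minimal illustrative sketch; `hits` is hypothetical:
```
import core.atomic;

shared int hits;

void record()
{
    // Atomically performs `hits += 1` and yields the result.
    auto now = atomicOp!"+="(hits, 1);
    assert(now >= 1);
}
```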
std.ascii
=========
Functions which operate on ASCII characters.
All of the functions in std.ascii accept Unicode characters but effectively ignore them if they're not ASCII. All `isX` functions return `false` for non-ASCII characters, and all `toX` functions do nothing to non-ASCII characters.
For functions which operate on Unicode characters, see [`std.uni`](std_uni).
| Category | Functions |
| --- | --- |
| Validation | [`isAlpha`](#isAlpha) [`isAlphaNum`](#isAlphaNum) [`isASCII`](#isASCII) [`isControl`](#isControl) [`isDigit`](#isDigit) [`isGraphical`](#isGraphical) [`isHexDigit`](#isHexDigit) [`isOctalDigit`](#isOctalDigit) [`isPrintable`](#isPrintable) [`isPunctuation`](#isPunctuation) [`isUpper`](#isUpper) [`isWhite`](#isWhite) |
| Conversions | [`toLower`](#toLower) [`toUpper`](#toUpper) |
| Constants | [`digits`](#digits) [`fullHexDigits`](#fullHexDigits) [`hexDigits`](#hexDigits) [`letters`](#letters) [`lowercase`](#lowercase) [`lowerHexDigits`](#lowerHexDigits) [`newline`](#newline) [`octalDigits`](#octalDigits) [`uppercase`](#uppercase) [`whitespace`](#whitespace) |
| Enums | [`ControlChar`](#ControlChar) [`LetterCase`](#LetterCase) |
References
[ASCII Table](http://www.digitalmars.com/d/ascii-table.html), [Wikipedia](http://en.wikipedia.org/wiki/Ascii)
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com) and [Jonathan M Davis](http://jmdavisprog.com)
Source
[std/ascii.d](https://github.com/dlang/phobos/blob/master/std/ascii.d)
immutable string **fullHexDigits**;
0 .. 9, A .. F, a .. f
immutable string **hexDigits**;
0 .. 9, A .. F
immutable string **lowerHexDigits**;
0 .. 9, a .. f
immutable string **digits**;
0 .. 9
immutable string **octalDigits**;
0 .. 7
immutable string **letters**;
A .. Z, a .. z
immutable string **uppercase**;
A .. Z
immutable string **lowercase**;
a .. z
immutable string **whitespace**;
ASCII whitespace
enum **LetterCase**: bool;
Letter case specifier.
Examples:
```
import std.conv : to;
writeln(42.to!string(16, LetterCase.upper)); // "2A"
writeln(42.to!string(16, LetterCase.lower)); // "2a"
```
Examples:
```
import std.digest.hmac : hmac;
import std.digest : toHexString;
import std.digest.sha : SHA1;
import std.string : representation;
const sha1HMAC = "A very long phrase".representation
.hmac!SHA1("secret".representation)
.toHexString!(LetterCase.lower);
writeln(sha1HMAC); // "49f2073c7bf58577e8c9ae59fe8cfd37c9ab94e5"
```
**upper**
Upper case letters
**lower**
Lower case letters
enum **ControlChar**: char;
All control characters in the ASCII table ([source](https://www.asciitable.com)).
Examples:
```
import std.algorithm.comparison, std.algorithm.searching, std.range, std.traits;
// Because all ASCII characters fit in char, so do these
static assert(ControlChar.ack.sizeof == 1);
// All control characters except del are in row starting from 0
static assert(EnumMembers!ControlChar.only.until(ControlChar.del).equal(iota(32)));
static assert(ControlChar.nul == '\0');
static assert(ControlChar.bel == '\a');
static assert(ControlChar.bs == '\b');
static assert(ControlChar.ff == '\f');
static assert(ControlChar.lf == '\n');
static assert(ControlChar.cr == '\r');
static assert(ControlChar.tab == '\t');
static assert(ControlChar.vt == '\v');
```
Examples:
```
import std.conv;
//Control character table can be used in place of hexcodes.
with (ControlChar) assert(text("Phobos", us, "Deimos", us, "Tango", rs) == "Phobos\x1FDeimos\x1FTango\x1E");
```
**nul**
Null
**soh**
Start of heading
**stx**
Start of text
**etx**
End of text
**eot**
End of transmission
**enq**
Enquiry
**ack**
Acknowledge
**bel**
Bell
**bs**
Backspace
**tab**
Horizontal tab
**lf**
NL line feed, new line
**vt**
Vertical tab
**ff**
NP form feed, new page
**cr**
Carriage return
**so**
Shift out
**si**
Shift in
**dle**
Data link escape
**dc1**
Device control 1
**dc2**
Device control 2
**dc3**
Device control 3
**dc4**
Device control 4
**nak**
Negative acknowledge
**syn**
Synchronous idle
**etb**
End of transmission block
**can**
Cancel
**em**
End of medium
**sub**
Substitute
**esc**
Escape
**fs**
File separator
**gs**
Group separator
**rs**
Record separator
**us**
Unit separator
**del**
Delete
immutable string **newline**;
Newline sequence for this system.
pure nothrow @nogc @safe bool **isAlphaNum**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a letter or a number (0 .. 9, a .. z, A .. Z).
Examples:
```
assert( isAlphaNum('A'));
assert( isAlphaNum('1'));
assert(!isAlphaNum('#'));
// N.B.: does not return true for non-ASCII Unicode alphanumerics:
assert(!isAlphaNum('á'));
```
pure nothrow @nogc @safe bool **isAlpha**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is an ASCII letter (A .. Z, a .. z).
Examples:
```
assert( isAlpha('A'));
assert(!isAlpha('1'));
assert(!isAlpha('#'));
// N.B.: does not return true for non-ASCII Unicode alphabetic characters:
assert(!isAlpha('á'));
```
pure nothrow @nogc @safe bool **isLower**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a lowercase ASCII letter (a .. z).
Examples:
```
assert( isLower('a'));
assert(!isLower('A'));
assert(!isLower('#'));
// N.B.: does not return true for non-ASCII Unicode lowercase letters
assert(!isLower('á'));
assert(!isLower('Á'));
```
pure nothrow @nogc @safe bool **isUpper**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is an uppercase ASCII letter (A .. Z).
Examples:
```
assert( isUpper('A'));
assert(!isUpper('a'));
assert(!isUpper('#'));
// N.B.: does not return true for non-ASCII Unicode uppercase letters
assert(!isUpper('á'));
assert(!isUpper('Á'));
```
pure nothrow @nogc @safe bool **isDigit**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a digit (0 .. 9).
Examples:
```
assert( isDigit('3'));
assert( isDigit('8'));
assert(!isDigit('B'));
assert(!isDigit('#'));
// N.B.: does not return true for non-ASCII Unicode numbers
assert(!isDigit('0')); // full-width digit zero (U+FF10)
assert(!isDigit('4')); // full-width digit four (U+FF14)
```
pure nothrow @nogc @safe bool **isOctalDigit**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a digit in base 8 (0 .. 7).
Examples:
```
assert( isOctalDigit('0'));
assert( isOctalDigit('7'));
assert(!isOctalDigit('8'));
assert(!isOctalDigit('A'));
assert(!isOctalDigit('#'));
```
pure nothrow @nogc @safe bool **isHexDigit**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a digit in base 16 (0 .. 9, A .. F, a .. f).
Examples:
```
assert( isHexDigit('0'));
assert( isHexDigit('A'));
assert( isHexDigit('f')); // lowercase hex digits are accepted
assert(!isHexDigit('g'));
assert(!isHexDigit('G'));
assert(!isHexDigit('#'));
```
pure nothrow @nogc @safe bool **isWhite**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether or not `c` is a whitespace character. That includes the space, tab, vertical tab, form feed, carriage return, and linefeed characters.
Examples:
```
assert( isWhite(' '));
assert( isWhite('\t'));
assert( isWhite('\n'));
assert(!isWhite('1'));
assert(!isWhite('a'));
assert(!isWhite('#'));
// N.B.: Does not return true for non-ASCII Unicode whitespace characters.
static import std.uni;
assert(std.uni.isWhite('\u00A0'));
assert(!isWhite('\u00A0')); // std.ascii.isWhite
```
pure nothrow @nogc @safe bool **isControl**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether `c` is a control character.
Examples:
```
assert( isControl('\0'));
assert( isControl('\022'));
assert( isControl('\n')); // newline is both whitespace and control
assert(!isControl(' '));
assert(!isControl('1'));
assert(!isControl('a'));
assert(!isControl('#'));
// N.B.: non-ASCII Unicode control characters are not recognized:
assert(!isControl('\u0080'));
assert(!isControl('\u2028'));
assert(!isControl('\u2029'));
```
pure nothrow @nogc @safe bool **isPunctuation**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether or not `c` is a punctuation character. That includes all ASCII characters which are not control characters, letters, digits, or whitespace.
Examples:
```
assert( isPunctuation('.'));
assert( isPunctuation(','));
assert( isPunctuation(':'));
assert( isPunctuation('!'));
assert( isPunctuation('#'));
assert( isPunctuation('~'));
assert( isPunctuation('+'));
assert( isPunctuation('_'));
assert(!isPunctuation('1'));
assert(!isPunctuation('a'));
assert(!isPunctuation(' '));
assert(!isPunctuation('\n'));
assert(!isPunctuation('\0'));
// N.B.: Non-ASCII Unicode punctuation characters are not recognized.
assert(!isPunctuation('\u2012')); // (U+2012 = en-dash)
```
pure nothrow @nogc @safe bool **isGraphical**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether or not `c` is a printable character other than the space character.
Examples:
```
assert( isGraphical('1'));
assert( isGraphical('a'));
assert( isGraphical('#'));
assert(!isGraphical(' ')); // whitespace is not graphical
assert(!isGraphical('\n'));
assert(!isGraphical('\0'));
// N.B.: Unicode graphical characters are not regarded as such.
assert(!isGraphical('á'));
```
pure nothrow @nogc @safe bool **isPrintable**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether or not `c` is a printable character - including the space character.
Examples:
```
assert( isPrintable(' ')); // whitespace is printable
assert( isPrintable('1'));
assert( isPrintable('a'));
assert( isPrintable('#'));
assert(!isPrintable('\0')); // control characters are not printable
// N.B.: Printable non-ASCII Unicode characters are not recognized.
assert(!isPrintable('á'));
```
pure nothrow @nogc @safe bool **isASCII**(dchar c);
Parameters:
| | |
| --- | --- |
| dchar `c` | The character to test. |
Returns:
Whether or not `c` is in the ASCII character set - i.e. in the range 0 .. 0x7F.
Examples:
```
assert( isASCII('a'));
assert(!isASCII('á'));
```
auto **toLower**(C)(C c)
Constraints: if (is(C : dchar));
Converts an ASCII letter to lowercase.
Parameters:
| | |
| --- | --- |
| C `c` | A character of any type that implicitly converts to `dchar`. In the case where it's a built-in type, or an enum of a built-in type, `Unqual!(OriginalType!C)` is returned, whereas if it's a user-defined type, `dchar` is returned. |
Returns:
The corresponding lowercase letter, if `c` is an uppercase ASCII character, otherwise `c` itself.
Examples:
```
writeln(toLower('a')); // 'a'
writeln(toLower('A')); // 'a'
writeln(toLower('#')); // '#'
// N.B.: Non-ASCII Unicode uppercase letters are not converted.
writeln(toLower('Á')); // 'Á'
```
auto **toUpper**(C)(C c)
Constraints: if (is(C : dchar));
Converts an ASCII letter to uppercase.
Parameters:
| | |
| --- | --- |
| C `c` | Any type which implicitly converts to `dchar`. In the case where it's a built-in type, or an enum of a built-in type, `Unqual!(OriginalType!C)` is returned, whereas if it's a user-defined type, `dchar` is returned. |
Returns:
The corresponding uppercase letter, if `c` is a lowercase ASCII character, otherwise `c` itself.
Examples:
```
writeln(toUpper('a')); // 'A'
writeln(toUpper('A')); // 'A'
writeln(toUpper('#')); // '#'
// N.B.: Non-ASCII Unicode lowercase letters are not converted.
writeln(toUpper('á')); // 'á'
```
std.windows.charset
===================
Support UTF-8 on Windows 95, 98 and ME systems.
License:
[Boost License 1.0](http://www.boost.org/LICENSE_1_0.txt).
Authors:
[Walter Bright](http://digitalmars.com)
const(char)\* **toMBSz**(scope const(char)[] s, uint codePage = 0);
Converts the UTF-8 string s into a null-terminated string in a Windows 8-bit character set.
Parameters:
| | |
| --- | --- |
| const(char)[] `s` | UTF-8 string to convert. |
| uint `codePage` | is the number of the target codepage, or 0 - ANSI, 1 - OEM, 2 - Mac |
Authors:
yaneurao, Walter Bright, Stewart Gordon
string **fromMBSz**(immutable(char)\* s, int codePage = 0);
Converts the null-terminated string s from a Windows 8-bit character set into a UTF-8 char array.
Parameters:
| | |
| --- | --- |
| immutable(char)\* `s` | Null-terminated string in a Windows 8-bit character set to convert. |
| int `codePage` | is the number of the source codepage, or 0 - ANSI, 1 - OEM, 2 - Mac |
Authors:
Stewart Gordon, Walter Bright
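For illustration, a minimal round-trip sketch (this module is Windows-only; the literal string is made up):
```
import std.windows.charset : fromMBSz, toMBSz;

void main()
{
    // Round-trip through the ANSI code page (codePage = 0).
    const(char)* ansi = toMBSz("héllo");
    string back = fromMBSz(cast(immutable(char)*) ansi);
}
```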
core.lifetime
=============
pure nothrow @safe T\* **emplace**(T)(T\* chunk);
Given a pointer `chunk` to uninitialized memory (but already typed as `T`), constructs an object of non-`class` type `T` at that address. If `T` is a class, initializes the class reference to null.
Returns:
A pointer to the newly constructed object (which is the same as `chunk`).
Examples:
```
static struct S
{
int i = 42;
}
S[2] s2 = void;
emplace(&s2);
assert(s2[0].i == 42 && s2[1].i == 42);
```
Examples:
```
interface I {}
class K : I {}
K k = void;
emplace(&k);
assert(k is null);
I i = void;
emplace(&i);
assert(i is null);
```
T\* **emplace**(T, Args...)(T\* chunk, auto ref Args args)
Constraints: if (is(T == struct) || Args.length == 1);
Given a pointer `chunk` to uninitialized memory (but already typed as a non-class type `T`), constructs an object of type `T` at that address from arguments `args`. If `T` is a class, initializes the class reference to `args[0]`. This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
A pointer to the newly constructed object (which is the same as `chunk`).
Examples:
```
int a;
int b = 42;
assert(*emplace!int(&a, b) == 42);
```
T **emplace**(T, Args...)(T chunk, auto ref Args args)
Constraints: if (is(T == class));
Given a raw memory area `chunk` (but already typed as a class type `T`), constructs an object of `class` type `T` at that address. The constructor is passed the arguments `Args`. If `T` is an inner class whose `outer` field can be used to access an instance of the enclosing class, then `Args` must not be empty, and the first member of it must be a valid initializer for that `outer` field. Correct initialization of this field is essential to access members of the outer class inside `T` methods.
Note
This function is `@safe` if the corresponding constructor of `T` is `@safe`.
Returns:
The newly constructed object.
Examples:
```
() @safe {
class SafeClass
{
int x;
@safe this(int x) { this.x = x; }
}
auto buf = new void[__traits(classInstanceSize, SafeClass)];
auto support = (() @trusted => cast(SafeClass)(buf.ptr))();
auto safeClass = emplace!SafeClass(support, 5);
assert(safeClass.x == 5);
class UnsafeClass
{
int x;
@system this(int x) { this.x = x; }
}
auto buf2 = new void[__traits(classInstanceSize, UnsafeClass)];
auto support2 = (() @trusted => cast(UnsafeClass)(buf2.ptr))();
static assert(!__traits(compiles, emplace!UnsafeClass(support2, 5)));
static assert(!__traits(compiles, emplace!UnsafeClass(buf2, 5)));
}();
```
T **emplace**(T, Args...)(void[] chunk, auto ref Args args)
Constraints: if (is(T == class));
Given a raw memory area `chunk`, constructs an object of `class` type `T` at that address. The constructor is passed the arguments `Args`. If `T` is an inner class whose `outer` field can be used to access an instance of the enclosing class, then `Args` must not be empty, and the first member of it must be a valid initializer for that `outer` field. Correct initialization of this field is essential to access members of the outer class inside `T` methods.
Preconditions
`chunk` must be at least as large as `T` needs and should have an alignment multiple of `T`'s alignment. (The size of a `class` instance is obtained by using `__traits(classInstanceSize, T)`).
Note
This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
The newly constructed object.
Examples:
```
static class C
{
int i;
this(int i){this.i = i;}
}
auto buf = new void[__traits(classInstanceSize, C)];
auto c = emplace!C(buf, 5);
assert(c.i == 5);
```
T\* **emplace**(T, Args...)(void[] chunk, auto ref Args args)
Constraints: if (!is(T == class));
Given a raw memory area `chunk`, constructs an object of non-`class` type `T` at that address. The constructor is passed the arguments `args`, if any.
Preconditions
`chunk` must be at least as large as `T` needs and should have an alignment multiple of `T`'s alignment.
Note
This function can be `@trusted` if the corresponding constructor of `T` is `@safe`.
Returns:
A pointer to the newly constructed object.
Examples:
```
struct S
{
int a, b;
}
auto buf = new void[S.sizeof];
S s;
s.a = 42;
s.b = 43;
auto s1 = emplace!S(buf, s);
assert(s1.a == 42 && s1.b == 43);
```
@system void **copyEmplace**(S, T)(ref S source, ref T target)
Constraints: if (is(immutable(S) == immutable(T)));
Emplaces a copy of the specified source value into uninitialized memory, i.e., simulates `T target = source` copy-construction for cases where the target memory is already allocated and to be initialized with a copy.
Parameters:
| | |
| --- | --- |
| S `source` | value to be copied into target |
| T `target` | uninitialized value to be initialized with a copy of source |
Examples:
```
int source = 123;
int target = void;
copyEmplace(source, target);
assert(target == 123);
```
Examples:
```
immutable int[1][1] source = [ [123] ];
immutable int[1][1] target = void;
copyEmplace(source, target);
assert(target[0][0] == 123);
```
Examples:
```
struct S
{
int x;
void opAssign(const scope ref S rhs) @safe pure nothrow @nogc
{
assert(0);
}
}
S source = S(42);
S target = void;
copyEmplace(source, target);
assert(target.x == 42);
```
template **forward**(args...)
Forwards function arguments while keeping `out`, `ref`, and `lazy` on the parameters.
Parameters:
| | |
| --- | --- |
| args | a parameter list or an [`std.meta.AliasSeq`](std_meta#AliasSeq). |
Returns:
An `AliasSeq` of `args` with `out`, `ref`, and `lazy` saved.
Examples:
```
class C
{
static int foo(int n) { return 1; }
static int foo(ref int n) { return 2; }
}
// with forward
int bar()(auto ref int x) { return C.foo(forward!x); }
// without forward
int baz()(auto ref int x) { return C.foo(x); }
int i;
assert(bar(1) == 1);
assert(bar(i) == 2);
assert(baz(1) == 2);
assert(baz(i) == 2);
```
Examples:
```
void foo(int n, ref string s) { s = null; foreach (i; 0 .. n) s ~= "Hello"; }
// forwards all arguments which are bound to parameter tuple
void bar(Args...)(auto ref Args args) { return foo(forward!args); }
// forwards all arguments with swapping order
void baz(Args...)(auto ref Args args) { return foo(forward!args[$/2..$], forward!args[0..$/2]); }
string s;
bar(1, s);
assert(s == "Hello");
baz(s, 2);
assert(s == "HelloHello");
```
Examples:
```
struct X {
int i;
this(this)
{
++i;
}
}
struct Y
{
private X x_;
this()(auto ref X x)
{
x_ = forward!x;
}
}
struct Z
{
private const X x_;
this()(auto ref X x)
{
x_ = forward!x;
}
this()(auto const ref X x)
{
x_ = forward!x;
}
}
X x;
const X cx;
auto constX = (){ const X x; return x; };
static assert(__traits(compiles, { Y y = x; }));
static assert(__traits(compiles, { Y y = X(); }));
static assert(!__traits(compiles, { Y y = cx; }));
static assert(!__traits(compiles, { Y y = constX(); }));
static assert(__traits(compiles, { Z z = x; }));
static assert(__traits(compiles, { Z z = X(); }));
static assert(__traits(compiles, { Z z = cx; }));
static assert(__traits(compiles, { Z z = constX(); }));
Y y1 = x;
// ref lvalue, copy
assert(y1.x_.i == 1);
Y y2 = X();
// rvalue, move
assert(y2.x_.i == 0);
Z z1 = x;
// ref lvalue, copy
assert(z1.x_.i == 1);
Z z2 = X();
// rvalue, move
assert(z2.x_.i == 0);
Z z3 = cx;
// ref const lvalue, copy
assert(z3.x_.i == 1);
Z z4 = constX();
// const rvalue, copy
assert(z4.x_.i == 1);
```
void **move**(T)(ref T source, ref T target);
T **move**(T)(return ref scope T source);
Moves `source` into `target`, via a destructive copy when necessary.
If `T` is a struct with a destructor or postblit defined, source is reset to its `.init` value after it is moved into target, otherwise it is left unchanged.
Preconditions
If source has internal pointers that point to itself and doesn't define opPostMove, it cannot be moved, and will trigger an assertion failure.
Parameters:
| | |
| --- | --- |
| T `source` | Data to copy. |
| T `target` | Where to copy into. The destructor, if any, is invoked before the copy is performed. |
Examples:
For non-struct types, `move` just performs `target = source`:
```
Object obj1 = new Object;
Object obj2 = obj1;
Object obj3;
move(obj2, obj3);
assert(obj3 is obj1);
// obj2 unchanged
assert(obj2 is obj1);
```
Examples:
```
// Structs without destructors are simply copied
struct S1
{
int a = 1;
int b = 2;
}
S1 s11 = { 10, 11 };
S1 s12;
move(s11, s12);
assert(s12 == S1(10, 11));
assert(s11 == s12);
// But structs with destructors or postblits are reset to their .init value
// after copying to the target.
struct S2
{
int a = 1;
int b = 2;
~this() pure nothrow @safe @nogc { }
}
S2 s21 = { 3, 4 };
S2 s22;
move(s21, s22);
assert(s21 == S2(1, 2));
assert(s22 == S2(3, 4));
```
Examples:
Non-copyable structs can still be moved:
```
struct S
{
int a = 1;
@disable this(this);
~this() pure nothrow @safe @nogc {}
}
S s1;
s1.a = 2;
S s2 = move(s1);
assert(s1.a == 1);
assert(s2.a == 2);
```
@system void **moveEmplace**(T)(ref T source, ref T target);
Similar to [`move`](#move) but assumes `target` is uninitialized. This is more efficient because `source` can be blitted over `target` without destroying or initializing it first.
Parameters:
| | |
| --- | --- |
| T `source` | value to be moved into target |
| T `target` | uninitialized value to be filled by source |
Examples:
```
static struct Foo
{
pure nothrow @nogc:
this(int* ptr) { _ptr = ptr; }
~this() { if (_ptr) ++*_ptr; }
int* _ptr;
}
int val;
Foo foo1 = void; // uninitialized
auto foo2 = Foo(&val); // initialized
assert(foo2._ptr is &val);
// Using `move(foo2, foo1)` would have an undefined effect because it would destroy
// the uninitialized foo1.
// moveEmplace directly overwrites foo1 without destroying or initializing it first.
moveEmplace(foo2, foo1);
assert(foo1._ptr is &val);
assert(foo2._ptr is null);
assert(val == 0);
```
std.experimental.allocator.mallocator
=====================================
The C heap allocator.
Source
[std/experimental/allocator/mallocator.d](https://github.com/dlang/phobos/blob/master/std/experimental/allocator/mallocator.d)
struct **Mallocator**;
The C heap allocator.
Examples:
```
auto buffer = Mallocator.instance.allocate(1024 * 1024 * 4);
scope(exit) Mallocator.instance.deallocate(buffer);
//...
```
enum uint **alignment**;
The alignment is a static constant equal to `platformAlignment`, which ensures proper alignment for any D data type.
shared pure nothrow @nogc @trusted void[] **allocate**(size\_t bytes);
shared pure nothrow @nogc @system bool **deallocate**(void[] b);
shared pure nothrow @nogc @system bool **reallocate**(ref void[] b, size\_t s);
Standard allocator methods per the semantics defined above. The `deallocate` and `reallocate` methods are `@system` because they may move memory around, leaving dangling pointers in user code. Somewhat paradoxically, `malloc` is `@safe`, but that is only useful to safe programs that can afford to leak the memory they allocate.
static shared Mallocator **instance**;
Returns the global instance of this allocator type. The C heap allocator is thread-safe, therefore all of its methods and `instance` itself are `shared`.
struct **AlignedMallocator**;
Aligned allocator using OS-specific primitives, under a uniform API.
Examples:
```
auto buffer = AlignedMallocator.instance.alignedAllocate(1024 * 1024 * 4,
128);
scope(exit) AlignedMallocator.instance.deallocate(buffer);
//...
```
enum uint **alignment**;
The default alignment is `platformAlignment`.
shared nothrow @nogc @trusted void[] **allocate**(size\_t bytes);
Forwards to `alignedAllocate(bytes, platformAlignment)`.
shared nothrow @nogc @trusted void[] **alignedAllocate**(size\_t bytes, uint a);
Uses [`posix_memalign`](http://man7.org/linux/man-pages/man3/posix_memalign.3.html) on Posix and [`_aligned_malloc`](http://msdn.microsoft.com/en-us/library/8z34s9c6(v=vs.80).aspx) on Windows.
shared nothrow @nogc @system bool **deallocate**(void[] b);
Calls `free(b.ptr)` on Posix and [`_aligned_free(b.ptr)`](http://msdn.microsoft.com/en-US/library/17b5h8td(v=vs.80).aspx) on Windows.
shared nothrow @nogc @system bool **reallocate**(ref void[] b, size\_t newSize);
shared nothrow @nogc @system bool **alignedReallocate**(ref void[] b, size\_t s, uint a);
Forwards to `alignedReallocate(b, newSize, platformAlignment)`. Should be used with blocks obtained with `allocate`; otherwise the custom alignment passed with `alignedAllocate` can be lost.
static shared AlignedMallocator **instance**;
Returns the global instance of this allocator type. The C heap allocator is thread-safe, therefore all of its methods and `instance` itself are `shared`.
core.stdc.inttypes
==================
D header file for C99.
This module contains bindings to selected types and functions from the standard C header [`<inttypes.h>`](http://pubs.opengroup.org/onlinepubs/009695399/basedefs/inttypes.h.html). Note that this is not automatically generated, and may omit some types/functions from the original C header.
License:
Distributed under the [Boost Software License 1.0](http://www.boost.org/LICENSE_1_0.txt). (See accompanying file LICENSE)
Authors:
Sean Kelly
Source
[core/stdc/inttypes.d](https://github.com/dlang/druntime/blob/master/src/core/stdc/inttypes.d)
Standards:
ISO/IEC 9899:1999 (E)
struct **imaxdiv\_t**;
enum \_cstr **PRId8**; enum \_cstr **PRId16**; enum \_cstr **PRId32**; enum \_cstr **PRId64**; enum \_cstr **PRIdLEAST8**; enum \_cstr **PRIdLEAST16**; enum \_cstr **PRIdLEAST32**; enum \_cstr **PRIdLEAST64**; enum \_cstr **PRIdFAST8**; enum \_cstr **PRIdFAST16**; enum \_cstr **PRIdFAST32**; enum \_cstr **PRIdFAST64**;
enum \_cstr **PRIi8**; enum \_cstr **PRIi16**; enum \_cstr **PRIi32**; enum \_cstr **PRIi64**; enum \_cstr **PRIiLEAST8**; enum \_cstr **PRIiLEAST16**; enum \_cstr **PRIiLEAST32**; enum \_cstr **PRIiLEAST64**; enum \_cstr **PRIiFAST8**; enum \_cstr **PRIiFAST16**; enum \_cstr **PRIiFAST32**; enum \_cstr **PRIiFAST64**;
enum \_cstr **PRIo8**; enum \_cstr **PRIo16**; enum \_cstr **PRIo32**; enum \_cstr **PRIo64**; enum \_cstr **PRIoLEAST8**; enum \_cstr **PRIoLEAST16**; enum \_cstr **PRIoLEAST32**; enum \_cstr **PRIoLEAST64**; enum \_cstr **PRIoFAST8**; enum \_cstr **PRIoFAST16**; enum \_cstr **PRIoFAST32**; enum \_cstr **PRIoFAST64**;
enum \_cstr **PRIu8**; enum \_cstr **PRIu16**; enum \_cstr **PRIu32**; enum \_cstr **PRIu64**; enum \_cstr **PRIuLEAST8**; enum \_cstr **PRIuLEAST16**; enum \_cstr **PRIuLEAST32**; enum \_cstr **PRIuLEAST64**; enum \_cstr **PRIuFAST8**; enum \_cstr **PRIuFAST16**; enum \_cstr **PRIuFAST32**; enum \_cstr **PRIuFAST64**;
enum \_cstr **PRIx8**; enum \_cstr **PRIx16**; enum \_cstr **PRIx32**; enum \_cstr **PRIx64**; enum \_cstr **PRIxLEAST8**; enum \_cstr **PRIxLEAST16**; enum \_cstr **PRIxLEAST32**; enum \_cstr **PRIxLEAST64**; enum \_cstr **PRIxFAST8**; enum \_cstr **PRIxFAST16**; enum \_cstr **PRIxFAST32**; enum \_cstr **PRIxFAST64**;
enum \_cstr **PRIX8**; enum \_cstr **PRIX16**; enum \_cstr **PRIX32**; enum \_cstr **PRIX64**; enum \_cstr **PRIXLEAST8**; enum \_cstr **PRIXLEAST16**; enum \_cstr **PRIXLEAST32**; enum \_cstr **PRIXLEAST64**; enum \_cstr **PRIXFAST8**; enum \_cstr **PRIXFAST16**; enum \_cstr **PRIXFAST32**; enum \_cstr **PRIXFAST64**;
enum \_cstr **SCNd8**; enum \_cstr **SCNd16**; enum \_cstr **SCNd32**; enum \_cstr **SCNd64**; enum \_cstr **SCNdLEAST8**; enum \_cstr **SCNdLEAST16**; enum \_cstr **SCNdLEAST32**; enum \_cstr **SCNdLEAST64**; enum \_cstr **SCNdFAST8**; enum \_cstr **SCNdFAST16**; enum \_cstr **SCNdFAST32**; enum \_cstr **SCNdFAST64**;
enum \_cstr **SCNi8**; enum \_cstr **SCNi16**; enum \_cstr **SCNi32**; enum \_cstr **SCNi64**; enum \_cstr **SCNiLEAST8**; enum \_cstr **SCNiLEAST16**; enum \_cstr **SCNiLEAST32**; enum \_cstr **SCNiLEAST64**; enum \_cstr **SCNiFAST8**; enum \_cstr **SCNiFAST16**; enum \_cstr **SCNiFAST32**; enum \_cstr **SCNiFAST64**;
enum \_cstr **SCNo8**; enum \_cstr **SCNo16**; enum \_cstr **SCNo32**; enum \_cstr **SCNo64**; enum \_cstr **SCNoLEAST8**; enum \_cstr **SCNoLEAST16**; enum \_cstr **SCNoLEAST32**; enum \_cstr **SCNoLEAST64**; enum \_cstr **SCNoFAST8**; enum \_cstr **SCNoFAST16**; enum \_cstr **SCNoFAST32**; enum \_cstr **SCNoFAST64**;
enum \_cstr **SCNu8**; enum \_cstr **SCNu16**; enum \_cstr **SCNu32**; enum \_cstr **SCNu64**; enum \_cstr **SCNuLEAST8**; enum \_cstr **SCNuLEAST16**; enum \_cstr **SCNuLEAST32**; enum \_cstr **SCNuLEAST64**; enum \_cstr **SCNuFAST8**; enum \_cstr **SCNuFAST16**; enum \_cstr **SCNuFAST32**; enum \_cstr **SCNuFAST64**;
enum \_cstr **SCNx8**; enum \_cstr **SCNx16**; enum \_cstr **SCNx32**; enum \_cstr **SCNx64**; enum \_cstr **SCNxLEAST8**; enum \_cstr **SCNxLEAST16**; enum \_cstr **SCNxLEAST32**; enum \_cstr **SCNxLEAST64**; enum \_cstr **SCNxFAST8**; enum \_cstr **SCNxFAST16**; enum \_cstr **SCNxFAST32**; enum \_cstr **SCNxFAST64**;
enum \_cstr **PRIdMAX**; enum \_cstr **PRIiMAX**; enum \_cstr **PRIoMAX**; enum \_cstr **PRIuMAX**; enum \_cstr **PRIxMAX**; enum \_cstr **PRIXMAX**;
enum \_cstr **SCNdMAX**; enum \_cstr **SCNiMAX**; enum \_cstr **SCNoMAX**; enum \_cstr **SCNuMAX**; enum \_cstr **SCNxMAX**;
enum \_cstr **PRIdPTR**; enum \_cstr **PRIiPTR**; enum \_cstr **PRIoPTR**; enum \_cstr **PRIuPTR**; enum \_cstr **PRIxPTR**; enum \_cstr **PRIXPTR**;
enum \_cstr **SCNdPTR**; enum \_cstr **SCNiPTR**; enum \_cstr **SCNoPTR**; enum \_cstr **SCNuPTR**; enum \_cstr **SCNxPTR**;
nothrow @nogc @trusted intmax\_t **imaxabs**(intmax\_t j);
nothrow @nogc @trusted imaxdiv\_t **imaxdiv**(intmax\_t numer, intmax\_t denom);
nothrow @nogc @trusted intmax\_t **strtoimax**(scope const char\* nptr, char\*\* endptr, int base);
nothrow @nogc @trusted uintmax\_t **strtoumax**(scope const char\* nptr, char\*\* endptr, int base);
nothrow @nogc @trusted intmax\_t **wcstoimax**(scope const wchar\_t\* nptr, wchar\_t\*\* endptr, int base);
nothrow @nogc @trusted uintmax\_t **wcstoumax**(scope const wchar\_t\* nptr, wchar\_t\*\* endptr, int base);
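For illustration, a minimal sketch of the division helpers, assuming the C99 field names `quot` and `rem` on `imaxdiv_t`:
```
import core.stdc.inttypes;

void main()
{
    // C99 semantics: quotient and remainder computed together.
    imaxdiv_t r = imaxdiv(7, 3);
    assert(r.quot == 2 && r.rem == 1);
    assert(imaxabs(-5) == 5);
}
```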
Ansible Documentation
=====================
About Ansible
-------------
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
Ansible’s main goals are simplicity and ease-of-use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans–even those not familiar with the program.
We believe simplicity is relevant to all sizes of environments, so we design for busy users of all types: developers, sysadmins, release engineers, IT managers, and everyone in between. Ansible is appropriate for managing all environments, from small setups with a handful of instances to enterprise environments with many thousands of instances.
You can learn more at [AnsibleFest](https://www.ansible.com/ansiblefest), the annual event for all Ansible contributors, users, and customers hosted by Red Hat. AnsibleFest is the place to connect with others, learn new skills, and find a new friend to automate with.
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized–it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the version of Ansible noted in the upper left corner of this page. We maintain multiple versions of Ansible and of the documentation, so please be sure you are using the version of the documentation that covers the version of Ansible you’re using. For recent features, we note the version of Ansible where the feature was added.
Ansible releases a new major release approximately twice a year. The core application evolves somewhat conservatively, valuing simplicity in language design and setup. Contributors develop and change modules and plugins, hosted in collections since version 2.10, much more quickly.
Installation, Upgrade & Configuration
* [Installation Guide](https://docs.ansible.com/ansible/latest/installation_guide/index.html)
+ [Installing Ansible](installation_guide/intro_installation)
+ [Configuring Ansible](installation_guide/intro_configuration)
* [Ansible Porting Guides](porting_guides/porting_guides)
Using Ansible
* [User Guide](user_guide/index)
+ [Getting started](user_guide/index#getting-started)
+ [Writing tasks, plays, and playbooks](user_guide/index#writing-tasks-plays-and-playbooks)
+ [Working with inventory](user_guide/index#working-with-inventory)
+ [Interacting with data](user_guide/index#interacting-with-data)
+ [Executing playbooks](user_guide/index#executing-playbooks)
+ [Advanced features and reference](user_guide/index#advanced-features-and-reference)
+ [Traditional Table of Contents](user_guide/index#traditional-table-of-contents)
Contributing to Ansible
* [Ansible Community Guide](https://docs.ansible.com/ansible/latest/community/index.html)
+ [Getting started](https://docs.ansible.com/ansible/latest/community/index.html#getting-started)
+ [Going deeper](https://docs.ansible.com/ansible/latest/community/index.html#going-deeper)
Working With Plugins
====================
Plugins are pieces of code that augment Ansible’s core functionality. Ansible uses a plugin architecture to enable a rich, flexible and expandable feature set.
Ansible ships with a number of handy plugins, and you can easily write your own.
This section covers the various types of plugins that are included with Ansible:
* [Action Plugins](action)
* [Become Plugins](become)
* [Cache Plugins](cache)
* [Callback Plugins](callback)
* [Cliconf Plugins](cliconf)
* [Connection Plugins](connection)
* [Httpapi Plugins](httpapi)
* [Inventory Plugins](inventory)
* [Lookup Plugins](lookup)
* [Netconf Plugins](netconf)
* [Shell Plugins](shell)
* [Strategy Plugins](strategy)
* [Vars Plugins](vars)
* [Using filters to manipulate data](../user_guide/playbooks_filters)
* [Tests](../user_guide/playbooks_tests)
* [Rejecting modules](../user_guide/plugin_filtering_config)
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Ansible Configuration Settings](../reference_appendices/config#ansible-configuration-settings)
Ansible configuration documentation and settings
[Working with command line tools](../user_guide/command_line_tools#command-line-tools)
Ansible tools, description and options
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Cliconf Plugins
===============
* [Adding cliconf plugins](#adding-cliconf-plugins)
* [Using cliconf plugins](#using-cliconf-plugins)
* [Viewing cliconf plugins](#viewing-cliconf-plugins)
Cliconf plugins are abstractions over the CLI interface to network devices. They provide a standard interface for Ansible to execute tasks on those network devices.
These plugins generally correspond one-to-one to network device platforms. Ansible loads the appropriate cliconf plugin automatically based on the `ansible_network_os` variable.
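For example (a hedged sketch; the group, host name, and address are placeholders), an inventory might select a cliconf-backed platform like this:
```
# inventory.yml - illustrative only; host and address are placeholders
routers:
  hosts:
    rtr01:
      ansible_host: 192.0.2.10
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: cisco.ios.ios
```
With the connection set to `network_cli`, Ansible picks the cliconf plugin matching `ansible_network_os`.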
Adding cliconf plugins
----------------------
You can extend Ansible to support other network devices by dropping a custom plugin into the `cliconf_plugins` directory.
Using cliconf plugins
---------------------
The cliconf plugin to use is determined automatically from the `ansible_network_os` variable. There should be no reason to override this functionality.
Most cliconf plugins can operate without configuration. A few have additional options that can be set to affect how tasks are translated into CLI commands.
Plugins are self-documenting. Each plugin should document its configuration options.
Viewing cliconf plugins
-----------------------
These plugins have migrated to collections on [Ansible Galaxy](https://galaxy.ansible.com). If you installed Ansible version 2.10 or later using `pip`, you have access to several cliconf plugins. To list all available cliconf plugins on your control node, type `ansible-doc -t cliconf -l`. To view plugin-specific documentation and examples, use `ansible-doc -t cliconf <plugin name>`.
See also
[Ansible for Network Automation](../network/index#network-guide)
An overview of using Ansible to automate networking devices.
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible-network IRC chat channel
Netconf Plugins
===============
* [Adding netconf plugins](#adding-netconf-plugins)
* [Using netconf plugins](#using-netconf-plugins)
* [Listing netconf plugins](#listing-netconf-plugins)
Netconf plugins are abstractions over the Netconf interface to network devices. They provide a standard interface for Ansible to execute tasks on those network devices.
These plugins generally correspond one-to-one to network device platforms. Ansible loads the appropriate netconf plugin automatically based on the `ansible_network_os` variable. If the platform supports a standard Netconf implementation as defined in the Netconf RFC specification, Ansible loads the `default` netconf plugin. If the platform supports proprietary Netconf RPCs, Ansible loads the platform-specific netconf plugin.
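As a minimal sketch (the platform and address are placeholders), a Netconf-managed device might be declared like this in inventory:
```
# inventory.yml - illustrative only
switches:
  hosts:
    sw01:
      ansible_host: 192.0.2.20
  vars:
    ansible_connection: ansible.netcommon.netconf
    ansible_network_os: junipernetworks.junos.junos
```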
Adding netconf plugins
----------------------
You can extend Ansible to support other network devices by dropping a custom plugin into the `netconf_plugins` directory.
Using netconf plugins
---------------------
The netconf plugin to use is determined automatically from the `ansible_network_os` variable. There should be no reason to override this functionality.
Most netconf plugins can operate without configuration. A few have additional options that can be set to affect how tasks are translated into netconf commands. An `ncclient` device-specific handler name can be set in the netconf plugin; otherwise, the value `default` is used for the ncclient device handler.
Plugins are self-documenting. Each plugin should document its configuration options.
Listing netconf plugins
-----------------------
These plugins have migrated to collections on [Ansible Galaxy](https://galaxy.ansible.com). If you installed Ansible version 2.10 or later using `pip`, you have access to several netconf plugins. To list all available netconf plugins on your control node, type `ansible-doc -t netconf -l`. To view plugin-specific documentation and examples, use `ansible-doc -t netconf <plugin name>`.
See also
[Ansible for Network Automation](../network/index#network-guide)
An overview of using Ansible to automate networking devices.
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible-network IRC chat channel
Strategy Plugins
================
* [Enabling strategy plugins](#enabling-strategy-plugins)
* [Using strategy plugins](#using-strategy-plugins)
* [Plugin list](#plugin-list)
Strategy plugins control the flow of play execution by handling task and host scheduling.
Enabling strategy plugins
-------------------------
All strategy plugins shipped with Ansible are enabled by default. You can enable a custom strategy plugin by putting it in one of the lookup directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Using strategy plugins
----------------------
Only one strategy plugin can be used in a play, but you can use different ones for each play in a playbook or ansible run. The default is the [linear](../collections/ansible/builtin/linear_strategy#linear-strategy) plugin. You can change this default in Ansible [configuration](../reference_appendices/config#ansible-configuration-settings) using an environment variable:
```
export ANSIBLE_STRATEGY=free
```
or in the `ansible.cfg` file:
```
[defaults]
strategy=linear
```
You can also select the strategy plugin for a play with the [strategy keyword](../reference_appendices/playbooks_keywords#playbook-keywords):
```
- hosts: all
strategy: debug
tasks:
- copy: src=myhosts dest=/etc/hosts
notify: restart_tomcat
- package: name=tomcat state=present
handlers:
- name: restart_tomcat
service: name=tomcat state=restarted
```
Plugin list
-----------
You can use `ansible-doc -t strategy -l` to see the list of available plugins. Use `ansible-doc -t strategy <plugin name>` to see plugin-specific documentation and examples.
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Using filters to manipulate data](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Lookup Plugins
==============
* [Enabling lookup plugins](#enabling-lookup-plugins)
* [Using lookup plugins](#using-lookup-plugins)
* [Forcing lookups to return lists: `query` and `wantlist=True`](#forcing-lookups-to-return-lists-query-and-wantlist-true)
* [Plugin list](#plugin-list)
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all [templating](../user_guide/playbooks_templating#playbooks-templating), lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources.
Note
* Lookups are executed with a working directory relative to the role or play, as opposed to local tasks, which are executed relative to the executed script.
* Pass `wantlist=True` to lookups to use them in Jinja2 template “for” loops.
* By default, lookup return values are marked as unsafe for security reasons. If you trust the outside source your lookup accesses, pass `allow_unsafe=True` to allow Jinja2 templates to evaluate lookup values.
Warning
* Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the `|quote` filter to ensure safe usage.
Enabling lookup plugins
-----------------------
Ansible enables all lookup plugins it can find. You can activate a custom lookup by either dropping it into a `lookup_plugins` directory adjacent to your play, inside the `plugins/lookup/` directory of a collection you have installed, inside a standalone role, or in one of the lookup directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Using lookup plugins
--------------------
You can use lookup plugins anywhere you can use templating in Ansible: in a play, in a variables file, or in a Jinja2 template for the [template](../collections/ansible/builtin/template_module#template-module) module.
```
vars:
file_contents: "{{ lookup('file', 'path/to/file.txt') }}"
```
Lookups are an integral part of loops. Wherever you see `with_`, the part after the underscore is the name of a lookup. For this reason, most lookups output lists and take lists as input; for example, `with_items` uses the [items](../collections/ansible/builtin/items_lookup#items-lookup) lookup:
```
tasks:
- name: count to 3
debug: msg={{ item }}
with_items: [1, 2, 3]
```
You can combine lookups with [filters](../user_guide/playbooks_filters#playbooks-filters), [tests](../user_guide/playbooks_tests#playbooks-tests) and even each other to do some complex data generation and manipulation. For example:
```
tasks:
- name: valid but useless and over complicated chained lookups and filters
debug: msg="find the answer here:\n{{ lookup('url', 'https://google.com/search/?q=' + item|urlencode)|join(' ') }}"
with_nested:
- "{{ lookup('consul_kv', 'bcs/' + lookup('file', '/the/question') + ', host=localhost, port=2000')|shuffle }}"
- "{{ lookup('sequence', 'end=42 start=2 step=2')|map('log', 4)|list) }}"
- ['a', 'c', 'd', 'c']
```
New in version 2.6.
You can control how errors behave in all lookup plugins by setting `errors` to `ignore`, `warn`, or `strict`. The default setting is `strict`, which causes the task to fail if the lookup returns an error. For example:
To ignore lookup errors:
```
- name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"
```
```
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
ok: [localhost] => {
"msg": ""
}
```
To get a warning instead of a failure:
```
- name: if this file does not exist, let me know, but continue
debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"
```
```
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
[WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile
ok: [localhost] => {
"msg": ""
}
```
To get a fatal error (the default):
```
- name: if this file does not exist, FAIL (this is the default)
debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}"
```
```
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile"}
```
Forcing lookups to return lists: `query` and `wantlist=True`
------------------------------------------------------------
New in version 2.5.
In Ansible 2.5, a new Jinja2 function called `query` was added for invoking lookup plugins. The difference between `lookup` and `query` is largely that `query` will always return a list. The default behavior of `lookup` is to return a string of comma separated values. `lookup` can be explicitly configured to return a list using `wantlist=True`.
This feature provides an easier and more consistent interface for interacting with the new `loop` keyword, while maintaining backwards compatibility with other uses of `lookup`.
The following examples are equivalent:
```
lookup('dict', dict_variable, wantlist=True)
query('dict', dict_variable)
```
As demonstrated above, the behavior of `wantlist=True` is implicit when using `query`.
Additionally, `q` was introduced as a shortform of `query`:
```
q('dict', dict_variable)
```
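As a small self-contained sketch (the variable `my_dict` is invented for illustration), `query` pairs naturally with the `loop` keyword because it always returns a list:
```
- name: print each key/value pair (illustrative)
  ansible.builtin.debug:
    msg: "{{ item.key }}={{ item.value }}"
  loop: "{{ query('ansible.builtin.dict', my_dict) }}"
  vars:
    my_dict:
      alpha: 1
      beta: 2
```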
Plugin list
-----------
You can use `ansible-doc -t lookup -l` to see the list of available plugins. Use `ansible-doc -t lookup <plugin name>` to see plugin-specific documentation and examples.
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Using filters to manipulate data](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Inventory Plugins
=================
* [Enabling inventory plugins](#enabling-inventory-plugins)
* [Using inventory plugins](#using-inventory-plugins)
* [Plugin List](#plugin-list)
Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, whether from the `-i /path/to/file` and/or `-i 'host1, host2'` command line parameters or from other configuration sources.
Enabling inventory plugins
--------------------------
Most inventory plugins shipped with Ansible are enabled by default or can be used with the `auto` plugin.
In some circumstances, for example, if the inventory plugin does not use a YAML configuration file, you may need to enable the specific plugin. You can do this by setting `enable_plugins` in your [ansible.cfg](../reference_appendices/config#ansible-configuration-settings) file in the `[inventory]` section. Modifying this will override the default list of enabled plugins. Here is the default list of enabled plugins that ships with Ansible:
```
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
```
If the plugin is in a collection, use the fully qualified name:
```
[inventory]
enable_plugins = namespace.collection_name.inventory_plugin_name
```
Using inventory plugins
-----------------------
To use an inventory plugin, you must provide an inventory source. Most of the time this is a file containing host information or a YAML configuration file with options for the plugin. You can use the `-i` flag to provide inventory sources or configure a default inventory path.
```
ansible hostname -i inventory_source -m ansible.builtin.ping
```
To start using an inventory plugin with a YAML configuration source, create a file with the accepted filename schema documented for the plugin in question, then add `plugin: plugin_name`. Use the fully qualified name if the plugin is in a collection.
```
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
```
Each plugin should document any naming restrictions. In addition, the YAML config file must end with the extension `yml` or `yaml` to be enabled by default with the `auto` plugin (otherwise, see the section above on enabling plugins).
After providing any required options, you can view the populated inventory with `ansible-inventory -i demo.aws_ec2.yml --graph`:
```
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@ungrouped:
```
If you are using an inventory plugin in a playbook-adjacent collection and want to test your setup with `ansible-inventory`, use the `--playbook-dir` flag.
Your inventory source might be a directory of inventory configuration files. The constructed inventory plugin only operates on those hosts already in inventory, so you may want the constructed inventory configuration parsed at a particular point (such as last). Ansible parses the directory recursively, alphabetically. You cannot configure the parsing approach, so name your files to make it work predictably. Inventory plugins that extend constructed features directly can work around that restriction by adding constructed options in addition to the inventory plugin options. Otherwise, you can use `-i` with multiple sources to impose a specific order, for example `-i demo.aws_ec2.yml -i clouds.yml -i constructed.yml`.
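A hedged example of such a standalone constructed source (the file name and group expression are illustrative), which you might pass last with `-i` so it sees all previously parsed hosts:
```
# constructed.yml - illustrative; operates only on hosts already in inventory
plugin: ansible.builtin.constructed
strict: false
groups:
  # add hosts whose inventory name starts with 'web' to the webservers group
  webservers: inventory_hostname.startswith('web')
```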
You can create dynamic groups using host variables with the constructed `keyed_groups` option. The option `groups` can also be used to create groups and `compose` creates and modifies host variables. Here is an aws\_ec2 example utilizing constructed features:
```
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
- us-east-1
- us-east-2
keyed_groups:
# add hosts to tag_Name_value groups for each aws_ec2 host's tags.Name variable
- key: tags.Name
prefix: tag_Name_
separator: ""
groups:
# add hosts to the group development if any of the dictionary's keys or values is the word 'devel'
development: "'devel' in (tags|list)"
compose:
# set the ansible_host variable to connect with the private IP address without changing the hostname
ansible_host: private_ip_address
```
Now the output of `ansible-inventory -i demo.aws_ec2.yml --graph`:
```
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
| |--...
|--@development:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@tag_Name_ECS_Instance:
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@tag_Name_Test_Server:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
|--@ungrouped
```
If a host does not have the variables in the configuration above (in other words, `tags.Name`, `tags`, `private_ip_address`), the host will not be added to groups other than those that the inventory plugin creates and the `ansible_host` host variable will not be modified.
Inventory plugins that support caching can use the general settings for the fact cache defined in the `ansible.cfg` file’s `[defaults]` section or define inventory-specific settings in the `[inventory]` section. Individual plugins can define plugin-specific cache settings in their config file:
```
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
cache: yes
cache_plugin: ansible.builtin.jsonfile
cache_timeout: 7200
cache_connection: /tmp/aws_inventory
cache_prefix: aws_ec2
```
Here is an example of setting inventory caching with some fact caching defaults for the cache plugin used and the timeout in an `ansible.cfg` file:
```
[defaults]
fact_caching = ansible.builtin.jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 3600
[inventory]
cache = yes
cache_connection = /tmp/ansible_inventory
```
Plugin List
-----------
You can use `ansible-doc -t inventory -l` to see the list of available plugins. Use `ansible-doc -t inventory <plugin name>` to see plugin-specific documentation and examples.
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Connection Plugins](connection#connection-plugins)
Ansible connection plugins
[Using filters to manipulate data](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[Vars Plugins](vars#vars-plugins)
Ansible vars plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Become Plugins
==============
* [Enabling Become Plugins](#enabling-become-plugins)
* [Using Become Plugins](#using-become-plugins)
* [Plugin List](#plugin-list)
New in version 2.8.
Become plugins ensure that Ansible can use certain privilege escalation systems, both when running the basic commands needed to work with the target machine and when executing the modules required by the tasks in the play.
These utilities (`sudo`, `su`, `doas`, and so on) generally let you ‘become’ another user to execute a command with the permissions of that user.
Enabling Become Plugins
-----------------------
The become plugins shipped with Ansible are already enabled. Custom plugins can be added by placing them into a `become_plugins` directory adjacent to your play, inside a role, or by placing them in one of the become plugin directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Using Become Plugins
--------------------
In addition to the default configuration settings in [Ansible Configuration Settings](../reference_appendices/config#ansible-configuration-settings) or the `--become-method` command line option, you can use the `become_method` keyword in a play or, if you need to be ‘host specific’, the connection variable `ansible_become_method` to select the plugin to use.
You can further control the settings for each plugin via other configuration options detailed in the plugins themselves (linked below).
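A minimal sketch (hosts and task are illustrative) that selects a become plugin with the play keyword:
```
- hosts: all
  become: true
  become_method: su # select the 'su' become plugin instead of the default 'sudo'
  tasks:
    - name: check which user the task runs as
      ansible.builtin.command: whoami
```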
Plugin List
-----------
You can use `ansible-doc -t become -l` to see the list of available plugins. Use `ansible-doc -t become <plugin name>` to see specific documentation and examples.
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Using filters to manipulate data](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Httpapi Plugins
===============
* [Adding httpapi plugins](#adding-httpapi-plugins)
* [Using httpapi plugins](#using-httpapi-plugins)
* [Viewing httpapi plugins](#viewing-httpapi-plugins)
Httpapi plugins tell Ansible how to interact with a remote device’s HTTP-based API and execute tasks on the device.
Each plugin represents a particular API dialect. Some are platform-specific (Arista eAPI, Cisco NXAPI), while others might be usable on a variety of platforms (RESTCONF). Ansible loads the appropriate httpapi plugin automatically based on the `ansible_network_os` variable.
Adding httpapi plugins
----------------------
You can extend Ansible to support other APIs by dropping a custom plugin into the `httpapi_plugins` directory. See [Developing httpapi plugins](../network/dev_guide/developing_plugins_network#developing-plugins-httpapi) for details.
Using httpapi plugins
---------------------
The httpapi plugin to use is determined automatically from the `ansible_network_os` variable.
Most httpapi plugins can operate without configuration. Additional options may be defined by each plugin.
Plugins are self-documenting. Each plugin should document its configuration options.
The following sample playbook shows the httpapi plugin for an Arista network device, assuming the inventory sets `ansible_network_os=eos` so that the correct httpapi plugin loads:
```
- hosts: leaf01
connection: httpapi
gather_facts: false
tasks:
- name: type a simple arista command
eos_command:
commands:
- show version | json
register: command_output
- name: print command output to terminal window
debug:
var: command_output.stdout[0]["version"]
```
See the full working example [on GitHub](https://github.com/network-automation/httpapi).
Viewing httpapi plugins
-----------------------
These plugins have migrated to collections on [Ansible Galaxy](https://galaxy.ansible.com). If you installed Ansible version 2.10 or later using `pip`, you have access to several httpapi plugins. To list all available httpapi plugins on your control node, type `ansible-doc -t httpapi -l`. To view plugin-specific documentation and examples, use `ansible-doc -t httpapi <plugin name>`.
See also
[Ansible for Network Automation](../network/index#network-guide)
An overview of using Ansible to automate networking devices.
[Developing network modules](../network/dev_guide/developing_plugins_network#developing-modules-network)
How to develop network modules.
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible-network IRC chat channel
Shell Plugins
=============
* [Enabling shell plugins](#enabling-shell-plugins)
* [Using shell plugins](#using-shell-plugins)
Shell plugins ensure that the basic commands Ansible runs are properly formatted for the target machine, and they allow the user to configure certain behaviors related to how Ansible executes tasks.
Enabling shell plugins
----------------------
You can add a custom shell plugin by dropping it into a `shell_plugins` directory adjacent to your play, inside a role, or by putting it in one of the shell plugin directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Warning
You should not alter which plugin is used unless you have a setup in which the default `/bin/sh` is not a POSIX compatible shell or is not available for execution.
Using shell plugins
-------------------
In addition to the default configuration settings in [Ansible Configuration Settings](../reference_appendices/config#ansible-configuration-settings), you can use the connection variable [ansible\_shell\_type](../user_guide/intro_inventory#ansible-shell-type) to select the plugin to use. In this case, you will also want to update the [ansible\_shell\_executable](../user_guide/intro_inventory#ansible-shell-executable) to match.
You can further control the settings for each plugin via other configuration options detailed in the plugins themselves (linked below).
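For instance (a hedged sketch; the host name is a placeholder), a target whose only available shell is csh could set matching connection variables:
```
# host_vars/bsdhost.yml - illustrative values for a csh-only target
ansible_shell_type: csh
ansible_shell_executable: /bin/csh
```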
See also
[Intro to playbooks](../user_guide/playbooks_intro#about-playbooks)
An introduction to playbooks
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Using filters to manipulate data](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Cache Plugins
=============
* [Enabling Fact Cache Plugins](#enabling-fact-cache-plugins)
* [Enabling Inventory Cache Plugins](#enabling-inventory-cache-plugins)
* [Using Cache Plugins](#using-cache-plugins)
* [Plugin List](#plugin-list)
Cache plugins allow Ansible to store gathered facts or inventory source data without the performance hit of retrieving them from source.
The default cache plugin is the [memory](../collections/ansible/builtin/memory_cache#memory-cache) plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs. Some of these cache plugins write to files, others write to databases.
You can use different cache plugins for inventory and facts. If you enable inventory caching without setting an inventory-specific cache plugin, Ansible uses the fact cache plugin for both facts and inventory.
Enabling Fact Cache Plugins
---------------------------
Fact caching is always enabled. However, only one fact cache plugin can be active at a time. You can select the cache plugin to use for fact caching in the Ansible configuration, either with an environment variable:
```
export ANSIBLE_CACHE_PLUGIN=jsonfile
```
or in the `ansible.cfg` file:
```
[defaults]
fact_caching=redis
```
If the cache plugin is in a collection, use the fully qualified name:
```
[defaults]
fact_caching = namespace.collection_name.cache_plugin_name
```
To enable a custom cache plugin, save it in a `cache_plugins` directory adjacent to your play, inside a role, or in one of the directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
You also need to configure other settings specific to each plugin. Consult the individual plugin documentation or the Ansible [configuration](../reference_appendices/config#ansible-configuration-settings) for more details.
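For example, a plausible `ansible.cfg` for the `jsonfile` fact cache (the path and timeout are illustrative) combines the plugin selection with its plugin-specific settings:
```
[defaults]
fact_caching = ansible.builtin.jsonfile
# jsonfile-specific settings: where to write the cache and how long entries stay valid
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400
```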
Enabling Inventory Cache Plugins
--------------------------------
Inventory caching is disabled by default. To cache inventory data, you must enable inventory caching and then select the specific cache plugin you want to use. Not all inventory plugins support caching, so check the documentation for the inventory plugin(s) you want to use. You can enable inventory caching with an environment variable:
```
export ANSIBLE_INVENTORY_CACHE=True
```
or in the `ansible.cfg` file:
```
[inventory]
cache=True
```
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
```
# dev.aws_ec2.yaml
plugin: aws_ec2
cache: True
```
Only one inventory cache plugin can be active at a time. You can set it with an environment variable:
```
export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
```
or in the ansible.cfg file:
```
[inventory]
cache_plugin=jsonfile
```
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
```
# dev.aws_ec2.yaml
plugin: aws_ec2
cache_plugin: jsonfile
```
To cache inventory with a custom plugin in your plugin path, follow the [developer guide on cache plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-cache-plugins).
To cache inventory with a cache plugin in a collection, use the FQCN:
```
[inventory]
cache_plugin=collection_namespace.collection_name.cache_plugin
```
If you enable caching for inventory plugins without selecting an inventory-specific cache plugin, Ansible falls back to caching inventory with the fact cache plugin you configured. Consult the individual inventory plugin documentation or the Ansible [configuration](../reference_appendices/config#ansible-configuration-settings) for more details.
Using Cache Plugins
-------------------
Cache plugins are used automatically once they are enabled.
Plugin List
-----------
You can use `ansible-doc -t cache -l` to see the list of available plugins. Use `ansible-doc -t cache <plugin name>` to see specific documentation and examples.
See also
[Action Plugins](action#action-plugins)
Ansible Action plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Connection Plugins](connection#connection-plugins)
Ansible connection plugins
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Shell Plugins](shell#shell-plugins)
Ansible Shell plugins
[Strategy Plugins](strategy#strategy-plugins)
Ansible Strategy plugins
[Vars Plugins](vars#vars-plugins)
Ansible Vars plugins
[User Mailing List](https://groups.google.com/forum/#!forum/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Callback Plugins
================
* [Example callback plugins](#example-callback-plugins)
* [Enabling callback plugins](#enabling-callback-plugins)
* [Setting a callback plugin for `ansible-playbook`](#setting-a-callback-plugin-for-ansible-playbook)
* [Setting a callback plugin for ad hoc commands](#setting-a-callback-plugin-for-ad-hoc-commands)
* [Plugin list](#plugin-list)
Callback plugins enable adding new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command line programs, but they can also be used to add output, integrate with other tools, and marshal events to a storage backend.
Example callback plugins
------------------------
The [log\_plays](https://docs.ansible.com/ansible/2.9/plugins/callback/log_plays.html#log-plays-callback "(in Ansible v2.9)") callback is an example of how to record playbook events to a log file, and the [mail](https://docs.ansible.com/ansible/2.9/plugins/callback/mail.html#mail-callback "(in Ansible v2.9)") callback sends email on playbook failures.
The [say](https://docs.ansible.com/ansible/2.9/plugins/callback/say.html#say-callback "(in Ansible v2.9)") callback responds with computer synthesized speech in relation to playbook events.
Enabling callback plugins
-------------------------
You can activate a custom callback by either dropping it into a `callback_plugins` directory adjacent to your play, inside a role, or by putting it in one of the callback directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Plugins are loaded in alphanumeric order. For example, a plugin implemented in a file named `1_first.py` would run before a plugin file named `2_second.py`.
Most callbacks shipped with Ansible are disabled by default and need to be enabled in your [ansible.cfg](../reference_appendices/config#ansible-configuration-settings) file in order to function. For example:
```
[defaults]
callbacks_enabled = timer, mail, profile_roles, collection_namespace.collection_name.custom_callback
```
Setting a callback plugin for `ansible-playbook`
------------------------------------------------
You can only have one plugin be the main manager of your console output. If you want to replace the default, you should define `CALLBACK_TYPE = stdout` in the subclass and then configure the stdout plugin in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings). For example:
```
stdout_callback = dense
```
or for a custom callback:
```
stdout_callback = mycallback
```
This only affects [ansible-playbook](../cli/ansible-playbook#ansible-playbook) by default.
Setting a callback plugin for ad hoc commands
---------------------------------------------
The [ansible](../cli/ansible#ansible) ad hoc command specifically uses a different callback plugin for stdout, so there is an extra setting in [Ansible Configuration Settings](../reference_appendices/config#ansible-configuration-settings) you need to add to use the stdout callback defined above:
```
[defaults]
bin_ansible_callbacks=True
```
You can also set this as an environment variable:
```
export ANSIBLE_LOAD_CALLBACK_PLUGINS=1
```
Plugin list
-----------
You can use `ansible-doc -t callback -l` to see the list of available plugins. Use `ansible-doc -t callback <plugin name>` to see plugin-specific documentation and examples.
See also
[Action Plugins](action#action-plugins)
Ansible Action plugins
[Cache Plugins](cache#cache-plugins)
Ansible cache plugins
[Connection Plugins](connection#connection-plugins)
Ansible connection plugins
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Shell Plugins](shell#shell-plugins)
Ansible Shell plugins
[Strategy Plugins](strategy#strategy-plugins)
Ansible Strategy plugins
[Vars Plugins](vars#vars-plugins)
Ansible Vars plugins
[User Mailing List](https://groups.google.com/forum/#!forum/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Vars Plugins
============
* [Enabling vars plugins](#enabling-vars-plugins)
* [Using vars plugins](#using-vars-plugins)
* [Plugin Lists](#plugin-lists)
Vars plugins inject additional variable data into Ansible runs that did not come from an inventory source, playbook, or command line. Playbook constructs like ‘host\_vars’ and ‘group\_vars’ work using vars plugins.
Vars plugins were partially implemented in Ansible 2.0 and rewritten to be fully implemented starting with Ansible 2.4.
The [host\_group\_vars](../collections/ansible/builtin/host_group_vars_vars#host-group-vars-vars) plugin shipped with Ansible enables reading variables from [Assigning a variable to one machine: host variables](../user_guide/intro_inventory#host-variables) and [Assigning a variable to many machines: group variables](../user_guide/intro_inventory#group-variables).
Enabling vars plugins
---------------------
You can activate a custom vars plugin by either dropping it into a `vars_plugins` directory adjacent to your play, inside a role, or by putting it in one of the directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Most vars plugins are disabled by default. To enable a vars plugin, set `vars_plugins_enabled` in the `defaults` section of [ansible.cfg](../reference_appendices/config#ansible-configuration-settings) or set the `ANSIBLE_VARS_ENABLED` environment variable to the list of vars plugins you want to execute. By default, the [host\_group\_vars](../collections/ansible/builtin/host_group_vars_vars#host-group-vars-vars) plugin shipped with Ansible is enabled.
Starting in Ansible 2.10, you can use vars plugins in collections. All vars plugins in collections must be explicitly enabled and must use the fully qualified collection name in the format `namespace.collection_name.vars_plugin_name`.
```
[defaults]
vars_plugins_enabled = host_group_vars,namespace.collection_name.vars_plugin_name
```
Using vars plugins
------------------
By default, vars plugins are used on demand automatically after they are enabled.
Starting in Ansible 2.10, vars plugins can be made to run at specific times. `ansible-inventory` does not use these settings, and always loads vars plugins.
The global setting `RUN_VARS_PLUGINS` can be set in `ansible.cfg` using `run_vars_plugins` in the `defaults` section or by the `ANSIBLE_RUN_VARS_PLUGINS` environment variable. The default option, `demand`, runs any enabled vars plugins relative to inventory sources whenever variables are demanded by tasks. You can use the option `start` to run any enabled vars plugins relative to inventory sources after importing that inventory source instead.
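For example, to run enabled vars plugins when each inventory source is loaded rather than on demand (a small sketch of the setting described above):
```
[defaults]
run_vars_plugins = start
```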
You can also control vars plugin execution on a per-plugin basis for vars plugins that support the `stage` option. To run the [host\_group\_vars](../collections/ansible/builtin/host_group_vars_vars#host-group-vars-vars) plugin after importing inventory you can add the following to [ansible.cfg](../reference_appendices/config#ansible-configuration-settings):
```
[vars_host_group_vars]
stage = inventory
```
Plugin Lists
------------
You can use `ansible-doc -t vars -l` to see the list of available plugins. Use `ansible-doc -t vars <plugin name>` to see plugin-specific documentation and examples.
See also
[Action Plugins](action#action-plugins)
Ansible Action plugins
[Cache Plugins](cache#cache-plugins)
Ansible Cache plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Connection Plugins](connection#connection-plugins)
Ansible connection plugins
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Shell Plugins](shell#shell-plugins)
Ansible Shell plugins
[Strategy Plugins](strategy#strategy-plugins)
Ansible Strategy plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Connection Plugins
==================
* [`ssh` plugins](#ssh-plugins)
* [Adding connection plugins](#adding-connection-plugins)
* [Using connection plugins](#using-connection-plugins)
* [Plugin List](#plugin-list)
Connection plugins allow Ansible to connect to the target hosts so it can execute tasks on them. Ansible ships with many connection plugins, but only one can be used per host at a time.
By default, Ansible ships with several plugins. The most commonly used are the [paramiko SSH](https://docs.ansible.com/ansible/2.9/plugins/connection/paramiko_ssh.html#paramiko-ssh-connection "(in Ansible v2.9)"), native ssh (just called [ssh](../collections/ansible/builtin/ssh_connection#ssh-connection)), and [local](../collections/ansible/builtin/local_connection#local-connection) connection types. All of these can be used in playbooks and with **/usr/bin/ansible** to decide how you want to talk to remote machines.
The basics of these connection types are covered in the [getting started](../user_guide/intro_getting_started#intro-getting-started) section.
`ssh` plugins
--------------
Because ssh is the default protocol used in system administration and the protocol most used in Ansible, ssh options are included in the command line tools. See [ansible-playbook](../cli/ansible-playbook#ansible-playbook) for more details.
Adding connection plugins
-------------------------
You can extend Ansible to support other transports (such as SNMP or message bus) by dropping a custom plugin into the `connection_plugins` directory.
Using connection plugins
------------------------
You can set the connection plugin globally via [configuration](../reference_appendices/config#ansible-configuration-settings), at the command line (`-c`, `--connection`), as a [keyword](../reference_appendices/playbooks_keywords#playbook-keywords) in your play, or by setting a [variable](../user_guide/intro_inventory#behavioral-parameters), most often in your inventory. For example, for Windows machines you might want to set the [winrm](../collections/ansible/builtin/winrm_connection#winrm-connection) plugin as an inventory variable.
Most connection plugins can operate with minimal configuration. By default, they use the [inventory hostname](../collections/ansible/builtin/inventory_hostnames_lookup#inventory-hostnames-lookup) and default settings to find the target host.
Plugins are self-documenting. Each plugin should document its configuration options. The following are connection variables common to most connection plugins:
[ansible\_host](../user_guide/playbooks_vars_facts#magic-variables-and-hostvars)
The name of the host to connect to, if different from the [inventory](../user_guide/intro_inventory#intro-inventory) hostname.
[ansible\_port](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#faq-setting-users-and-ports)
The ssh port number, for [ssh](../collections/ansible/builtin/ssh_connection#ssh-connection) and [paramiko\_ssh](https://docs.ansible.com/ansible/2.9/plugins/connection/paramiko_ssh.html#paramiko-ssh-connection "(in Ansible v2.9)") it defaults to 22.
[ansible\_user](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#faq-setting-users-and-ports)
The default user name to use for logging in. Most plugins default to the ‘current user running Ansible’.
Each plugin might also have a specific version of a variable that overrides the general version. For example, `ansible_ssh_host` for the [ssh](../collections/ansible/builtin/ssh_connection#ssh-connection) plugin.
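A hedged inventory sketch (host name, address, port, and user are placeholders) combining these common connection variables:
```
# inventory.yml - illustrative values only
all:
  hosts:
    web01:
      ansible_host: 192.0.2.30 # real address differs from the inventory name
      ansible_port: 2222 # non-default ssh port
      ansible_user: deploy # log in as this user instead of the user running Ansible
```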
Plugin List
-----------
You can use `ansible-doc -t connection -l` to see the list of available plugins. Use `ansible-doc -t connection <plugin name>` to see detailed documentation and examples.
See also
[Working with Playbooks](../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Filters](../user_guide/playbooks_filters#playbooks-filters)
Jinja2 filter plugins
[Tests](../user_guide/playbooks_tests#playbooks-tests)
Jinja2 test plugins
[Lookups](../user_guide/playbooks_lookups#playbooks-lookups)
Jinja2 lookup plugins
[Vars Plugins](vars#vars-plugins)
Ansible vars plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Action Plugins
==============
* [Enabling action plugins](#enabling-action-plugins)
* [Using action plugins](#using-action-plugins)
* [Plugin list](#plugin-list)
Action plugins act in conjunction with [modules](../user_guide/modules#working-with-modules) to execute the actions required by playbook tasks. They usually execute automatically in the background doing prerequisite work before modules execute.
The ‘normal’ action plugin is used for modules that do not already have an action plugin.
Enabling action plugins
-----------------------
You can enable a custom action plugin by either dropping it into the `action_plugins` directory adjacent to your play, inside a role, or by putting it in one of the action plugin directory sources configured in [ansible.cfg](../reference_appendices/config#ansible-configuration-settings).
Using action plugins
--------------------
Action plugins are executed by default when an associated module is used; no action is required.
Plugin list
-----------
You cannot list action plugins directly; they show up as their counterpart modules:
Use `ansible-doc -l` to see the list of available modules. Use `ansible-doc <name>` to see specific documentation and examples; the documentation should note whether the module has a corresponding action plugin.
See also
[Cache Plugins](cache#cache-plugins)
Ansible Cache plugins
[Callback Plugins](callback#callback-plugins)
Ansible callback plugins
[Connection Plugins](connection#connection-plugins)
Ansible connection plugins
[Inventory Plugins](inventory#inventory-plugins)
Ansible inventory plugins
[Shell Plugins](shell#shell-plugins)
Ansible Shell plugins
[Strategy Plugins](strategy#strategy-plugins)
Ansible Strategy plugins
[Vars Plugins](vars#vars-plugins)
Ansible Vars plugins
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible.builtin.default – default Ansible screen output
=======================================================
Note
This callback plugin is part of `ansible-core` and included in all Ansible installations. In most cases, you can use the short plugin name `default` even without specifying the `collections:` keyword. However, we recommend you use the FQCN for easy linking to the plugin documentation and to avoid conflicting with other collections that may have the same callback plugin name.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
Synopsis
--------
* This is the default output callback for ansible-playbook.
Requirements
------------
The below requirements are needed on the local controller node that executes this callback.
* set as stdout in configuration
Parameters
----------
| Parameter | Choices/Defaults | Configuration | Comments |
| --- | --- | --- | --- |
| `check_mode_markers` (boolean, added in 2.9 of ansible.builtin) | **no** (default), yes | ini: `[defaults] check_mode_markers`; env: `ANSIBLE_CHECK_MODE_MARKERS` | Toggle to control displaying markers when running in check mode. The markers are `DRY RUN` at the beginning and end of playbook execution (when calling `ansible-playbook --check`) and `CHECK MODE` as a suffix at every play and task that is run in check mode. |
| `display_failed_stderr` (boolean, added in 2.7 of ansible.builtin) | **no** (default), yes | ini: `[defaults] display_failed_stderr`; env: `ANSIBLE_DISPLAY_FAILED_STDERR` | Toggle to control whether failed and unreachable tasks are displayed to STDERR (vs. STDOUT). |
| `display_ok_hosts` (boolean, added in 2.7 of ansible.builtin) | no, **yes** (default) | ini: `[defaults] display_ok_hosts`; env: `ANSIBLE_DISPLAY_OK_HOSTS` | Toggle to control displaying 'ok' task/host results in a task. |
| `display_skipped_hosts` (boolean) | no, **yes** (default) | ini: `[defaults] display_skipped_hosts`; env: `ANSIBLE_DISPLAY_SKIPPED_HOSTS` (the unprefixed `DISPLAY_SKIPPED_HOSTS` is deprecated and removed in version 2.12) | Toggle to control displaying skipped task/host results in a task. |
| `show_custom_stats` (boolean) | **no** (default), yes | ini: `[defaults] show_custom_stats`; env: `ANSIBLE_SHOW_CUSTOM_STATS` | Adds the custom stats set via the `set_stats` plugin to the play recap. |
| `show_per_host_start` (boolean, added in 2.9 of ansible.builtin) | **no** (default), yes | ini: `[defaults] show_per_host_start`; env: `ANSIBLE_SHOW_PER_HOST_START` | Adds output that shows when a task starts to execute for each host. |
| `show_task_path_on_failure` (boolean, added in 2.11 of ansible.builtin) | **no** (default), yes | ini: `[defaults] show_task_path_on_failure`; env: `ANSIBLE_SHOW_TASK_PATH_ON_FAILURE` | When a task fails, display the path to the file containing the failed task and the line number. This information is displayed automatically for every task when running with `-vv` or greater verbosity. |
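Putting a few of these together, a plausible `ansible.cfg` snippet (the values are chosen only for illustration) might read:
```
[defaults]
# quieter runs: hide skipped hosts, but show when each host starts a task
display_skipped_hosts = no
show_per_host_start = yes
check_mode_markers = yes
```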
Ansible for Network Automation
==============================
Ansible Network modules extend the benefits of simple, powerful, agentless automation to network administrators and teams. Ansible Network modules can configure your network stack, test and validate existing network state, and discover and correct network configuration drift.
If you’re new to Ansible, or new to using Ansible for network management, start with [Network Getting Started](getting_started/index#network-getting-started). If you are already familiar with network automation with Ansible, see [Network Advanced Topics](user_guide/index#network-advanced).
For documentation on using a particular network module, consult the [list of all network modules](https://docs.ansible.com/ansible/2.9/modules/list_of_network_modules.html#network-modules "(in Ansible v2.9)"). Network modules for various hardware are supported by different teams including the hardware vendors themselves, volunteers from the Ansible community, and the Ansible Network Team.
* [Network Getting Started](getting_started/index)
+ [Basic Concepts](getting_started/basic_concepts)
- [Control node](getting_started/basic_concepts#control-node)
- [Managed nodes](getting_started/basic_concepts#managed-nodes)
- [Inventory](getting_started/basic_concepts#inventory)
- [Collections](getting_started/basic_concepts#collections)
- [Modules](getting_started/basic_concepts#modules)
- [Tasks](getting_started/basic_concepts#tasks)
- [Playbooks](getting_started/basic_concepts#playbooks)
+ [How Network Automation is Different](getting_started/network_differences)
- [Execution on the control node](getting_started/network_differences#execution-on-the-control-node)
- [Multiple communication protocols](getting_started/network_differences#multiple-communication-protocols)
- [Collections organized by network platform](getting_started/network_differences#collections-organized-by-network-platform)
- [Privilege Escalation: `enable` mode, `become`, and `authorize`](getting_started/network_differences#privilege-escalation-enable-mode-become-and-authorize)
+ [Run Your First Command and Playbook](getting_started/first_playbook)
- [Prerequisites](getting_started/first_playbook#prerequisites)
- [Install Ansible](getting_started/first_playbook#install-ansible)
- [Establish a manual connection to a managed node](getting_started/first_playbook#establish-a-manual-connection-to-a-managed-node)
- [Run your first network Ansible command](getting_started/first_playbook#run-your-first-network-ansible-command)
- [Create and run your first network Ansible Playbook](getting_started/first_playbook#create-and-run-your-first-network-ansible-playbook)
- [Gathering facts from network devices](getting_started/first_playbook#gathering-facts-from-network-devices)
+ [Build Your Inventory](getting_started/first_inventory)
- [Basic inventory](getting_started/first_inventory#basic-inventory)
- [Add variables to the inventory](getting_started/first_inventory#add-variables-to-the-inventory)
- [Group variables within inventory](getting_started/first_inventory#group-variables-within-inventory)
- [Variable syntax](getting_started/first_inventory#variable-syntax)
- [Group inventory by platform](getting_started/first_inventory#group-inventory-by-platform)
- [Verifying the inventory](getting_started/first_inventory#verifying-the-inventory)
- [Protecting sensitive variables with `ansible-vault`](getting_started/first_inventory#protecting-sensitive-variables-with-ansible-vault)
+ [Use Ansible network roles](getting_started/network_roles)
- [Understanding roles](getting_started/network_roles#understanding-roles)
+ [Beyond the basics](getting_started/intermediate_concepts)
- [A typical Ansible filetree](getting_started/intermediate_concepts#a-typical-ansible-filetree)
- [Tracking changes to inventory and playbooks: source control with git](getting_started/intermediate_concepts#tracking-changes-to-inventory-and-playbooks-source-control-with-git)
+ [Working with network connection options](getting_started/network_connection_options)
- [Setting timeout options](getting_started/network_connection_options#setting-timeout-options)
+ [Resources and next steps](getting_started/network_resources)
- [Documents](getting_started/network_resources#documents)
- [Events (on video and in person)](getting_started/network_resources#events-on-video-and-in-person)
- [GitHub repos](getting_started/network_resources#github-repos)
- [IRC and Slack](getting_started/network_resources#irc-and-slack)
* [Network Advanced Topics](user_guide/index)
+ [Network Resource Modules](user_guide/network_resource_modules)
- [Network resource module states](user_guide/network_resource_modules#network-resource-module-states)
- [Using network resource modules](user_guide/network_resource_modules#using-network-resource-modules)
- [Example: Verifying the network device configuration has not changed](user_guide/network_resource_modules#example-verifying-the-network-device-configuration-has-not-changed)
- [Example: Acquiring and updating VLANs on a network device](user_guide/network_resource_modules#example-acquiring-and-updating-vlans-on-a-network-device)
+ [Ansible Network Examples](user_guide/network_best_practices_2.5)
- [Prerequisites](user_guide/network_best_practices_2.5#prerequisites)
- [Groups and variables in an inventory file](user_guide/network_best_practices_2.5#groups-and-variables-in-an-inventory-file)
- [Example 1: collecting facts and creating backup files with a playbook](user_guide/network_best_practices_2.5#example-1-collecting-facts-and-creating-backup-files-with-a-playbook)
- [Example 2: simplifying playbooks with network agnostic modules](user_guide/network_best_practices_2.5#example-2-simplifying-playbooks-with-network-agnostic-modules)
- [Implementation Notes](user_guide/network_best_practices_2.5#implementation-notes)
- [Troubleshooting](user_guide/network_best_practices_2.5#troubleshooting)
+ [Parsing semi-structured text with Ansible](user_guide/cli_parsing)
- [Understanding the CLI parser](user_guide/cli_parsing#understanding-the-cli-parser)
- [Parsing the CLI](user_guide/cli_parsing#parsing-the-cli)
- [Advanced use cases](user_guide/cli_parsing#advanced-use-cases)
+ [Network Debug and Troubleshooting Guide](user_guide/network_debug_troubleshooting)
- [How to troubleshoot](user_guide/network_debug_troubleshooting#how-to-troubleshoot)
- [Troubleshooting socket path issues](user_guide/network_debug_troubleshooting#troubleshooting-socket-path-issues)
- [Category “Unable to open shell”](user_guide/network_debug_troubleshooting#category-unable-to-open-shell)
- [Timeout issues](user_guide/network_debug_troubleshooting#timeout-issues)
- [Playbook issues](user_guide/network_debug_troubleshooting#playbook-issues)
- [Proxy Issues](user_guide/network_debug_troubleshooting#proxy-issues)
- [Miscellaneous Issues](user_guide/network_debug_troubleshooting#miscellaneous-issues)
+ [Working with command output and prompts in network modules](user_guide/network_working_with_command_output)
- [Conditionals in networking modules](user_guide/network_working_with_command_output#conditionals-in-networking-modules)
- [Handling prompts in network modules](user_guide/network_working_with_command_output#handling-prompts-in-network-modules)
+ [Ansible Network FAQ](user_guide/faq)
- [How can I improve performance for network playbooks?](user_guide/faq#how-can-i-improve-performance-for-network-playbooks)
- [Why is my output sometimes replaced with `********`?](user_guide/faq#why-is-my-output-sometimes-replaced-with)
- [Why do the `*_config` modules always return `changed=true` with abbreviated commands?](user_guide/faq#why-do-the-config-modules-always-return-changed-true-with-abbreviated-commands)
+ [Platform Options](user_guide/platform_index)
- [CloudEngine OS Platform Options](user_guide/platform_ce)
- [CNOS Platform Options](user_guide/platform_cnos)
- [Dell OS6 Platform Options](user_guide/platform_dellos6)
- [Dell OS9 Platform Options](user_guide/platform_dellos9)
- [Dell OS10 Platform Options](user_guide/platform_dellos10)
- [ENOS Platform Options](user_guide/platform_enos)
- [EOS Platform Options](user_guide/platform_eos)
- [ERIC\_ECCLI Platform Options](user_guide/platform_eric_eccli)
- [EXOS Platform Options](user_guide/platform_exos)
- [FRR Platform Options](user_guide/platform_frr)
- [ICX Platform Options](user_guide/platform_icx)
- [IOS Platform Options](user_guide/platform_ios)
- [IOS-XR Platform Options](user_guide/platform_iosxr)
- [IronWare Platform Options](user_guide/platform_ironware)
- [Junos OS Platform Options](user_guide/platform_junos)
- [Meraki Platform Options](user_guide/platform_meraki)
- [Pluribus NETVISOR Platform Options](user_guide/platform_netvisor)
- [NOS Platform Options](user_guide/platform_nos)
- [NXOS Platform Options](user_guide/platform_nxos)
- [RouterOS Platform Options](user_guide/platform_routeros)
- [SLX-OS Platform Options](user_guide/platform_slxos)
- [VOSS Platform Options](user_guide/platform_voss)
- [VyOS Platform Options](user_guide/platform_vyos)
- [WeOS 4 Platform Options](user_guide/platform_weos4)
- [Netconf enabled Platform Options](user_guide/platform_netconf_enabled)
- [Settings by Platform](user_guide/platform_index#settings-by-platform)
* [Network Developer Guide](dev_guide/index)
+ [Developing network resource modules](dev_guide/developing_resource_modules_network)
- [Understanding network and security resource modules](dev_guide/developing_resource_modules_network#understanding-network-and-security-resource-modules)
- [Developing network and security resource modules](dev_guide/developing_resource_modules_network#developing-network-and-security-resource-modules)
- [Examples](dev_guide/developing_resource_modules_network#examples)
- [Resource module structure and workflow](dev_guide/developing_resource_modules_network#resource-module-structure-and-workflow)
- [Running `ansible-test sanity` and `tox` on resource modules](dev_guide/developing_resource_modules_network#running-ansible-test-sanity-and-tox-on-resource-modules)
- [Testing resource modules](dev_guide/developing_resource_modules_network#testing-resource-modules)
- [Example: Unit testing Ansible network resource modules](dev_guide/developing_resource_modules_network#example-unit-testing-ansible-network-resource-modules)
+ [Developing network plugins](dev_guide/developing_plugins_network)
- [Network connection plugins](dev_guide/developing_plugins_network#network-connection-plugins)
- [Developing httpapi plugins](dev_guide/developing_plugins_network#developing-httpapi-plugins)
- [Developing NETCONF plugins](dev_guide/developing_plugins_network#developing-netconf-plugins)
- [Developing network\_cli plugins](dev_guide/developing_plugins_network#developing-network-cli-plugins)
- [Developing cli\_parser plugins in a collection](dev_guide/developing_plugins_network#developing-cli-parser-plugins-in-a-collection)
+ [Documenting new network platforms](dev_guide/documenting_modules_network)
- [Modifying the platform options table](dev_guide/documenting_modules_network#modifying-the-platform-options-table)
- [Adding a platform-specific options section](dev_guide/documenting_modules_network#adding-a-platform-specific-options-section)
- [Adding your new file to the table of contents](dev_guide/documenting_modules_network#adding-your-new-file-to-the-table-of-contents)
ansible SLX-OS Platform Options SLX-OS Platform Options
=======================
Extreme SLX-OS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. `httpapi` modules may be added in the future. This page offers details on how to use `ansible.netcommon.network_cli` on SLX-OS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/slxos.yml`](#example-cli-group-vars-slxos-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by SLX-OS |
| Returned Data Format | `stdout[0].` |
SLX-OS does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/slxos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.slxos
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
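For example, combining the first two points above, a direct (no bastion) key-based variant of `group_vars/slxos.yml` reduces to the following (a sketch, assuming an ssh-agent holds your key):
```
# Key-based, direct-access variant: no ansible_password, no ProxyCommand
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.slxos
ansible_user: myuser
```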
### Example CLI task
```
- name: Backup current switch config (slxos)
community.network.slxos_config:
backup: yes
register: backup_slxos_location
when: ansible_network_os == 'community.network.slxos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
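As a quick illustration of the Vault recommendation (a minimal sketch; `ansible-vault encrypt_string` prompts for a new Vault password), you can generate an inline vaulted value to paste into `group_vars`:
```
# Produces a !vault block suitable for use as the ansible_password value
ansible-vault encrypt_string 'mysecretpassword' --name 'ansible_password'
```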
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible Pluribus NETVISOR Platform Options Pluribus NETVISOR Platform Options
==================================
Pluribus NETVISOR Ansible is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. `httpapi` modules may be added in the future. This page offers details on how to use `ansible.netcommon.network_cli` on NETVISOR in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/netvisor.yml`](#example-cli-group-vars-netvisor-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by NETVISOR |
| Returned Data Format | `stdout[0].` |
Pluribus NETVISOR does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/netvisor.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.netvisor
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Create access list
community.network.pn_access_list:
pn_name: "foo"
pn_scope: "local"
state: "present"
register: acc_list
when: ansible_network_os == 'community.network.netvisor'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible IronWare Platform Options IronWare Platform Options
=========================
IronWare is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on IronWare in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/mlx.yml`](#example-cli-group-vars-mlx-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
`ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/mlx.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.ironware
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (ironware)
community.network.ironware_config:
backup: yes
register: backup_ironware_location
when: ansible_network_os == 'community.network.ironware'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible Network Advanced Topics Network Advanced Topics
=======================
Once you have mastered the basics of network automation with Ansible, as presented in [Network Getting Started](../getting_started/index#network-getting-started), use this guide to understand platform-specific details, optimization, and troubleshooting tips for Ansible for network automation.
**Who should use this guide?**
This guide is intended for network engineers using Ansible for automation. It covers advanced topics. If you understand networks and Ansible, this guide is for you. You may read through the entire guide if you choose, or use the links below to find the specific information you need.
If you’re new to Ansible, or new to using Ansible for network automation, start with the [Network Getting Started](../getting_started/index#network-getting-started).
Advanced Topics
* [Network Resource Modules](network_resource_modules)
+ [Network resource module states](network_resource_modules#network-resource-module-states)
+ [Using network resource modules](network_resource_modules#using-network-resource-modules)
+ [Example: Verifying the network device configuration has not changed](network_resource_modules#example-verifying-the-network-device-configuration-has-not-changed)
+ [Example: Acquiring and updating VLANs on a network device](network_resource_modules#example-acquiring-and-updating-vlans-on-a-network-device)
* [Ansible Network Examples](network_best_practices_2.5)
+ [Prerequisites](network_best_practices_2.5#prerequisites)
+ [Groups and variables in an inventory file](network_best_practices_2.5#groups-and-variables-in-an-inventory-file)
+ [Example 1: collecting facts and creating backup files with a playbook](network_best_practices_2.5#example-1-collecting-facts-and-creating-backup-files-with-a-playbook)
+ [Example 2: simplifying playbooks with network agnostic modules](network_best_practices_2.5#example-2-simplifying-playbooks-with-network-agnostic-modules)
+ [Implementation Notes](network_best_practices_2.5#implementation-notes)
+ [Troubleshooting](network_best_practices_2.5#troubleshooting)
* [Parsing semi-structured text with Ansible](cli_parsing)
+ [Understanding the CLI parser](cli_parsing#understanding-the-cli-parser)
+ [Parsing the CLI](cli_parsing#parsing-the-cli)
+ [Advanced use cases](cli_parsing#advanced-use-cases)
* [Network Debug and Troubleshooting Guide](network_debug_troubleshooting)
+ [How to troubleshoot](network_debug_troubleshooting#how-to-troubleshoot)
+ [Troubleshooting socket path issues](network_debug_troubleshooting#troubleshooting-socket-path-issues)
+ [Category “Unable to open shell”](network_debug_troubleshooting#category-unable-to-open-shell)
+ [Timeout issues](network_debug_troubleshooting#timeout-issues)
+ [Playbook issues](network_debug_troubleshooting#playbook-issues)
+ [Proxy Issues](network_debug_troubleshooting#proxy-issues)
+ [Miscellaneous Issues](network_debug_troubleshooting#miscellaneous-issues)
* [Working with command output and prompts in network modules](network_working_with_command_output)
+ [Conditionals in networking modules](network_working_with_command_output#conditionals-in-networking-modules)
+ [Handling prompts in network modules](network_working_with_command_output#handling-prompts-in-network-modules)
* [Ansible Network FAQ](faq)
+ [How can I improve performance for network playbooks?](faq#how-can-i-improve-performance-for-network-playbooks)
+ [Why is my output sometimes replaced with `********`?](faq#why-is-my-output-sometimes-replaced-with)
+ [Why do the `*_config` modules always return `changed=true` with abbreviated commands?](faq#why-do-the-config-modules-always-return-changed-true-with-abbreviated-commands)
* [Platform Options](platform_index)
+ [CloudEngine OS Platform Options](platform_ce)
+ [CNOS Platform Options](platform_cnos)
+ [Dell OS6 Platform Options](platform_dellos6)
+ [Dell OS9 Platform Options](platform_dellos9)
+ [Dell OS10 Platform Options](platform_dellos10)
+ [ENOS Platform Options](platform_enos)
+ [EOS Platform Options](platform_eos)
+ [ERIC\_ECCLI Platform Options](platform_eric_eccli)
+ [EXOS Platform Options](platform_exos)
+ [FRR Platform Options](platform_frr)
+ [ICX Platform Options](platform_icx)
+ [IOS Platform Options](platform_ios)
+ [IOS-XR Platform Options](platform_iosxr)
+ [IronWare Platform Options](platform_ironware)
+ [Junos OS Platform Options](platform_junos)
+ [Meraki Platform Options](platform_meraki)
+ [Pluribus NETVISOR Platform Options](platform_netvisor)
+ [NOS Platform Options](platform_nos)
+ [NXOS Platform Options](platform_nxos)
+ [RouterOS Platform Options](platform_routeros)
+ [SLX-OS Platform Options](platform_slxos)
+ [VOSS Platform Options](platform_voss)
+ [VyOS Platform Options](platform_vyos)
+ [WeOS 4 Platform Options](platform_weos4)
+ [Netconf enabled Platform Options](platform_netconf_enabled)
+ [Settings by Platform](platform_index#settings-by-platform)
ansible Network Resource Modules Network Resource Modules
========================
Ansible network resource modules simplify and standardize how you manage different network devices. Network devices separate configuration into sections (such as interfaces and VLANs) that apply to a network service. Ansible network resource modules take advantage of this to allow you to configure subsections or *resources* within the network device configuration. Network resource modules provide a consistent experience across different network devices.
* [Network resource module states](#network-resource-module-states)
* [Using network resource modules](#using-network-resource-modules)
* [Example: Verifying the network device configuration has not changed](#example-verifying-the-network-device-configuration-has-not-changed)
* [Example: Acquiring and updating VLANs on a network device](#example-acquiring-and-updating-vlans-on-a-network-device)
Network resource module states
------------------------------
You use the network resource modules by assigning a state to what you want the module to do. The resource modules support the following states:
merged
Ansible merges the on-device configuration with the provided configuration in the task.
replaced
Ansible replaces the on-device configuration subsection with the provided configuration subsection in the task.
overridden
Ansible overrides the on-device configuration for the resource with the provided configuration in the task. Use caution with this state as you could remove your access to the device (for example, by overriding the management interface configuration).
deleted
Ansible deletes the on-device configuration subsection and restores any default settings.
gathered
Ansible displays the resource details gathered from the network device and accessed with the `gathered` key in the result.
rendered
Ansible renders the provided configuration in the task in the device-native format (for example, Cisco IOS CLI). Ansible returns this rendered configuration in the `rendered` key in the result. Note this state does not communicate with the network device and can be used offline.
parsed
Ansible parses the configuration from the `running_config` option into Ansible structured data in the `parsed` key in the result. Note this does not gather the configuration from the network device so this state can be used offline.
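As an illustration of the offline states, a minimal sketch of a `parsed` task (assuming a hypothetical local `backup.cfg` file; the module follows the IOS example used below):
```
- name: Parse a saved configuration offline (no device connection needed)
  cisco.ios.ios_l3_interfaces:
    running_config: "{{ lookup('file', 'backup.cfg') }}"
    state: parsed
```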
Using network resource modules
------------------------------
This example configures the L3 interface resource on a Cisco IOS device, based on different state settings.
```
- name: configure l3 interface
cisco.ios.ios_l3_interfaces:
config: "{{ config }}"
state: <state>
```
The following example shows how an initial resource configuration changes with this task for different states.
Resource starting configuration:
```
interface loopback100
ip address 10.10.1.100 255.255.255.0
ipv6 address FC00:100/64
```
Task-provided configuration (YAML):
```
config:
- ipv6:
  - address: fc00::100/64
  - address: fc00::101/64
  name: loopback100
```
Final resource configuration on the device, by state:
*merged*
```
interface loopback100
ip address 10.10.1.100 255.255.255.0
ipv6 address FC00:100/64
ipv6 address FC00:101/64
```
*replaced*
```
interface loopback100
no ip address
ipv6 address FC00:100/64
ipv6 address FC00:101/64
```
*overridden*
Incorrect use case. This would remove all interfaces from the device (including the mgmt interface) except the configured loopback100.
*deleted*
```
interface loopback100
no ip address
```
Network resource modules return the following details:
* The *before* state - the existing resource configuration before the task was executed.
* The *after* state - the new resource configuration that exists on the network device after the task was executed.
* Commands - any commands configured on the device.
```
ok: [nxos101] =>
result:
after:
contact: IT Support
location: Room E, Building 6, Seattle, WA 98134
users:
- algorithm: md5
group: network-admin
localized_key: true
password: '0x73fd9a2cc8c53ed3dd4ed8f4ff157e69'
privacy_password: '0x73fd9a2cc8c53ed3dd4ed8f4ff157e69'
username: admin
before:
contact: IT Support
location: Room E, Building 5, Seattle HQ
users:
- algorithm: md5
group: network-admin
localized_key: true
password: '0x73fd9a2cc8c53ed3dd4ed8f4ff157e69'
privacy_password: '0x73fd9a2cc8c53ed3dd4ed8f4ff157e69'
username: admin
changed: true
commands:
- snmp-server location Room E, Building 6, Seattle, WA 98134
failed: false
```
Example: Verifying the network device configuration has not changed
-------------------------------------------------------------------
The following playbook uses the [arista.eos.eos\_l3\_interfaces](../../collections/arista/eos/eos_l3_interfaces_module#ansible-collections-arista-eos-eos-l3-interfaces-module) module to gather a subset of the network device configuration (Layer 3 interfaces only) and verifies the information is accurate and has not changed. This playbook passes the results of [arista.eos.eos\_facts](../../collections/arista/eos/eos_facts_module#ansible-collections-arista-eos-eos-facts-module) directly to the `arista.eos.eos_l3_interfaces` module.
```
- name: Example of facts being pushed right back to device.
hosts: arista
gather_facts: false
tasks:
- name: grab arista eos facts
arista.eos.eos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- name: Ensure that the IP address information is accurate.
arista.eos.eos_l3_interfaces:
config: "{{ ansible_network_resources['l3_interfaces'] }}"
register: result
- name: Ensure config did not change.
assert:
that: not result.changed
```
Example: Acquiring and updating VLANs on a network device
---------------------------------------------------------
This example shows how you can use resource modules to:
1. Retrieve the current configuration on a network device.
2. Save that configuration locally.
3. Update that configuration and apply it to the network device.
This example uses the `cisco.ios.ios_vlans` resource module to retrieve and update the VLANs on an IOS device.
1. Retrieve the current IOS VLAN configuration:
```
- name: Gather VLAN information as structured data
cisco.ios.ios_facts:
gather_subset:
- '!all'
- '!min'
gather_network_resources:
- 'vlans'
```
2. Store the VLAN configuration locally:
```
- name: Store VLAN facts to host_vars
copy:
content: "{{ ansible_network_resources | to_nice_yaml }}"
dest: "{{ playbook_dir }}/host_vars/{{ inventory_hostname }}"
```
3. Modify the stored file to update the VLAN configuration locally.
4. Merge the updated VLAN configuration with the existing configuration on the device:
```
- name: Make VLAN config changes by updating stored facts on the controller.
cisco.ios.ios_vlans:
config: "{{ vlans }}"
state: merged
tags: update_config
```
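For step 3, the locally stored facts file might look like the following after editing (a sketch with hypothetical VLAN names; the exact keys mirror what `ios_facts` returned for your device):
```
# host_vars/<inventory_hostname> after a local edit (illustrative values)
vlans:
- name: desktops
  vlan_id: 10
- name: servers
  vlan_id: 20
```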
See also
[Network Features in Ansible 2.9](https://www.ansible.com/blog/network-features-coming-soon-in-ansible-engine-2.9)
An introductory blog post on network resource modules.
[Deep Dive into Network Resource Modules](https://www.ansible.com/deep-dive-into-ansible-network-resource-module)
A deeper dive presentation into network resource modules.
ansible Dell OS6 Platform Options Dell OS6 Platform Options
=========================
The [dellemc.os6](https://github.com/ansible-collections/dellemc.os6) collection supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on OS6 in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/dellos6.yml`](#example-cli-group-vars-dellos6-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
`ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/dellos6.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: dellemc.os6.os6
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (dellos6)
dellemc.os6.os6_config:
backup: yes
  register: backup_dellos6_location
when: ansible_network_os == 'dellemc.os6.os6'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible CNOS Platform Options CNOS Platform Options
=====================
CNOS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on CNOS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/cnos.yml`](#example-cli-group-vars-cnos-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
`ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/cnos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.cnos
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve CNOS OS version
community.network.cnos_command:
commands: show version
when: ansible_network_os == 'community.network.cnos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible ERIC_ECCLI Platform Options ERIC\_ECCLI Platform Options
============================
Extreme ERIC\_ECCLI is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. This page offers details on how to use `ansible.netcommon.network_cli` on ERIC\_ECCLI in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/eric_eccli.yml`](#example-cli-group-vars-eric-eccli-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by ERIC\_ECCLI |
| Returned Data Format | `stdout[0].` |
ERIC\_ECCLI does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/eric_eccli.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.eric_eccli
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: run show version on remote devices (eric_eccli)
community.network.eric_eccli_command:
commands: show version
when: ansible_network_os == 'community.network.eric_eccli'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ansible Platform Options Platform Options
================
Some Ansible Network platforms support multiple connection types, privilege escalation (`enable` mode), or other options. The pages in this section offer standardized guides to understanding available options on each network platform. We welcome contributions from community-maintained platforms to this section.
Platform Options
* [CloudEngine OS Platform Options](platform_ce)
+ [Connections available](platform_ce#connections-available)
+ [Using CLI in Ansible](platform_ce#using-cli-in-ansible)
+ [Using NETCONF in Ansible](platform_ce#using-netconf-in-ansible)
+ [Notes](platform_ce#notes)
* [CNOS Platform Options](platform_cnos)
+ [Connections available](platform_cnos#connections-available)
+ [Using CLI in Ansible](platform_cnos#using-cli-in-ansible)
* [Dell OS6 Platform Options](platform_dellos6)
+ [Connections available](platform_dellos6#connections-available)
+ [Using CLI in Ansible](platform_dellos6#using-cli-in-ansible)
* [Dell OS9 Platform Options](platform_dellos9)
+ [Connections available](platform_dellos9#connections-available)
+ [Using CLI in Ansible](platform_dellos9#using-cli-in-ansible)
* [Dell OS10 Platform Options](platform_dellos10)
+ [Connections available](platform_dellos10#connections-available)
+ [Using CLI in Ansible](platform_dellos10#using-cli-in-ansible)
* [ENOS Platform Options](platform_enos)
+ [Connections available](platform_enos#connections-available)
+ [Using CLI in Ansible](platform_enos#using-cli-in-ansible)
* [EOS Platform Options](platform_eos)
+ [Connections available](platform_eos#connections-available)
+ [Using CLI in Ansible](platform_eos#using-cli-in-ansible)
+ [Using eAPI in Ansible](platform_eos#using-eapi-in-ansible)
* [ERIC\_ECCLI Platform Options](platform_eric_eccli)
+ [Connections available](platform_eric_eccli#connections-available)
+ [Using CLI in Ansible](platform_eric_eccli#using-cli-in-ansible)
* [EXOS Platform Options](platform_exos)
+ [Connections available](platform_exos#connections-available)
+ [Using CLI in Ansible](platform_exos#using-cli-in-ansible)
+ [Using EXOS-API in Ansible](platform_exos#using-exos-api-in-ansible)
* [FRR Platform Options](platform_frr)
+ [Connections available](platform_frr#connections-available)
+ [Using CLI in Ansible](platform_frr#using-cli-in-ansible)
* [ICX Platform Options](platform_icx)
+ [Connections available](platform_icx#connections-available)
+ [Using CLI in Ansible](platform_icx#using-cli-in-ansible)
* [IOS Platform Options](platform_ios)
+ [Connections available](platform_ios#connections-available)
+ [Using CLI in Ansible](platform_ios#using-cli-in-ansible)
* [IOS-XR Platform Options](platform_iosxr)
+ [Connections available](platform_iosxr#connections-available)
+ [Using CLI in Ansible](platform_iosxr#using-cli-in-ansible)
+ [Using NETCONF in Ansible](platform_iosxr#using-netconf-in-ansible)
* [IronWare Platform Options](platform_ironware)
+ [Connections available](platform_ironware#connections-available)
+ [Using CLI in Ansible](platform_ironware#using-cli-in-ansible)
* [Junos OS Platform Options](platform_junos)
+ [Connections available](platform_junos#connections-available)
+ [Using CLI in Ansible](platform_junos#using-cli-in-ansible)
+ [Using NETCONF in Ansible](platform_junos#using-netconf-in-ansible)
* [Meraki Platform Options](platform_meraki)
+ [Connections available](platform_meraki#connections-available)
* [Pluribus NETVISOR Platform Options](platform_netvisor)
+ [Connections available](platform_netvisor#connections-available)
+ [Using CLI in Ansible](platform_netvisor#using-cli-in-ansible)
* [NOS Platform Options](platform_nos)
+ [Connections available](platform_nos#connections-available)
+ [Using CLI in Ansible](platform_nos#using-cli-in-ansible)
* [NXOS Platform Options](platform_nxos)
+ [Connections available](platform_nxos#connections-available)
+ [Using CLI in Ansible](platform_nxos#using-cli-in-ansible)
+ [Using NX-API in Ansible](platform_nxos#using-nx-api-in-ansible)
+ [Cisco Nexus platform support matrix](platform_nxos#cisco-nexus-platform-support-matrix)
* [RouterOS Platform Options](platform_routeros)
+ [Connections available](platform_routeros#connections-available)
+ [Using CLI in Ansible](platform_routeros#using-cli-in-ansible)
* [SLX-OS Platform Options](platform_slxos)
+ [Connections available](platform_slxos#connections-available)
+ [Using CLI in Ansible](platform_slxos#using-cli-in-ansible)
* [VOSS Platform Options](platform_voss)
+ [Connections available](platform_voss#connections-available)
+ [Using CLI in Ansible](platform_voss#using-cli-in-ansible)
* [VyOS Platform Options](platform_vyos)
+ [Connections available](platform_vyos#connections-available)
+ [Using CLI in Ansible](platform_vyos#using-cli-in-ansible)
* [WeOS 4 Platform Options](platform_weos4)
+ [Connections available](platform_weos4#connections-available)
+ [Using CLI in Ansible](platform_weos4#using-cli-in-ansible)
* [Netconf enabled Platform Options](platform_netconf_enabled)
+ [Connections available](platform_netconf_enabled#connections-available)
+ [Using NETCONF in Ansible](platform_netconf_enabled#using-netconf-in-ansible)
Settings by Platform
--------------------
The following table shows the `ansible_connection:` settings available for each network OS.
| Network OS | `ansible_network_os:` | network\_cli | netconf | httpapi | local |
| --- | --- | --- | --- | --- | --- |
| [Arista EOS](https://galaxy.ansible.com/arista/eos) [[†]](#id3) | `arista.eos.eos` | ✓ | | ✓ | ✓ |
| [Ciena SAOS6](https://galaxy.ansible.com/ciena/saos6) | `ciena.saos6.saos6` | ✓ | | | ✓ |
| [Cisco ASA](https://galaxy.ansible.com/cisco/asa) [[†]](#id3) | `cisco.asa.asa` | ✓ | | | ✓ |
| [Cisco IOS](https://galaxy.ansible.com/cisco/ios) [[†]](#id3) | `cisco.ios.ios` | ✓ | | | ✓ |
| [Cisco IOS XR](https://galaxy.ansible.com/cisco/iosxr) [[†]](#id3) | `cisco.iosxr.iosxr` | ✓ | | | ✓ |
| [Cisco NX-OS](https://galaxy.ansible.com/cisco/nxos) [[†]](#id3) | `cisco.nxos.nxos` | ✓ | | ✓ | ✓ |
| [Cloudengine OS](https://galaxy.ansible.com/community/network) | `community.network.ce` | ✓ | ✓ | | ✓ |
| [Dell OS6](https://github.com/ansible-collections/dellemc.os6) | `dellemc.os6.os6` | ✓ | | | ✓ |
| [Dell OS9](https://github.com/ansible-collections/dellemc.os9) | `dellemc.os9.os9` | ✓ | | | ✓ |
| [Dell OS10](https://galaxy.ansible.com/dellemc/os10) | `dellemc.os10.os10` | ✓ | | | ✓ |
| [Ericsson ECCLI](https://galaxy.ansible.com/community/network) | `community.network.eric_eccli` | ✓ | | | ✓ |
| [Extreme EXOS](https://galaxy.ansible.com/community/network) | `community.network.exos` | ✓ | | ✓ | |
| [Extreme IronWare](https://galaxy.ansible.com/community/network) | `community.network.ironware` | ✓ | | | ✓ |
| [Extreme NOS](https://galaxy.ansible.com/community/network) | `community.network.nos` | ✓ | | | |
| [Extreme SLX-OS](https://galaxy.ansible.com/community/network) | `community.network.slxos` | ✓ | | | |
| [Extreme VOSS](https://galaxy.ansible.com/community/network) | `community.network.voss` | ✓ | | | |
| [F5 BIG-IP](https://galaxy.ansible.com/f5networks/f5_modules) | | | | | ✓ |
| [F5 BIG-IQ](https://galaxy.ansible.com/f5networks/f5_modules) | | | | | ✓ |
| [Junos OS](https://galaxy.ansible.com/junipernetworks/junos) [[†]](#id3) | `junipernetworks.junos.junos` | ✓ | ✓ | | ✓ |
| [Lenovo CNOS](https://galaxy.ansible.com/community/network) | `community.network.cnos` | ✓ | | | ✓ |
| [Lenovo ENOS](https://galaxy.ansible.com/community/network) | `community.network.enos` | ✓ | | | ✓ |
| [Meraki](https://galaxy.ansible.com/cisco/meraki) | | | | | ✓ |
| [MikroTik RouterOS](https://galaxy.ansible.com/community/network) | `community.network.routeros` | ✓ | | | |
| [Nokia SR OS](https://galaxy.ansible.com/community/network) | | | | | ✓ |
| [Pluribus Netvisor](https://galaxy.ansible.com/community/network) | `community.network.netvisor` | ✓ | | | |
| [Ruckus ICX](https://galaxy.ansible.com/community/network) | `community.network.icx` | ✓ | | | |
| [VyOS](https://galaxy.ansible.com/vyos/vyos) [[†]](#id3) | `vyos.vyos.vyos` | ✓ | | | ✓ |
| [Westermo WeOS 4](https://galaxy.ansible.com/community/network) | `community.network.weos4` | ✓ | | | |
| OS that supports Netconf [[†]](#id3) | `<network-os>` | | ✓ | | ✓ |
**[†]** Maintained by Ansible Network Team
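For example, a minimal inventory sketch applying one row of the table (hypothetical host and group names; `arista.eos.eos` over `network_cli` as listed above):
```
# Illustrative INI inventory: one EOS switch using network_cli
[eos_switches]
veos01

[eos_switches:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=arista.eos.eos
```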
ansible Network Debug and Troubleshooting Guide Network Debug and Troubleshooting Guide
=======================================
This section discusses how to debug and troubleshoot network modules in Ansible.
* [How to troubleshoot](#how-to-troubleshoot)
+ [Enabling Networking logging and how to read the logfile](#enabling-networking-logging-and-how-to-read-the-logfile)
+ [Enabling Networking device interaction logging](#enabling-networking-device-interaction-logging)
+ [Isolating an error](#isolating-an-error)
* [Troubleshooting socket path issues](#troubleshooting-socket-path-issues)
* [Category “Unable to open shell”](#category-unable-to-open-shell)
+ [Error: “[Errno -2] Name or service not known”](#error-errno-2-name-or-service-not-known)
+ [Error: “Authentication failed”](#error-authentication-failed)
+ [Error: “connecting to host <hostname> returned an error” or “Bad address”](#error-connecting-to-host-hostname-returned-an-error-or-bad-address)
+ [Error: “No authentication methods available”](#error-no-authentication-methods-available)
+ [Clearing Out Persistent Connections](#clearing-out-persistent-connections)
* [Timeout issues](#timeout-issues)
+ [Persistent connection idle timeout](#persistent-connection-idle-timeout)
+ [Command timeout](#command-timeout)
+ [Persistent connection retry timeout](#persistent-connection-retry-timeout)
+ [Timeout issue due to platform specific login menu with `network_cli` connection type](#timeout-issue-due-to-platform-specific-login-menu-with-network-cli-connection-type)
* [Playbook issues](#playbook-issues)
+ [Error: “Unable to enter configuration mode”](#error-unable-to-enter-configuration-mode)
* [Proxy Issues](#proxy-issues)
+ [delegate\_to vs ProxyCommand](#delegate-to-vs-proxycommand)
+ [Using bastion/jump host with netconf connection](#using-bastion-jump-host-with-netconf-connection)
+ [Enabling jump host setting](#enabling-jump-host-setting)
+ [Example ssh config file (~/.ssh/config)](#example-ssh-config-file-ssh-config)
* [Miscellaneous Issues](#miscellaneous-issues)
+ [Intermittent failure while using `ansible.netcommon.network_cli` connection type](#intermittent-failure-while-using-ansible-netcommon-network-cli-connection-type)
+ [Task failure due to mismatched error regex within command response using `ansible.netcommon.network_cli` connection type](#task-failure-due-to-mismatched-error-regex-within-command-response-using-ansible-netcommon-network-cli-connection-type)
+ [Intermittent failure while using `ansible.netcommon.network_cli` connection type due to slower network or remote target host](#intermittent-failure-while-using-ansible-netcommon-network-cli-connection-type-due-to-slower-network-or-remote-target-host)
How to troubleshoot
-------------------
Ansible network automation errors generally fall into one of the following categories:
Authentication issues
* Not correctly specifying credentials
* Remote device (network switch/router) not falling back to other authentication methods
* SSH key issues
Timeout issues
* Can occur when trying to pull a large amount of data
* May actually be masking an authentication issue
Playbook issues
* Use of `delegate_to`, instead of `ProxyCommand`. See [network proxy guide](#network-delegate-to-vs-proxycommand) for more information.
Warning
`unable to open shell`
The `unable to open shell` message means that the `ansible-connection` daemon has not been able to successfully talk to the remote network device. This generally means that there is an authentication issue. See the “Authentication and connection issues” section in this document for more information.
### Enabling Networking logging and how to read the logfile
**Platforms:** Any
Ansible includes logging to help diagnose and troubleshoot issues regarding Ansible Networking modules.
Because logging is very verbose, it is disabled by default. It can be enabled with the [`ANSIBLE_LOG_PATH`](../../reference_appendices/config#envvar-ANSIBLE_LOG_PATH) and [`ANSIBLE_DEBUG`](../../reference_appendices/config#envvar-ANSIBLE_DEBUG) options on the Ansible controller, that is, the machine running `ansible-playbook`.
Before running `ansible-playbook`, run the following commands to enable logging:
```
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with 4*v for connection level verbosity
ansible-playbook -vvvv ...
```
After Ansible has finished running you can inspect the log file which has been created on the ansible-controller:
```
less $ANSIBLE_LOG_PATH
2017-03-30 13:19:52,740 p=28990 u=fred | creating new control socket for host veos01:22 as user admin
2017-03-30 13:19:52,741 p=28990 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-03-30 13:19:52,741 p=28990 u=fred | current working directory is /home/fred/ansible/test/integration
2017-03-30 13:19:52,741 p=28990 u=fred | using connection plugin network_cli
...
2017-03-30 13:20:14,771 paramiko.transport userauth is OK
2017-03-30 13:20:15,283 paramiko.transport Authentication (keyboard-interactive) successful!
2017-03-30 13:20:15,302 p=28990 u=fred | ssh connection done, setting terminal
2017-03-30 13:20:15,321 p=28990 u=fred | ssh connection has completed successfully
2017-03-30 13:20:15,322 p=28990 u=fred | connection established to veos01 in 0:00:22.580626
```
From the log notice:
* `p=28990` Is the PID (Process ID) of the `ansible-connection` process
* `u=fred` Is the user running `ansible`, not the remote user you are attempting to connect as
* `creating new control socket for host veos01:22 as user admin` host:port as user
* `control socket path is` location on disk where the persistent connection socket is created
* `using connection plugin network_cli` Informs you that persistent connection is being used
* `connection established to veos01 in 0:00:22.580626` Time taken to obtain a shell on the remote device
Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the `pid` from the `creating new control socket for host` line you can search for other connection log entries:
```
grep "p=28990" $ANSIBLE_LOG_PATH
```
### Enabling Networking device interaction logging
**Platforms:** Any
Ansible includes logging of device interaction in the log file to help diagnose and troubleshoot issues regarding Ansible Networking modules. The messages are logged in the file pointed to by the `log_path` configuration option in the Ansible configuration file or by setting the [`ANSIBLE_LOG_PATH`](../../reference_appendices/config#envvar-ANSIBLE_LOG_PATH).
Warning
The device interaction messages consist of command executed on the target device and the returned response. Since this log data can contain sensitive information including passwords in plain text it is disabled by default. Additionally, in order to prevent accidental leakage of data, a warning will be shown on every task with this setting enabled, specifying which host has it enabled and where the data is being logged.
Be sure to fully understand the security implications of enabling this option. Device interaction logging can be enabled globally in the configuration file, by setting an environment variable, or on a per-task basis by passing a special variable to the task.
Before running `ansible-playbook` run the following commands to enable logging:
```
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
```
Enable device interaction logging for a given task
```
- name: get version information
cisco.ios.ios_command:
commands:
- show version
vars:
ansible_persistent_log_messages: True
```
To make this a global setting, add the following to your `ansible.cfg` file:
```
[persistent_connection]
log_messages = True
```
or enable the environment variable `ANSIBLE_PERSISTENT_LOG_MESSAGES`:
```
# Enable device interaction logging
export ANSIBLE_PERSISTENT_LOG_MESSAGES=True
```
If the task is failing on connection initialization itself, you should enable this option globally. If an individual task is failing intermittently this option can be enabled for that task itself to find the root cause.
After Ansible has finished running, you can inspect the log file that has been created on the Ansible controller.
Note
Be sure to fully understand the security implications of enabling this option, as it can log sensitive information to the log file, creating a security vulnerability.
### Isolating an error
**Platforms:** Any
As with any effort to troubleshoot it’s important to simplify the test case as much as possible.
For Ansible this can be done by ensuring you are only running against one remote device:
* Using `ansible-playbook --limit switch1.example.net...`
* Using an ad hoc `ansible` command
`ad hoc` refers to running Ansible to perform some quick command using `/usr/bin/ansible`, rather than the orchestration language, which is `/usr/bin/ansible-playbook`. In this case we can ensure connectivity by attempting to execute a single command on the remote device:
```
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
```
In the above example, we:
* connect to `switch1.example.net` specified in the inventory file `inventory`
* use the module `arista.eos.eos_command`
* run the command `?`
* connect using the username `admin`
* inform the `ansible` command to prompt for the SSH password by specifying `-k`
If you have SSH keys configured correctly, you don’t need to specify the `-k` parameter.
If the connection still fails you can combine it with the enable\_network\_logging parameter. For example:
```
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with ``-vvvv`` for connection level verbosity
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
```
Then review the log file and find the relevant error message in the rest of this document.
Troubleshooting socket path issues
----------------------------------
**Platforms:** Any
The `Socket path does not exist or cannot be found` and `Unable to connect to socket` messages indicate that the socket used to communicate with the remote network device is unavailable or does not exist.
For example:
```
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 115, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Socket path XX does not exist or cannot be found. See Troubleshooting socket path issues in the Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
```
or
```
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 123, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Unable to connect to socket XX. See Troubleshooting socket path issues in Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
```
Suggestions to resolve:
1. Verify that you have write access to the socket path described in the error message.
2. Follow the steps detailed in [enable network logging](#enable-network-logging).
If the identified error message from the log file is:
```
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
```
or
```
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
```
Follow the steps detailed in [timeout issues](#timeout-issues).
Category “Unable to open shell”
-------------------------------
**Platforms:** Any
The `unable to open shell` message means that the `ansible-connection` daemon has not been able to successfully talk to the remote network device. This generally means that there is an authentication issue. It is a “catch all” message, meaning you need to enable [logging](https://docs.ansible.com/ansible/2.8/user_guide/intro_getting_started.html#a-note-about-logging "(in Ansible v2.8)") to find the underlying issues.
For example:
```
TASK [prepare_eos_tests : enable cli on remote device] **************************************************
fatal: [veos01]: FAILED! => {"changed": false, "failed": true, "msg": "unable to open shell"}
```
or:
```
TASK [ios_system : configure name_servers] *************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell",
}
```
Suggestions to resolve:
Follow the steps detailed in [enable network logging](#enable-network-logging).
Once you’ve identified the error message from the log file, the specific solution can be found in the rest of this document.
### Error: “[Errno -2] Name or service not known”
**Platforms:** Any
This error indicates that the remote host you are trying to connect to cannot be reached.
For example:
```
2017-04-04 11:39:48,147 p=15299 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 11:39:48,147 p=15299 u=fred | current working directory is /home/fred/git/ansible-inc/stable-2.3/test/integration
2017-04-04 11:39:48,147 p=15299 u=fred | using connection plugin network_cli
2017-04-04 11:39:48,340 p=15299 u=fred | connecting to host veos01 returned an error
2017-04-04 11:39:48,340 p=15299 u=fred | [Errno -2] Name or service not known
```
Suggestions to resolve:
* If you are using the `provider:` options, ensure that the `host:` suboption is set correctly.
* If you are not using `provider:` or top-level arguments, ensure your inventory file is correct.
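For example, if the inventory name does not resolve in DNS, you can map it to a reachable address with `ansible_host` (the group name and address below are illustrative):
```
[eos]
veos01 ansible_host=192.0.2.10
```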
### Error: “Authentication failed”
**Platforms:** Any
This occurs if the credentials (username, passwords, or SSH keys) passed to `ansible-connection` (via `ansible` or `ansible-playbook`) cannot be used to connect to the remote device.
For example:
```
<ios01> ESTABLISH CONNECTION FOR USER: cisco on PORT 22 TO ios01
<ios01> Authentication failed.
```
Suggestions to resolve:
If you are specifying credentials via `password:` (either directly or via `provider:`) or the environment variable `ANSIBLE_NET_PASSWORD`, it is possible that `paramiko` (the Python SSH library that Ansible uses) is using SSH keys, and that the credentials you are specifying are therefore being ignored. To find out if this is the case, disable “look for keys”:
```
export ANSIBLE_PARAMIKO_LOOK_FOR_KEYS=False
```
To make this a permanent change, add the following to your `ansible.cfg` file:
```
[paramiko_connection]
look_for_keys = False
```
### Error: “connecting to host <hostname> returned an error” or “Bad address”
This may occur if the SSH fingerprint hasn’t been added to Paramiko’s (the Python SSH library) known hosts file.
When using persistent connections with Paramiko, the connection runs in a background process. If the host doesn’t already have a valid SSH key, by default Ansible will prompt to add the host key. This will cause connections running in background processes to fail.
For example:
```
2017-04-04 12:06:03,486 p=17981 u=fred | using connection plugin network_cli
2017-04-04 12:06:04,680 p=17981 u=fred | connecting to host veos01 returned an error
2017-04-04 12:06:04,682 p=17981 u=fred | (14, 'Bad address')
2017-04-04 12:06:33,519 p=17981 u=fred | number of connection attempts exceeded, unable to connect to control socket
2017-04-04 12:06:33,520 p=17981 u=fred | persistent_connect_interval=1, persistent_connect_retries=30
```
Suggestions to resolve:
Use `ssh-keyscan` to pre-populate the `known_hosts` file. You need to ensure that the keys are correct.
```
ssh-keyscan veos01
```
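To store the returned key, append the `ssh-keyscan` output to your `known_hosts` file (verify the fingerprint out of band first):
```
ssh-keyscan veos01 >> ~/.ssh/known_hosts
```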
or
You can tell Ansible to automatically accept the keys.
Environment variable method:
```
export ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD=True
ansible-playbook ...
```
`ansible.cfg` method:
```
[paramiko_connection]
host_key_auto_add = True
```
### Error: “No authentication methods available”
For example:
```
2017-04-04 12:19:05,670 p=18591 u=fred | creating new control socket for host veos01:None as user admin
2017-04-04 12:19:05,670 p=18591 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 12:19:05,670 p=18591 u=fred | current working directory is /home/fred/git/ansible-inc/ansible-workspace-2/test/integration
2017-04-04 12:19:05,670 p=18591 u=fred | using connection plugin network_cli
2017-04-04 12:19:06,606 p=18591 u=fred | connecting to host veos01 returned an error
2017-04-04 12:19:06,606 p=18591 u=fred | No authentication methods available
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
```
Suggestions to resolve:
No password or SSH key was supplied. Provide valid credentials when connecting, as shown below.
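For example, reusing the earlier ad hoc command, supply a username with `-u` and prompt for the SSH password with `-k`:
```
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory veos01 -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
```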
### Clearing Out Persistent Connections
**Platforms:** Any
In Ansible 2.3, persistent connection sockets are stored in `~/.ansible/pc` for all network devices. When an Ansible playbook runs, the persistent socket connection is displayed when verbose output is specified.
`<switch> socket_path: /home/fred/.ansible/pc/f64ddfa760`
To clear out a persistent connection before it times out (the default timeout is 30 seconds of inactivity), simply delete the socket file.
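For example, using the socket path shown in the verbose output above:
```
rm /home/fred/.ansible/pc/f64ddfa760
```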
Timeout issues
--------------
### Persistent connection idle timeout
By default, `ANSIBLE_PERSISTENT_CONNECT_TIMEOUT` is set to 30 (seconds). You may see the following error if this value is too low:
```
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
```
Suggestions to resolve:
Increase the value of the persistent connection idle timeout:
```
export ANSIBLE_PERSISTENT_CONNECT_TIMEOUT=60
```
To make this a permanent change, add the following to your `ansible.cfg` file:
```
[persistent_connection]
connect_timeout = 60
```
### Command timeout
By default, `ANSIBLE_PERSISTENT_COMMAND_TIMEOUT` is set to 30 (seconds). Prior versions of Ansible had this value set to 10 seconds by default. You may see the following error if this value is too low:
```
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
```
Suggestions to resolve:
* Option 1 (Global command timeout setting): Increase the value of the command timeout in the configuration file or by setting an environment variable.
```
export ANSIBLE_PERSISTENT_COMMAND_TIMEOUT=60
```
To make this a permanent change, add the following to your `ansible.cfg` file:
```
[persistent_connection]
command_timeout = 60
```
* Option 2 (Per task command timeout setting): Increase the command timeout on a per-task basis. All network modules support a timeout value that can be set per task. The timeout value controls the amount of time in seconds before the task fails if the command has not returned.
For the `local` connection type:
```
- name: save running-config
cisco.ios.ios_command:
commands: copy running-config startup-config
provider: "{{ cli }}"
timeout: 30
```
For the `ansible.netcommon.network_cli` and `ansible.netcommon.netconf` connection types:
```
- name: save running-config
cisco.ios.ios_command:
commands: copy running-config startup-config
vars:
ansible_command_timeout: 60
```
Some operations take longer than the default 30 seconds to complete. One good example is saving the current running config on IOS devices to startup config. In this case, changing the timeout value from the default 30 seconds to 60 seconds will prevent the task from failing before the command completes successfully.
### Persistent connection retry timeout
By default, `ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT` is set to 15 (seconds). You may see the following error if this value is too low:
```
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
```
Suggestions to resolve:
Increase the value of the persistent connection retry timeout. Note: This value should be greater than the SSH timeout value (the timeout value under the defaults section in the configuration file) and less than the value of the persistent connection idle timeout (`connect_timeout`).
```
export ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT=30
```
To make this a permanent change, add the following to your `ansible.cfg` file:
```
[persistent_connection]
connect_retry_timeout = 30
```
### Timeout issue due to platform specific login menu with `network_cli` connection type
In Ansible 2.9 and later, the `network_cli` connection plugin provides configuration options to handle platform-specific login menus. These options can be set as group/host or task variables.
Example: Handle single login menu prompts with host variables
```
$cat host_vars/<hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Connect to a host"
ansible_terminal_initial_answer:
- "3"
```
Example: Handle remote host multiple login menu prompts with host variables
```
$cat host_vars/<inventory-hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Press any key to enter main menu"
- "Connect to a host"
ansible_terminal_initial_answer:
- "\\r"
- "3"
ansible_terminal_initial_prompt_checkall: True
```
To handle multiple login menu prompts:
* The values of `ansible_terminal_initial_prompt` and `ansible_terminal_initial_answer` should be a list.
* The prompt sequence should match the answer sequence.
* The value of `ansible_terminal_initial_prompt_checkall` should be set to `True`.
Note
If all of the prompts in the sequence are not received from the remote host at connection initialization time, the connection will fail with a timeout.
Playbook issues
---------------
This section details issues caused by the playbook itself.
### Error: “Unable to enter configuration mode”
**Platforms:** Arista EOS and Cisco IOS
This occurs when you attempt to run a task that requires privileged mode in a user mode shell.
For example:
```
TASK [ios_system : configure name_servers] *****************************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to enter configuration mode",
}
```
Suggestions to resolve:
Use `connection: ansible.netcommon.network_cli` and `become: yes`.
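A minimal sketch of such a task (the name server address is illustrative):
```
- name: configure name servers
  cisco.ios.ios_config:
    lines:
      - ip name-server 192.0.2.53
  connection: ansible.netcommon.network_cli
  become: yes
  become_method: enable
```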
Proxy Issues
------------
### delegate\_to vs ProxyCommand
In order to use a bastion or intermediate jump host to connect to network devices over `cli` transport, network modules support the use of `ProxyCommand`.
To use `ProxyCommand`, configure the proxy settings in the Ansible inventory file to specify the proxy host.
```
[nxos]
nxos01
nxos02
[nxos:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
With the configuration above, simply build and run the playbook as normal with no additional changes necessary. The network module will now connect to the network device by first connecting to the host specified in `ansible_ssh_common_args`, which is `bastion01` in the above example.
You can also set the proxy target for all hosts by using environment variables.
```
export ANSIBLE_SSH_ARGS='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
### Using bastion/jump host with netconf connection
### Enabling jump host setting
Bastion/jump host with netconf connection can be enabled by:
* Setting Ansible variable `ansible_netconf_ssh_config` either to `True` or custom ssh config file path
* Setting environment variable `ANSIBLE_NETCONF_SSH_CONFIG` to `True` or custom ssh config file path
* Setting `ssh_config = 1` or `ssh_config = <ssh-file-path>` under `netconf_connection` section
If the configuration variable is set to `1`, the `ProxyCommand` and other SSH variables are read from the default SSH config file (`~/.ssh/config`).
If the configuration variable is set to a file path, the `ProxyCommand` and other SSH variables are read from that custom SSH config file.
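A minimal `ansible.cfg` sketch (the custom file path is illustrative):
```
[netconf_connection]
ssh_config = /home/user/custom_ssh_config
```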
### Example ssh config file (~/.ssh/config)
```
Host jumphost
HostName jumphost.domain.name.com
User jumphost-user
IdentityFile "/path/to/ssh-key.pem"
Port 22
# Note: Due to the way that Paramiko reads the SSH Config file,
# you need to specify the NETCONF port that the host uses.
# In other words, it does not automatically use ansible_port
# As a result you need either:
Host junos01
HostName junos01
ProxyCommand ssh -W %h:22 jumphost
# OR
Host junos01
HostName junos01
ProxyCommand ssh -W %h:830 jumphost
# Depending on the netconf port used.
```
Example Ansible inventory file
```
[junos]
junos01
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault...
```
Note
Using `ProxyCommand` with passwords via variables
By design, SSH doesn’t support providing passwords via environment variables. This is done to prevent secrets from leaking out, for example in `ps` output.
We recommend using SSH Keys, and if needed an ssh-agent, rather than passwords, where ever possible.
Miscellaneous Issues
--------------------
### Intermittent failure while using `ansible.netcommon.network_cli` connection type
If the command prompt received in response is not matched correctly within the `ansible.netcommon.network_cli` connection plugin, the task might fail intermittently with a truncated response or with the error message `operation requires privilege escalation`. Starting in 2.7.1, a buffer read timer was added to ensure prompts are matched properly and a complete response is sent in the output. The timer default value is 0.2 seconds and can be adjusted per task or set globally, in seconds.
Example Per task timer setting
```
- name: gather ios facts
cisco.ios.ios_facts:
gather_subset: all
register: result
vars:
ansible_buffer_read_timeout: 2
```
To make this a global setting, add the following to your `ansible.cfg` file:
```
[persistent_connection]
buffer_read_timeout = 2
```
This per-command timer delay can be disabled by setting the value to zero.
### Task failure due to mismatched error regex within command response using `ansible.netcommon.network_cli` connection type
In Ansible 2.9 and later, the `ansible.netcommon.network_cli` connection plugin provides configuration options for the stdout and stderr regexes that identify whether a command execution response is a normal response or an error response. These options can be set as group/host variables or as task variables.
Example: For mismatched error response
```
- name: fetch logs from remote host
cisco.ios.ios_command:
commands:
- show logging
```
Playbook run output:
```
TASK [first fetch logs] ********************************************************
fatal: [ios01]: FAILED! => {
"changed": false,
"msg": "RF Name:\r\n\r\n <--nsip-->
\"IPSEC-3-REPLAY_ERROR: Test log\"\r\n*Aug 1 08:36:18.483: %SYS-7-USERLOG_DEBUG:
Message from tty578(user id: ansible): test\r\nan-ios-02#"}
```
Suggestions to resolve:
Modify the error regex for individual task.
```
- name: fetch logs from remote host
cisco.ios.ios_command:
commands:
- show logging
vars:
ansible_terminal_stderr_re:
- pattern: 'connection timed out'
flags: 're.I'
```
The terminal plugin regex options `ansible_terminal_stderr_re` and `ansible_terminal_stdout_re` have `pattern` and `flags` as keys. The value of the `flags` key should be a value that is accepted by the `re.compile` python method.
### Intermittent failure while using `ansible.netcommon.network_cli` connection type due to slower network or remote target host
In Ansible 2.9 and later, the `ansible.netcommon.network_cli` connection plugin provides a configuration option to control the number of attempts to connect to a remote host. The default number of attempts is three. After each retry, the delay between attempts doubles (in seconds) until either the maximum number of attempts is exhausted or the `persistent_command_timeout` or `persistent_connect_timeout` timer is triggered.
To make this a global setting, add the following to your `ansible.cfg` file:
```
[persistent_connection]
network_cli_retries = 5
```
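The retries can also be set per host or task, assuming your `ansible.netcommon` version exposes the `ansible_network_cli_retries` variable for this option:
```
- name: gather ios facts over a slow link
  cisco.ios.ios_facts:
    gather_subset: all
  vars:
    ansible_network_cli_retries: 5
```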
Working with command output and prompts in network modules
==========================================================
* [Conditionals in networking modules](#conditionals-in-networking-modules)
* [Handling prompts in network modules](#handling-prompts-in-network-modules)
Conditionals in networking modules
----------------------------------
Ansible allows you to use conditionals to control the flow of your playbooks. Ansible networking command modules use the following unique conditional statements.
* `eq` - Equal
* `neq` - Not equal
* `gt` - Greater than
* `ge` - Greater than or equal
* `lt` - Less than
* `le` - Less than or equal
* `contains` - Object contains specified item
Conditional statements evaluate the results from the commands that are executed remotely on the device. Once the task executes the command set, the `wait_for` argument can be used to evaluate the results before returning control to the Ansible playbook.
For example:
```
---
- name: wait for interface to be admin enabled
arista.eos.eos_command:
commands:
- show interface Ethernet4 | json
wait_for:
- "result[0].interfaces.Ethernet4.interfaceStatus eq connected"
```
In the above example task, the command `show interface Ethernet4 | json` is executed on the remote device and the results are evaluated. If the path `result[0].interfaces.Ethernet4.interfaceStatus` is not equal to “connected”, then the command is retried. This process continues until either the condition is satisfied or the number of retries has expired (by default, this is 10 retries at 1 second intervals).
The commands module can also evaluate more than one set of command results in an interface. For instance:
```
---
- name: wait for interfaces to be admin enabled
arista.eos.eos_command:
commands:
- show interface Ethernet4 | json
- show interface Ethernet5 | json
wait_for:
- "result[0].interfaces.Ethernet4.interfaceStatus eq connected"
- "result[1].interfaces.Ethernet5.interfaceStatus eq connected"
```
In the above example, two commands are executed on the remote device, and the results are evaluated. By specifying the result index value (0 or 1), the correct result output is checked against the conditional.
The `wait_for` argument must always start with result and then the command index in `[]`, where `0` is the first command in the commands list, `1` is the second command, `2` is the third and so on.
Handling prompts in network modules
-----------------------------------
Network devices may require that you answer a prompt before performing a change on the device. Individual network modules such as [cisco.ios.ios\_command](../../collections/cisco/ios/ios_command_module#ansible-collections-cisco-ios-ios-command-module) and [cisco.nxos.nxos\_command](../../collections/cisco/nxos/nxos_command_module#ansible-collections-cisco-nxos-nxos-command-module) can handle this with a `prompt` parameter.
Note
`prompt` is a Python regex. If you add special characters such as `?` in the `prompt` value, the prompt won’t match and you will get a timeout. To avoid this, ensure that the `prompt` value is a Python regex that matches the actual device prompt. Any special characters must be handled correctly in the `prompt` regex.
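A hedged sketch using `cisco.ios.ios_command` with a literal `?` escaped in the `prompt` regex (the file name and prompt text are illustrative):
```
- name: delete a file and answer the confirmation prompt
  cisco.ios.ios_command:
    commands:
      - command: "delete flash:/unneeded.bin"
        prompt: 'Delete filename \[unneeded\.bin\]\?'
        answer: "\r"
```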
You can also use the [ansible.netcommon.cli\_command](../../collections/ansible/netcommon/cli_command_module#ansible-collections-ansible-netcommon-cli-command-module) to handle multiple prompts.
```
---
- name: multiple prompt, multiple answer (mandatory check for all prompts)
ansible.netcommon.cli_command:
command: "copy sftp sftp://user@host//user/test.img"
check_all: True
prompt:
- "Confirm download operation"
- "Password"
- "Do you want to change that to the standby image"
answer:
- 'y'
- <password>
- 'y'
```
You must list the prompt and the answers in the same order (that is, prompt[0] is answered by answer[0]).
In the above example, `check_all: True` ensures that the task gives the matching answer to each prompt. Without that setting, a task with multiple prompts would give the first answer to every prompt.
In the following example, the second answer would be ignored and `y` would be the answer given to both prompts. That is, this task only works because both answers are identical. Also notice again that `prompt` must be a Python regex, which is why the `?` is escaped in the first prompt.
```
---
- name: reboot ios device
ansible.netcommon.cli_command:
command: reload
prompt:
- Save\?
- confirm
answer:
- y
- y
```
See also
[Rebooting network devices with Ansible](https://www.ansible.com/blog/rebooting-network-devices-with-ansible)
Examples using `wait_for`, `wait_for_connection`, and `prompt` for network devices.
[Deep dive on cli\_command](https://www.ansible.com/blog/deep-dive-on-cli-command-for-network-automation)
Detailed overview of how to use the `cli_command`.
EOS Platform Options
====================
The [Arista EOS](https://galaxy.ansible.com/arista/eos) collection supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/eos.yml`](#example-cli-group-vars-eos-yml)
+ [Example CLI task](#example-cli-task)
* [Using eAPI in Ansible](#using-eapi-in-ansible)
+ [Enabling eAPI](#enabling-eapi)
+ [Example eAPI `group_vars/eos.yml`](#example-eapi-group-vars-eos-yml)
+ [Example eAPI task](#example-eapi-task)
Connections available
---------------------
| | CLI | eAPI |
| --- | --- | --- |
| Protocol | SSH | HTTP(S) |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password | uses HTTPS certificates if present |
| Indirect Access | via a bastion (jump host) | via a web proxy |
| Connection Settings | `ansible_connection:` `ansible.netcommon.network_cli` | `ansible_connection:` `ansible.netcommon.httpapi` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` | supported: `httpapi` uses `ansible_become: yes` with `ansible_become_method: enable` |
| Returned Data Format | `stdout[0].` | `stdout[0].messages[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` or `ansible_connection: ansible.netcommon.httpapi` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/eos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (eos)
arista.eos.eos_config:
backup: yes
register: backup_eos_location
when: ansible_network_os == 'arista.eos.eos'
```
Using eAPI in Ansible
---------------------
### Enabling eAPI
Before you can use eAPI to connect to a switch, you must enable eAPI. To enable eAPI on a new switch with Ansible, use the `arista.eos.eos_eapi` module through the CLI connection. Set up `group_vars/eos.yml` just like in the CLI example above, then run a playbook task like this:
```
- name: Enable eAPI
arista.eos.eos_eapi:
enable_http: yes
enable_https: yes
become: true
become_method: enable
when: ansible_network_os == 'arista.eos.eos'
```
You can find more options for enabling HTTP/HTTPS connections in the [arista.eos.eos\_eapi](../../collections/arista/eos/eos_eapi_module#ansible-collections-arista-eos-eos-eapi-module) module documentation.
Once eAPI is enabled, change your `group_vars/eos.yml` to use the eAPI connection.
### Example eAPI `group_vars/eos.yml`
```
ansible_connection: ansible.netcommon.httpapi
ansible_network_os: arista.eos.eos
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
proxy_env:
http_proxy: http://proxy.example.com:8080
```
* If you are accessing your host directly (not through a web proxy) you can remove the `proxy_env` configuration.
* If you are accessing your host through a web proxy using `https`, change `http_proxy` to `https_proxy`.
### Example eAPI task
```
- name: Backup current switch config (eos)
arista.eos.eos_config:
backup: yes
register: backup_eos_location
environment: "{{ proxy_env }}"
when: ansible_network_os == 'arista.eos.eos'
```
In this example the `proxy_env` variable defined in `group_vars` gets passed to the `environment` option of the module in the task.
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
IOS Platform Options
====================
The [Cisco IOS](https://galaxy.ansible.com/cisco/ios) collection supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on IOS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/ios.yml`](#example-cli-group-vars-ios-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/ios.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (ios)
cisco.ios.ios_config:
backup: yes
register: backup_ios_location
when: ansible_network_os == 'cisco.ios.ios'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Dell OS10 Platform Options
==========================
The [dellemc.os10](https://galaxy.ansible.com/dellemc_networking/os10) collection supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on OS10 in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/dellos10.yml`](#example-cli-group-vars-dellos10-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/dellos10.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: dellemc.os10.os10
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (dellos10)
dellemc.os10.os10_config:
backup: yes
register: backup_dellos10_location
when: ansible_network_os == 'dellemc.os10.os10'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Parsing semi-structured text with Ansible
=========================================
The [cli\_parse](../../collections/ansible/netcommon/cli_parse_module#ansible-collections-ansible-netcommon-cli-parse-module) module parses semi-structured data such as network configurations into structured data to allow programmatic use of the data from that device. You can pull information from a network device and update a CMDB in one playbook. Use cases include automated troubleshooting, creating dynamic documentation, updating IPAM (IP address management) tools and so on.
* [Understanding the CLI parser](#understanding-the-cli-parser)
+ [Why parse the text?](#why-parse-the-text)
+ [When not to parse the text](#when-not-to-parse-the-text)
* [Parsing the CLI](#parsing-the-cli)
+ [Parsing with the native parsing engine](#parsing-with-the-native-parsing-engine)
- [Networking example](#networking-example)
- [Linux example](#linux-example)
+ [Parsing JSON](#parsing-json)
+ [Parsing with ntc\_templates](#parsing-with-ntc-templates)
+ [Parsing with pyATS](#parsing-with-pyats)
+ [Parsing with textfsm](#parsing-with-textfsm)
+ [Parsing with TTP](#parsing-with-ttp)
+ [Parsing with JC](#parsing-with-jc)
+ [Converting XML](#converting-xml)
* [Advanced use cases](#advanced-use-cases)
+ [Provide a full template path](#provide-a-full-template-path)
+ [Provide command to parser different than the command run](#provide-command-to-parser-different-than-the-command-run)
+ [Provide a custom OS value](#provide-a-custom-os-value)
+ [Parse existing text](#parse-existing-text)
Understanding the CLI parser
----------------------------
The [ansible.netcommon](https://galaxy.ansible.com/ansible/netcommon) collection version 1.2.0 or later includes the [cli\_parse](../../collections/ansible/netcommon/cli_parse_module#ansible-collections-ansible-netcommon-cli-parse-module) module that can run CLI commands and parse the semi-structured text output. You can use the `cli_parse` module on a device, host, or platform that only supports a command-line interface and the commands issued return semi-structured text. The `cli_parse` module can either run a CLI command on a device and return a parsed result or can simply parse any text document. The `cli_parse` module includes cli\_parser plugins to interface with a variety of parsing engines.
### Why parse the text?
Parsing semi-structured data such as network configurations into structured data allows programmatic use of the data from that device. Use cases include automated troubleshooting, creating dynamic documentation, updating IPAM (IP address management) tools and so on. You may prefer to do this with Ansible natively to take advantage of native Ansible constructs such as:
* The `when` clause to conditionally run other tasks or roles
* The `assert` module to check configuration and operational state compliance
* The `template` module to generate reports about configuration and operational state information
* Templates and `command` or `config` modules to generate host, device, or platform commands or configuration
* The current platform `facts` modules to supplement native facts information
By parsing semi-structured text into Ansible native data structures, you can take full advantage of Ansible’s network modules and plugins.
### When not to parse the text
You should not parse semi-structured text when:
* The device, host, or platform has a RESTAPI and returns JSON.
* Existing Ansible facts modules already return the desired data.
* Ansible network resource modules exist for configuration management of the device and resource.
Parsing the CLI
---------------
The `cli_parse` module includes the following cli\_parsing plugins:
`native`
The native parsing engine is built into Ansible and requires no additional Python libraries
`xml`
Convert XML to an Ansible native data structure
`textfsm`
A Python module which implements a template-based state machine for parsing semi-formatted text
`ntc_templates`
Predefined `textfsm` templates packages supporting a variety of platforms and commands
`ttp`
A library for semi-structured text parsing using templates, with added capabilities to simplify the process
`pyats`
Uses the parsers included with the Cisco Test Automation & Validation Solution
`jc`
A python module that converts the output of dozens of popular Linux/UNIX/macOS/Windows commands and file types to python dictionaries or lists of dictionaries. Note: this filter plugin can be found in the `community.general` collection.
`json`
Converts JSON output at the CLI to an Ansible native data structure
Although Ansible contains a number of plugins that can convert XML to Ansible native data structures, the `cli_parse` module runs the command on devices that return XML and returns the converted data in a single task.
Because `cli_parse` uses a plugin based architecture, it can use additional parsing engines from any Ansible collection.
Note
The `ansible.netcommon.native` and `ansible.netcommon.json` parsing engines are fully supported with a Red Hat Ansible Automation Platform subscription. Red Hat Ansible Automation Platform subscription support is limited to the use of the `ntc_templates`, pyATS, `textfsm`, and `xmltodict` public APIs as documented.
### Parsing with the native parsing engine
The native parsing engine is included with the `cli_parse` module. It uses data captured using regular expressions to populate the parsed data structure. The native parsing engine requires a YAML template file to parse the command output.
#### Networking example
This example uses the output of a network device command and applies a native template to produce an output in Ansible structured data format.
The `show interface` command output from the network device looks as follows:
```
Ethernet1/1 is up
admin state is up, Dedicated Interface
Hardware: 100/1000/10000 Ethernet, address: 5254.005a.f8bd (bia 5254.005a.f8bd)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, medium is broadcast
Port mode is access
full-duplex, auto-speed
Beacon is turned off
Auto-Negotiation is turned on FEC mode is Auto
Input flow-control is off, output flow-control is off
Auto-mdix is turned off
Switchport monitor is off
EtherType is 0x8100
EEE (efficient-ethernet) : n/a
Last link flapped 4week(s) 6day(s)
Last clearing of "show interface" counters never
<...>
```
Create the native template to match this output and store it as `templates/nxos_show_interface.yaml`:
```
---
- example: Ethernet1/1 is up
getval: '(?P<name>\S+) is (?P<oper_state>\S+)'
result:
"{{ name }}":
name: "{{ name }}"
state:
operating: "{{ oper_state }}"
shared: true
- example: admin state is up, Dedicated Interface
getval: 'admin state is (?P<admin_state>\S+),'
result:
"{{ name }}":
name: "{{ name }}"
state:
admin: "{{ admin_state }}"
- example: " Hardware: Ethernet, address: 5254.005a.f8b5 (bia 5254.005a.f8b5)"
getval: '\s+Hardware: (?P<hardware>.*), address: (?P<mac>\S+)'
result:
"{{ name }}":
hardware: "{{ hardware }}"
mac_address: "{{ mac }}"
```
This native parser template is structured as a list of parsers, each containing the following key-value pairs:
* `example` - An example line of the text line to be parsed
* `getval` - A regular expression using named capture groups to store the extracted data
* `result` - A data tree, populated as a template, from the parsed data
* `shared` - (optional) The shared key makes the parsed values available to the rest of the parser entries until matched again.
The following example task uses `cli_parse` with the native parser and the example template above to parse the `show interface` command from a Cisco NXOS device:
```
- name: "Run command and parse with native"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.native
set_fact: interfaces
```
Taking a deeper dive into this task:
* The `command` option provides the command you want to run on the device or host. Alternately, you can provide text from a previous command with the `text` option instead.
* The `parser` option provides information specific to the parser engine.
* The `name` suboption provides the fully qualified collection name (FQCN) of the parsing engine (`ansible.netcommon.native`).
* The `cli_parse` module, by default, looks for the template in the templates directory as `{{ short_os }}_{{ command }}.yaml`.
+ The `short_os` in the template filename is derived from either the host `ansible_network_os` or `ansible_distribution`.
+ Spaces in the network or host command are replaced with `_` in the `command` portion of the template filename. In this example, the `show interface` network CLI command becomes `show_interface` in the filename.
Note
`ansible.netcommon.native` parsing engine is fully supported with a Red Hat Ansible Automation Platform subscription.
Lastly in this task, the `set_fact` option sets the following `interfaces` fact for the device based on the now-structured data returned from `cli_parse`:
```
Ethernet1/1:
hardware: 100/1000/10000 Ethernet
mac_address: 5254.005a.f8bd
name: Ethernet1/1
state:
admin: up
operating: up
Ethernet1/10:
hardware: 100/1000/10000 Ethernet
mac_address: 5254.005a.f8c6
<...>
```
#### Linux example
You can also use the native parser to run commands and parse output from Linux hosts.
The output of a sample Linux command (`ip addr show`) looks as follows:
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether x2:6a:64:9d:84:19 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether x6:c2:44:f7:41:e0 brd ff:ff:ff:ff:ff:ff permaddr d8:f2:ca:99:5c:82
```
Create the native template to match this output and store it as `templates/fedora_ip_addr_show.yaml`:
```
---
- example: '1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000'
getval: |
(?x) # free-spacing
\d+:\s # the interface index
(?P<name>\S+):\s # the name
<(?P<properties>\S+)> # the properties
\smtu\s(?P<mtu>\d+) # the mtu
.* # gunk
state\s(?P<state>\S+) # the state of the interface
result:
"{{ name }}":
name: "{{ name }}"
loopback: "{{ 'LOOPBACK' in stats.split(',') }}"
up: "{{ 'UP' in properties.split(',') }}"
carrier: "{{ not 'NO-CARRIER' in properties.split(',') }}"
broadcast: "{{ 'BROADCAST' in properties.split(',') }}"
multicast: "{{ 'MULTICAST' in properties.split(',') }}"
state: "{{ state|lower() }}"
mtu: "{{ mtu }}"
shared: True
- example: 'inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0'
getval: |
(?x) # free-spacing
\s+inet\s(?P<inet>([0-9]{1,3}\.){3}[0-9]{1,3}) # the ip address
/(?P<bits>\d{1,2}) # the mask bits
result:
"{{ name }}":
ip_address: "{{ inet }}"
mask_bits: "{{ bits }}"
```
Note
The `shared` key in the parser template allows the interface name to be used in subsequent parser entries. The use of examples and free-spacing mode with the regular expressions makes the template easier to read.
The following example task uses `cli_parse` with the native parser and the example template above to parse the Linux output:
```
- name: Run command and parse
ansible.netcommon.cli_parse:
command: ip addr show
parser:
name: ansible.netcommon.native
set_fact: interfaces
```
This task assumes you previously gathered facts to determine the `ansible_distribution` needed to locate the template. Alternately, you could provide the path in the `parser/template_path` option.
Lastly in this task, the `set_fact` option sets the following `interfaces` fact for the host, based on the now-structured data returned from `cli_parse`:
```
lo:
broadcast: false
carrier: true
ip_address: 127.0.0.1
mask_bits: 8
mtu: 65536
multicast: false
name: lo
state: unknown
up: true
enp64s0u1:
broadcast: true
carrier: true
ip_address: 192.168.86.83
mask_bits: 24
mtu: 1500
multicast: true
name: enp64s0u1
state: up
up: true
<...>
```
### Parsing JSON
Although Ansible will natively convert serialized JSON to Ansible native data when recognized, you can also use the `cli_parse` module for this conversion.
Example task:
```
- name: "Run command and parse as json"
ansible.netcommon.cli_parse:
command: show interface | json
parser:
name: ansible.netcommon.json
register: interfaces
```
Taking a deeper dive into this task:
* The `show interface | json` command is issued on the device.
* The output is set as the `interfaces` fact for the device.
* JSON support is provided primarily for playbook consistency.
Note
The use of `ansible.netcommon.json` is fully supported with a Red Hat Ansible Automation Platform subscription.
### Parsing with ntc\_templates
The `ntc_templates` python library includes pre-defined `textfsm` templates for parsing a variety of network device commands output.
Example task:
```
- name: "Run command and parse with ntc_templates"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.ntc_templates
set_fact: interfaces
```
Taking a deeper dive into this task:
* The `ansible_network_os` of the device is converted to the ntc\_template format `cisco_nxos`. Alternately, you can provide the `os` with the `parser/os` option instead.
* The `cisco_nxos_show_interface.textfsm` template, included with the `ntc_templates` package, parses the output.
* See [the ntc\_templates README](https://github.com/networktocode/ntc-templates/blob/master/README.md) for additional information about the `ntc_templates` python library.
Note
Red Hat Ansible Automation Platform subscription support is limited to the use of the `ntc_templates` public APIs as documented.
This task and the predefined template set the following fact as the `interfaces` fact for the host:
```
interfaces:
- address: 5254.005a.f8b5
admin_state: up
bandwidth: 1000000 Kbit
bia: 5254.005a.f8b5
delay: 10 usec
description: ''
duplex: full-duplex
encapsulation: ARPA
hardware_type: Ethernet
input_errors: ''
input_packets: ''
interface: mgmt0
ip_address: 192.168.101.14/24
last_link_flapped: ''
link_status: up
mode: ''
mtu: '1500'
output_errors: ''
output_packets: ''
speed: 1000 Mb/s
- address: 5254.005a.f8bd
admin_state: up
bandwidth: 1000000 Kbit
bia: 5254.005a.f8bd
delay: 10 usec
```
### Parsing with pyATS
`pyATS` is part of the Cisco Test Automation & Validation Solution. It includes many predefined parsers for a number of network platforms and commands. You can use the predefined parsers that are part of the `pyATS` package with the `cli_parse` module.
Example task:
```
- name: "Run command and parse with pyats"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.pyats
set_fact: interfaces
```
Taking a deeper dive into this task:
* The `cli_parse` module converts the `ansible_network_os` automatically (in this example, `ansible_network_os` set to `cisco.nxos.nxos` converts to `nxos` for pyATS). Alternately, you can set the OS with the `parser/os` option instead.
* Using a combination of the command and OS, the pyATS selects the following parser: <https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/parsers/show%2520interface>.
* The `cli_parse` module sets `cisco.ios.ios` to `iosxe` for pyATS. You can override this with the `parser/os` option.
* `cli_parse` only uses the predefined parsers in pyATS. See the [pyATS documentation](https://developer.cisco.com/docs/pyats/) and the full list of [pyATS included parsers](https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/parsers).
Note
Red Hat Ansible Automation Platform subscription support is limited to the use of the pyATS public APIs as documented.
This task sets the following fact as the `interfaces` fact for the host:
```
mgmt0:
admin_state: up
auto_mdix: 'off'
auto_negotiate: true
bandwidth: 1000000
counters:
in_broadcast_pkts: 3
in_multicast_pkts: 1652395
in_octets: 556155103
in_pkts: 2236713
in_unicast_pkts: 584259
rate:
in_rate: 320
in_rate_pkts: 0
load_interval: 1
out_rate: 48
out_rate_pkts: 0
rx: true
tx: true
delay: 10
duplex_mode: full
enabled: true
encapsulations:
encapsulation: arpa
ethertype: '0x0000'
ipv4:
192.168.101.14/24:
ip: 192.168.101.14
prefix_length: '24'
link_state: up
<...>
```
### Parsing with textfsm
`textfsm` is a Python module which implements a template-based state machine for parsing semi-formatted text.
The following sample `textfsm` template is stored as `templates/nxos_show_interface.textfsm`:
```
Value Required INTERFACE (\S+)
Value LINK_STATUS (.+?)
Value ADMIN_STATE (.+?)
Value HARDWARE_TYPE (.*)
Value ADDRESS ([a-zA-Z0-9]+.[a-zA-Z0-9]+.[a-zA-Z0-9]+)
Value BIA ([a-zA-Z0-9]+.[a-zA-Z0-9]+.[a-zA-Z0-9]+)
Value DESCRIPTION (.*)
Value IP_ADDRESS (\d+\.\d+\.\d+\.\d+\/\d+)
Value MTU (\d+)
Value MODE (\S+)
Value DUPLEX (.+duplex?)
Value SPEED (.+?)
Value INPUT_PACKETS (\d+)
Value OUTPUT_PACKETS (\d+)
Value INPUT_ERRORS (\d+)
Value OUTPUT_ERRORS (\d+)
Value BANDWIDTH (\d+\s+\w+)
Value DELAY (\d+\s+\w+)
Value ENCAPSULATION (\w+)
Value LAST_LINK_FLAPPED (.+?)
Start
^\S+\s+is.+ -> Continue.Record
^${INTERFACE}\s+is\s+${LINK_STATUS},\sline\sprotocol\sis\s${ADMIN_STATE}$$
^${INTERFACE}\s+is\s+${LINK_STATUS}$$
^admin\s+state\s+is\s+${ADMIN_STATE},
^\s+Hardware(:|\s+is)\s+${HARDWARE_TYPE},\s+address(:|\s+is)\s+${ADDRESS}(.*bia\s+${BIA})*
^\s+Description:\s+${DESCRIPTION}
^\s+Internet\s+Address\s+is\s+${IP_ADDRESS}
^\s+Port\s+mode\s+is\s+${MODE}
^\s+${DUPLEX}, ${SPEED}(,|$$)
^\s+MTU\s+${MTU}.*BW\s+${BANDWIDTH}.*DLY\s+${DELAY}
^\s+Encapsulation\s+${ENCAPSULATION}
^\s+${INPUT_PACKETS}\s+input\s+packets\s+\d+\s+bytes\s*$$
^\s+${INPUT_ERRORS}\s+input\s+error\s+\d+\s+short\s+frame\s+\d+\s+overrun\s+\d+\s+underrun\s+\d+\s+ignored\s*$$
^\s+${OUTPUT_PACKETS}\s+output\s+packets\s+\d+\s+bytes\s*$$
^\s+${OUTPUT_ERRORS}\s+output\s+error\s+\d+\s+collision\s+\d+\s+deferred\s+\d+\s+late\s+collision\s*$$
^\s+Last\s+link\s+flapped\s+${LAST_LINK_FLAPPED}\s*$$
```
The following task uses the example template for `textfsm` with the `cli_parse` module.
```
- name: "Run command and parse with textfsm"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.textfsm
set_fact: interfaces
```
Taking a deeper dive into this task:
* The `ansible_network_os` for the device (`cisco.nxos.nxos`) is converted to `nxos`. Alternately you can provide the OS in the `parser/os` option instead.
* The textfsm template name defaulted to `templates/nxos_show_interface.textfsm` using a combination of the OS and command run. Alternately you can override the generated template path with the `parser/template_path` option.
* See the [textfsm README](https://github.com/google/textfsm) for details.
* `textfsm` was previously made available as a filter plugin. Ansible users should transition to the `cli_parse` module.
Note
Red Hat Ansible Automation Platform subscription support is limited to the use of the `textfsm` public APIs as documented.
This task sets the following fact as the `interfaces` fact for the host:
```
- ADDRESS: X254.005a.f8b5
ADMIN_STATE: up
BANDWIDTH: 1000000 Kbit
BIA: X254.005a.f8b5
DELAY: 10 usec
DESCRIPTION: ''
DUPLEX: full-duplex
ENCAPSULATION: ARPA
HARDWARE_TYPE: Ethernet
INPUT_ERRORS: ''
INPUT_PACKETS: ''
INTERFACE: mgmt0
IP_ADDRESS: 192.168.101.14/24
LAST_LINK_FLAPPED: ''
LINK_STATUS: up
MODE: ''
MTU: '1500'
OUTPUT_ERRORS: ''
OUTPUT_PACKETS: ''
SPEED: 1000 Mb/s
- ADDRESS: X254.005a.f8bd
ADMIN_STATE: up
BANDWIDTH: 1000000 Kbit
BIA: X254.005a.f8bd
```
### Parsing with TTP
TTP is a Python library for semi-structured text parsing using templates. TTP uses a jinja-like syntax to limit the need for regular expressions. Users familiar with jinja templating may find the TTP template syntax familiar.
The following is an example TTP template stored as `templates/nxos_show_interface.ttp`:
```
{{ interface }} is {{ state }}
admin state is {{ admin_state }}{{ ignore(".*") }}
```
The following task uses this template to parse the `show interface` command output:
```
- name: "Run command and parse with ttp"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.ttp
set_fact: interfaces
```
Taking a deeper dive in this task:
* The default template path `templates/nxos_show_interface.ttp` was generated using the `ansible_network_os` for the host and `command` provided.
* TTP supports several additional variables that will be passed to the parser (see the sketch after the example output below). These include:
+ `parser/vars/ttp_init` - Additional parameter passed when the parser is initialized.
+ `parser/vars/ttp_results` - Additional parameters used to influence the parser output.
+ `parser/vars/ttp_vars` - Additional variables made available in the template.
* See the [TTP documentation](https://ttp.readthedocs.io) for details.
The task sets the following fact as the `interfaces` fact for the host:
```
- admin_state: up,
interface: mgmt0
state: up
- admin_state: up,
interface: Ethernet1/1
state: up
- admin_state: up,
interface: Ethernet1/2
state: up
```
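As referenced in the variables list above, a hedged sketch that passes extra template variables to TTP through `parser/vars` (the `max_int` variable name is illustrative):
```
- name: "Run command and parse with ttp, passing template variables"
  ansible.netcommon.cli_parse:
    command: show interface
    parser:
      name: ansible.netcommon.ttp
      vars:
        ttp_vars:
          max_int: 10
    set_fact: interfaces
```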
### Parsing with JC
JC is a python library that converts the output of dozens of common Linux/UNIX/macOS/Windows command-line tools and file types to python dictionaries or lists of dictionaries for easier parsing. JC is available as a filter plugin in the `community.general` collection.
The following is an example using JC to parse the output of the `dig` command:
```
- name: "Run dig command and parse with jc"
hosts: ubuntu
tasks:
- shell: dig example.com
register: result
- set_fact:
myvar: "{{ result.stdout | community.general.jc('dig') }}"
- debug:
msg: "The IP is: {{ myvar[0].answer[0].data }}"
```
* The JC project and documentation can be found [here](https://github.com/kellyjonbrazil/jc/).
* See this [blog entry](https://blog.kellybrazil.com/2020/08/30/parsing-command-output-in-ansible-with-jc/) for more information.
### Converting XML
Although Ansible contains a number of plugins that can convert XML to Ansible native data structures, the `cli_parse` module runs the command on devices that return XML and returns the converted data in a single task.
This example task runs the `show interface` command and parses the output as XML:
```
- name: "Run command and parse as xml"
ansible.netcommon.cli_parse:
command: show interface | xml
parser:
name: ansible.netcommon.xml
set_fact: interfaces
```
Note
Red Hat Ansible Automation Platform subscription support is limited to the use of the `xmltodict` public APIs as documented.
This task sets the `interfaces` fact for the host based on this returned output:
```
nf:rpc-reply:
'@xmlns': http://www.cisco.com/nxos:1.0:if_manager
'@xmlns:nf': urn:ietf:params:xml:ns:netconf:base:1.0
nf:data:
show:
interface:
__XML__OPT_Cmd_show_interface_quick:
__XML__OPT_Cmd_show_interface___readonly__:
__readonly__:
TABLE_interface:
ROW_interface:
- admin_state: up
encapsulation: ARPA
eth_autoneg: 'on'
eth_bia_addr: x254.005a.f8b5
eth_bw: '1000000'
```
Advanced use cases
------------------
The `cli_parse` module supports several features to support more complex uses cases.
### Provide a full template path
Use the `template_path` option to override the default template path in the task:
```
- name: "Run command and parse with native"
ansible.netcommon.cli_parse:
command: show interface
parser:
name: ansible.netcommon.native
template_path: /home/user/templates/filename.yaml
```
### Provide command to parser different than the command run
Use the `command` suboption for the `parser` to configure the command the parser expects if it is different from the command `cli_parse` runs:
```
- name: "Run command and parse with native"
ansible.netcommon.cli_parse:
command: sho int
parser:
name: ansible.netcommon.native
command: show interface
```
### Provide a custom OS value
Use the `os` suboption to the parser to directly set the OS instead of using `ansible_network_os` or `ansible_distribution` to generate the template path or with the specified parser engine:
```
- name: Use ios instead of iosxe for pyats
ansible.netcommon.cli_parse:
command: show something
parser:
name: ansible.netcommon.pyats
os: ios
- name: Use linux instead of fedora from ansible_distribution
ansible.netcommon.cli_parse:
command: ps -ef
parser:
name: ansible.netcommon.native
os: linux
```
### Parse existing text
Use the `text` option instead of `command` to parse text collected earlier in the playbook.
```
# using /home/user/templates/filename.yaml
- name: "Parse text from previous task"
ansible.netcommon.cli_parse:
text: "{{ output['stdout'] }}"
parser:
name: ansible.netcommon.native
template_path: /home/user/templates/filename.yaml
# using /home/user/templates/filename.yaml
- name: "Parse text from file"
ansible.netcommon.cli_parse:
text: "{{ lookup('file', 'path/to/file.txt') }}"
parser:
name: ansible.netcommon.native
template_path: /home/user/templates/filename.yaml
# using templates/nxos_show_version.yaml
- name: "Parse text from previous task"
ansible.netcommon.cli_parse:
text: "{{ sho_version['stdout'] }}"
parser:
name: ansible.netcommon.native
os: nxos
command: show version
```
See also
* [Developing cli\_parser plugins in a collection](../dev_guide/developing_plugins_network#develop-cli-parse-plugins)
Meraki Platform Options
=======================
The [cisco.meraki](https://galaxy.ansible.com/cisco/meraki) collection only supports the `local` connection type at this time.
* [Example Meraki task](#example-meraki-task)
Connections available
---------------------
| | Dashboard API |
| --- | --- |
| Protocol | HTTP(S) |
| Credentials | uses API key from Dashboard |
| Connection Settings | `ansible_connection: localhost` |
| Returned Data Format | `data.` |
### Example Meraki task
```
cisco.meraki.meraki_organization:
auth_key: abc12345
org_name: YourOrg
state: present
delegate_to: localhost
```
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Netconf enabled Platform Options
================================
This page offers details on how the netconf connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using NETCONF in Ansible](#using-netconf-in-ansible)
+ [Enabling NETCONF](#enabling-netconf)
+ [Example NETCONF inventory `[junos:vars]`](#example-netconf-inventory-junos-vars)
+ [Example NETCONF task](#example-netconf-task)
+ [Example NETCONF task with configurable variables](#example-netconf-task-with-configurable-variables)
+ [Bastion/Jumphost configuration](#bastion-jumphost-configuration)
+ [ansible\_network\_os auto-detection](#ansible-network-os-auto-detection)
Connections available
---------------------
| | NETCONF all modules except `junos_netconf`, which enables NETCONF |
| --- | --- |
| Protocol | XML over SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.netconf` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.netconf` instead.
Using NETCONF in Ansible
------------------------
### Enabling NETCONF
Before you can use NETCONF to connect to a switch, you must:
* install the `ncclient` Python package on your control node(s) with `pip install ncclient`
* enable NETCONF on the Junos OS device(s)
To enable NETCONF on a new switch via Ansible, use the platform-specific module via the CLI connection or set it manually. For example, set up your platform-level variables just like in the CLI example above, then run a playbook task like this:
```
- name: Enable NETCONF
connection: ansible.netcommon.network_cli
junipernetworks.junos.junos_netconf:
when: ansible_network_os == 'junipernetworks.junos.junos'
```
Once NETCONF is enabled, change your variables to use the NETCONF connection.
### Example NETCONF inventory `[junos:vars]`
```
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault |
```
### Example NETCONF task
```
- name: Backup current switch config
junipernetworks.junos.netconf_config:
backup: yes
register: backup_junos_location
```
### Example NETCONF task with configurable variables
```
- name: configure interface while providing different private key file path
junipernetworks.junos.netconf_config:
backup: yes
register: backup_junos_location
vars:
ansible_private_key_file: /home/admin/.ssh/newprivatekeyfile
```
Note: For netconf connection plugin configurable variables see [ansible.netcommon.netconf](../../collections/ansible/netcommon/netconf_connection#ansible-collections-ansible-netcommon-netconf-connection).
### Bastion/Jumphost configuration
To use a jump host to connect to a NETCONF enabled device you must set the `ANSIBLE_NETCONF_SSH_CONFIG` environment variable.
`ANSIBLE_NETCONF_SSH_CONFIG` can be set to either:
* 1 or TRUE (to trigger the use of the default SSH config file ~/.ssh/config)
* The absolute path to a custom SSH config file.
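For example, on the control node (the custom config path and playbook name here are illustrative):
```
# Use the default SSH config file (~/.ssh/config)
export ANSIBLE_NETCONF_SSH_CONFIG=1
# Or point at a custom SSH config file
export ANSIBLE_NETCONF_SSH_CONFIG=/home/user/netconf_ssh_config
ansible-playbook -i inventory netconf-demo.yml
```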
The SSH config file should look something like:
```
Host *
proxycommand ssh -o StrictHostKeyChecking=no -W %h:%p [email protected]
StrictHostKeyChecking no
```
Authentication for the jump host must use key-based authentication.
You can either specify the private key used in the SSH config file:
```
IdentityFile "/absolute/path/to/private-key.pem"
```
Or you can use an ssh-agent.
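For example, to load the key into an ssh-agent before running the playbook (a minimal sketch; the key path is illustrative):
```
eval "$(ssh-agent -s)"
ssh-add /absolute/path/to/private-key.pem
```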
### ansible\_network\_os auto-detection
If `ansible_network_os` is not specified for a host, then Ansible will attempt to automatically detect what `network_os` plugin to use.
`ansible_network_os` auto-detection can also be triggered by using `auto` as the `ansible_network_os`. (Note: Previously `default` was used instead of `auto`).
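A minimal inventory sketch that triggers auto-detection explicitly (the group shown is illustrative):
```
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=auto
```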
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Junos OS Platform Options
=========================
The [Juniper Junos OS](https://galaxy.ansible.com/junipernetworks/junos) collection supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI inventory `[junos:vars]`](#example-cli-inventory-junos-vars)
+ [Example CLI task](#example-cli-task)
* [Using NETCONF in Ansible](#using-netconf-in-ansible)
+ [Enabling NETCONF](#enabling-netconf)
+ [Example NETCONF inventory `[junos:vars]`](#example-netconf-inventory-junos-vars)
+ [Example NETCONF task](#example-netconf-task)
Connections available
---------------------
| | CLI `junos_netconf` & `junos_command` modules only | NETCONF all modules except `junos_netconf`, which enables NETCONF |
| --- | --- | --- |
| Protocol | SSH | XML over SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` | `ansible_connection: ansible.netcommon.netconf` |
| Enable Mode (Privilege Escalation) | not supported by Junos OS | not supported by Junos OS |
| Returned Data Format | `stdout[0].` | json: `result[0]['software-information'][0]['host-name'][0]['data'] foo lo0`; text: `result[1].interface-information[0].physical-interface[0].name[0].data foo lo0`; xml: `result[1].rpc-reply.interface-information[0].physical-interface[0].name[0].data foo lo0` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` or `ansible_connection: ansible.netcommon.netconf` instead.
Using CLI in Ansible
--------------------
### Example CLI inventory `[junos:vars]`
```
[junos:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault...
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve Junos OS version
junipernetworks.junos.junos_command:
commands: show version
when: ansible_network_os == 'junipernetworks.junos.junos'
```
Using NETCONF in Ansible
------------------------
### Enabling NETCONF
Before you can use NETCONF to connect to a switch, you must:
* install the `ncclient` python package on your control node(s) with `pip install ncclient`
* enable NETCONF on the Junos OS device(s)
To enable NETCONF on a new switch via Ansible, use the `junipernetworks.junos.junos_netconf` module through the CLI connection. Set up your platform-level variables just like in the CLI example above, then run a playbook task like this:
```
- name: Enable NETCONF
connection: ansible.netcommon.network_cli
junipernetworks.junos.junos_netconf:
when: ansible_network_os == 'junipernetworks.junos.junos'
```
Once NETCONF is enabled, change your variables to use the NETCONF connection.
### Example NETCONF inventory `[junos:vars]`
```
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault |
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
### Example NETCONF task
```
- name: Backup current switch config (junos)
junipernetworks.junos.junos_config:
backup: yes
register: backup_junos_location
when: ansible_network_os == 'junipernetworks.junos.junos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Ansible Network Examples
========================
This document describes some examples of using Ansible to manage your network infrastructure.
* [Prerequisites](#prerequisites)
* [Groups and variables in an inventory file](#groups-and-variables-in-an-inventory-file)
+ [Ansible vault for password encryption](#ansible-vault-for-password-encryption)
+ [Common inventory variables](#common-inventory-variables)
+ [Privilege escalation](#privilege-escalation)
+ [Jump hosts](#jump-hosts)
* [Example 1: collecting facts and creating backup files with a playbook](#example-1-collecting-facts-and-creating-backup-files-with-a-playbook)
+ [Step 1: Creating the inventory](#step-1-creating-the-inventory)
+ [Step 2: Creating the playbook](#step-2-creating-the-playbook)
+ [Step 3: Running the playbook](#step-3-running-the-playbook)
+ [Step 4: Examining the playbook results](#step-4-examining-the-playbook-results)
* [Example 2: simplifying playbooks with network agnostic modules](#example-2-simplifying-playbooks-with-network-agnostic-modules)
+ [Sample playbook with platform-specific modules](#sample-playbook-with-platform-specific-modules)
+ [Simplified playbook with `cli_command` network agnostic module](#simplified-playbook-with-cli-command-network-agnostic-module)
+ [Using multiple prompts with the `ansible.netcommon.cli_command`](#using-multiple-prompts-with-the-ansible-netcommon-cli-command)
* [Implementation Notes](#implementation-notes)
+ [Demo variables](#demo-variables)
+ [Get running configuration](#get-running-configuration)
* [Troubleshooting](#troubleshooting)
Prerequisites
-------------
This example requires the following:
* **Ansible 2.10** (or higher) installed. See [Installing Ansible](../../installation_guide/intro_installation#intro-installation-guide) for more information.
* One or more network devices that are compatible with Ansible.
* Basic understanding of YAML [YAML Syntax](../../reference_appendices/yamlsyntax#yaml-syntax).
* Basic understanding of Jinja2 templates. See [Templating (Jinja2)](../../user_guide/playbooks_templating#playbooks-templating) for more information.
* Basic Linux command line use.
* Basic knowledge of network switch & router configurations.
Groups and variables in an inventory file
-----------------------------------------
An `inventory` file is a YAML or INI-like configuration file that defines the mapping of hosts into groups.
In our example, the inventory file defines the groups `eos`, `ios`, `vyos` and a “group of groups” called `switches`. Further details about subgroups and inventory files can be found in the [Ansible inventory Group documentation](../../user_guide/intro_inventory#subgroups).
Because Ansible is a flexible tool, there are a number of ways to specify connection information and credentials. We recommend using the `[my_group:vars]` capability in your inventory file.
```
[all:vars]
# these defaults can be overridden for any group in the [group:vars] section
ansible_connection=ansible.netcommon.network_cli
ansible_user=ansible
[switches:children]
eos
ios
vyos
[eos]
veos01 ansible_host=veos-01.example.net
veos02 ansible_host=veos-02.example.net
veos03 ansible_host=veos-03.example.net
veos04 ansible_host=veos-04.example.net
[eos:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=arista.eos.eos
ansible_user=my_eos_user
ansible_password=my_eos_password
[ios]
ios01 ansible_host=ios-01.example.net
ios02 ansible_host=ios-02.example.net
ios03 ansible_host=ios-03.example.net
[ios:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=cisco.ios.ios
ansible_user=my_ios_user
ansible_password=my_ios_password
[vyos]
vyos01 ansible_host=vyos-01.example.net
vyos02 ansible_host=vyos-02.example.net
vyos03 ansible_host=vyos-03.example.net
[vyos:vars]
ansible_network_os=vyos.vyos.vyos
ansible_user=my_vyos_user
ansible_password=my_vyos_password
```
If you use ssh-agent, you do not need the `ansible_password` lines. If you use ssh keys, but not ssh-agent, and you have multiple keys, specify the key to use for each connection in the `[group:vars]` section with `ansible_ssh_private_key_file=/path/to/correct/key`. For more information on `ansible_ssh_` options see [Connecting to hosts: behavioral inventory parameters](../../user_guide/intro_inventory#behavioral-parameters).
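For example, a per-group key selection might look like this (a sketch; the key path is illustrative):
```
[vyos:vars]
ansible_ssh_private_key_file=/home/user/.ssh/vyos_id_rsa
```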
Warning
Never store passwords in plain text.
### Ansible vault for password encryption
The “Vault” feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See [Using encrypted variables and files](../../user_guide/vault#playbooks-vault) for more information.
Here’s what it would look like if you specified your SSH passwords (encrypted with Ansible Vault) among your variables:
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
ansible_ssh_pass: !vault |
$ANSIBLE_VAULT;1.1;AES256
39336231636137663964343966653162353431333566633762393034646462353062633264303765
6331643066663534383564343537343334633031656538370a333737656236393835383863306466
62633364653238323333633337313163616566383836643030336631333431623631396364663533
3665626431626532630a353564323566316162613432373738333064366130303637616239396438
9853
```
### Common inventory variables
The following variables are common for all platforms in the inventory, though they can be overwritten for a particular inventory group or host.
ansible\_connection
Ansible uses the `ansible_connection` setting to determine how to connect to a remote device. When working with Ansible Networking, set this to an appropriate network connection option, such as `ansible.netcommon.network_cli`, so Ansible treats the remote node as a network device with a limited execution environment. Without this setting, Ansible would attempt to use ssh to connect to the remote and execute the Python script on the network device, which would fail because Python generally isn’t available on network devices.
ansible\_network\_os
Informs Ansible which network platform this host corresponds to. This is required when using the `ansible.netcommon.*` connection options.
ansible\_user
The user to connect to the remote device (switch) as. Without this, the user that is running `ansible-playbook` would be used.
ansible\_password
The corresponding password for `ansible_user` to log in as. If not specified, the SSH key will be used.
ansible\_become
Whether enable mode (privilege mode) should be used; see the next section.
ansible\_become\_method
Which type of `become` should be used; for `network_cli` the only valid choice is `enable`.
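These variables are often grouped per platform. A sketch of an equivalent YAML `group_vars` file (the file name and values are placeholders):
```
# group_vars/ios.yml (hypothetical example)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_user: my_ios_user
ansible_password: my_ios_password
ansible_become: yes
ansible_become_method: enable
```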
### Privilege escalation
Certain network platforms, such as Arista EOS and Cisco IOS, have the concept of different privilege modes. Certain network modules, such as those that modify system state including users, will only work in high privilege states. Ansible supports `become` when using `connection: ansible.netcommon.network_cli`. This allows privileges to be raised for the specific tasks that need them. Adding `become: yes` and `become_method: enable` informs Ansible to go into privilege mode before executing the task, as shown here:
```
[eos:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=arista.eos.eos
ansible_become=yes
ansible_become_method=enable
```
For more information, see the [using become with network modules](../../user_guide/become#become-network) guide.
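`become` can also be raised for individual tasks instead of group-wide. A minimal sketch (the module and values here are illustrative):
```
- name: Create a user in enable mode (eos)
  arista.eos.eos_user:
    name: netops
    state: present
  become: yes
  become_method: enable
  when: ansible_network_os == 'arista.eos.eos'
```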
### Jump hosts
If the Ansible Controller does not have a direct route to the remote device and you need to use a Jump Host, please see the [Ansible Network Proxy Command](network_debug_troubleshooting#network-delegate-to-vs-proxycommand) guide for details on how to achieve this.
Example 1: collecting facts and creating backup files with a playbook
---------------------------------------------------------------------
Ansible facts modules gather system information ‘facts’ that are available to the rest of your playbook.
Ansible Networking ships with a number of network-specific facts modules. In this example, we use the `_facts` modules [arista.eos.eos\_facts](../../collections/arista/eos/eos_facts_module#ansible-collections-arista-eos-eos-facts-module), [cisco.ios.ios\_facts](../../collections/cisco/ios/ios_facts_module#ansible-collections-cisco-ios-ios-facts-module) and [vyos.vyos.vyos\_facts](../../collections/vyos/vyos/vyos_facts_module#ansible-collections-vyos-vyos-vyos-facts-module) to connect to the remote networking device. As the credentials are not explicitly passed with module arguments, Ansible uses the username and password from the inventory file.
Ansible’s “Network Fact modules” gather information from the system and store the results in facts prefixed with `ansible_net_`. The data collected by these modules is documented in the `Return Values` section of the module docs, in this case [arista.eos.eos\_facts](../../collections/arista/eos/eos_facts_module#ansible-collections-arista-eos-eos-facts-module) and [vyos.vyos.vyos\_facts](../../collections/vyos/vyos/vyos_facts_module#ansible-collections-vyos-vyos-vyos-facts-module). We can use facts such as `ansible_net_version` later on in the “Display some facts” task.
To ensure we call the correct module (`*_facts`), the task is conditionally run based on the group defined in the inventory file. For more information on the use of conditionals in Ansible Playbooks, see [Basic conditionals with when](../../user_guide/playbooks_conditionals#the-when-statement).
In this example, we will create an inventory file containing some network switches, then run a playbook to connect to the network devices and return some information about them.
### Step 1: Creating the inventory
First, create a file called `inventory`, containing:
```
[switches:children]
eos
ios
vyos
[eos]
eos01.example.net
[ios]
ios01.example.net
[vyos]
vyos01.example.net
```
### Step 2: Creating the playbook
Next, create a playbook file called `facts-demo.yml` containing the following:
```
- name: "Demonstrate connecting to switches"
hosts: switches
gather_facts: no
tasks:
###
# Collect data
#
- name: Gather facts (eos)
arista.eos.eos_facts:
when: ansible_network_os == 'arista.eos.eos'
- name: Gather facts (ios)
cisco.ios.ios_facts:
when: ansible_network_os == 'cisco.ios.ios'
- name: Gather facts (vyos)
vyos.vyos.vyos_facts:
when: ansible_network_os == 'vyos.vyos.vyos'
###
# Demonstrate variables
#
- name: Display some facts
debug:
msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
- name: Facts from a specific host
debug:
var: hostvars['vyos01.example.net']
- name: Write facts to disk using a template
copy:
content: |
#jinja2: lstrip_blocks: True
EOS device info:
{% for host in groups['eos'] %}
Hostname: {{ hostvars[host].ansible_net_hostname }}
Version: {{ hostvars[host].ansible_net_version }}
Model: {{ hostvars[host].ansible_net_model }}
Serial: {{ hostvars[host].ansible_net_serialnum }}
{% endfor %}
IOS device info:
{% for host in groups['ios'] %}
Hostname: {{ hostvars[host].ansible_net_hostname }}
Version: {{ hostvars[host].ansible_net_version }}
Model: {{ hostvars[host].ansible_net_model }}
Serial: {{ hostvars[host].ansible_net_serialnum }}
{% endfor %}
VyOS device info:
{% for host in groups['vyos'] %}
Hostname: {{ hostvars[host].ansible_net_hostname }}
Version: {{ hostvars[host].ansible_net_version }}
Model: {{ hostvars[host].ansible_net_model }}
Serial: {{ hostvars[host].ansible_net_serialnum }}
{% endfor %}
dest: /tmp/switch-facts
run_once: yes
###
# Get running configuration
#
- name: Backup switch (eos)
arista.eos.eos_config:
backup: yes
register: backup_eos_location
when: ansible_network_os == 'arista.eos.eos'
- name: backup switch (vyos)
vyos.vyos.vyos_config:
backup: yes
register: backup_vyos_location
when: ansible_network_os == 'vyos.vyos.vyos'
- name: Create backup dir
file:
path: "/tmp/backups/{{ inventory_hostname }}"
state: directory
recurse: yes
- name: Copy backup files into /tmp/backups/ (eos)
copy:
src: "{{ backup_eos_location.backup_path }}"
dest: "/tmp/backups/{{ inventory_hostname }}/{{ inventory_hostname }}.bck"
when: ansible_network_os == 'arista.eos.eos'
- name: Copy backup files into /tmp/backups/ (vyos)
copy:
src: "{{ backup_vyos_location.backup_path }}"
dest: "/tmp/backups/{{ inventory_hostname }}/{{ inventory_hostname }}.bck"
when: ansible_network_os == 'vyos.vyos.vyos'
```
### Step 3: Running the playbook
To run the playbook, run the following from a console prompt:
```
ansible-playbook -i inventory facts-demo.yml
```
This should return output similar to the following:
```
PLAY RECAP
eos01.example.net : ok=7 changed=2 unreachable=0 failed=0
ios01.example.net : ok=7 changed=2 unreachable=0 failed=0
vyos01.example.net : ok=6 changed=2 unreachable=0 failed=0
```
### Step 4: Examining the playbook results
Next, look at the contents of the file we created containing the switch facts:
```
cat /tmp/switch-facts
```
You can also look at the backup files:
```
find /tmp/backups
```
If `ansible-playbook` fails, please follow the debug steps in [Network Debug and Troubleshooting Guide](network_debug_troubleshooting#network-debug-troubleshooting).
Example 2: simplifying playbooks with network agnostic modules
--------------------------------------------------------------
(This example originally appeared in the [Deep Dive on cli\_command for Network Automation](https://www.ansible.com/blog/deep-dive-on-cli-command-for-network-automation) blog post by Sean Cavanaugh -[@IPvSean](https://github.com/IPvSean)).
If you have two or more network platforms in your environment, you can use the network agnostic modules to simplify your playbooks. You can use network agnostic modules such as `ansible.netcommon.cli_command` or `ansible.netcommon.cli_config` in place of the platform-specific modules such as `arista.eos.eos_config`, `cisco.ios.ios_config`, and `junipernetworks.junos.junos_config`. This reduces the number of tasks and conditionals you need in your playbooks.
Note
Network agnostic modules require the [ansible.netcommon.network\_cli](../../collections/ansible/netcommon/network_cli_connection#ansible-collections-ansible-netcommon-network-cli-connection) connection plugin.
### Sample playbook with platform-specific modules
This example assumes three platforms: Arista EOS, Cisco NXOS, and VyOS. Without the network agnostic modules, a sample playbook might contain the following three tasks with platform-specific commands:
```
---
- name: Run Arista command
arista.eos.eos_command:
commands: show ip int br
when: ansible_network_os == 'arista.eos.eos'
- name: Run Cisco NXOS command
cisco.nxos.nxos_command:
commands: show ip int br
when: ansible_network_os == 'cisco.nxos.nxos'
- name: Run Vyos command
vyos.vyos.vyos_command:
commands: show interface
when: ansible_network_os == 'vyos.vyos.vyos'
```
### Simplified playbook with `cli_command` network agnostic module
You can replace these platform-specific modules with the network agnostic `ansible.netcommon.cli_command` module as follows:
```
---
- hosts: network
gather_facts: false
connection: ansible.netcommon.network_cli
tasks:
- name: Run cli_command on Arista and display results
block:
- name: Run cli_command on Arista
ansible.netcommon.cli_command:
command: show ip int br
register: result
- name: Display result to terminal window
debug:
var: result.stdout_lines
when: ansible_network_os == 'arista.eos.eos'
- name: Run cli_command on Cisco IOS and display results
block:
- name: Run cli_command on Cisco IOS
ansible.netcommon.cli_command:
command: show ip int br
register: result
- name: Display result to terminal window
debug:
var: result.stdout_lines
when: ansible_network_os == 'cisco.ios.ios'
- name: Run cli_command on Vyos and display results
block:
- name: Run cli_command on Vyos
ansible.netcommon.cli_command:
command: show interfaces
register: result
- name: Display result to terminal window
debug:
var: result.stdout_lines
when: ansible_network_os == 'vyos.vyos.vyos'
```
If you use groups and group\_vars by platform type, this playbook can be further simplified to:
```
---
- name: Run command and print to terminal window
hosts: routers
gather_facts: false
tasks:
- name: Run show command
ansible.netcommon.cli_command:
command: "{{show_interfaces}}"
register: command_output
```
You can see a full example of this using group\_vars and also a configuration backup example at [Network agnostic examples](https://github.com/network-automation/agnostic_example).
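As a sketch of what those per-platform `group_vars` files might contain (the contents below are illustrative, not taken from the linked repository):
```
# group_vars/eos.yml
ansible_network_os: arista.eos.eos
show_interfaces: show ip int br

# group_vars/vyos.yml
ansible_network_os: vyos.vyos.vyos
show_interfaces: show interfaces
```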
### Using multiple prompts with the `ansible.netcommon.cli_command`
The `ansible.netcommon.cli_command` also supports multiple prompts.
```
---
- name: Change password to default
ansible.netcommon.cli_command:
command: "{{ item }}"
prompt:
- "New password"
- "Retype new password"
answer:
- "mypassword123"
- "mypassword123"
check_all: True
loop:
- "configure"
- "rollback"
- "set system root-authentication plain-text-password"
- "commit"
```
See the [ansible.netcommon.cli\_command](https://docs.ansible.com/ansible/2.9/modules/cli_command_module.html#cli-command-module "(in Ansible v2.9)") module documentation for full details.
Implementation Notes
--------------------
### Demo variables
Although these tasks are not needed to write data to disk, they are used in this example to demonstrate some methods of accessing facts about the given devices or a named host.
Ansible `hostvars` allows you to access variables from a named host. Without this, we would return the details for the current host, rather than the named host.
For more information, see [Information about Ansible: magic variables](../../user_guide/playbooks_vars_facts#magic-variables-and-hostvars).
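For example, a single fact from a named host can be referenced like this (a sketch reusing the host from the inventory above):
```
- name: Display one fact from a named host
  debug:
    msg: "vyos01 runs version {{ hostvars['vyos01.example.net'].ansible_net_version }}"
```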
### Get running configuration
The [arista.eos.eos\_config](../../collections/arista/eos/eos_config_module#ansible-collections-arista-eos-eos-config-module) and [vyos.vyos.vyos\_config](../../collections/vyos/vyos/vyos_config_module#ansible-collections-vyos-vyos-vyos-config-module) modules have a `backup:` option that when set will cause the module to create a full backup of the current `running-config` from the remote device before any changes are made. The backup file is written to the `backup` folder in the playbook root directory. If the directory does not exist, it is created.
To demonstrate how we can move the backup file to a different location, we register the result and move the file to the path stored in `backup_path`.
Note that when using variables from tasks in this way we use double quotes (`"`) and double curly-brackets (`{{...}}`) to tell Ansible that this is a variable.
Troubleshooting
---------------
If you receive a connection error, please double-check the inventory and playbook for typos or missing lines. If the issue still occurs, follow the debug steps in [Network Debug and Troubleshooting Guide](network_debug_troubleshooting#network-debug-troubleshooting).
See also
* [Ansible for Network Automation](../index#network-guide)
* [How to build your inventory](../../user_guide/intro_inventory#intro-inventory)
* [Keeping vaulted variables visible](../../user_guide/playbooks_best_practices#tip-for-variables-and-vaults)
NOS Platform Options
====================
Extreme NOS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. `httpapi` modules may be added in future. This page offers details on how to use `ansible.netcommon.network_cli` on NOS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/nos.yml`](#example-cli-group-vars-nos-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by NOS |
| Returned Data Format | `stdout[0].` |
NOS does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/nos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.nos
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Get version information (nos)
community.network.nos_command:
commands: "show version"
register: show_ver
when: ansible_network_os == 'community.network.nos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
VOSS Platform Options
=====================
Extreme VOSS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. This page offers details on how to use `ansible.netcommon.network_cli` on VOSS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/voss.yml`](#example-cli-group-vars-voss-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` |
| Returned Data Format | `stdout[0].` |
VOSS does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/voss.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.voss
ansible_user: myuser
ansible_become: yes
ansible_become_method: enable
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve VOSS info
community.network.voss_command:
commands: show sys-info
when: ansible_network_os == 'community.network.voss'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
NXOS Platform Options
=====================
The [Cisco NXOS](https://galaxy.ansible.com/cisco/nxos) collection supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/nxos.yml`](#example-cli-group-vars-nxos-yml)
+ [Example CLI task](#example-cli-task)
* [Using NX-API in Ansible](#using-nx-api-in-ansible)
+ [Enabling NX-API](#enabling-nx-api)
+ [Example NX-API `group_vars/nxos.yml`](#example-nx-api-group-vars-nxos-yml)
+ [Example NX-API task](#example-nx-api-task)
* [Cisco Nexus platform support matrix](#cisco-nexus-platform-support-matrix)
Connections available
---------------------
| | CLI | NX-API |
| --- | --- | --- |
| Protocol | SSH | HTTP(S) |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password | uses HTTPS certificates if present |
| Indirect Access | via a bastion (jump host) | via a web proxy |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` | `ansible_connection: ansible.netcommon.httpapi` |
| Enable Mode (Privilege Escalation) supported as of 2.5.3 | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` | not supported by NX-API |
| Returned Data Format | `stdout[0].` | `stdout[0].messages[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` or `ansible_connection: ansible.netcommon.httpapi` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/nxos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.nxos.nxos
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (nxos)
cisco.nxos.nxos_config:
backup: yes
register: backup_nxos_location
when: ansible_network_os == 'cisco.nxos.nxos'
```
Using NX-API in Ansible
-----------------------
### Enabling NX-API
Before you can use NX-API to connect to a switch, you must enable NX-API. To enable NX-API on a new switch via Ansible, use the `nxos_nxapi` module via the CLI connection. Set up group\_vars/nxos.yml just like in the CLI example above, then run a playbook task like this:
```
- name: Enable NX-API
cisco.nxos.nxos_nxapi:
enable_http: yes
enable_https: yes
when: ansible_network_os == 'cisco.nxos.nxos'
```
To find out more about the options for enabling HTTP/HTTPS and local http see the [nxos\_nxapi](https://docs.ansible.com/ansible/2.9/modules/nxos_nxapi_module.html#nxos-nxapi-module "(in Ansible v2.9)") module documentation.
Once NX-API is enabled, change your `group_vars/nxos.yml` to use the NX-API connection.
### Example NX-API `group_vars/nxos.yml`
```
ansible_connection: ansible.netcommon.httpapi
ansible_network_os: cisco.nxos.nxos
ansible_user: myuser
ansible_password: !vault...
proxy_env:
http_proxy: http://proxy.example.com:8080
```
* If you are accessing your host directly (not through a web proxy) you can remove the `proxy_env` configuration.
* If you are accessing your host through a web proxy using `https`, change `http_proxy` to `https_proxy`, as in the sketch below.
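A sketch of the HTTPS variant (the proxy address is illustrative):
```
proxy_env:
  https_proxy: https://proxy.example.com:8080
```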
### Example NX-API task
```
- name: Backup current switch config (nxos)
cisco.nxos.nxos_config:
backup: yes
register: backup_nxos_location
environment: "{{ proxy_env }}"
when: ansible_network_os == 'cisco.nxos.nxos'
```
In this example the `proxy_env` variable defined in `group_vars` gets passed to the `environment` option of the module used in the task.
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
Cisco Nexus platform support matrix
-----------------------------------
The following platforms and software versions have been certified by Cisco to work with this version of Ansible.
Platform / Software Minimum Requirements
| Supported Platforms | Minimum NX-OS Version |
| --- | --- |
| Cisco Nexus N3k | 7.0(3)I2(5) and later |
| Cisco Nexus N9k | 7.0(3)I2(5) and later |
| Cisco Nexus N5k | 7.3(0)N1(1) and later |
| Cisco Nexus N6k | 7.3(0)N1(1) and later |
| Cisco Nexus N7k | 7.3(0)D1(1) and later |
| Cisco Nexus MDS | 8.4(1) and later |
Platform Models
| Platform | Description |
| --- | --- |
| N3k | Support includes N30xx, N31xx and N35xx models |
| N5k | Support includes all N5xxx models |
| N6k | Support includes all N6xxx models |
| N7k | Support includes all N7xxx models |
| N9k | Support includes all N9xxx models |
| MDS | Support includes all MDS 9xxx models |
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
WeOS 4 Platform Options
=======================
Westermo WeOS 4 is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections. This page offers details on how to use `ansible.netcommon.network_cli` on WeOS 4 in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/weos4.yml`](#example-cli-group-vars-weos4-yml)
+ [Example CLI task](#example-cli-task)
+ [Example Configuration task](#example-configuration-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by WeOS 4 |
| Returned Data Format | `stdout[0].` |
WeOS 4 does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/weos4.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.weos4
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Get version information (WeOS 4)
ansible.netcommon.cli_command:
    command: "show version"
register: show_ver
when: ansible_network_os == 'community.network.weos4'
```
### Example Configuration task
```
- name: Replace configuration with file on ansible host (WeOS 4)
ansible.netcommon.cli_config:
config: "{{ lookup('file', 'westermo.conf') }}"
replace: "yes"
diff_match: exact
diff_replace: config
when: ansible_network_os == 'community.network.weos4'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Dell OS9 Platform Options
=========================
The [dellemc.os9](https://github.com/ansible-collections/dellemc.os9) collection supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on OS9 in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/dellos9.yml`](#example-cli-group-vars-dellos9-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/dellos9.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: dellemc.os9.os9
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (dellos9)
dellemc.os9.os9_config:
backup: yes
register: backup_dellos9_location
when: ansible_network_os == 'dellemc.os9.os9'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
RouterOS Platform Options
=========================
RouterOS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and only supports CLI connections today. `httpapi` modules may be added in future. This page offers details on how to use `ansible.netcommon.network_cli` on RouterOS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/routeros.yml`](#example-cli-group-vars-routeros-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported by RouterOS |
| Returned Data Format | `stdout[0].` |
RouterOS does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/routeros.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.routeros
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
* If you are getting timeout errors, you may want to add the `+cet1024w` suffix to your username, which disables console colors, enables “dumb” mode, tells RouterOS not to try detecting terminal capabilities, and sets the terminal width to 1024 columns (see the sketch after this list). See the article [Console login process](https://wiki.mikrotik.com/wiki/Manual:Console_login_process) in the MikroTik wiki for more information.
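A sketch of that suffix in `group_vars/routeros.yml` (the username is illustrative):
```
ansible_user: myuser+cet1024w
```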
### Example CLI task
```
- name: Display resource statistics (routeros)
community.network.routeros_command:
commands: /system resource print
register: routeros_resources
when: ansible_network_os == 'community.network.routeros'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
IOS-XR Platform Options
=======================
The [Cisco IOS-XR collection](https://galaxy.ansible.com/cisco/iosxr) supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI inventory `[iosxr:vars]`](#example-cli-inventory-iosxr-vars)
+ [Example CLI task](#example-cli-task)
* [Using NETCONF in Ansible](#using-netconf-in-ansible)
+ [Enabling NETCONF](#enabling-netconf)
+ [Example NETCONF inventory `[iosxr:vars]`](#example-netconf-inventory-iosxr-vars)
+ [Example NETCONF task](#example-netconf-task)
Connections available
---------------------
| | CLI | NETCONF only for modules `iosxr_banner`, `iosxr_interface`, `iosxr_logging`, `iosxr_system`, `iosxr_user` |
| --- | --- | --- |
| Protocol | SSH | XML over SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` | `ansible_connection: ansible.netcommon.netconf` |
| Enable Mode (Privilege Escalation) | not supported | not supported |
| Returned Data Format | Refer to individual module documentation | Refer to individual module documentation |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` or `ansible_connection: ansible.netcommon.netconf` instead.
Using CLI in Ansible
--------------------
### Example CLI inventory `[iosxr:vars]`
```
[iosxr:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.iosxr.iosxr
ansible_user=myuser
ansible_password=!vault...
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve IOS-XR version
cisco.iosxr.iosxr_command:
commands: show version
when: ansible_network_os == 'cisco.iosxr.iosxr'
```
Using NETCONF in Ansible
------------------------
### Enabling NETCONF
Before you can use NETCONF to connect to a switch, you must:
* install the `ncclient` python package on your control node(s) with `pip install ncclient`
* enable NETCONF on the Cisco IOS-XR device(s)
To enable NETCONF on a new switch via Ansible, use the `cisco.iosxr.iosxr_netconf` module through the CLI connection. Set up your platform-level variables just like in the CLI example above, then run a playbook task like this:
```
- name: Enable NETCONF
connection: ansible.netcommon.network_cli
cisco.iosxr.iosxr_netconf:
when: ansible_network_os == 'cisco.iosxr.iosxr'
```
Once NETCONF is enabled, change your variables to use the NETCONF connection.
### Example NETCONF inventory `[iosxr:vars]`
```
[iosxr:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=cisco.iosxr.iosxr
ansible_user=myuser
ansible_password=!vault |
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
### Example NETCONF task
```
- name: Configure hostname and domain-name
cisco.iosxr.iosxr_system:
hostname: iosxr01
domain_name: test.example.com
domain_search:
- ansible.com
- redhat.com
- cisco.com
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
FRR Platform Options
====================
The [FRR](https://galaxy.ansible.com/frr/frr) collection supports the `ansible.netcommon.network_cli` connection. This section provides details on how to use this connection for Free Range Routing (FRR).
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/frr.yml`](#example-cli-group-vars-frr-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported |
| Returned Data Format | `stdout[0].` |
Using CLI in Ansible
--------------------
### Example CLI `group_vars/frr.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: frr.frr.frr
ansible_user: frruser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* The `ansible_user` should be a part of the `frrvty` group and should have the default shell set to `/bin/vtysh`.
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Gather FRR facts
frr.frr.frr_facts:
gather_subset:
- config
- hardware
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
EXOS Platform Options
=====================
Extreme EXOS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/exos.yml`](#example-cli-group-vars-exos-yml)
+ [Example CLI task](#example-cli-task)
* [Using EXOS-API in Ansible](#using-exos-api-in-ansible)
+ [Example EXOS-API `group_vars/exos.yml`](#example-exos-api-group-vars-exos-yml)
+ [Example EXOS-API task](#example-exos-api-task)
Connections available
---------------------
| | CLI | EXOS-API |
| --- | --- | --- |
| Protocol | SSH | HTTP(S) |
| Credentials | uses SSH keys / SSH-agent if present accepts `-u myuser -k` if using password | uses HTTPS certificates if present |
| Indirect Access | via a bastion (jump host) | via a web proxy |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` | `ansible_connection: ansible.netcommon.httpapi` |
| Enable Mode (Privilege Escalation) | not supported by EXOS | not supported by EXOS |
| Returned Data Format | `stdout[0].` | `stdout[0].messages[0].` |
EXOS does not support `ansible_connection: local`. You must use `ansible_connection: ansible.netcommon.network_cli` or `ansible_connection: ansible.netcommon.httpapi`.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/exos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.exos
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve EXOS OS version
community.network.exos_command:
commands: show version
when: ansible_network_os == 'community.network.exos'
```
Using EXOS-API in Ansible
-------------------------
### Example EXOS-API `group_vars/exos.yml`
```
ansible_connection: ansible.netcommon.httpapi
ansible_network_os: community.network.exos
ansible_user: myuser
ansible_password: !vault...
proxy_env:
http_proxy: http://proxy.example.com:8080
```
* If you are accessing your host directly (not through a web proxy) you can remove the `proxy_env` configuration.
* If you are accessing your host through a web proxy using `https`, change `http_proxy` to `https_proxy`.
### Example EXOS-API task
```
- name: Retrieve EXOS OS version
  community.network.exos_command:
    commands: show version
  environment: "{{ proxy_env }}"
  when: ansible_network_os == 'community.network.exos'
```
In this example, the `proxy_env` variable defined in `group_vars` is passed to the `environment` option of the module used in the task.
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
CloudEngine OS Platform Options
===============================
CloudEngine CE OS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports multiple connections. This page offers details on how each connection works in Ansible and how to use it.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI inventory `[ce:vars]`](#example-cli-inventory-ce-vars)
+ [Example CLI task](#example-cli-task)
* [Using NETCONF in Ansible](#using-netconf-in-ansible)
+ [Enabling NETCONF](#enabling-netconf)
+ [Example NETCONF inventory `[ce:vars]`](#example-netconf-inventory-ce-vars)
+ [Example NETCONF task](#example-netconf-task)
* [Notes](#notes)
+ [Modules that work with `ansible.netcommon.network_cli`](#modules-that-work-with-ansible-netcommon-network-cli)
+ [Modules that work with `ansible.netcommon.netconf`](#modules-that-work-with-ansible-netcommon-netconf)
Connections available
---------------------
| | CLI | NETCONF |
| --- | --- | --- |
| Protocol | SSH | XML over SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` | `ansible_connection: ansible.netcommon.netconf` |
| Enable Mode (Privilege Escalation) | not supported by ce OS | not supported by ce OS |
| Returned Data Format | Refer to individual module documentation | Refer to individual module documentation |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.netconf` or `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI inventory `[ce:vars]`
```
[ce:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=community.network.ce
ansible_user=myuser
ansible_password=!vault...
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve CE OS version
community.network.ce_command:
commands: display version
when: ansible_network_os == 'community.network.ce'
```
Using NETCONF in Ansible
------------------------
### Enabling NETCONF
Before you can use NETCONF to connect to a switch, you must:
* install the `ncclient` python package on your control node(s) with `pip install ncclient`
* enable NETCONF on the CloudEngine OS device(s)
To enable NETCONF on a new switch using Ansible, use the `community.network.ce_config` module with the CLI connection. Set up your platform-level variables just like in the CLI example above, then run a playbook task like this:
```
- name: Enable NETCONF
connection: ansible.netcommon.network_cli
community.network.ce_config:
lines:
- snetconf server enable
when: ansible_network_os == 'community.network.ce'
```
Once NETCONF is enabled, change your variables to use the NETCONF connection.
### Example NETCONF inventory `[ce:vars]`
```
[ce:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=community.network.ce
ansible_user=myuser
ansible_password=!vault...
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
### Example NETCONF task
```
- name: Create a vlan with id 50 (ce)
community.network.ce_vlan:
vlan_id: 50
name: WEB
when: ansible_network_os == 'community.network.ce'
```
Notes
-----
### Modules that work with `ansible.netcommon.network_cli`
```
community.network.ce_acl_interface
community.network.ce_command
community.network.ce_config
community.network.ce_evpn_bgp
community.network.ce_evpn_bgp_rr
community.network.ce_evpn_global
community.network.ce_facts
community.network.ce_mlag_interface
community.network.ce_mtu
community.network.ce_netstream_aging
community.network.ce_netstream_export
community.network.ce_netstream_global
community.network.ce_netstream_template
community.network.ce_ntp_auth
community.network.ce_rollback
community.network.ce_snmp_contact
community.network.ce_snmp_location
community.network.ce_snmp_traps
community.network.ce_startup
community.network.ce_stp
community.network.ce_vxlan_arp
community.network.ce_vxlan_gateway
community.network.ce_vxlan_global
```
### Modules that work with `ansible.netcommon.netconf`
```
community.network.ce_aaa_server
community.network.ce_aaa_server_host
community.network.ce_acl
community.network.ce_acl_advance
community.network.ce_bfd_global
community.network.ce_bfd_session
community.network.ce_bfd_view
community.network.ce_bgp
community.network.ce_bgp_af
community.network.ce_bgp_neighbor
community.network.ce_bgp_neighbor_af
community.network.ce_dldp
community.network.ce_dldp_interface
community.network.ce_eth_trunk
community.network.ce_evpn_bd_vni
community.network.ce_file_copy
community.network.ce_info_center_debug
community.network.ce_info_center_global
community.network.ce_info_center_log
community.network.ce_info_center_trap
community.network.ce_interface
community.network.ce_interface_ospf
community.network.ce_ip_interface
community.network.ce_lacp
community.network.ce_link_status
community.network.ce_lldp
community.network.ce_lldp_interface
community.network.ce_mlag_config
community.network.ce_netconf
community.network.ce_ntp
community.network.ce_ospf
community.network.ce_ospf_vrf
community.network.ce_reboot
community.network.ce_sflow
community.network.ce_snmp_community
community.network.ce_snmp_target_host
community.network.ce_snmp_user
community.network.ce_static_route
community.network.ce_static_route_bfd
community.network.ce_switchport
community.network.ce_vlan
community.network.ce_vrf
community.network.ce_vrf_af
community.network.ce_vrf_interface
community.network.ce_vrrp
community.network.ce_vxlan_tunnel
community.network.ce_vxlan_vap
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ICX Platform Options
====================
ICX is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on ICX in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/icx.yml`](#example-cli-group-vars-icx-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
Using CLI in Ansible
--------------------
### Example CLI `group_vars/icx.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.icx
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Backup current switch config (icx)
community.network.icx_config:
backup: yes
register: backup_icx_location
when: ansible_network_os == 'community.network.icx'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
ENOS Platform Options
=====================
ENOS is part of the [community.network](https://galaxy.ansible.com/community/network) collection and supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on ENOS in Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/enos.yml`](#example-cli-group-vars-enos-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | supported: use `ansible_become: yes` with `ansible_become_method: enable` and `ansible_become_password:` |
| Returned Data Format | `stdout[0].` |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/enos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: community.network.enos
ansible_user: myuser
ansible_password: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve ENOS OS version
community.network.enos_command:
commands: show version
when: ansible_network_os == 'community.network.enos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
VyOS Platform Options
=====================
The [VyOS](https://galaxy.ansible.com/vyos/vyos) collection supports the `ansible.netcommon.network_cli` connection type. This page offers details on connection options to manage VyOS using Ansible.
* [Connections available](#connections-available)
* [Using CLI in Ansible](#using-cli-in-ansible)
+ [Example CLI `group_vars/vyos.yml`](#example-cli-group-vars-vyos-yml)
+ [Example CLI task](#example-cli-task)
Connections available
---------------------
| | CLI |
| --- | --- |
| Protocol | SSH |
| Credentials | uses SSH keys / SSH-agent if present; accepts `-u myuser -k` if using password |
| Indirect Access | via a bastion (jump host) |
| Connection Settings | `ansible_connection: ansible.netcommon.network_cli` |
| Enable Mode (Privilege Escalation) | not supported |
| Returned Data Format | Refer to individual module documentation |
The `ansible_connection: local` has been deprecated. Please use `ansible_connection: ansible.netcommon.network_cli` instead.
Using CLI in Ansible
--------------------
### Example CLI `group_vars/vyos.yml`
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: myuser
ansible_password: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
```
* If you are using SSH keys (including an ssh-agent) you can remove the `ansible_password` configuration.
* If you are accessing your host directly (not through a bastion/jump host) you can remove the `ansible_ssh_common_args` configuration.
* If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the `ProxyCommand` directive. To prevent secrets from leaking out (for example in `ps` output), SSH does not support providing passwords via environment variables.
### Example CLI task
```
- name: Retrieve VyOS version info
vyos.vyos.vyos_command:
commands: show version
when: ansible_network_os == 'vyos.vyos.vyos'
```
Warning
Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with [Ansible Vault](../../user_guide/vault#playbooks-vault).
See also
[Setting timeout options](../getting_started/network_connection_options#timeout-options)
Ansible Network FAQ
===================
* [How can I improve performance for network playbooks?](#how-can-i-improve-performance-for-network-playbooks)
+ [Consider `strategy: free` if you are running on multiple hosts](#consider-strategy-free-if-you-are-running-on-multiple-hosts)
+ [Execute `show running` only if you absolutely must](#execute-show-running-only-if-you-absolutely-must)
+ [Use `ProxyCommand` only if you absolutely must](#use-proxycommand-only-if-you-absolutely-must)
+ [Set `--forks` to match your needs](#set-forks-to-match-your-needs)
* [Why is my output sometimes replaced with `********`?](#why-is-my-output-sometimes-replaced-with)
* [Why do the `*_config` modules always return `changed=true` with abbreviated commands?](#why-do-the-config-modules-always-return-changed-true-with-abbreviated-commands)
How can I improve performance for network playbooks?
----------------------------------------------------
### Consider `strategy: free` if you are running on multiple hosts
The `strategy` plugin tells Ansible how to order multiple tasks on multiple hosts. [Strategy](../../plugins/strategy#strategy-plugins) is set at the playbook level.
The default strategy is `linear`. With strategy set to `linear`, Ansible waits until the current task has run on all hosts before starting the next task on any host. Ansible may have forks free, but will not use them until all hosts have completed the current task. If each task in your playbook must succeed on all hosts before you run the next task, use the `linear` strategy.
Using the `free` strategy, Ansible uses available forks to execute tasks on each host as quickly as possible. Even if an earlier task is still running on one host, Ansible executes later tasks on other hosts. The `free` strategy uses available forks more efficiently. If your playbook stalls on each task, waiting for one slow host, consider using `strategy: free` to boost overall performance.
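A minimal sketch of enabling the `free` strategy at the play level (the host group and task here are placeholders):

```
---
- hosts: routers
  gather_facts: no
  strategy: free
  tasks:
    - name: Gather version information as soon as a fork is available
      ansible.netcommon.cli_command:
        command: show version
```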
### Execute `show running` only if you absolutely must
The `show running` command is the most resource-intensive command to execute on a network device, because of the way queries are handled by the network OS. Using the command in your Ansible playbook will slow performance significantly, especially on large devices; repeating it will multiply the performance hit. If you have a playbook that checks the running config, then executes changes, then checks the running config again, you should expect that playbook to be very slow.
### Use `ProxyCommand` only if you absolutely must
Network modules support the use of a [proxy or jump host](network_debug_troubleshooting#network-delegate-to-vs-proxycommand) with the `ProxyCommand` parameter. However, when you use a jump host, Ansible must open a new SSH connection for every task, even if you are using a persistent connection type (`network_cli` or `netconf`). To maximize the performance benefits of the persistent connection types introduced in version 2.5, avoid using jump hosts whenever possible.
### Set `--forks` to match your needs
Every time Ansible runs a task, it forks its own process. The `--forks` parameter defines the number of concurrent tasks - if you retain the default setting, which is `--forks=5`, and you are running a playbook on 10 hosts, five of those hosts will have to wait until a fork is available. Of course, the more forks you allow, the more memory and processing power Ansible will use. Since most network tasks are run on the control host, this means your laptop can quickly become CPU- or memory-bound.
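As a sketch, you could raise the default either on the command line with `ansible-playbook --forks 20 site.yml` or persistently in `ansible.cfg` (the value here is arbitrary):

```
[defaults]
forks = 20
```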
Why is my output sometimes replaced with `********`?
----------------------------------------------------
Ansible replaces any string marked `no_log`, including passwords, with `********` in Ansible output. This is done by design, to protect your sensitive data. Most users are happy to have their passwords redacted. However, Ansible replaces every string that matches your password with `********`. If you use a common word for your password, this can be a problem. For example, if you choose `Admin` as your password, Ansible will replace every instance of the word `Admin` with `********` in your output. This may make your output harder to read. To avoid this problem, select a secure password that will not occur elsewhere in your Ansible output.
Why do the `*_config` modules always return `changed=true` with abbreviated commands?
-------------------------------------------------------------------------------------
When you issue commands directly on a network device, you can use abbreviated commands. For example, `int g1/0/11` and `interface GigabitEthernet1/0/11` do the same thing; `shut` and `shutdown` do the same thing. Ansible Network `*_command` modules work with abbreviations, because they run commands through the network OS.
When committing configuration, however, the network OS converts abbreviations into long-form commands. Whether you use `shut` or `shutdown` on `GigabitEthernet1/0/11`, the result in the configuration is the same: `shutdown`.
Ansible Network `*_config` modules compare the text of the commands you specify in `lines` to the text in the configuration. If you use `shut` in the `lines` section of your task, and the configuration reads `shutdown`, the module returns `changed=true` even though the configuration is already correct. Your task will update the configuration every time it runs.
To avoid this problem, use long-form commands with the `*_config` modules:
```
---
- hosts: all
gather_facts: no
tasks:
- cisco.ios.ios_config:
lines:
- shutdown
parents: interface GigabitEthernet1/0/11
```
Network Developer Guide
=======================
Welcome to the Developer Guide for Ansible Network Automation!
**Who should use this guide?**
If you want to extend Ansible for Network Automation by creating a module or plugin, this guide is for you. This guide is specific to networking. You should already be familiar with how to create, test, and document modules and plugins, as well as the prerequisites for getting your module or plugin accepted into the main Ansible repository. See the [Developer Guide](https://docs.ansible.com/ansible/latest/dev_guide/index.html#developer-guide) for details. Before you proceed, please read:
* How to [add a custom plugin or module locally](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html#developing-locally).
* How to figure out if [developing a module is the right approach](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#module-dev-should-you) for my use case.
* How to [set up my Python development environment](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup).
* How to [get started writing a module](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general).
Find the network developer task that best describes what you want to do:
* I want to [develop a network resource module](developing_resource_modules_network#developing-resource-modules).
* I want to [develop a network connection plugin](developing_plugins_network#developing-plugins-network).
* I want to [document my set of modules for a network platform](documenting_modules_network#documenting-modules-network).
If you prefer to read the entire guide, here’s a list of the pages in order.
* [Developing network resource modules](developing_resource_modules_network)
* [Developing network plugins](developing_plugins_network)
* [Documenting new network platforms](documenting_modules_network)
Developing network plugins
==========================
You can extend the existing network modules with custom plugins in your collection.
* [Network connection plugins](#network-connection-plugins)
* [Developing httpapi plugins](#developing-httpapi-plugins)
+ [Making requests](#making-requests)
+ [Authenticating](#authenticating)
+ [Error handling](#error-handling)
* [Developing NETCONF plugins](#developing-netconf-plugins)
* [Developing network\_cli plugins](#developing-network-cli-plugins)
* [Developing cli\_parser plugins in a collection](#developing-cli-parser-plugins-in-a-collection)
Network connection plugins
--------------------------
Each network connection plugin has a set of its own plugins which provide a specification of the connection for a particular set of devices. The specific plugin used is selected at runtime based on the value of the `ansible_network_os` variable assigned to the host. This variable should be set to the same value as the name of the plugin to be loaded. Thus, `ansible_network_os=nxos` will try to load a plugin in a file named `nxos.py`, so it is important to name the plugin in a way that will be sensible to users.
Public methods of these plugins may be called from a module or module\_utils with the connection proxy object just as other connection methods can. The following is a very simple example of using such a call in a module\_utils file so it may be shared with other modules.
```
from ansible.module_utils.connection import Connection
def get_config(module):
# module is your AnsibleModule instance.
connection = Connection(module._socket_path)
# You can now call any method (that doesn't start with '_') of the connection
# plugin or its platform-specific plugin
return connection.get_config()
```
Developing httpapi plugins
--------------------------
[httpapi plugins](../../plugins/httpapi#httpapi-plugins) serve as adapters for various HTTP(S) APIs for use with the `httpapi` connection plugin. They should implement a minimal set of convenience methods tailored to the API you are attempting to use.
Specifically, there are a few methods that the `httpapi` connection plugin expects to exist.
### Making requests
The `httpapi` connection plugin has a `send()` method, but an httpapi plugin needs a `send_request(self, data, **message_kwargs)` method as a higher-level wrapper to `send()`. This method should prepare requests by adding fixed values like common headers or URL root paths. This method may do more complex work such as turning data into formatted payloads, or determining which path or method to request. It may then also unpack responses to be more easily consumed by the caller.
```
from ansible.module_utils.six.moves.urllib.error import HTTPError
def send_request(self, data, path, method='POST'):
# Fixed headers for requests
headers = {'Content-Type': 'application/json'}
try:
response, response_content = self.connection.send(path, data, method=method, headers=headers)
except HTTPError as exc:
return exc.code, exc.read()
# handle_response (defined separately) will take the format returned by the device
# and transform it into something more suitable for use by modules.
# This may be JSON text to Python dictionaries, for example.
return handle_response(response_content)
```
### Authenticating
By default, all requests will authenticate with HTTP Basic authentication. If a request can return some kind of token to stand in place of HTTP Basic, the `update_auth(self, response, response_text)` method should be implemented to inspect responses for such tokens. If the token is meant to be included with the headers of each request, it is sufficient to return a dictionary which will be merged with the computed headers for each request. The default implementation of this method does exactly this for cookies. If the token is used in another way, say in a query string, you should instead save that token to an instance variable, where the `send_request()` method (above) can add it to each request.
```
def update_auth(self, response, response_text):
cookie = response.info().get('Set-Cookie')
if cookie:
return {'Cookie': cookie}
return None
```
If instead an explicit login endpoint needs to be requested to receive an authentication token, the `login(self, username, password)` method can be implemented to call that endpoint. If implemented, this method will be called once before requesting any other resources of the server. By default, it will also be attempted once when an HTTP 401 is returned from a request.
```
from ansible.errors import AnsibleAuthenticationFailure

def login(self, username, password):
login_path = '/my/login/path'
data = {'user': username, 'password': password}
response = self.send_request(data, path=login_path)
try:
# This is still sent as an HTTP header, so we can set our connection's _auth
# variable manually. If the token is returned to the device in another way,
# you will have to keep track of it another way and make sure that it is sent
# with the rest of the request from send_request()
self.connection._auth = {'X-api-token': response['token']}
except KeyError:
raise AnsibleAuthenticationFailure(message="Failed to acquire login token.")
```
Similarly, `logout(self)` can be implemented to call an endpoint to invalidate and/or release the current token, if such an endpoint exists. This will be automatically called when the connection is closed (and, by extension, when reset).
```
def logout(self):
logout_path = '/my/logout/path'
self.send_request(None, path=logout_path)
# Clean up tokens
self.connection._auth = None
```
### Error handling
The `handle_httperror(self, exception)` method can deal with status codes returned by the server. The return value indicates how the plugin will continue with the request:
* A value of `true` means that the request can be retried. This may be used to indicate a transient error, or one that has been resolved. For example, the default implementation will try to call `login()` when presented with a 401, and return `true` if successful.
* A value of `false` means that the plugin is unable to recover from this response. The status code will be raised as an exception to the calling module.
* Any other value will be taken as a nonfatal response from the request. This may be useful if the server returns error messages in the body of the response. Returning the original exception is usually sufficient in this case, as HTTPError objects have the same interface as a successful response.
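As a minimal sketch of these rules (the 401 handling here is an assumption, not the default implementation):

```
def handle_httperror(self, exception):
    # Retry after clearing a possibly-expired token on 401.
    if exception.code == 401 and self.connection._auth:
        # Clearing the stored token forces a fresh login() before the retry.
        self.connection._auth = None
        return True
    if 400 <= exception.code < 500:
        # Return the exception so the calling module can read the error
        # body just like a successful response.
        return exception
    # Anything else is fatal; the status code is raised to the caller.
    return False
```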
For example httpapi plugins, see the [source code for the httpapi plugins](https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/httpapi) included with Ansible Core.
Developing NETCONF plugins
--------------------------
The [netconf](https://docs.ansible.com/ansible/2.9/plugins/connection/netconf.html#netconf-connection "(in Ansible v2.9)") connection plugin provides a connection to remote devices over the `SSH NETCONF` subsystem. Network devices typically use this connection plugin to send and receive `RPC` calls over `NETCONF`.
The `netconf` connection plugin uses the `ncclient` Python library under the hood to initiate a NETCONF session with a NETCONF-enabled remote network device. `ncclient` also executes NETCONF RPC requests and receives responses. You must install `ncclient` on the local Ansible controller.
To use the `netconf` connection plugin for network devices that support standard NETCONF ([**RFC 6241**](https://tools.ietf.org/html/rfc6241.html)) operations such as `get`, `get-config`, `edit-config`, set `ansible_network_os=default`. You can use [netconf\_get](https://docs.ansible.com/ansible/2.9/modules/netconf_get_module.html#netconf-get-module "(in Ansible v2.9)"), [netconf\_config](https://docs.ansible.com/ansible/2.9/modules/netconf_config_module.html#netconf-config-module "(in Ansible v2.9)") and [netconf\_rpc](https://docs.ansible.com/ansible/2.9/modules/netconf_rpc_module.html#netconf-rpc-module "(in Ansible v2.9)") modules to talk to a NETCONF enabled remote host.
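For example, a task that retrieves the running configuration from such a device might look like this (a sketch; it assumes the connection variables are already set):

```
- name: Get the running configuration over NETCONF
  ansible.netcommon.netconf_get:
    source: running
```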
As a contributor and user, you should be able to use all the methods under the `NetconfBase` class if your device supports standard NETCONF. You can contribute a new plugin if the device you are working with has vendor-specific NETCONF RPCs. To support a vendor-specific NETCONF RPC, add the implementation in the network OS specific NETCONF plugin.
For Junos for example:
* See the vendor-specific Junos RPC methods implemented in `plugins/netconf/junos.py`.
* Set the value of `ansible_network_os` to the name of the netconf plugin file, that is `junos` in this case.
Developing network\_cli plugins
-------------------------------
The [network\_cli](https://docs.ansible.com/ansible/2.9/plugins/connection/network_cli.html#network-cli-connection "(in Ansible v2.9)") connection type uses `paramiko_ssh` under the hood which creates a pseudo terminal to send commands and receive responses. `network_cli` loads two platform specific plugins based on the value of `ansible_network_os`:
* Terminal plugin (for example `plugins/terminal/ios.py`) - Controls terminal parameters, such as setting the terminal length and width, disabling paging, and privilege escalation. It also defines the regexes that identify the command prompt and error prompts.
* [Cliconf Plugins](../../plugins/cliconf#cliconf-plugins) (for example, [ios cliconf](https://docs.ansible.com/ansible/2.9/plugins/cliconf/ios.html#ios-cliconf "(in Ansible v2.9)")) - Provides an abstraction layer for low level send and receive operations. For example, the `edit_config()` method ensures that the prompt is in `config` mode before executing configuration commands.
To contribute a new network operating system to work with the `network_cli` connection, implement the `cliconf` and `terminal` plugins for that network OS.
The plugins can reside in:
* Adjacent to playbook in folders
```
cliconf_plugins/
terminal_plugins/
```
* Roles
```
myrole/cliconf_plugins/
myrole/terminal_plugins/
```
* Collections
```
myorg/mycollection/plugins/terminal/
myorg/mycollection/plugins/cliconf/
```
The user can also set the [DEFAULT\_CLICONF\_PLUGIN\_PATH](../../reference_appendices/config#default-cliconf-plugin-path) to configure the `cliconf` plugin path.
After adding the `cliconf` and `terminal` plugins in the expected locations, users can:
* Use the [cli\_command](https://docs.ansible.com/ansible/2.9/modules/cli_command_module.html#cli-command-module "(in Ansible v2.9)") to run an arbitrary command on the network device.
* Use the [cli\_config](https://docs.ansible.com/ansible/2.9/modules/cli_config_module.html#cli-config-module "(in Ansible v2.9)") to implement configuration changes on the remote hosts without platform-specific modules.
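For example, once the plugins are in place, an arbitrary command can be run with a task like the following (a sketch; the command shown is a placeholder):

```
- name: Run a show command through the new terminal and cliconf plugins
  ansible.netcommon.cli_command:
    command: show version
```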
Developing cli\_parser plugins in a collection
----------------------------------------------
You can use `cli_parse` as an entry point for a cli\_parser plugin in your own collection.
The following sample shows the start of a custom cli\_parser plugin:
```
from ansible.module_utils._text import to_native
from ansible_collections.ansible.netcommon.plugins.module_utils.cli_parser.cli_parserbase import (
    CliParserBase,
)


class CliParser(CliParserBase):
    """ Sample cli_parser plugin """

    # Use the following extension when loading a template
    DEFAULT_TEMPLATE_EXTENSION = "txt"
    # Provide the contents of the template to the parse function
    PROVIDE_TEMPLATE_CONTENTS = True

    @staticmethod
    def myparser(text, template_contents):
        # parse the text using the template contents
        return {...}

    def parse(self, *_args, **kwargs):
        """ Standard entry point for a cli_parse parse execution

        :return: Errors or parsed text as structured data
        :rtype: dict

        :example:

        The parse function of a parser should return a dict:
            {"errors": [a list of errors]}
        or
            {"parsed": obj}
        """
        template_contents = kwargs["template_contents"]
        text = self._task_args.get("text")
        try:
            parsed = self.myparser(text, template_contents)
        except Exception as exc:
            msg = "Custom parser returned an error while parsing. Error: {err}"
            return {"errors": [msg.format(err=to_native(exc))]}
        return {"parsed": parsed}
```
The following task uses this custom cli\_parser plugin:
```
- name: Use a custom cli_parser
  ansible.netcommon.cli_parse:
    command: ls -l
    parser:
      name: my_organization.my_collection.custom_parser
```
To develop a custom plugin:

* Each cli\_parser plugin requires a `CliParser` class.
* Each cli\_parser plugin requires a `parse` function.
* Always return a dictionary with `errors` or `parsed`.
* Place the custom cli\_parser in the `plugins/cli_parsers` directory of the collection.
* See the [current cli\_parsers](https://github.com/ansible-collections/ansible.netcommon/tree/main/plugins/cli_parsers) for examples to follow.
See also
* [Parsing semi-structured text with Ansible](../user_guide/cli_parsing#cli-parsing)
Developing network resource modules
===================================
* [Understanding network and security resource modules](#understanding-network-and-security-resource-modules)
* [Developing network and security resource modules](#developing-network-and-security-resource-modules)
+ [Understanding the model and resource module builder](#understanding-the-model-and-resource-module-builder)
+ [Accessing the resource module builder](#accessing-the-resource-module-builder)
+ [Creating a model](#creating-a-model)
+ [Creating a collection scaffold from a resource model](#creating-a-collection-scaffold-from-a-resource-model)
* [Examples](#examples)
+ [Collection directory layout](#collection-directory-layout)
+ [Role directory layout](#role-directory-layout)
+ [Using the collection](#using-the-collection)
+ [Using the role](#using-the-role)
* [Resource module structure and workflow](#resource-module-structure-and-workflow)
* [Running `ansible-test sanity` and `tox` on resource modules](#running-ansible-test-sanity-and-tox-on-resource-modules)
* [Testing resource modules](#testing-resource-modules)
+ [Resource module integration tests](#resource-module-integration-tests)
+ [Unit test requirements](#unit-test-requirements)
* [Example: Unit testing Ansible network resource modules](#example-unit-testing-ansible-network-resource-modules)
+ [Using mock objects to unit test Ansible network resource modules](#using-mock-objects-to-unit-test-ansible-network-resource-modules)
+ [Mocking device data](#mocking-device-data)
Understanding network and security resource modules
---------------------------------------------------
Network and security devices separate configuration into sections (such as interfaces, VLANs, and so on) that apply to a network or security service. Ansible resource modules take advantage of this to allow users to configure subsections or resources within the device configuration. Resource modules provide a consistent experience across different network and security devices. For example, a network resource module may only update the configuration for a specific portion of the network interfaces, VLANs, ACLs, and so on for a network device. The resource module:
1. Fetches a piece of the configuration (fact gathering), for example, the interfaces configuration.
2. Converts the returned configuration into key-value pairs.
3. Places those key-value pairs into an internal agnostic structured data format.
Now that the configuration data is normalized, the user can update and modify the data and then use the resource module to send the configuration data back to the device. This results in a full round-trip configuration update without the need for manual parsing, data manipulation, and data model management.
The resource module has two top-level keys - `config` and `state`:
* `config` defines the resource configuration data model as key-value pairs. The type of the `config` option can be `dict` or `list of dict` based on the resource managed. That is, if the device has a single global configuration, it should be a `dict` (for example, a global LLDP configuration). If the device has multiple instances of configuration, it should be of type `list` with each element in the list of type `dict` (for example, interfaces configuration).
* `state` defines the action the resource module takes on the end device.
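For example, a task for an interfaces resource module sets `config` to a list of per-interface dictionaries and picks an action with `state` (this sketch follows the `vyos.vyos.vyos_interfaces` module; the option values are placeholders):

```
- name: Merge the provided configuration with the on-device configuration
  vyos.vyos.vyos_interfaces:
    config:
      - name: eth1
        description: Configured by Ansible
        enabled: true
    state: merged
```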
The `state` for a new resource module should support the following values (as applicable for the devices that support them):
merged
Ansible merges the on-device configuration with the provided configuration in the task.
replaced
Ansible replaces the on-device configuration subsection with the provided configuration subsection in the task.
overridden
Ansible overrides the on-device configuration for the resource with the provided configuration in the task. Use caution with this state as you could remove your access to the device (for example, by overriding the management interface configuration).
deleted
Ansible deletes the on-device configuration subsection and restores any default settings.
gathered
Ansible displays the resource details gathered from the network device and accessed with the `gathered` key in the result.
rendered
Ansible renders the provided configuration in the task in the device-native format (for example, Cisco IOS CLI). Ansible returns this rendered configuration in the `rendered` key in the result. Note this state does not communicate with the network device and can be used offline.
parsed
Ansible parses the configuration from the `running_configuration` option into Ansible structured data in the `parsed` key in the result. Note this does not gather the configuration from the network device so this state can be used offline.
Modules in Ansible-maintained collections must support these state values. If you develop a module with only “present” and “absent” for state, you may submit it to a community collection.
Note
The states `rendered`, `gathered`, and `parsed` do not perform any change on the device.
See also
[Deep Dive on VLANs Resource Modules for Network Automation](https://www.ansible.com/blog/deep-dive-on-vlans-resource-modules-for-network-automation)
Walkthrough of how state values are implemented for VLANs.
Developing network and security resource modules
------------------------------------------------
The Ansible Engineering team ensures that the module design and code patterns within Ansible-maintained collections are uniform across resources and across platforms to give a vendor-agnostic feel and deliver good quality code. We recommend you use the [resource module builder](https://github.com/ansible-network/resource_module_builder) to develop a resource module.
The high-level process for developing a resource module is:
1. Create and share a resource model design in the [resource module models repository](https://github.com/ansible-network/resource_module_models) as a PR for review.
2. Download the latest version of the [resource module builder](https://github.com/ansible-network/resource_module_builder).
3. Run the `resource module builder` to create a collection scaffold from your approved resource model.
4. Write the code to implement your resource module.
5. Develop integration and unit tests to verify your resource module.
6. Create a PR to the appropriate collection that you want to add your new resource module to. See [Contributing to Ansible-maintained Collections](https://docs.ansible.com/ansible/latest/community/contributing_maintained_collections.html#contributing-maintained-collections) for details on determining the correct collection for your module.
### Understanding the model and resource module builder
The resource module builder is an Ansible Playbook that helps developers scaffold and maintain an Ansible resource module. It uses a model as the single source of truth for the module. This model is a `yaml` file that is used for the module DOCUMENTATION section and the argument spec.
The resource module builder has the following capabilities:
* Uses a defined model to scaffold a resource module directory layout and initial class files.
* Scaffolds either an Ansible role or a collection.
* Subsequent uses of the resource module builder will only replace the module argspec and the file containing the module docstring.
* Allows you to store complex examples alongside the model in the same directory.
* Maintains the model as the source of truth for the module, using the resource module builder to update the source files as needed.
* Generates working sample modules for both `<network_os>_<resource>` and `<network_os>_facts`.
### Accessing the resource module builder
To access the resource module builder:
1. Clone the GitHub repository:
```
git clone https://github.com/ansible-network/resource_module_builder.git
```
2. Install the requirements:
```
pip install -r requirements.txt
```
### Creating a model
You must create a model for your new resource. The model is the single source of truth for both the argspec and docstring, keeping them in sync. Once your model is approved, you can use the resource module builder to generate three items based on the model:
* The scaffold for a new module
* The argspec for the new module
* The docstring for the new module
For any subsequent changes to the functionality, update the model first and use the resource module builder to update the module argspec and docstring.
For example, the resource module builder includes the `myos_interfaces.yml` sample in the `models` directory, as seen below:
```
---
GENERATOR_VERSION: '1.0'
NETWORK_OS: myos
RESOURCE: interfaces
COPYRIGHT: Copyright 2019 Red Hat
LICENSE: gpl-3.0.txt
DOCUMENTATION: |
module: myos_interfaces
version_added: 1.0.0
short_description: 'Manages <xxxx> attributes of <network_os> <resource>'
description: 'Manages <xxxx> attributes of <network_os> <resource>.'
author: Ansible Network Engineer
notes:
- 'Tested against <network_os> <version>'
options:
config:
description: The provided configuration
type: list
elements: dict
suboptions:
name:
type: str
description: The name of the <resource>
some_string:
type: str
description:
- The some_string_01
choices:
- choice_a
- choice_b
- choice_c
default: choice_a
some_bool:
description:
- The some_bool.
type: bool
some_int:
description:
- The some_int.
type: int
version_added: '1.1.0'
some_dict:
type: dict
description:
- The some_dict.
suboptions:
property_01:
description:
- The property_01
type: str
state:
description:
- The state of the configuration after module completion.
type: str
choices:
- merged
- replaced
- overridden
- deleted
default: merged
EXAMPLES:
- deleted_example_01.txt
- merged_example_01.txt
- overridden_example_01.txt
- replaced_example_01.txt
```
Notice that you should include examples for each of the states that the resource supports. The resource module builder also includes these in the sample model.
Share this model as a PR for review at [resource module models repository](https://github.com/ansible-network/resource_module_models). You can also see more model examples at that location.
### Creating a collection scaffold from a resource model
To use the resource module builder to create a collection scaffold from your approved resource model:
```
ansible-playbook -e rm_dest=<destination for modules and module utils> \
-e structure=collection \
-e collection_org=<collection_org> \
-e collection_name=<collection_name> \
-e model=<model> \
site.yml
```
Where the parameters are as follows:
* `rm_dest`: The directory where the resource module builder places the files and directories for the resource module and facts modules.
* `structure`: The directory layout type (role or collection)
+ `role`: Generate a role directory layout.
+ `collection`: Generate a collection directory layout.
* `collection_org`: The organization of the collection, required when `structure=collection`.
* `collection_name`: The name of the collection, required when `structure=collection`.
* `model`: The path to the model file.
To use the resource module builder to create a role scaffold:
```
ansible-playbook -e rm_dest=<destination for modules and module utils> \
-e structure=role \
-e model=<model> \
site.yml
```
Examples
--------
### Collection directory layout
This example shows the directory layout for the following:
* `network_os`: myos
* `resource`: interfaces
```
ansible-playbook -e rm_dest=~/github/rm_example \
-e structure=collection \
-e collection_org=cidrblock \
-e collection_name=my_collection \
-e model=models/myos/interfaces/myos_interfaces.yml \
site.yml
```
```
├── docs
├── LICENSE.txt
├── playbooks
├── plugins
| ├── action
| ├── filter
| ├── inventory
| ├── modules
| | ├── __init__.py
| | ├── myos_facts.py
| | └── myos_interfaces.py
| └── module_utils
| ├── __init__.py
| └── network
| ├── __init__.py
| └── myos
| ├── argspec
| | ├── facts
| | | ├── facts.py
| | | └── __init__.py
| | ├── __init__.py
| | └── interfaces
| | ├── __init__.py
| | └── interfaces.py
| ├── config
| | ├── __init__.py
| | └── interfaces
| | ├── __init__.py
| | └── interfaces.py
| ├── facts
| | ├── facts.py
| | ├── __init__.py
| | └── interfaces
| | ├── __init__.py
| | └── interfaces.py
| ├── __init__.py
| └── utils
| ├── __init__.py
| └── utils.py
├── README.md
└── roles
```
### Role directory layout
This example displays the role directory layout for the following:
* `network_os`: myos
* `resource`: interfaces
```
ansible-playbook -e rm_dest=~/github/rm_example/roles/my_role \
-e structure=role \
-e model=models/myos/interfaces/myos_interfaces.yml \
site.yml
```
```
roles
└── my_role
├── library
│ ├── __init__.py
│ ├── myos_facts.py
│ └── myos_interfaces.py
├── LICENSE.txt
├── module_utils
│ ├── __init__.py
│ └── network
│ ├── __init__.py
│ └── myos
│ ├── argspec
│ │ ├── facts
│ │ │ ├── facts.py
│ │ │ └── __init__.py
│ │ ├── __init__.py
│ │ └── interfaces
│ │ ├── __init__.py
│ │ └── interfaces.py
│ ├── config
│ │ ├── __init__.py
│ │ └── interfaces
│ │ ├── __init__.py
│ │ └── interfaces.py
│ ├── facts
│ │ ├── facts.py
│ │ ├── __init__.py
│ │ └── interfaces
│ │ ├── __init__.py
│ │ └── interfaces.py
│ ├── __init__.py
│ └── utils
│ ├── __init__.py
│ └── utils.py
└── README.md
```
### Using the collection
This example shows how to use the generated collection in a playbook:
```
---
- hosts: myos101
gather_facts: False
tasks:
- cidrblock.my_collection.myos_interfaces:
register: result
- debug:
var: result
- cidrblock.my_collection.myos_facts:
- debug:
var: ansible_network_resources
```
### Using the role
This example shows how to use the generated role in a playbook:
```
- hosts: myos101
gather_facts: False
roles:
- my_role
- hosts: myos101
gather_facts: False
tasks:
- myos_interfaces:
register: result
- debug:
var: result
- myos_facts:
- debug:
var: ansible_network_resources
```
Resource module structure and workflow
--------------------------------------
The resource module structure includes the following components:
Module
* `library/<ansible_network_os>_<resource>.py`.
* Imports the `module_utils` resource package and calls `execute_module` API:
```
def main():
result = <resource_package>(module).execute_module()
```
Module argspec
* `module_utils/<ansible_network_os>/argspec/<resource>/`.
* Argspec for the resource.
Facts
* `module_utils/<ansible_network_os>/facts/<resource>/`.
* Populate facts for the resource.
* Entry in `module_utils/<ansible_network_os>/facts/facts.py` for `get_facts` API to keep `<ansible_network_os>_facts` module and facts gathered for the resource module in sync for every subset.
* Entry for the resource subset in the `FACTS_RESOURCE_SUBSETS` map in `module_utils/<ansible_network_os>/facts/facts.py` to make facts collection work (see the sketch after this list).
Module package in module\_utils
* `module_utils/<ansible_network_os>/<config>/<resource>/`.
* Implement `execute_module` API that loads the configuration to device and generates the result with `changed`, `commands`, `before` and `after` keys.
* Call `get_facts` API that returns the `<resource>` configuration facts or return the difference if the device has onbox diff support.
* Compare facts gathered and given key-values if diff is not supported.
* Generate final configuration.
Utils
* `module_utils/<ansible_network_os>/utils`.
* Utilities for the `<ansible_network_os>` platform.
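As an illustration of the facts registration step above, a hypothetical entry in `facts.py` for the `myos` scaffold shown earlier might look like this (the collection namespace and import path are placeholders):

```
# Hypothetical snippet from module_utils/network/myos/facts/facts.py
from ansible_collections.myorg.my_collection.plugins.module_utils.network.myos.facts.interfaces.interfaces import (
    InterfacesFacts,
)

# Each resource registers its facts class here so that the
# myos_facts module and the resource module stay in sync.
FACTS_RESOURCE_SUBSETS = dict(
    interfaces=InterfacesFacts,
)
```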
Running `ansible-test sanity` and `tox` on resource modules
-----------------------------------------------------------
You should run `ansible-test sanity` and `tox -elinters` from the collection root directory before pushing your PR to an Ansible-maintained collection. The CI runs both and will fail if these tests fail. See [Testing Ansible](https://docs.ansible.com/ansible/latest/dev_guide/testing.html#developing-testing) for details on `ansible-test sanity`.
To install the necessary packages:
1. Ensure you have a valid Ansible development environment configured. See [Preparing an environment for developing Ansible modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#environment-setup) for details.
2. Run `pip install -r requirements.txt` from the collection root directory.
Running `tox -elinters`:
* Reads `tox.ini` from the collection root directory and installs required dependencies (such as `black` and `flake8`).
* Runs these with preconfigured options (such as line-length and ignores.)
* Runs `black` in check mode to show which files will be formatted without actually formatting them.
Testing resource modules
------------------------
The tests rely on a role generated by the resource module builder. After changes to the resource module builder, the role should be regenerated and the tests modified and run as needed. To generate the role after changes:
```
rm -rf rmb_tests/roles/my_role
ansible-playbook -e rm_dest=./rmb_tests/roles/my_role \
-e structure=role \
-e model=models/myos/interfaces/myos_interfaces.yml \
site.yml
```
### Resource module integration tests
High-level integration test requirements for new resource modules are as follows:
1. Write a test case for every state.
2. Write additional test cases to test the behavior of the module when an empty `config.yaml` is given.
3. Add a round trip test case. This involves a `merge` operation, followed by `gather_facts`, a `merge` update with additional configuration, and then reverting back to the base configuration using the previously gathered facts with the `state` set to `overridden`.
4. Wherever applicable, assertions should check the `before` and `after` dicts against a hard-coded source of truth.
We use Zuul as the CI to run the integration test.
* To view the report, click Details on the CI comment in the PR
* To view a failure report, click ansible/check and select the failed test.
* To view logs while the test is running, check for your PR number in the [Zuul status board](https://dashboard.zuul.ansible.com/t/ansible/status).
* To fix a static test failure locally, run `tox -e black` inside the root folder of the collection.
To view the Ansible run logs and debug test failures:
1. Click the failed job to get the summary, and click Logs for the log.
2. Click console and scroll down to find the failed test.
3. Click > next to the failed test for complete details.
#### Integration test structure
Each test case should generally follow this pattern:
* Setup -> test -> assert -> test again (for idempotency) -> assert -> tear down (if needed) -> done. This keeps test playbooks from becoming monolithic and difficult to troubleshoot. A minimal skeleton of this pattern is shown after this list.
* Include a name for each task that is not an assertion. You can add names to assertions as well, but it is easier to identify the broken task within a failed test if you add a name for each task.
* Files containing test cases must end in `.yaml`.
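The following sketch applies the pattern above to the `myos` placeholder platform from earlier sections (module name and config values are assumptions):

```
---
- debug:
    msg: START myos_interfaces merged test on connection={{ ansible_connection }}

- name: Merge the provided configuration (setup and test)
  register: result
  myos_interfaces: &merged
    config:
      - name: eth0
        description: Configured by Ansible
    state: merged

- name: Assert that the configuration changed
  assert:
    that:
      - result['changed'] == true

- name: Merge the provided configuration again (IDEMPOTENT)
  register: result
  myos_interfaces: *merged

- name: Assert that the previous task was idempotent
  assert:
    that:
      - result['changed'] == false
```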
#### Implementation
For platforms that support `connection: local` *and* `connection: network_cli` use the following guidance:
* Name the `targets/` directories after the module name.
* The `main.yaml` file should just reference the transport.
The following example walks through the integration tests for the `vyos.vyos.vyos_l3_interfaces` module in the [vyos.vyos](https://github.com/ansible-collections/vyos.vyos/tree/master/tests/integration) collection:
`test/integration/targets/vyos_l3_interfaces/tasks/main.yaml`
```
---
- include: cli.yaml
tags:
- cli
```
`test/integration/targets/vyos_l3_interfaces/tasks/cli.yaml`
```
---
- name: collect all cli test cases
find:
paths: "{{ role_path }}/tests/cli"
patterns: "{{ testcase }}.yaml"
register: test_cases
delegate_to: localhost
- name: set test_items
set_fact: test_items="{{ test_cases.files | map(attribute='path') | list }}"
- name: run test cases (connection=network_cli)
include: "{{ test_case_to_run }} ansible_connection=network_cli"
with_items: "{{ test_items }}"
loop_control:
loop_var: test_case_to_run
- name: run test case (connection=local)
include: "{{ test_case_to_run }} ansible_connection=local ansible_become=no"
with_first_found: "{{ test_items }}"
loop_control:
loop_var: test_case_to_run
```
`test/integration/targets/vyos_l3_interfaces/tests/cli/overridden.yaml`
```
---
- debug:
    msg: START vyos_l3_interfaces overridden integration tests on connection={{ ansible_connection }}

- include_tasks: _remove_config.yaml

- block:
    - include_tasks: _populate.yaml

    - name: Overrides all device configuration with provided configuration
      register: result
      vyos.vyos.vyos_l3_interfaces: &id001
        config:
          - name: eth0
            ipv4:
              - address: dhcp
          - name: eth1
            ipv4:
              - address: 192.0.2.15/24
        state: overridden

    - name: Assert that before dicts were correctly generated
      assert:
        that:
          - "{{ populate | symmetric_difference(result['before']) | length == 0 }}"

    - name: Assert that correct commands were generated
      assert:
        that:
          - "{{ overridden['commands'] | symmetric_difference(result['commands']) | length == 0 }}"

    - name: Assert that after dicts were correctly generated
      assert:
        that:
          - "{{ overridden['after'] | symmetric_difference(result['after']) | length == 0 }}"

    - name: Overrides all device configuration with provided configurations (IDEMPOTENT)
      register: result
      vyos.vyos.vyos_l3_interfaces: *id001

    - name: Assert that the previous task was idempotent
      assert:
        that:
          - result['changed'] == false

    - name: Assert that before dicts were correctly generated
      assert:
        that:
          - "{{ overridden['after'] | symmetric_difference(result['before']) | length == 0 }}"

  always:
    - include_tasks: _remove_config.yaml
```
#### Detecting test resources at runtime
Your tests should detect resources (such as interfaces) at runtime rather than hard-coding them into the test. This allows the test to run on a variety of systems.
For example:
```
- name: Collect interface list
  connection: ansible.netcommon.network_cli
  register: intout
  cisco.nxos.nxos_command:
    commands:
      - show interface brief | json

- set_fact:
    intdataraw: "{{ intout.stdout_lines[0]['TABLE_interface']['ROW_interface'] }}"

- set_fact:
    nxos_int1: '{{ intdataraw[1].interface }}'

- set_fact:
    nxos_int2: '{{ intdataraw[2].interface }}'

- set_fact:
    nxos_int3: '{{ intdataraw[3].interface }}'
```
See the complete test example of this at <https://github.com/ansible-collections/cisco.nxos/blob/master/tests/integration/targets/prepare_nxos_tests/tasks/main.yml>.
#### Running network integration tests
Ansible uses Zuul to run an integration test suite on every PR, including new tests introduced by that PR. To find and fix problems in network modules, run the network integration test locally before you submit a PR.
First, create an inventory file that points to your test machines. The inventory group should match the platform name (for example, `eos`, `ios`):
```
cd test/integration
cp inventory.network.template inventory.networking
${EDITOR:-vi} inventory.networking
# Add in machines for the platform(s) you wish to test
```
To run these network integration tests, use `ansible-test network-integration --inventory </path/to/inventory> <tests_to_run>`:
```
ansible-test network-integration --inventory ~/myinventory -vvv vyos_facts
ansible-test network-integration --inventory ~/myinventory -vvv vyos_.*
```
To run all network tests for a particular platform:
```
ansible-test network-integration --inventory /path/to-collection-module/test/integration/inventory.networking vyos_.*
```
This example will run against all `vyos` modules. Note that `vyos_.*` is a regex match, not a bash wildcard - include the `.` if you modify this example.
To run integration tests for a specific module:
```
ansible-test network-integration --inventory /path/to-collection-module/test/integration/inventory.networking vyos_l3_interfaces
```
To run a single test case on a specific module:
```
# Only run vyos_l3_interfaces/tests/cli/gathered.yaml
ansible-test network-integration --inventory /path/to-collection-module/test/integration/inventory.networking vyos_l3_interfaces --testcase gathered
```
To run integration tests for a specific transport:
```
# Only run nxapi test
ansible-test network-integration --inventory /path/to-collection-module/test/integration/inventory.networking --tags="nxapi" nxos_.*
# Skip any cli tests
ansible-test network-integration --inventory /path/to-collection-module/test/integration/inventory.networking --skip-tags="cli" nxos_.*
```
See [test/integration/targets/nxos\_bgp/tasks/main.yaml](https://github.com/ansible-collections/cisco.nxos/blob/master/tests/integration/targets/nxos_bgp/tasks/main.yaml) for how this is implemented in the tests.
For more options:
```
ansible-test network-integration --help
```
If you need additional help or feedback, reach out in the `#ansible-network` IRC channel on the [irc.libera.chat](https://libera.chat/) IRC network.
### Unit test requirements
High-level unit test requirements that new resource modules should follow:
1. Write test cases for all the states with all possible combinations of config values.
2. Write test cases to test the error conditions (negative scenarios).
3. Check the value of `changed` and `commands` keys in every test case.
We run all unit test cases on our Zuul test suite, on the latest Python version supported by our CI setup.
Use the [same procedure](#using-zuul-resource-modules) as the integration tests to view Zuul unit test reports and logs.
See [unit module testing](https://docs.ansible.com/ansible/latest/dev_guide/testing_units_modules.html#testing-units-modules) for general unit test details.
Example: Unit testing Ansible network resource modules
------------------------------------------------------
This section walks through an example of how to develop unit tests for Ansible resource modules.
See [Unit Tests](https://docs.ansible.com/ansible/latest/dev_guide/testing_units.html#testing-units) and [Unit Testing Ansible Modules](https://docs.ansible.com/ansible/latest/dev_guide/testing_units_modules.html#testing-units-modules) for general documentation on Ansible unit tests for modules. Please read those pages first to understand unit tests and why and when you should use them.
### Using mock objects to unit test Ansible network resource modules
[Mock objects](https://docs.python.org/3/library/unittest.mock.html) can be very useful in building unit tests for special or difficult cases, but they can also lead to complex and confusing coding situations. One good use for mocks would be to simulate an API. The `mock` Python package is bundled with Ansible (use `import units.compat.mock`).
You can mock the device connection and output from the device as follows:
```
# These patches typically live in the setUp() method of the unit test class;
# patch comes from the mock package noted above.
self.mock_get_config = patch(
    "ansible_collections.ansible.netcommon.plugins.module_utils.network.common.network.Config.get_config"
)
self.get_config = self.mock_get_config.start()
self.mock_load_config = patch(
    "ansible_collections.ansible.netcommon.plugins.module_utils.network.common.network.Config.load_config"
)
self.load_config = self.mock_load_config.start()
self.mock_get_resource_connection_config = patch(
    "ansible_collections.ansible.netcommon.plugins.module_utils.network.common.cfg.base.get_resource_connection"
)
self.get_resource_connection_config = self.mock_get_resource_connection_config.start()
self.mock_get_resource_connection_facts = patch(
    "ansible_collections.ansible.netcommon.plugins.module_utils.network.common.facts.facts.get_resource_connection"
)
self.get_resource_connection_facts = self.mock_get_resource_connection_facts.start()
# The remaining patches mock the platform-specific (here: arista.eos) hooks.
self.mock_edit_config = patch(
    "ansible_collections.arista.eos.plugins.module_utils.network.eos.providers.providers.CliProvider.edit_config"
)
self.edit_config = self.mock_edit_config.start()
self.mock_execute_show_command = patch(
    "ansible_collections.arista.eos.plugins.module_utils.network.eos.facts.l2_interfaces.l2_interfaces.L2_interfacesFacts.get_device_data"
)
self.execute_show_command = self.mock_execute_show_command.start()
```
The facts file of the module now includes a new method, `get_device_data`. Call `get_device_data` here to emulate the device output.
### Mocking device data
To mock fetching results from devices or provide other complex data structures that come from external libraries, you can use `fixtures` to read in pre-generated data. The text files for this pre-generated data live in `test/units/modules/network/PLATFORM/fixtures/`. See for example the [eos\_l2\_interfaces.cfg file](https://github.com/ansible-collections/arista.eos/blob/master/tests/unit/modules/network/eos/fixtures/eos_l2_interfaces_config.cfg).
Load data using the `load_fixture` method and set this data as the return value of the `get_device_data` method in the facts file:
```
def load_fixtures(self, commands=None, transport='cli'):
    def load_from_file(*args, **kwargs):
        # Return canned device output from the fixture file instead of
        # contacting a real device.
        return load_fixture('eos_l2_interfaces_config.cfg')

    # The patched get_device_data (see above) now returns the fixture content.
    self.execute_show_command.side_effect = load_from_file
```
See the unit test file [test\_eos\_l2\_interfaces](https://github.com/ansible-collections/arista.eos/blob/master/tests/unit/modules/network/eos/test_eos_l2_interfaces.py) for a practical example.
See also
[Unit Tests](https://docs.ansible.com/ansible/latest/dev_guide/testing_units.html#testing-units)
Deep dive into developing unit tests for Ansible modules
[Testing Ansible](https://docs.ansible.com/ansible/latest/dev_guide/testing_running_locally.html#testing-running-locally)
Running tests locally including gathering and reporting coverage data
[Developing Ansible modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general)
Get started developing a module
Documenting new network platforms
=================================
* [Modifying the platform options table](#modifying-the-platform-options-table)
* [Adding a platform-specific options section](#adding-a-platform-specific-options-section)
* [Adding your new file to the table of contents](#adding-your-new-file-to-the-table-of-contents)
When you create network modules for a new platform, or modify the connections provided by an existing network platform (such as `network_cli` and `httpapi`), you also need to update the [Settings by Platform](../user_guide/platform_index#settings-by-platform) table and add or modify the Platform Options file for your platform.
You should already have documented each module as described in [Module format and documentation](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_documenting.html#developing-modules-documenting).
Modifying the platform options table
------------------------------------
The [Settings by Platform](../user_guide/platform_index#settings-by-platform) table is a convenient summary of the connections options provided by each network platform that has modules in Ansible. Add a row for your platform to this table, in alphabetical order. For example:
```
+-------------------+-------------------------+-------------+---------+---------+----------+
| My OS | ``myos`` | ✓ | ✓ | | ✓ |
```
Ensure that the table stays formatted correctly. That is:
* Each row is inserted in alphabetical order.
* The cell division `|` markers line up with the `+` markers.
* The check marks appear only for the connection types provided by the network modules.
Adding a platform-specific options section
------------------------------------------
The platform-specific sections are individual `.rst` files that provide more detailed information for the users of your network platform modules. Name your new file `platform_<name>.rst` (for example, `platform_myos.rst`). The platform name should match the module prefix. See [platform\_eos.rst](https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/network/user_guide/platform_eos.rst) and [EOS Platform Options](../user_guide/platform_eos#eos-platform-options) for an example of the details you should provide in your platform-specific options section.
Your platform-specific section should include the following:
* **Connections available table** - a deeper dive into each connection type, including details on credentials, indirect access, connections settings, and enable mode.
* **How to use each connection type** - with working examples of each connection type.
If your network platform supports SSH connections, also include the following at the bottom of your `.rst` file:
```
.. include:: shared_snippets/SSH_warning.txt
```
Adding your new file to the table of contents
---------------------------------------------
As a final step, add your new file in alphabetical order in the `platform_index.rst` file. You should then build the documentation to verify your additions. See [Contributing to the Ansible Documentation](https://docs.ansible.com/ansible/latest/community/documentation_contributions.html#community-documentation-contributions) for more details.
Beyond the basics
=================
This page introduces some concepts that help you manage your Ansible workflow with directory structure and source control. Like the Basic Concepts at the beginning of this guide, these intermediate concepts are common to all uses of Ansible.
* [A typical Ansible filetree](#a-typical-ansible-filetree)
* [Tracking changes to inventory and playbooks: source control with git](#tracking-changes-to-inventory-and-playbooks-source-control-with-git)
A typical Ansible filetree
--------------------------
Ansible expects to find certain files in certain places. As you expand your inventory and create and run more network playbooks, keep your files organized in your working Ansible project directory like this:
```
.
├── backup
│ ├── vyos.example.net_config.2018-02-08@11:10:15
│ ├── vyos.example.net_config.2018-02-12@08:22:41
├── first_playbook.yml
├── inventory
├── group_vars
│ ├── vyos.yml
│ └── eos.yml
├── roles
│ ├── static_route
│ └── system
├── second_playbook.yml
└── third_playbook.yml
```
The `backup` directory and the files in it get created when you run modules like `vyos_config` with the `backup: yes` parameter.
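For example, a task like the following (a minimal sketch; the configuration line is illustrative) creates those timestamped backup files before applying the change:
```
- name: Update the config, backing up the current one first
  vyos.vyos.vyos_config:
    backup: yes
    lines:
      - set system host-name vyos
```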
Tracking changes to inventory and playbooks: source control with git
--------------------------------------------------------------------
As you expand your inventory, roles and playbooks, you should place your Ansible projects under source control. We recommend `git` for source control. `git` provides an audit trail, letting you track changes, roll back mistakes, view history and share the workload of managing, maintaining and expanding your Ansible ecosystem. There are plenty of tutorials and guides to using `git` available.
Network Getting Started
=======================
Ansible collections support a wide range of vendors, device types, and actions, so you can manage your entire network with a single automation tool. With Ansible, you can:
* Automate repetitive tasks to speed routine network changes and free up your time for more strategic work
* Leverage the same simple, powerful, and agentless automation tool for network tasks that operations and development use
* Separate the data model (in a playbook or role) from the execution layer (via Ansible modules) to manage heterogeneous network devices
* Benefit from community and vendor-generated sample playbooks and roles to help accelerate network automation projects
* Communicate securely with network hardware over SSH or HTTPS
**Who should use this guide?**
This guide is intended for network engineers using Ansible for the first time. If you understand networks but have never used Ansible, work through the guide from start to finish.
This guide is also useful for experienced Ansible users automating network tasks for the first time. You can use Ansible commands, playbooks and modules to configure hubs, switches, routers, bridges and other network devices. But network modules are different from Linux/Unix and Windows modules, and you must understand some network-specific concepts to succeed. If you understand Ansible but have never automated a network task, start with the second section.
This guide introduces basic Ansible concepts and guides you through your first Ansible commands, playbooks and inventory entries.
Getting Started Guide
* [Basic Concepts](basic_concepts)
+ [Control node](basic_concepts#control-node)
+ [Managed nodes](basic_concepts#managed-nodes)
+ [Inventory](basic_concepts#inventory)
+ [Collections](basic_concepts#collections)
+ [Modules](basic_concepts#modules)
+ [Tasks](basic_concepts#tasks)
+ [Playbooks](basic_concepts#playbooks)
* [How Network Automation is Different](network_differences)
+ [Execution on the control node](network_differences#execution-on-the-control-node)
+ [Multiple communication protocols](network_differences#multiple-communication-protocols)
+ [Collections organized by network platform](network_differences#collections-organized-by-network-platform)
+ [Privilege Escalation: `enable` mode, `become`, and `authorize`](network_differences#privilege-escalation-enable-mode-become-and-authorize)
* [Run Your First Command and Playbook](first_playbook)
+ [Prerequisites](first_playbook#prerequisites)
+ [Install Ansible](first_playbook#install-ansible)
+ [Establish a manual connection to a managed node](first_playbook#establish-a-manual-connection-to-a-managed-node)
+ [Run your first network Ansible command](first_playbook#run-your-first-network-ansible-command)
+ [Create and run your first network Ansible Playbook](first_playbook#create-and-run-your-first-network-ansible-playbook)
+ [Gathering facts from network devices](first_playbook#gathering-facts-from-network-devices)
* [Build Your Inventory](first_inventory)
+ [Basic inventory](first_inventory#basic-inventory)
+ [Add variables to the inventory](first_inventory#add-variables-to-the-inventory)
+ [Group variables within inventory](first_inventory#group-variables-within-inventory)
+ [Variable syntax](first_inventory#variable-syntax)
+ [Group inventory by platform](first_inventory#group-inventory-by-platform)
+ [Verifying the inventory](first_inventory#verifying-the-inventory)
+ [Protecting sensitive variables with `ansible-vault`](first_inventory#protecting-sensitive-variables-with-ansible-vault)
* [Use Ansible network roles](network_roles)
+ [Understanding roles](network_roles#understanding-roles)
* [Beyond the basics](intermediate_concepts)
+ [A typical Ansible filetree](intermediate_concepts#a-typical-ansible-filetree)
+ [Tracking changes to inventory and playbooks: source control with git](intermediate_concepts#tracking-changes-to-inventory-and-playbooks-source-control-with-git)
* [Working with network connection options](network_connection_options)
+ [Setting timeout options](network_connection_options#setting-timeout-options)
* [Resources and next steps](network_resources)
+ [Documents](network_resources#documents)
+ [Events (on video and in person)](network_resources#events-on-video-and-in-person)
+ [GitHub repos](network_resources#github-repos)
+ [IRC and Slack](network_resources#irc-and-slack)
Use Ansible network roles
=========================
Roles are sets of Ansible defaults, files, tasks, templates, variables, and other Ansible components that work together. As you saw on [Run Your First Command and Playbook](first_playbook#first-network-playbook), moving from a command to a playbook makes it easy to run multiple tasks and repeat the same tasks in the same order. Moving from a playbook to a role makes it even easier to reuse and share your ordered tasks. You can look at [Ansible Galaxy](../../galaxy/user_guide#ansible-galaxy), which lets you share your roles and use others’ roles, either directly or as inspiration.
* [A sample DNS playbook](#a-sample-dns-playbook)
* [Convert the playbook into a role](#convert-the-playbook-into-a-role)
* [Variable precedence](#variable-precedence)
+ [Lowest precedence](#lowest-precedence)
+ [Highest precedence](#highest-precedence)
* [Update an installed role](#update-an-installed-role)
Understanding roles
-------------------
So what exactly is a role, and why should you care? Ansible roles are basically playbooks broken up into a known file structure. Moving to roles from a playbook makes sharing, reading, and updating your Ansible workflow easier. Users can write their own roles. So for example, you don’t have to write your own DNS playbook. Instead, you specify a DNS server and a role to configure it for you.
To simplify your workflow even further, the Ansible Network team has written a series of roles for common network use cases. Using these roles means you don’t have to reinvent the wheel. Instead of writing and maintaining your own `create_vlan` playbooks or roles, you can concentrate on designing, codifying and maintaining the parser templates that describe your network topologies and inventory, and let Ansible’s network roles do the work. See the [network-related roles](https://galaxy.ansible.com/ansible-network) on Ansible Galaxy.
### A sample DNS playbook
To demonstrate the concept of what a role is, the example `playbook.yml` below is a single YAML file containing a two-task playbook. This Ansible Playbook configures the hostname on a Cisco IOS XE device, then it configures the DNS (domain name system) servers.
```
---
- name: configure cisco routers
  hosts: routers
  connection: ansible.netcommon.network_cli
  gather_facts: no
  vars:
    dns: "8.8.8.8 8.8.4.4"
  tasks:
    - name: configure hostname
      cisco.ios.ios_config:
        lines: hostname {{ inventory_hostname }}
    - name: configure DNS
      cisco.ios.ios_config:
        lines: ip name-server {{ dns }}
```
If you run this playbook using the `ansible-playbook` command, you’ll see the output below. This example uses the `-l` option to limit the playbook to executing only on the **rtr1** node.
```
[user@ansible ~]$ ansible-playbook playbook.yml -l rtr1
PLAY [configure cisco routers] *************************************************
TASK [configure hostname] ******************************************************
changed: [rtr1]
TASK [configure DNS] ***********************************************************
changed: [rtr1]
PLAY RECAP *********************************************************************
rtr1 : ok=2 changed=2 unreachable=0 failed=0
```
This playbook configured the hostname and DNS servers. You can verify that configuration on the Cisco IOS XE **rtr1** router:
```
rtr1#sh run | i name
hostname rtr1
ip name-server 8.8.8.8 8.8.4.4
```
### Convert the playbook into a role
The next step is to convert this playbook into a reusable role. You can create the directory structure manually, or you can use `ansible-galaxy init` to create the standard framework for a role.
```
[user@ansible ~]$ ansible-galaxy init system-demo
[user@ansible ~]$ cd system-demo/
[user@ansible system-demo]$ tree
.
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
```
This first demonstration uses only the **tasks** and **vars** directories. The directory structure would look as follows:
```
[user@ansible system-demo]$ tree
.
├── tasks
│ └── main.yml
└── vars
└── main.yml
```
Next, move the content of the `vars` and `tasks` sections from the original Ansible Playbook into the role. First, move the two tasks into the `tasks/main.yml` file:
```
[user@ansible system-demo]$ cat tasks/main.yml
---
- name: configure hostname
  cisco.ios.ios_config:
    lines: hostname {{ inventory_hostname }}
- name: configure DNS
  cisco.ios.ios_config:
    lines: ip name-server {{ dns }}
```
Next, move the variables into the `vars/main.yml` file:
```
[user@ansible system-demo]$ cat vars/main.yml
---
dns: "8.8.8.8 8.8.4.4"
```
Finally, modify the original Ansible Playbook to remove the `tasks` and `vars` sections and add the keyword `roles` with the name of the role, in this case `system-demo`. You’ll have this playbook:
```
---
- name: configure cisco routers
  hosts: routers
  connection: ansible.netcommon.network_cli
  gather_facts: no
  roles:
    - system-demo
```
To summarize, this demonstration now has a total of three directories and three YAML files. There is the `system-demo` folder, which represents the role. This `system-demo` contains two folders, `tasks` and `vars`. There is a `main.yml` in each respective folder. The `vars/main.yml` contains the variables from `playbook.yml`. The `tasks/main.yml` contains the tasks from `playbook.yml`. The `playbook.yml` file has been modified to call the role rather than specifying vars and tasks directly. Here is a tree of the current working directory:
```
[user@ansible ~]$ tree
.
├── playbook.yml
└── system-demo
├── tasks
│ └── main.yml
└── vars
└── main.yml
```
Running the playbook results in identical behavior with slightly different output:
```
[user@ansible ~]$ ansible-playbook playbook.yml -l rtr1
PLAY [configure cisco routers] *************************************************
TASK [system-demo : configure hostname] ****************************************
ok: [rtr1]
TASK [system-demo : configure DNS] *********************************************
ok: [rtr1]
PLAY RECAP *********************************************************************
rtr1 : ok=2 changed=0 unreachable=0 failed=0
```
As seen above, each task is now prepended with the role name, in this case `system-demo`. When running a playbook that contains several roles, this helps pinpoint where a task is being called from. This playbook returned `ok` instead of `changed` because the device configuration already matches what the single-file playbook applied earlier.
As before, the playbook will generate the following configuration on a Cisco IOS-XE router:
```
rtr1#sh run | i name
hostname rtr1
ip name-server 8.8.8.8 8.8.4.4
```
This is why Ansible roles can be simply thought of as deconstructed playbooks. They are simple, effective and reusable. Now another user can simply include the `system-demo` role instead of having to create a custom “hard coded” playbook.
### Variable precedence
What if you want to change the DNS servers? You aren’t expected to change the `vars/main.yml` within the role structure. Ansible has many places where you can specify variables for a given play. See [Using Variables](../../user_guide/playbooks_variables#playbooks-variables) for details on variables and precedence. There are actually 21 places to put variables. While this list can seem overwhelming at first glance, the vast majority of use cases only involve knowing the spot for variables of least precedence and how to pass variables with most precedence. See [Variable precedence: Where should I put a variable?](../../user_guide/playbooks_variables#ansible-variable-precedence) for more guidance on where you should put variables.
#### Lowest precedence
The lowest precedence is the `defaults` directory within a role. This means that any of the other 20 locations where you could specify the variable will take higher precedence than `defaults`, no matter what. To immediately give the vars from the `system-demo` role the least precedence, rename the `vars` directory to `defaults`.
```
[user@ansible system-demo]$ mv vars defaults
[user@ansible system-demo]$ tree
.
├── defaults
│ └── main.yml
├── tasks
│ └── main.yml
```
Add a new `vars` section to the playbook to override the default behavior (where the variable `dns` is set to 8.8.8.8 and 8.8.4.4). For this demonstration, set `dns` to 1.1.1.1, so `playbook.yml` becomes:
```
---
- name: configure cisco routers
  hosts: routers
  connection: ansible.netcommon.network_cli
  gather_facts: no
  vars:
    dns: 1.1.1.1
  roles:
    - system-demo
```
Run this updated playbook on **rtr2**:
```
[user@ansible ~]$ ansible-playbook playbook.yml -l rtr2
```
The configuration on the **rtr2** Cisco router will look as follows:
```
rtr2#sh run | i name-server
ip name-server 1.1.1.1
```
The variable configured in the playbook now has precedence over the `defaults` directory. In fact, any other spot you configure variables would win over the values in the `defaults` directory.
#### Highest precedence
Specifying variables in the `defaults` directory within a role will always take the lowest precedence, while specifying `vars` as extra vars with the `-e` or `--extra-vars=` option will always take the highest precedence, no matter what. Re-running the playbook with the `-e` option overrides both the `defaults` directory (8.8.4.4 and 8.8.8.8) and the newly created `vars` within the playbook that contains the 1.1.1.1 DNS server.
```
[user@ansible ~]$ ansible-playbook playbook.yml -e "dns=192.168.1.1" -l rtr3
```
The result on the Cisco IOS XE router will only contain the highest precedence setting of 192.168.1.1:
```
rtr3#sh run | i name-server
ip name-server 192.168.1.1
```
How is this useful? Why should you care? Extra vars are commonly used by network operators to override defaults. A powerful example of this is the Job Template Survey feature on AWX or the [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform). Through the web UI, you can prompt a network operator to fill out parameters in a web form, which makes it simple for non-technical users to execute a playbook from their browser.
### Update an installed role
The Ansible Galaxy page for a role lists all available versions. To update a locally installed role to a new or different version, use the `ansible-galaxy install` command with the version and `--force` option. You may also need to manually update any dependent roles to support this version. See the role **Read Me** tab in Galaxy for dependent role minimum version requirements.
```
[user@ansible]$ ansible-galaxy install mynamespace.my_role,v2.7.1 --force
```
See also
[Ansible Galaxy documentation](https://galaxy.ansible.com/docs/)
Ansible Galaxy user guide
Build Your Inventory
====================
Running a playbook without an inventory requires several command-line flags. Also, running a playbook against a single device is not a huge efficiency gain over making the same change manually. The next step to harnessing the full power of Ansible is to use an inventory file to organize your managed nodes into groups with information like the `ansible_network_os` and the SSH user. A fully-featured inventory file can serve as the source of truth for your network. Using an inventory file, a single playbook can maintain hundreds of network devices with a single command. This page shows you how to build an inventory file, step by step.
* [Basic inventory](#basic-inventory)
* [Add variables to the inventory](#add-variables-to-the-inventory)
* [Group variables within inventory](#group-variables-within-inventory)
* [Variable syntax](#variable-syntax)
* [Group inventory by platform](#group-inventory-by-platform)
* [Verifying the inventory](#verifying-the-inventory)
* [Protecting sensitive variables with `ansible-vault`](#protecting-sensitive-variables-with-ansible-vault)
Basic inventory
---------------
First, group your inventory logically. Best practice is to group servers and network devices by their What (application, stack or microservice), Where (datacenter or region), and When (development stage):
* **What**: db, web, leaf, spine
* **Where**: east, west, floor\_19, building\_A
* **When**: dev, test, staging, prod
Avoid spaces, hyphens, and preceding numbers (use `floor_19`, not `19th_floor`) in your group names. Group names are case sensitive.
This tiny example data center illustrates a basic group structure. You can group groups using the syntax `[metagroupname:children]` and listing groups as members of the metagroup. Here, the group `network` includes all leafs and all spines; the group `datacenter` includes all network devices plus all webservers.
```
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
datacenter:
  children:
    network:
    webservers:
```
You can also create this same inventory in INI format.
```
[leafs]
leaf01
leaf02
[spines]
spine01
spine02
[network:children]
leafs
spines
[webservers]
webserver01
webserver02
[datacenter:children]
network
webservers
```
Add variables to the inventory
------------------------------
Next, you can set values for many of the variables you needed in your first Ansible command in the inventory, so you can skip them in the `ansible-playbook` command. In this example, the inventory includes each network device’s IP, OS, and SSH user. If your network devices are only accessible by IP, you must add the IP to the inventory file. If you access your network devices using hostnames, the IP is not necessary.
```
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
    leaf02:
      ansible_host: 10.16.10.12
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
    spine02:
      ansible_host: 10.16.10.14
      ansible_network_os: vyos.vyos.vyos
      ansible_user: my_vyos_user
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
      ansible_user: my_server_user
    webserver02:
      ansible_host: 10.16.10.16
      ansible_user: my_server_user
datacenter:
  children:
    network:
    webservers:
```
Group variables within inventory
--------------------------------
When devices in a group share the same variable values, such as OS or SSH user, you can reduce duplication and simplify maintenance by consolidating these into group variables:
```
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
  vars:
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
  vars:
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
network:
  children:
    leafs:
    spines:
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
  vars:
    ansible_user: my_server_user
datacenter:
  children:
    network:
    webservers:
```
Variable syntax
---------------
The syntax for variable values is different in inventory, in playbooks, and in the `group_vars` files, which are covered below. Even though playbook and `group_vars` files are both written in YAML, you use variables differently in each.
* In an ini-style inventory file you **must** use the syntax `key=value` for variable values: `ansible_network_os=vyos.vyos.vyos`.
* In any file with the `.yml` or `.yaml` extension, including playbooks and `group_vars` files, you **must** use YAML syntax: `key: value`.
* In `group_vars` files, use the full `key` name: `ansible_network_os: vyos.vyos.vyos` (see the sketch after this list).
* In playbooks, use the short-form `key` name, which drops the `ansible` prefix: `network_os: vyos.vyos.vyos`.
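For example, a minimal `group_vars/vyos.yml` file (the filename is illustrative; it applies to all hosts in a group named `vyos`) uses the full YAML form:
```
---
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
```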
Group inventory by platform
---------------------------
As your inventory grows, you may want to group devices by platform. This allows you to specify platform-specific variables easily for all devices on that platform:
```
---
leafs:
  hosts:
    leaf01:
      ansible_host: 10.16.10.11
    leaf02:
      ansible_host: 10.16.10.12
spines:
  hosts:
    spine01:
      ansible_host: 10.16.10.13
    spine02:
      ansible_host: 10.16.10.14
network:
  children:
    leafs:
    spines:
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
webservers:
  hosts:
    webserver01:
      ansible_host: 10.16.10.15
    webserver02:
      ansible_host: 10.16.10.16
  vars:
    ansible_user: my_server_user
datacenter:
  children:
    network:
    webservers:
```
With this setup, you can run `first_playbook.yml` with only two flags:
```
ansible-playbook -i inventory.yml -k first_playbook.yml
```
With the `-k` flag, you provide the SSH password(s) at the prompt. Alternatively, you can store SSH and other secrets and passwords securely in your group\_vars files with `ansible-vault`. See [Protecting sensitive variables with ansible-vault](#network-vault) for details.
Verifying the inventory
-----------------------
You can use the [ansible-inventory](../../cli/ansible-inventory#ansible-inventory) CLI command to display the inventory as Ansible sees it.
```
$ ansible-inventory -i test.yml --list
{
    "_meta": {
        "hostvars": {
            "leaf01": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.11",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "leaf02": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.12",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "spine01": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.13",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "spine02": {
                "ansible_connection": "ansible.netcommon.network_cli",
                "ansible_host": "10.16.10.14",
                "ansible_network_os": "vyos.vyos.vyos",
                "ansible_user": "my_vyos_user"
            },
            "webserver01": {
                "ansible_host": "10.16.10.15",
                "ansible_user": "my_server_user"
            },
            "webserver02": {
                "ansible_host": "10.16.10.16",
                "ansible_user": "my_server_user"
            }
        }
    },
    "all": {
        "children": [
            "datacenter",
            "ungrouped"
        ]
    },
    "datacenter": {
        "children": [
            "network",
            "webservers"
        ]
    },
    "leafs": {
        "hosts": [
            "leaf01",
            "leaf02"
        ]
    },
    "network": {
        "children": [
            "leafs",
            "spines"
        ]
    },
    "spines": {
        "hosts": [
            "spine01",
            "spine02"
        ]
    },
    "webservers": {
        "hosts": [
            "webserver01",
            "webserver02"
        ]
    }
}
```
Protecting sensitive variables with `ansible-vault`
---------------------------------------------------
The `ansible-vault` command provides encryption for files and/or individual variables like passwords. This tutorial will show you how to encrypt a single SSH password. You can use the commands below to encrypt other sensitive information, such as database passwords, privilege-escalation passwords and more.
First you must create a password for ansible-vault itself. It is used as the encryption key, and with this you can encrypt dozens of different passwords across your Ansible project. You can access all those secrets (encrypted values) with a single password (the ansible-vault password) when you run your playbooks. Here’s a simple example.
1. Create a file and write your password for ansible-vault to it:
```
echo "my-ansible-vault-pw" > ~/my-ansible-vault-pw-file
```
2. Create the encrypted ssh password for your VyOS network devices, pulling your ansible-vault password from the file you just created:
```
ansible-vault encrypt_string --vault-id my_user@~/my-ansible-vault-pw-file 'VyOS_SSH_password' --name 'ansible_password'
```
If you prefer to type your ansible-vault password rather than store it in a file, you can request a prompt:
```
ansible-vault encrypt_string --vault-id my_user@prompt 'VyOS_SSH_password' --name 'ansible_password'
```
and type in the vault password for `my_user`.
The [`--vault-id`](../../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) flag allows different vault passwords for different users or different levels of access. The output includes the user name `my_user` from your `ansible-vault` command and uses the YAML syntax `key: value`:
```
ansible_password: !vault |
          $ANSIBLE_VAULT;1.2;AES256;my_user
          66386134653765386232383236303063623663343437643766386435663632343266393064373933
          3661666132363339303639353538316662616638356631650a316338316663666439383138353032
          63393934343937373637306162366265383461316334383132626462656463363630613832313562
          3837646266663835640a313164343535316666653031353763613037656362613535633538386539
          65656439626166666363323435613131643066353762333232326232323565376635
Encryption successful
```
This is an example using an extract from a YAML inventory, as the INI format does not support inline vaults:
```
...
vyos: # this is a group in yaml inventory, but you can also do under a host
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user
    ansible_password: !vault |
          $ANSIBLE_VAULT;1.2;AES256;my_user
          66386134653765386232383236303063623663343437643766386435663632343266393064373933
          3661666132363339303639353538316662616638356631650a316338316663666439383138353032
          63393934343937373637306162366265383461316334383132626462656463363630613832313562
          3837646266663835640a313164343535316666653031353763613037656362613535633538386539
          65656439626166666363323435613131643066353762333232326232323565376635
...
```
To use inline vaulted variables with an INI inventory, you need to store them in a ‘vars’ file in YAML format. The file can reside in `host_vars/` or `group_vars/` to be picked up automatically, or you can reference it from a play through `vars_files` or `include_vars` (see the sketch below).
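For example, a minimal sketch of a `group_vars/vyos.yml` file paired with an INI inventory that contains a `[vyos]` group; the vault block is the abbreviated sample output from above, not a usable secret:
```
---
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
ansible_password: !vault |
          $ANSIBLE_VAULT;1.2;AES256;my_user
          66386134653765386232383236303063623663343437643766386435663632343266393064373933
          ...
```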
To run a playbook with this setup, drop the `-k` flag and add a flag for your `vault-id`:
```
ansible-playbook -i inventory --vault-id my_user@~/my-ansible-vault-pw-file first_playbook.yml
```
Or with a prompt instead of the vault password file:
```
ansible-playbook -i inventory --vault-id my_user@prompt first_playbook.yml
```
To see the original value, you can use the debug module. Note that if your YAML file defines the `ansible_connection` variable (as in our example), it will take effect when you execute the command below. To prevent this, make a copy of the file without the `ansible_connection` variable.
```
cat vyos.yml | grep -v ansible_connection >> vyos_no_connection.yml
ansible localhost -m debug -a var="ansible_password" -e "@vyos_no_connection.yml" --ask-vault-pass
Vault password:
localhost | SUCCESS => {
"ansible_password": "VyOS_SSH_password"
}
```
Warning
Vault content can only be decrypted with the password that was used to encrypt it. If you want to stop using one password and move to a new one, you can update and re-encrypt existing vault content with `ansible-vault rekey myfile`, then provide the old password and the new password. Copies of vault content still encrypted with the old password can still be decrypted with the old password.
For more details on building inventory files, see [the introduction to inventory](../../user_guide/intro_inventory#intro-inventory); for more details on ansible-vault, see [the full Ansible Vault documentation](../../user_guide/vault#vault).
Now that you understand the basics of commands, playbooks, and inventory, it’s time to explore some more complex Ansible Network examples.
Basic Concepts
==============
These concepts are common to all uses of Ansible, including network automation. You need to understand them to use Ansible for network automation. This basic introduction provides the background you need to follow the examples in this guide.
* [Control node](#control-node)
* [Managed nodes](#managed-nodes)
* [Inventory](#inventory)
* [Collections](#collections)
* [Modules](#modules)
* [Tasks](#tasks)
* [Playbooks](#playbooks)
Control node
------------
Any machine with Ansible installed. You can run Ansible commands and playbooks by invoking the `ansible` or `ansible-playbook` command from any control node. You can use any computer that has a Python installation as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Managed nodes
-------------
The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.
Inventory
---------
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can specify information like IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see [the Working with Inventory](../../user_guide/intro_inventory#intro-inventory) section.
Collections
-----------
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. You can install and use collections through [Ansible Galaxy](https://galaxy.ansible.com). To learn more about collections, see [Using collections](../../user_guide/collections_using#collections).
Modules
-------
The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. Starting in Ansible 2.10, modules are grouped in collections. For an idea of how many collections Ansible includes, take a look at the [Collection Index](../../collections/index#list-of-collections).
Tasks
-----
The units of action in Ansible. You can execute a single task once with an ad hoc command.
Playbooks
---------
Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand. To learn more about playbooks, see [Intro to playbooks](../../user_guide/playbooks_intro#about-playbooks).
Resources and next steps
========================
* [Documents](#documents)
* [Events (on video and in person)](#events-on-video-and-in-person)
* [GitHub repos](#github-repos)
* [IRC and Slack](#irc-and-slack)
Documents
---------
Read more about Ansible for Network Automation:
* Network Automation on the [Ansible website](https://www.ansible.com/overview/networking)
* Ansible Network [Blog posts](https://www.ansible.com/blog/topic/networks)
Events (on video and in person)
-------------------------------
All sessions at Ansible events are recorded and include many Network-related topics (use Filter by Category to view only Network topics). You can also join us for future events in your area. See:
* [Recorded AnsibleFests](https://www.ansible.com/resources/videos/ansiblefest)
* [Recorded AnsibleAutomates](https://www.ansible.com/resources/webinars-training)
* [Upcoming Ansible Events](https://www.ansible.com/community/events) page.
GitHub repos
------------
Ansible hosts module code, examples, demonstrations, and other content on GitHub. Anyone with a GitHub account is able to create Pull Requests (PRs) or issues on these repos:
* [Network-Automation](https://github.com/network-automation) is an open community for all things network automation. Have an idea, some playbooks, or roles to share? Email [email protected] and we will add you as a contributor to the repository.
* [Ansible collections](https://github.com/ansible-collections) is the main repository for Ansible-maintained and community collections, including collections for network devices.
IRC and Slack
-------------
Join us on:
* IRC Channel - `#ansible-network` on [irc.libera.chat](https://libera.chat/)
* Slack - <https://ansiblenetwork.slack.com>
Run Your First Command and Playbook
===================================
Put the concepts you learned to work with this quick tutorial. Install Ansible, execute a network configuration command manually, execute the same command with Ansible, then create a playbook so you can execute the command any time on multiple network devices.
* [Prerequisites](#prerequisites)
* [Install Ansible](#install-ansible)
* [Establish a manual connection to a managed node](#establish-a-manual-connection-to-a-managed-node)
* [Run your first network Ansible command](#run-your-first-network-ansible-command)
* [Create and run your first network Ansible Playbook](#create-and-run-your-first-network-ansible-playbook)
* [Gathering facts from network devices](#gathering-facts-from-network-devices)
Prerequisites
-------------
Before you work through this tutorial you need:
* Ansible 2.10 (or higher) installed
* One or more network devices that are compatible with Ansible
* Basic Linux command line knowledge
* Basic knowledge of network switch & router configuration
Install Ansible
---------------
Install Ansible using your preferred method. See [Installing Ansible](../../installation_guide/intro_installation#installation-guide). Then return to this tutorial.
Confirm the version of Ansible (must be >= 2.10):
```
ansible --version
```
Establish a manual connection to a managed node
-----------------------------------------------
To confirm your credentials, connect to a network device manually and retrieve its configuration. Replace the sample user and device name with your real credentials. For example, for a VyOS router:
```
ssh my_vyos_user@vyos.example.net
show config
exit
```
This manual connection also establishes the authenticity of the network device, adding its RSA key fingerprint to your list of known hosts. (If you have connected to the device before, you have already established its authenticity.)
Run your first network Ansible command
--------------------------------------
Instead of manually connecting and running a command on the network device, you can retrieve its configuration with a single, stripped-down Ansible command:
```
ansible all -i vyos.example.net, -c ansible.netcommon.network_cli -u my_vyos_user -k -m vyos.vyos.vyos_facts -e ansible_network_os=vyos.vyos.vyos
```
The flags in this command set seven values:
* the host group(s) to which the command should apply (in this case, all)
* the inventory (-i, the device or devices to target - without the trailing comma -i points to an inventory file)
* the connection method (-c, the method for connecting and executing ansible)
* the user (-u, the username for the SSH connection)
* the SSH connection method (-k, please prompt for the password)
* the module (-m, the Ansible module to run, using the fully qualified collection name (FQCN))
* an extra variable ( -e, in this case, setting the network OS value)
NOTE: If you use `ssh-agent` with ssh keys, Ansible loads them automatically. You can omit the `-k` flag.
Note
If you are running Ansible in a virtual environment, you will also need to add the variable `ansible_python_interpreter=/path/to/venv/bin/python`
Create and run your first network Ansible Playbook
--------------------------------------------------
If you want to run this command every day, you can save it in a playbook and run it with `ansible-playbook` instead of `ansible`. The playbook can store a lot of the parameters you provided with flags at the command line, leaving less to type at the command line. You need two files for this - a playbook and an inventory file.
1. Download [`first_playbook.yml`](../../_downloads/588d4b6e9316c8eb903fbe2485b14d64/first_playbook.yml), which looks like this:
```
---
- name: Network Getting Started First Playbook
  connection: ansible.netcommon.network_cli
  gather_facts: false
  hosts: all
  tasks:
    - name: Get config for VyOS devices
      vyos.vyos.vyos_facts:
        gather_subset: all
    - name: Display the config
      debug:
        msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
```
The playbook sets three of the seven values from the command line above: the group (`hosts: all`), the connection method (`connection: ansible.netcommon.network_cli`) and the module (in each task). With those values set in the playbook, you can omit them on the command line. The playbook also adds a second task to show the config output. When a module runs in a playbook, the output is held in memory for use by future tasks instead of written to the console. The debug task here lets you see the results in your shell.
2. Run the playbook with the command:
```
ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml
```
The playbook contains one play with two tasks, and should generate output like this:
```
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook.yml
PLAY [First Playbook]
***************************************************************************************************************************
TASK [Get config for VyOS devices]
***************************************************************************************************************************
ok: [vyos.example.net]
TASK [Display the config]
***************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The hostname is vyos and the OS is VyOS 1.1.8"
}
```
3. Now that you can retrieve the device config, try updating it with Ansible. Download [`first_playbook_ext.yml`](../../_downloads/47cc11a5d29fe635cb56cb6e1cd74e0f/first_playbook_ext.yml), which is an extended version of the first playbook:
```
---
- name: Network Getting Started First Playbook Extended
  connection: ansible.netcommon.network_cli
  gather_facts: false
  hosts: all
  tasks:
    - name: Get config for VyOS devices
      vyos.vyos.vyos_facts:
        gather_subset: all
    - name: Display the config
      debug:
        msg: "The hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
    - name: Update the hostname
      vyos.vyos.vyos_config:
        backup: yes
        lines:
          - set system host-name vyos-changed
    - name: Get changed config for VyOS devices
      vyos.vyos.vyos_facts:
        gather_subset: all
    - name: Display the changed config
      debug:
        msg: "The new hostname is {{ ansible_net_hostname }} and the OS is {{ ansible_net_version }}"
```
The extended first playbook has four tasks in a single play. Run it with the same command you used above. The output shows you the change Ansible made to the config:
```
$ ansible-playbook -i vyos.example.net, -u ansible -k -e ansible_network_os=vyos.vyos.vyos first_playbook_ext.yml
PLAY [First Playbook]
************************************************************************************************************************************
TASK [Get config for VyOS devices]
**********************************************************************************************************************************
ok: [vyos.example.net]
TASK [Display the config]
*************************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The hostname is vyos and the OS is VyOS 1.1.8"
}
TASK [Update the hostname]
*************************************************************************************************************************************
changed: [vyos.example.net]
TASK [Get changed config for VyOS devices]
*************************************************************************************************************************************
ok: [vyos.example.net]
TASK [Display the changed config]
*************************************************************************************************************************************
ok: [vyos.example.net] => {
"msg": "The new hostname is vyos-changed and the OS is VyOS 1.1.8"
}
PLAY RECAP
************************************************************************************************************************************
vyos.example.net : ok=5 changed=1 unreachable=0 failed=0
```
Gathering facts from network devices
------------------------------------
The `gather_facts` keyword now supports gathering network device facts in standardized key/value pairs. You can feed these network facts into further tasks to manage the network device.
You can also use the new `gather_network_resources` parameter with the network `*_facts` modules (such as [arista.eos.eos\_facts](../../collections/arista/eos/eos_facts_module#ansible-collections-arista-eos-eos-facts-module)) to return just a subset of the device configuration, as shown below.
```
- hosts: arista
  gather_facts: True
  gather_subset: interfaces
  module_defaults:
    arista.eos.eos_facts:
      gather_network_resources: interfaces
```
The playbook returns the following interface facts:
```
"network_resources": {
"interfaces": [
{
"description": "test-interface",
"enabled": true,
"mtu": "512",
"name": "Ethernet1"
},
{
"enabled": true,
"mtu": "3000",
"name": "Ethernet2"
},
{
"enabled": true,
"name": "Ethernet3"
},
{
"enabled": true,
"name": "Ethernet4"
},
{
"enabled": true,
"name": "Ethernet5"
},
{
"enabled": true,
"name": "Ethernet6"
},
]
}
```
Note that this returns a subset of what is returned by just setting `gather_subset: interfaces`.
You can store these facts and use them directly in another task, such as with the [eos\_interfaces](../../collections/arista/eos/eos_interfaces_module#ansible-collections-arista-eos-eos-interfaces-module) resource module.
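A minimal sketch of that handoff (the task name is illustrative, and it assumes the gathered subset is available under the `ansible_network_resources` fact) feeds the data straight back into the resource module:
```
- name: Re-apply the gathered interface configuration
  arista.eos.eos_interfaces:
    config: "{{ ansible_network_resources['interfaces'] }}"
    state: merged
```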
Working with network connection options
=======================================
Network modules can support multiple connection protocols, such as `ansible.netcommon.network_cli`, `ansible.netcommon.netconf`, and `ansible.netcommon.httpapi`. These connections include some common options you can set to control how the connection to your network device behaves.
Common options are:
* `become` and `become_method` as described in [Privilege Escalation: enable mode, become, and authorize](network_differences#privilege-escalation).
* `network_os` - set to match your network platform you are communicating with. See the [platform-specific](../user_guide/platform_index#platform-options) pages.
* `remote_user` as described in [Setting a remote user](../../user_guide/connection_details#connection-set-user).
* Timeout options - `persistent_command_timeout`, `persistent_connect_timeout`, and `timeout`.
Setting timeout options
-----------------------
When communicating with a remote device, you have control over how long Ansible maintains the connection to that device, as well as how long Ansible waits for a command to complete on that device. Each of these options can be set as variables in your playbook files, environment variables, or settings in your [ansible.cfg file](../../reference_appendices/config#ansible-configuration-settings).
For example, the following shows three ways to set the command timeout.
Using vars (per task):
```
- name: save running-config
cisco.ios.ios_command:
commands: copy running-config startup-config
vars:
ansible_command_timeout: 30
```
Using the environment variable:
```
$ export ANSIBLE_PERSISTENT_COMMAND_TIMEOUT=30
```
Using the global configuration (in `ansible.cfg`)
```
[persistent_connection]
command_timeout = 30
```
See [Variable precedence: Where should I put a variable?](../../user_guide/playbooks_variables#ansible-variable-precedence) for details on the relative precedence of each of these variables. See the individual connection type to understand each option.
ansible How Network Automation is Different How Network Automation is Different
===================================
Network automation leverages the basic Ansible concepts, but there are important differences in how the network modules work. This introduction prepares you to understand the exercises in this guide.
* [Execution on the control node](#execution-on-the-control-node)
* [Multiple communication protocols](#multiple-communication-protocols)
* [Collections organized by network platform](#collections-organized-by-network-platform)
* [Privilege Escalation: `enable` mode, `become`, and `authorize`](#privilege-escalation-enable-mode-become-and-authorize)
+ [Using `become` for privilege escalation](#using-become-for-privilege-escalation)
Execution on the control node
-----------------------------
Unlike most Ansible modules, network modules do not run on the managed nodes. From a user’s point of view, network modules work like any other modules. They work with ad hoc commands, playbooks, and roles. Behind the scenes, however, network modules use a different methodology than the other (Linux/Unix and Windows) modules use. Ansible is written and executed in Python. Because the majority of network devices cannot run Python, the Ansible network modules are executed on the Ansible control node, where `ansible` or `ansible-playbook` runs.
Network modules also use the control node as a destination for backup files, for those modules that offer a `backup` option. With Linux/Unix modules, where a configuration file already exists on the managed node(s), the backup file gets written by default in the same directory as the new, changed file. Network modules do not update configuration files on the managed nodes, because network configuration is not written in files. Network modules write backup files on the control node, usually in the `backup` directory under the playbook root directory.
Multiple communication protocols
--------------------------------
Because network modules execute on the control node instead of on the managed nodes, they can support multiple communication protocols. The communication protocol (XML over SSH, CLI over SSH, API over HTTPS) selected for each network module depends on the platform and the purpose of the module. Some network modules support only one protocol; some offer a choice. The most common protocol is CLI over SSH. You set the communication protocol with the `ansible_connection` variable:
| Value of ansible\_connection | Protocol | Requires | Persistent? |
| --- | --- | --- | --- |
| ansible.netcommon.network\_cli | CLI over SSH | network\_os setting | yes |
| ansible.netcommon.netconf | XML over SSH | network\_os setting | yes |
| ansible.netcommon.httpapi | API over HTTP/HTTPS | network\_os setting | yes |
| local | depends on provider | provider setting | no |
Note
`ansible.netcommon.httpapi` deprecates `eos_eapi` and `nxos_nxapi`. See [Httpapi Plugins](../../plugins/httpapi#httpapi-plugins) for details and an example.
The `ansible_connection: local` setting has been deprecated. Please use one of the persistent connection types listed above instead. With persistent connections, you can define the hosts and credentials only once, rather than in every task. You also need to set the `network_os` variable for the specific network platform you are communicating with. For more details on using each connection type on various platforms, see the [platform-specific](../user_guide/platform_index#platform-options) pages.
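For example, a minimal inventory sketch that sets a persistent connection and `network_os` once for a whole group (host names and addresses are illustrative):
```
[ios_routers]
rtr1 ansible_host=192.0.2.1

[ios_routers:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.ios.ios
```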
Collections organized by network platform
-----------------------------------------
A network platform is a set of network devices with a common operating system that can be managed by an Ansible collection, for example:
* Arista: [arista.eos](https://galaxy.ansible.com/arista/eos)
* Cisco: [cisco.ios](https://galaxy.ansible.com/cisco/ios), [cisco.iosxr](https://galaxy.ansible.com/cisco/iosxr), [cisco.nxos](https://galaxy.ansible.com/cisco/nxos)
* Juniper: [junipernetworks.junos](https://galaxy.ansible.com/junipernetworks/junos)
* VyOS: [vyos.vyos](https://galaxy.ansible.com/vyos/vyos)
All modules within a network platform share certain requirements. Some network platforms have specific differences - see the [platform-specific](../user_guide/platform_index#platform-options) documentation for details.
Privilege Escalation: `enable` mode, `become`, and `authorize`
--------------------------------------------------------------
Several network platforms support privilege escalation, where certain tasks must be done by a privileged user. On network devices this is called the `enable` mode (the equivalent of `sudo` in \*nix administration). Ansible network modules offer privilege escalation for those network devices that support it. For details of which platforms support `enable` mode, with examples of how to use it, see the [platform-specific](../user_guide/platform_index#platform-options) documentation.
### Using `become` for privilege escalation
Use the top-level Ansible parameter `become: yes` with `become_method: enable` to run a task, play, or playbook with escalated privileges on any network platform that supports privilege escalation. You must use either `connection: ansible.netcommon.network_cli` or `connection: ansible.netcommon.httpapi` together with `become: yes` and `become_method: enable`. If you are using `network_cli` to connect Ansible to your network devices, a `group_vars` file would look like:
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_become: yes
ansible_become_method: enable
```
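The same escalation can also be applied to a single task; a minimal sketch (the command shown is illustrative):
```
- name: show the running configuration (requires enable mode)
  cisco.ios.ios_command:
    commands: show running-config
  become: yes
  become_method: enable
```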
For more information, see [Become and Networks](../../user_guide/become#become-network)
ansible Playbook Keywords Playbook Keywords
=================
These are the keywords available on common playbook objects. Keywords are one of several sources for configuring Ansible behavior. See [Controlling how Ansible behaves: precedence rules](general_precedence#general-precedence-rules) for details on the relative precedence of each source.
Note
Please note:
* Aliases for the directives are not reflected here, nor are mutable ones. For example, [action](#term-action) in a task can be substituted by the name of any Ansible module.
* The keywords do not have `version_added` information at this time
* Some keywords set defaults for the objects inside of them rather than for the objects themselves
* [Play](#play)
* [Role](#role)
* [Block](#block)
* [Task](#task)
Play
----
`any_errors_fatal`
Force any un-handled task errors on any host to propagate to all hosts and end the play.
`become`
Boolean that controls if privilege escalation is used or not on [Task](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Task) execution. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_exe`
Path to the executable used to elevate privileges. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_flags`
A string of flag(s) to pass to the privilege escalation program when [become](#term-52) is True.
`become_method`
Which method of privilege escalation to use (such as sudo or su).
`become_user`
User that you ‘become’ after using privilege escalation. The remote/login user must have permissions to become this user.
`check_mode`
A boolean that controls if a task is executed in ‘check’ mode. See [Validating tasks: check mode and diff mode](../user_guide/playbooks_checkmode#check-mode-dry).
`collections`
List of collection namespaces to search for modules, plugins, and roles. See [Using collections in a Playbook](../user_guide/collections_using#collections-using-playbook)
Note
Tasks within a role do not inherit the value of `collections` from the play. To have a role search a list of collections, use the `collections` keyword in `meta/main.yml` within a role.
`connection`
Allows you to change the connection plugin used for tasks to execute on the target. See [Using connection plugins](../plugins/connection#using-connection).
`debugger`
Enable debugging tasks based on state of the task result. See [Debugging tasks](../user_guide/playbooks_debugger#playbook-debugger).
`diff`
Toggle to make tasks return ‘diff’ information or not.
`environment`
A dictionary that gets converted into environment variables to be provided for the task upon execution. This can ONLY be used with modules. It is not supported for any other type of plugin, nor for Ansible itself or its configuration; it just sets the variables for the code responsible for executing the task. This is not a recommended way to pass in confidential data.
`fact_path`
Set the fact path option for the fact gathering plugin controlled by [gather\_facts](#term-gather_facts).
`force_handlers`
Will force notified handler execution for hosts even if they failed during the play. Will not trigger if the play itself fails.
`gather_facts`
A boolean that controls if the play will automatically run the ‘setup’ task to gather facts for the hosts.
`gather_subset`
Allows you to pass subset options to the fact gathering plugin controlled by [gather\_facts](#term-gather_facts).
`gather_timeout`
Allows you to set the timeout for the fact gathering plugin controlled by [gather\_facts](#term-gather_facts).
`handlers`
A section with tasks that are treated as handlers; these won’t get executed normally, only when notified after each section of tasks is complete. A handler’s `listen` field is not templatable.
`hosts`
A list of groups, hosts, or host patterns that translates into a list of hosts that are the play’s target.
`ignore_errors`
Boolean that allows you to ignore task failures and continue with the play. It does not affect connection errors.
`ignore_unreachable`
Boolean that allows you to ignore task failures due to an unreachable host and continue with the play. This does not affect other task errors (see [ignore\_errors](#term-65)) but is useful for groups of volatile/ephemeral hosts.
`max_fail_percentage`
Can be used to abort the run after a given percentage of hosts in the current batch has failed. This only works on linear or linear-derived strategies.
`module_defaults`
Specifies default parameter values for modules.
`name`
Identifier. Can be used for documentation, or in tasks/handlers.
`no_log`
Boolean that controls information disclosure.
`order`
Controls the sorting of hosts as they are used for executing the play. Possible values are inventory (default), sorted, reverse\_sorted, reverse\_inventory and shuffle.
`port`
Used to override the default port used in a connection.
`post_tasks`
A list of tasks to execute after the [tasks](#term-tasks) section.
`pre_tasks`
A list of tasks to execute before [roles](#term-roles).
`remote_user`
User used to log into the target via the connection plugin.
`roles`
List of roles to be imported into the play
`run_once`
Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterwards apply any results and facts to all active hosts in the same batch.
`serial`
Explicitly define how Ansible batches the execution of the current play on the play’s target
See also
[Setting the batch size with serial](../user_guide/playbooks_strategies#rolling-update-batch-size)
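As a sketch, a play that rolls through its hosts three at a time (host pattern and task are illustrative):
```
- hosts: webservers
  serial: 3
  tasks:
    - name: restart the application in small batches
      ansible.builtin.service:
        name: myapp   # hypothetical service name
        state: restarted
```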
`strategy`
Allows you to choose the strategy plugin to use for the play.
`tags`
Tags applied to the task or included tasks; this allows selecting subsets of tasks from the command line.
`tasks`
Main list of tasks to execute in the play, they run after [roles](#term-roles) and before [post\_tasks](#term-post_tasks).
`throttle`
Limit number of concurrent task runs on task, block and playbook level. This is independent of the forks and serial settings, but cannot be set higher than those limits. For example, if forks is set to 10 and the throttle is set to 15, at most 10 hosts will be operated on in parallel.
`timeout`
Time limit for the task to execute in; if exceeded, Ansible will interrupt and fail the task.
`vars`
Dictionary/map of variables
`vars_files`
List of files that contain vars to include in the play.
`vars_prompt`
List of variables to prompt for.
Role
----
`any_errors_fatal`
Force any un-handled task errors on any host to propagate to all hosts and end the play.
`become`
Boolean that controls if privilege escalation is used or not on [Task](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Task) execution. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_exe`
Path to the executable used to elevate privileges. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_flags`
A string of flag(s) to pass to the privilege escalation program when [become](#term-52) is True.
`become_method`
Which method of privilege escalation to use (such as sudo or su).
`become_user`
User that you ‘become’ after using privilege escalation. The remote/login user must have permissions to become this user.
`check_mode`
A boolean that controls if a task is executed in ‘check’ mode. See [Validating tasks: check mode and diff mode](../user_guide/playbooks_checkmode#check-mode-dry).
`collections`
List of collection namespaces to search for modules, plugins, and roles. See [Using collections in a Playbook](../user_guide/collections_using#collections-using-playbook)
Note
Tasks within a role do not inherit the value of `collections` from the play. To have a role search a list of collections, use the `collections` keyword in `meta/main.yml` within a role.
`connection`
Allows you to change the connection plugin used for tasks to execute on the target. See [Using connection plugins](../plugins/connection#using-connection).
`debugger`
Enable debugging tasks based on state of the task result. See [Debugging tasks](../user_guide/playbooks_debugger#playbook-debugger).
`delegate_facts`
Boolean that allows you to apply facts to a delegated host instead of inventory\_hostname.
`delegate_to`
Host to execute task instead of the target (inventory\_hostname). Connection vars from the delegated host will also be used for the task.
`diff`
Toggle to make tasks return ‘diff’ information or not.
`environment`
A dictionary that gets converted into environment variables to be provided for the task upon execution. This can ONLY be used with modules. It is not supported for any other type of plugin, nor for Ansible itself or its configuration; it just sets the variables for the code responsible for executing the task. This is not a recommended way to pass in confidential data.
`ignore_errors`
Boolean that allows you to ignore task failures and continue with the play. It does not affect connection errors.
`ignore_unreachable`
Boolean that allows you to ignore task failures due to an unreachable host and continue with the play. This does not affect other task errors (see [ignore\_errors](#term-65)) but is useful for groups of volatile/ephemeral hosts.
`module_defaults`
Specifies default parameter values for modules.
`name`
Identifier. Can be used for documentation, or in tasks/handlers.
`no_log`
Boolean that controls information disclosure.
`port`
Used to override the default port used in a connection.
`remote_user`
User used to log into the target via the connection plugin.
`run_once`
Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterwards apply any results and facts to all active hosts in the same batch.
`tags`
Tags applied to the task or included tasks; this allows selecting subsets of tasks from the command line.
`throttle`
Limit number of concurrent task runs on task, block and playbook level. This is independent of the forks and serial settings, but cannot be set higher than those limits. For example, if forks is set to 10 and the throttle is set to 15, at most 10 hosts will be operated on in parallel.
`timeout`
Time limit for the task to execute in; if exceeded, Ansible will interrupt and fail the task.
`vars`
Dictionary/map of variables
`when`
Conditional expression that determines if an iteration of a task is run or not.
Block
-----
`always`
List of tasks, in a block, that execute no matter if there is an error in the block or not.
`any_errors_fatal`
Force any un-handled task errors on any host to propagate to all hosts and end the play.
`become`
Boolean that controls if privilege escalation is used or not on [Task](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Task) execution. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_exe`
Path to the executable used to elevate privileges. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_flags`
A string of flag(s) to pass to the privilege escalation program when [become](#term-52) is True.
`become_method`
Which method of privilege escalation to use (such as sudo or su).
`become_user`
User that you ‘become’ after using privilege escalation. The remote/login user must have permissions to become this user.
`block`
List of tasks in a block.
`check_mode`
A boolean that controls if a task is executed in ‘check’ mode. See [Validating tasks: check mode and diff mode](../user_guide/playbooks_checkmode#check-mode-dry).
`collections`
List of collection namespaces to search for modules, plugins, and roles. See [Using collections in a Playbook](../user_guide/collections_using#collections-using-playbook)
Note
Tasks within a role do not inherit the value of `collections` from the play. To have a role search a list of collections, use the `collections` keyword in `meta/main.yml` within a role.
`connection`
Allows you to change the connection plugin used for tasks to execute on the target. See [Using connection plugins](../plugins/connection#using-connection).
`debugger`
Enable debugging tasks based on state of the task result. See [Debugging tasks](../user_guide/playbooks_debugger#playbook-debugger).
`delegate_facts`
Boolean that allows you to apply facts to a delegated host instead of inventory\_hostname.
`delegate_to`
Host to execute task instead of the target (inventory\_hostname). Connection vars from the delegated host will also be used for the task.
`diff`
Toggle to make tasks return ‘diff’ information or not.
`environment`
A dictionary that gets converted into environment variables to be provided for the task upon execution. This can ONLY be used with modules. It is not supported for any other type of plugin, nor for Ansible itself or its configuration; it just sets the variables for the code responsible for executing the task. This is not a recommended way to pass in confidential data.
`ignore_errors`
Boolean that allows you to ignore task failures and continue with the play. It does not affect connection errors.
`ignore_unreachable`
Boolean that allows you to ignore task failures due to an unreachable host and continue with the play. This does not affect other task errors (see [ignore\_errors](#term-65)) but is useful for groups of volatile/ephemeral hosts.
`module_defaults`
Specifies default parameter values for modules.
`name`
Identifier. Can be used for documentation, or in tasks/handlers.
`no_log`
Boolean that controls information disclosure.
`notify`
List of handlers to notify when the task returns a ‘changed=True’ status.
`port`
Used to override the default port used in a connection.
`remote_user`
User used to log into the target via the connection plugin.
`rescue`
List of tasks in a [block](#term-block) that run if there is a task error in the main [block](#term-block) list.
`run_once`
Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterwards apply any results and facts to all active hosts in the same batch.
`tags`
Tags applied to the task or included tasks; this allows selecting subsets of tasks from the command line.
`throttle`
Limit number of concurrent task runs on task, block and playbook level. This is independent of the forks and serial settings, but cannot be set higher than those limits. For example, if forks is set to 10 and the throttle is set to 15, at most 10 hosts will be operated on in parallel.
`timeout`
Time limit for the task to execute in; if exceeded, Ansible will interrupt and fail the task.
`vars`
Dictionary/map of variables
`when`
Conditional expression that determines if an iteration of a task is run or not.
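Taken together, `block`, `rescue`, and `always` provide try/rescue/finally-style error handling. A minimal sketch (task contents are illustrative):
```
tasks:
  - block:
      - name: attempt the risky step
        ansible.builtin.command: /usr/bin/might-fail   # hypothetical command
    rescue:
      - name: runs only if a task in the block fails
        ansible.builtin.debug:
          msg: recovering from the failure
    always:
      - name: runs whether the block failed or not
        ansible.builtin.debug:
          msg: cleaning up
```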
Task
----
`action`
The ‘action’ to execute for a task; it normally translates into a module or action plugin.
`any_errors_fatal`
Force any un-handled task errors on any host to propagate to all hosts and end the play.
`args`
A secondary way to add arguments into a task. Takes a dictionary in which keys map to options and values.
`async`
Run a task asynchronously if the action supports this; the value is the maximum runtime in seconds.
`become`
Boolean that controls if privilege escalation is used or not on [Task](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Task) execution. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_exe`
Path to the executable used to elevate privileges. Implemented by the become plugin. See [Become Plugins](../plugins/become#become-plugins).
`become_flags`
A string of flag(s) to pass to the privilege escalation program when [become](#term-52) is True.
`become_method`
Which method of privilege escalation to use (such as sudo or su).
`become_user`
User that you ‘become’ after using privilege escalation. The remote/login user must have permissions to become this user.
`changed_when`
Conditional expression that overrides the task’s normal ‘changed’ status.
`check_mode`
A boolean that controls if a task is executed in ‘check’ mode. See [Validating tasks: check mode and diff mode](../user_guide/playbooks_checkmode#check-mode-dry).
`collections`
List of collection namespaces to search for modules, plugins, and roles. See [Using collections in a Playbook](../user_guide/collections_using#collections-using-playbook)
Note
Tasks within a role do not inherit the value of `collections` from the play. To have a role search a list of collections, use the `collections` keyword in `meta/main.yml` within a role.
`connection`
Allows you to change the connection plugin used for tasks to execute on the target. See [Using connection plugins](../plugins/connection#using-connection).
`debugger`
Enable debugging tasks based on state of the task result. See [Debugging tasks](../user_guide/playbooks_debugger#playbook-debugger).
`delay`
Number of seconds to delay between retries. This setting is only used in combination with [until](#term-until).
`delegate_facts`
Boolean that allows you to apply facts to a delegated host instead of inventory\_hostname.
`delegate_to`
Host to execute task instead of the target (inventory\_hostname). Connection vars from the delegated host will also be used for the task.
`diff`
Toggle to make tasks return ‘diff’ information or not.
`environment`
A dictionary that gets converted into environment variables to be provided for the task upon execution. This can ONLY be used with modules. It is not supported for any other type of plugin, nor for Ansible itself or its configuration; it just sets the variables for the code responsible for executing the task. This is not a recommended way to pass in confidential data.
`failed_when`
Conditional expression that overrides the task’s normal ‘failed’ status.
`ignore_errors`
Boolean that allows you to ignore task failures and continue with the play. It does not affect connection errors.
`ignore_unreachable`
Boolean that allows you to ignore task failures due to an unreachable host and continue with the play. This does not affect other task errors (see [ignore\_errors](#term-65)) but is useful for groups of volatile/ephemeral hosts.
`local_action`
Same as action but also implies `delegate_to: localhost`
`loop`
Takes a list for the task to iterate over, saving each list element into the `item` variable (configurable via loop\_control)
`loop_control`
Several keys here allow you to modify/set loop behaviour in a task.
See also
[Adding controls to loops](../user_guide/playbooks_loops#loop-control)
`module_defaults`
Specifies default parameter values for modules.
`name`
Identifier. Can be used for documentation, or in tasks/handlers.
`no_log`
Boolean that controls information disclosure.
`notify`
List of handlers to notify when the task returns a ‘changed=True’ status.
`poll`
Sets the polling interval in seconds for async tasks (default 10s).
`port`
Used to override the default port used in a connection.
`register`
Name of variable that will contain task status and module return data.
`remote_user`
User used to log into the target via the connection plugin.
`retries`
Number of retries before giving up in an [until](#term-until) loop. This setting is only used in combination with [until](#term-until).
`run_once`
Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterwards apply any results and facts to all active hosts in the same batch.
`tags`
Tags applied to the task or included tasks; this allows selecting subsets of tasks from the command line.
`throttle`
Limit number of concurrent task runs on task, block and playbook level. This is independent of the forks and serial settings, but cannot be set higher than those limits. For example, if forks is set to 10 and the throttle is set to 15, at most 10 hosts will be operated on in parallel.
`timeout`
Time limit for the task to execute in; if exceeded, Ansible will interrupt and fail the task.
`until`
This keyword implies a ‘[retries](#term-retries) loop’ that will go on until the condition supplied here is met or we hit the [retries](#term-retries) limit.
`vars`
Dictionary/map of variables
`when`
Conditional expression that determines if an iteration of a task is run or not.
`with_<lookup_plugin>`
The same as `loop` but magically adds the output of any lookup plugin to generate the item list.
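For instance, `until`, `retries`, and `delay` combine into a polling loop; a minimal sketch (URL and limits are illustrative):
```
- name: wait for the application to answer
  ansible.builtin.uri:
    url: http://localhost:8080/health   # hypothetical endpoint
  register: result
  until: result.status == 200
  retries: 10
  delay: 5
```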
ansible Releases and maintenance Releases and maintenance
========================
Please go to [the devel release and maintenance page](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) for up to date information.
Note
This link takes you to a different version of the Ansible documentation. Use the version selection on the left or your browser back button to return to this version of the documentation.
See also
[Committers Guidelines](https://docs.ansible.com/ansible/latest/community/committer_guidelines.html#community-committer-guidelines)
Guidelines for Ansible core contributors and maintainers
[Testing Strategies](test_strategies#testing-strategies)
Testing strategies
[Ansible Community Guide](https://docs.ansible.com/ansible/latest/community/index.html#ansible-community-guide)
Community information and contributing
[Development Mailing List](https://groups.google.com/group/ansible-devel)
Mailing list for development topics
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Ansible Automation Hub Ansible Automation Hub
======================
[Ansible Automation Hub](https://www.ansible.com/products/automation-hub) is the official location to discover and download supported [collections](../user_guide/collections_using#collections), included as part of an Ansible Automation Platform subscription. These content collections contain modules, plugins, roles, and playbooks in a downloadable package.
Ansible Automation Hub gives you direct access to trusted content collections from Red Hat and Certified Partners. You can find content by topic or Ansible Partner organizations.
Ansible Automation Hub is the downstream Red Hat supported product version of Ansible Galaxy. Find out more about Ansible Automation Hub features and how to access it at [Ansible Automation Hub](https://www.ansible.com/products/automation-hub). Ansible Automation Hub is part of the Red Hat Ansible Automation Platform subscription, and comes bundled with support from Red Hat, Inc.
ansible Special Variables Special Variables
=================
Magic variables
---------------
These variables cannot be set directly by the user; Ansible will always override them to reflect internal state.
ansible\_check\_mode
Boolean that indicates if we are in check mode or not
ansible\_config\_file
The full path of the Ansible configuration file in use
ansible\_dependent\_role\_names
The names of the roles currently imported into the current play as dependencies of other roles
ansible\_diff\_mode
Boolean that indicates if we are in diff mode or not
ansible\_forks
Integer reflecting the number of maximum forks available to this run
ansible\_inventory\_sources
List of sources used as inventory
ansible\_limit
Contents of the `--limit` CLI option for the current execution of Ansible
ansible\_loop
A dictionary/map containing extended loop information when enabled via `loop_control.extended`
ansible\_loop\_var
The name of the value provided to `loop_control.loop_var`. Added in `2.8`
ansible\_index\_var
The name of the value provided to `loop_control.index_var`. Added in `2.9`
ansible\_parent\_role\_names
When the current role is being executed by means of an [include\_role](../collections/ansible/builtin/include_role_module#include-role-module) or [import\_role](../collections/ansible/builtin/import_role_module#import-role-module) action, this variable contains a list of all parent roles, with the most recent role (in other words, the role that included/imported this role) being the first item in the list. When multiple inclusions occur, this list shows the *last* role (in other words, the role that included this role) as the *first* item in the list. It is also possible that a specific role exists more than once in this list.
For example: When role **A** includes role **B**, inside role B, `ansible_parent_role_names` will equal to `['A']`. If role **B** then includes role **C**, the list becomes `['B', 'A']`.
ansible\_parent\_role\_paths
When the current role is being executed by means of an [include\_role](../collections/ansible/builtin/include_role_module#include-role-module) or [import\_role](../collections/ansible/builtin/import_role_module#import-role-module) action, this variable contains a list of all parent roles, with the most recent role (in other words, the role that included/imported this role) being the first item in the list. Please refer to `ansible_parent_role_names` for the order of items in this list.
ansible\_play\_batch
List of active hosts in the current play run limited by the serial, aka ‘batch’. Failed/Unreachable hosts are not considered ‘active’.
ansible\_play\_hosts
List of hosts in the current play run, not limited by the serial. Failed/Unreachable hosts are excluded from this list.
ansible\_play\_hosts\_all
List of all the hosts that were targeted by the play
ansible\_play\_role\_names
The names of the roles currently imported into the current play. This list does **not** contain the role names that are implicitly included via dependencies.
ansible\_playbook\_python
The path to the python interpreter being used by Ansible on the controller
ansible\_role\_names
The names of the roles currently imported into the current play, or roles referenced as dependencies of the roles imported into the current play.
ansible\_role\_name
The fully qualified collection role name, in the format of `namespace.collection.role_name`
ansible\_collection\_name
The name of the collection the task that is executing is a part of. In the format of `namespace.collection`
ansible\_run\_tags
Contents of the `--tags` CLI option, which specifies which tags will be included for the current run. Note that if `--tags` is not passed, this variable will default to `["all"]`.
ansible\_search\_path
Current search path for action plugins and lookups, in other words, where we search for relative paths when you do `template: src=myfile`
ansible\_skip\_tags
Contents of the `--skip-tags` CLI option, which specifies which tags will be skipped for the current run.
ansible\_verbosity
Current verbosity setting for Ansible
ansible\_version
Dictionary/map that contains information about the current running version of Ansible; it has the following keys: full, major, minor, revision, and string.
group\_names
List of groups the current host is part of
groups
A dictionary/map of all the groups in inventory; each group has the list of hosts that belong to it
hostvars
A dictionary/map with all the hosts in inventory and variables assigned to them
inventory\_hostname
The inventory name for the ‘current’ host being iterated over in the play
inventory\_hostname\_short
The short version of `inventory_hostname`
inventory\_dir
The directory of the inventory source in which the `inventory_hostname` was first defined
inventory\_file
The file name of the inventory source in which the `inventory_hostname` was first defined
omit
Special variable that allows you to ‘omit’ an option in a task, for example `- user: name=bob home={{ bobs_home|default(omit) }}`
play\_hosts
Deprecated, the same as ansible\_play\_batch
ansible\_play\_name
The name of the currently executed play. Added in `2.8`. (`name` attribute of the play, not file name of the playbook.)
playbook\_dir
The path to the directory of the playbook that was passed to the `ansible-playbook` command line.
role\_name
The name of the role currently being executed.
role\_names
Deprecated, the same as ansible\_play\_role\_names
role\_path
The path to the dir of the currently running role
Facts
-----
These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. See [Discovering variables: facts and magic variables](../user_guide/playbooks_vars_facts#vars-and-facts) for more information.
ansible\_facts
Contains any facts gathered or cached for the `inventory_hostname`. Facts are normally gathered by the [setup](../collections/ansible/builtin/setup_module#setup-module) module automatically in a play, but any module can return facts.
ansible\_local
Contains any ‘local facts’ gathered or cached for the `inventory_hostname`. The keys available depend on the custom facts created. See the [setup](../collections/ansible/builtin/setup_module#setup-module) module and [facts.d or local facts](../user_guide/playbooks_vars_facts#local-facts) for more details.
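A quick sketch for inspecting gathered facts from a task (the fact key shown is a standard one, but availability depends on the platform):
```
- name: show one gathered fact
  ansible.builtin.debug:
    var: ansible_facts['distribution']
```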
Connection variables
--------------------
Connection variables are normally used to set the specifics on how to execute actions on a target. Most of them correspond to connection plugins, but not all are specific to them; other plugins like shell, terminal and become are normally involved. Only the common ones are described as each connection/become/shell/etc plugin can define its own overrides and specific variables. See [Controlling how Ansible behaves: precedence rules](general_precedence#general-precedence-rules) for how connection variables interact with [configuration settings](config#ansible-configuration-settings), [command-line options](../user_guide/command_line_tools#command-line-tools), and [playbook keywords](playbooks_keywords#playbook-keywords).
ansible\_become\_user
The user Ansible ‘becomes’ after using privilege escalation. This must be available to the ‘login user’.
ansible\_connection
The connection plugin actually used for the task on the target host.
ansible\_host
The ip/name of the target host to use instead of `inventory_hostname`.
ansible\_python\_interpreter
The path to the Python executable Ansible should use on the target host.
ansible\_user
The user Ansible ‘logs in’ as.
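As a sketch, these are typically set in inventory; the host name and values below are illustrative:
```
web1 ansible_host=192.0.2.10 ansible_user=deploy ansible_python_interpreter=/usr/bin/python3
```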
ansible Python 3 Support Python 3 Support
================
Ansible 2.5 and above work with Python 3. Prior to 2.5, using Python 3 was considered a tech preview. This topic discusses how to set up your controller and managed machines to use Python 3.
Note
On the controller we support Python 3.5 or greater and Python 2.7 or greater. Module-side, we support Python 3.5 or greater and Python 2.6 or greater.
On the controller side
----------------------
The easiest way to run **/usr/bin/ansible** under Python 3 is to install it with the Python3 version of pip. This will make the default **/usr/bin/ansible** run with Python3:
```
$ pip3 install ansible
$ ansible --version | grep "python version"
python version = 3.6.2 (default, Sep 22 2017, 08:28:09) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]
```
If you are running Ansible from source ([Running the devel branch from a clone](../installation_guide/intro_installation#from-source)) and want to use Python 3 with your source checkout, run your command via `python3`. For example:
```
$ source ./hacking/env-setup
$ python3 $(which ansible) localhost -m ping
$ python3 $(which ansible-playbook) sample-playbook.yml
```
Note
Individual Linux distribution packages may be packaged for Python2 or Python3. When running from distro packages you’ll only be able to use Ansible with the Python version for which it was installed. Sometimes distros will provide a means of installing for several Python versions (via a separate package or via some commands that are run after install). You’ll need to check with your distro to see if that applies in your case.
Using Python 3 on the managed machines with commands and playbooks
------------------------------------------------------------------
* Ansible will automatically detect and use Python 3 on many platforms that ship with it. To explicitly configure a Python 3 interpreter, set the `ansible_python_interpreter` inventory variable at a group or host level to the location of a Python 3 interpreter, such as **/usr/bin/python3**. The default interpreter path may also be set in `ansible.cfg`.
See also
[Interpreter Discovery](interpreter_discovery#interpreter-discovery) for more information.
```
# Example inventory that makes an alias for localhost that uses Python3
localhost-py3 ansible_host=localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
# Example of setting a group of hosts to use Python3
[py3-hosts]
ubuntu16
fedora27
[py3-hosts:vars]
ansible_python_interpreter=/usr/bin/python3
```
See also
[How to build your inventory](../user_guide/intro_inventory#intro-inventory) for more information.
* Run your command or playbook:
```
$ ansible localhost-py3 -m ping
$ ansible-playbook sample-playbook.yml
```
Note that you can also use the `-e` command line option to manually set the python interpreter when you run a command. This can be useful if you want to test whether a specific module or playbook has any bugs under Python 3. For example:
```
$ ansible localhost -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
$ ansible-playbook sample-playbook.yml -e 'ansible_python_interpreter=/usr/bin/python3'
```
What to do if an incompatibility is found
-----------------------------------------
We have spent several releases squashing bugs and adding new tests so that Ansible’s core feature set runs under both Python 2 and Python 3. However, bugs may still exist in edge cases and many of the modules shipped with Ansible are maintained by the community and not all of those may be ported yet.
If you find a bug running under Python 3 you can submit a bug report on [Ansible’s GitHub project](https://github.com/ansible/ansible/issues/). Be sure to mention Python3 in the bug report so that the right people look at it.
If you would like to fix the code and submit a pull request on github, you can refer to [Ansible and Python 3](https://docs.ansible.com/ansible/latest/dev_guide/developing_python_3.html#developing-python-3) for information on how we fix common Python3 compatibility issues in the Ansible codebase.
ansible Testing Strategies Testing Strategies
==================
Integrating Testing With Ansible Playbooks
------------------------------------------
Many times, people ask, “how can I best integrate testing with Ansible playbooks?” There are many options. Ansible is actually designed to be a “fail-fast” and ordered system, therefore it makes it easy to embed testing directly in Ansible playbooks. In this chapter, we’ll go into some patterns for integrating tests of infrastructure and discuss the right level of testing that may be appropriate.
Note
This is a chapter about testing the application you are deploying, not the chapter on how to test Ansible modules during development. For that content, please hop over to the Development section.
By incorporating a degree of testing into your deployment workflow, there will be fewer surprises when code hits production and, in many cases, tests can be leveraged in production to prevent failed updates from migrating across an entire installation. Since it’s push-based, it’s also very easy to run the steps on the localhost or testing servers. Ansible lets you insert as many checks and balances into your upgrade workflow as you would like to have.
The Right Level of Testing
--------------------------
Ansible resources are models of desired-state. As such, it should not be necessary to test that services are started, packages are installed, or other such things. Ansible is the system that will ensure these things are declaratively true. Instead, assert these things in your playbooks.
```
tasks:
- service:
name: foo
state: started
enabled: yes
```
If you think the service may not be started, the best thing to do is request it to be started. If the service fails to start, Ansible will yell appropriately. (This should not be confused with whether the service is doing something functional, which we’ll show more about how to do later).
Check Mode As A Drift Test
--------------------------
In the above setup, `--check` mode in Ansible can be used as a layer of testing as well. If running a deployment playbook against an existing system, using the `--check` flag to the `ansible` command will report if Ansible thinks it would have had to make any changes to bring the system into a desired state.
This can let you know up front if there is any need to deploy onto the given system. Ordinarily, scripts and commands don’t run in check mode, so if you want certain steps to execute in normal mode even when the `--check` flag is used, such as calls to the script module, disable check mode for those tasks:
```
roles:
- webserver
tasks:
- script: verify.sh
check_mode: no
```
Modules That Are Useful for Testing
-----------------------------------
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open:
```
tasks:
- wait_for:
host: "{{ inventory_hostname }}"
port: 22
delegate_to: localhost
```
Here’s an example of using the URI module to make sure a web service is responding:
```
tasks:
- action: uri url=http://www.example.com return_content=yes
register: webpage
- fail:
msg: 'service is not happy'
when: "'AWESOME' not in webpage.content"
```
It’s easy to push an arbitrary script (in any language) to a remote host, and the script will automatically fail if it has a non-zero return code:
```
tasks:
- script: test_script1
- script: test_script2 --parameter value --parameter2 value
```
If using roles (you should be, roles are great!), scripts pushed by the script module can live in the ‘files/’ directory of a role.
And the assert module makes it very easy to validate various kinds of truth:
```
tasks:
- shell: /usr/bin/some-command --parameter value
register: cmd_result
- assert:
that:
- "'not ready' not in cmd_result.stderr"
- "'gizmo enabled' in cmd_result.stdout"
```
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the ‘stat’ module is a great choice:
```
tasks:
- stat:
path: /path/to/something
register: p
- assert:
that:
- p.stat.exists and p.stat.isdir
```
As mentioned above, there’s no need to check things like the return codes of commands. Ansible is checking them automatically. Rather than checking for a user to exist, consider using the user module to make it exist.
Ansible is a fail-fast system, so when there is an error creating that user, it will stop the playbook run. You do not have to check up behind it.
Testing Lifecycle
-----------------
If you write some degree of basic validation of your application into your playbooks, those checks will run every time you deploy.
As such, deploying into a local development VM and a staging environment will both validate that things are according to plan ahead of your production deploy.
Your workflow may be something like this:
```
- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
- Run an integration test battery written by your QA team against staging
- Deploy to production, with the same integrated tests.
```
Something like an integration test battery should be written by your QA team if you run a production web service. This would include things like Selenium tests or automated API tests and would usually not be something embedded into your Ansible playbooks.
However, it does make sense to include some basic health checks into your playbooks, and in some cases it may be possible to run a subset of the QA battery against remote nodes. This is what the next section covers.
Integrating Testing With Rolling Updates
----------------------------------------
If you have read into [Controlling where tasks run: delegation and local actions](../user_guide/playbooks_delegation#playbooks-delegation) it may quickly become apparent that the rolling update pattern can be extended, and you can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.
This is the great culmination of embedded tests:
```
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
- apply_testing_checks
post_tasks:
- name: add back to load balancer pool
command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
```
Of course in the above, the “take out of the pool” and “add back” steps would be replaced with a call to an Ansible load balancer module or appropriate shell command. You might also have steps that use a monitoring module to start and end an outage window for the machine.
However, what you can see from the above is that tests are used as a gate – if the “apply\_testing\_checks” step is not performed, the machine will not go back into the pool.
Read about “max\_fail\_percentage” in the delegation chapter to see how you can also control how many failing tests will stop a rolling update from proceeding.
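A sketch combining both controls, so that a batch aborts the rolling update once more than a fifth of it fails (values are illustrative):
```
- hosts: webservers
  serial: 10
  max_fail_percentage: 20
  roles:
    - webserver
```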
The above approach can also be modified to run a step from a testing machine remotely against a machine:
```
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
tasks:
- script: /srv/qa_team/app_testing_script.sh --server {{ inventory_hostname }}
delegate_to: testing_server
post_tasks:
- name: add back to load balancer pool
command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
```
In the above example, a script is run from the testing server against a remote node prior to bringing it back into the pool.
In the event of a problem, fix the few servers that fail using Ansible’s automatically generated retry file to repeat the deploy on just those servers.
Achieving Continuous Deployment
-------------------------------
If desired, the above techniques may be extended to enable continuous deployment practices.
The workflow may look like this:
```
- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a staging environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory
```
Some Ansible users use the above approach to deploy a half-dozen or dozen times an hour without taking all of their infrastructure offline. A culture of automated QA is vital if you wish to get to this level.
If you are still doing a large amount of manual QA, you should still make the decision on whether to deploy manually as well, but it can still help to work in the rolling update patterns of the previous section and incorporate some basic health checks using modules like ‘script’, ‘stat’, ‘uri’, and ‘assert’.
Conclusion
----------
Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. This is because Ansible is an order-based system that will fail immediately on unhandled errors for a host, preventing further configuration of that host. This forces errors to the top and shows them in a summary at the end of the Ansible run.
However, as Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into the end of a playbook run, either using loose tasks or roles. When used with rolling updates, testing steps can decide whether to put a machine back into a load balanced pool or not.
Finally, because Ansible errors propagate all the way up to the return code of the Ansible program itself, and Ansible by default runs in an easy push-based mode, Ansible is a great step to put into a build environment if you wish to use it to roll out systems as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.
The focus should not be on infrastructure testing, but on application testing, so we strongly encourage getting together with your QA team and asking what sort of tests would make sense to run every time you deploy development VMs, and which sort of tests they would like to run against the staging environment on every deploy. Obviously at the development stage, unit tests are great too. But don’t unit test your playbook. Ansible describes states of resources declaratively, so you don’t have to. If there are cases where you want to be sure of something though, that’s great, and things like stat/assert are great go-to modules for that purpose.
In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most sense for your environment will vary with what you are deploying and who is using it – but everyone benefits from a more robust and reliable deployment system.
See also
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Working with playbooks](../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Controlling where tasks run: delegation and local actions](../user_guide/playbooks_delegation#playbooks-delegation)
Delegation, useful for working with load balancers, clouds, and locally executed steps.
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Interpreter Discovery Interpreter Discovery
=====================
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
To control the discovery behavior:
* for individual hosts and groups, use the `ansible_python_interpreter` inventory variable
* globally, use the `interpreter_python` key in the `[defaults]` section of `ansible.cfg`
Use one of the following values:
`auto_legacy` (default in 2.8)
Detects the target OS platform, distribution, and version, then consults a table listing the correct Python interpreter and path for each platform/distribution/version. If an entry is found, and `/usr/bin/python` is absent, uses the discovered interpreter (and path). If an entry is found, and `/usr/bin/python` is present, uses `/usr/bin/python` and issues a warning. This exception provides temporary compatibility with previous versions of Ansible that always defaulted to `/usr/bin/python`, so if you have installed Python and other dependencies at `/usr/bin/python` on some hosts, Ansible will find and use them with this setting. If no entry is found, or the listed Python is not present on the target host, searches a list of common Python interpreter paths and uses the first one found; also issues a warning that future installation of another Python interpreter could alter the one chosen.
`auto` (future default in 2.12)
Detects the target OS platform, distribution, and version, then consults a table listing the correct Python interpreter and path for each platform/distribution/version. If an entry is found, uses the discovered interpreter. If no entry is found, or the listed Python is not present on the target host, searches a list of common Python interpreter paths and uses the first one found; also issues a warning that future installation of another Python interpreter could alter the one chosen.
auto\_legacy\_silent
Same as `auto_legacy`, but does not issue warnings.
auto\_silent
Same as `auto`, but does not issue warnings.
You can still set `ansible_python_interpreter` to a specific path at any variable level (for example, in host\_vars, in vars files, in playbooks, and so on). Setting a specific path completely disables automatic interpreter discovery; Ansible always uses the path specified.
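For example, a global setting in `ansible.cfg` plus a per-host override in inventory might look like this (the host name is illustrative):
```
# ansible.cfg
[defaults]
interpreter_python = auto_silent
```
```
# inventory
db1 ansible_python_interpreter=/usr/bin/python3
```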
ansible Return Values Return Values
=============
* [Common](#common)
+ [backup\_file](#backup-file)
+ [changed](#changed)
+ [diff](#diff)
+ [failed](#failed)
+ [invocation](#invocation)
+ [msg](#msg)
+ [rc](#rc)
+ [results](#results)
+ [skipped](#skipped)
+ [stderr](#stderr)
+ [stderr\_lines](#stderr-lines)
+ [stdout](#stdout)
+ [stdout\_lines](#stdout-lines)
* [Internal use](#internal-use)
+ [ansible\_facts](#ansible-facts)
+ [exception](#exception)
+ [warnings](#warnings)
+ [deprecations](#deprecations)
Ansible modules normally return a data structure that can be registered into a variable, or seen directly when output by the `ansible` program. Each module can optionally document its own unique return values (visible through ansible-doc and on the [main docsite](../index#ansible-documentation)).
This document covers return values common to all modules.
Note
Some of these keys might be set by Ansible itself once it processes the module’s return information.
Common
------
### backup\_file
For those modules that implement `backup=no|yes` when manipulating files, a path to the backup file created.
```
"backup_file": "./foo.txt.32729.2020-07-30@06:24:19~"
```
### changed
A boolean indicating if the task had to make changes to the target or delegated host.
```
"changed": true
```
### diff
Information on differences between the previous and current state. Often a dictionary with entries `before` and `after`, which will then be formatted by the callback plugin to a diff view.
```
"diff": [
{
"after": "",
"after_header": "foo.txt (content)",
"before": "",
"before_header": "foo.txt (content)"
},
{
"after_header": "foo.txt (file attributes)",
"before_header": "foo.txt (file attributes)"
}
]
```
### failed
A boolean that indicates whether the task failed or not.
```
"failed": false
```
### invocation
Information on how the module was invoked.
```
"invocation": {
"module_args": {
"_original_basename": "foo.txt",
"attributes": null,
"backup": true,
"checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
"content": null,
"delimiter": null,
"dest": "./foo.txt",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": "666",
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/Users/foo/.ansible/tmp/ansible-tmp-1596115458.110205-105717464505158/source",
"unsafe_writes": null,
"validate": null
    }
}
```
### msg
A string with a generic message relayed to the user.
```
"msg": "line added"
```
### rc
Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on); this field contains the ‘return code’ of these utilities.
```
"rc": 257
```
### results
If this key exists, it indicates that a loop was present for the task and that it contains a list of the normal module ‘result’ per item.
```
"results": [
{
"ansible_loop_var": "item",
"backup": "foo.txt.83170.2020-07-30@07:03:05~",
"changed": true,
"diff": [
{
"after": "",
"after_header": "foo.txt (content)",
"before": "",
"before_header": "foo.txt (content)"
},
{
"after_header": "foo.txt (file attributes)",
"before_header": "foo.txt (file attributes)"
}
],
"failed": false,
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": true
}
},
"item": "foo",
"msg": "line added"
},
{
"ansible_loop_var": "item",
"backup": "foo.txt.83187.2020-07-30@07:03:05~",
"changed": true,
"diff": [
{
"after": "",
"after_header": "foo.txt (content)",
"before": "",
"before_header": "foo.txt (content)"
},
{
"after_header": "foo.txt (file attributes)",
"before_header": "foo.txt (file attributes)"
}
],
"failed": false,
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": true
}
},
"item": "bar",
"msg": "line added"
}
]
```
### skipped
A boolean that indicates if the task was skipped or not.
```
"skipped": true
```
### stderr
Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on). This field contains the error output of these utilities.
```
"stderr": "ls: foo: No such file or directory"
```
### stderr\_lines
When `stderr` is returned, Ansible always provides this field as well: a list of strings, one item per line of the original output.
```
"stderr_lines": [
"ls: doesntexist: No such file or directory"
]
```
### stdout
Some modules execute command line utilities or are geared for executing commands directly (raw, shell, command, and so on). This field contains the normal output of these utilities.
```
"stdout": "foo!"
```
### stdout\_lines
When `stdout` is returned, Ansible always provides this field as well: a list of strings, one item per line of the original output.
```
"stdout_lines": [
"foo!"
]
```
Internal use
------------
These keys can be added by modules but will be removed from registered variables; they are ‘consumed’ by Ansible itself.
### ansible\_facts
This key should contain a dictionary which will be appended to the facts assigned to the host. These will be directly accessible and don’t require using a registered variable.
### exception
This key can contain traceback information caused by an exception in a module. It will only be displayed on high verbosity (-vvv).
### warnings
This key contains a list of strings that will be presented to the user.
### deprecations
This key contains a list of dictionaries that will be presented to the user. Each dictionary has the keys `msg` and `version`; both values are strings, and the value for the `version` key can be an empty string.
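As a rough illustration of how a module populates these keys, the sketch below uses the `AnsibleModule` helpers `warn()` and `deprecate()` (assumed here from the standard module API) rather than building the lists by hand; the fact name and messages are made up:
```
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(argument_spec={})

    # Facts under 'ansible_facts' are merged into the host's facts and
    # become directly accessible without registering a variable.
    facts = {'my_component_version': '1.2.3'}  # hypothetical fact name

    # warn() and deprecate() feed the 'warnings' and 'deprecations'
    # keys, which Ansible consumes and presents to the user.
    module.warn('something looks unusual on this host')
    module.deprecate('the frobnicate option is going away', version='2.14')

    module.exit_json(changed=False, ansible_facts=facts)


if __name__ == '__main__':
    main()
```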
See also
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[GitHub modules directory](https://github.com/ansible/ansible/tree/devel/lib/ansible/modules)
Browse source of core and extras modules
[Mailing List](https://groups.google.com/group/ansible-devel)
Development mailing list
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Ansible Reference: Module Utilities
===================================
This page documents utilities intended to be helpful when writing Ansible modules in Python.
AnsibleModule
-------------
To use this functionality, include `from ansible.module_utils.basic import AnsibleModule` in your module.
*class* ansible.module\_utils.basic.AnsibleModule(*argument\_spec*, *bypass\_checks=False*, *no\_log=False*, *mutually\_exclusive=None*, *required\_together=None*, *required\_one\_of=None*, *add\_file\_common\_args=False*, *supports\_check\_mode=False*, *required\_if=None*, *required\_by=None*)
Common code for quickly building an ansible module in Python (although you can write modules with anything that can return JSON).
See [Developing Ansible modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general) for a general introduction and [Ansible module architecture](https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html#developing-program-flow-modules) for more detailed explanation.
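For orientation, here is a minimal sketch of a module built on `AnsibleModule`; the parameter names, messages, and logic are illustrative only:
```
from ansible.module_utils.basic import AnsibleModule


def main():
    # Declare accepted parameters; AnsibleModule handles parsing,
    # type checking, and the common return value plumbing.
    module = AnsibleModule(
        argument_spec={
            'name': {'type': 'str', 'required': True},
            'state': {'type': 'str', 'default': 'present',
                      'choices': ['present', 'absent']},
        },
        supports_check_mode=True,
    )

    name = module.params['name']

    # In check mode, report what would change without touching the host.
    if module.check_mode:
        module.exit_json(changed=False)

    # ... do the real work here; call fail_json(msg=...) on errors ...

    module.exit_json(changed=True, msg='configured {0}'.format(name))


if __name__ == '__main__':
    main()
```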
add\_path\_info(*kwargs*)
for results that are files, supplement the info about the file in the return path with stats about the file path.
atomic\_move(*src*, *dest*, *unsafe\_writes=False*)
atomically move src to dest, copying attributes from dest, and return True on success. Uses os.rename, as it is an atomic operation; the rest of the function works around limitations and corner cases, and ensures the SELinux context is saved if possible
backup\_local(*fn*)
make a date-marked backup of the specified file, return True or False on success or failure
boolean(*arg*)
Convert the argument to a boolean
digest\_from\_file(*filename*, *algorithm*)
Return hex digest of local file for a digest\_method specified by name, or None if file is not present.
exit\_json(*\*\*kwargs*)
return from the module, without error
fail\_json(*msg*, *\*\*kwargs*)
return from the module, with an error message
find\_mount\_point(*path*)
Takes a path and returns its mount point
Parameters
**path** – a string type with a filesystem path
Returns
the path to the mount point as a text type
get\_bin\_path(*arg*, *required=False*, *opt\_dirs=None*)
Find system executable in PATH.
Parameters
* **arg** – The executable to find.
* **required** – if the executable is not found and required is `True`, fail\_json() is called
* **opt\_dirs** – optional list of directories to search in addition to `PATH`
Returns
if found return full path; otherwise return None
is\_executable(*path*)
is the given path executable?
Parameters
**path** – The path of the file to check.
Limitations:
* Does not account for FSACLs.
* Most times we really want to know “Can the current user execute this file”. This function does not tell us that, only if any execute bit is set.
is\_special\_selinux\_path(*path*)
Returns a tuple containing (True, selinux\_context) if the given path is on a NFS or other ‘special’ fs mount point, otherwise the return will be (False, None).
load\_file\_common\_arguments(*params*, *path=None*)
many modules deal with files; this encapsulates the common options that the file module accepts, so that they are directly available to all modules and code can be shared.
Allows the path/dest module argument to be overridden by providing path.
md5(*filename*)
Return MD5 hex digest of local file using digest\_from\_file().
Do not use this function unless you have no other choice for:
1. Optional backwards compatibility
2. Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
preserved\_copy(*src*, *dest*)
Copy a file with preserved ownership, permissions and context
run\_command(*args*, *check\_rc=False*, *close\_fds=True*, *executable=None*, *data=None*, *binary\_data=False*, *path\_prefix=None*, *cwd=None*, *use\_unsafe\_shell=False*, *prompt\_regex=None*, *environ\_update=None*, *umask=None*, *encoding='utf-8'*, *errors='surrogate\_or\_strict'*, *expand\_user\_and\_vars=True*, *pass\_fds=None*, *before\_communicate\_callback=None*, *ignore\_invalid\_cwd=True*)
Execute a command, returns rc, stdout, and stderr.
Parameters
**args** – is the command to run.
* If args is a list, the command will be run with shell=False.
* If args is a string and use\_unsafe\_shell=False, it will be split into a list and run with shell=False.
* If args is a string and use\_unsafe\_shell=True, it runs with shell=True.
Kw check\_rc
Whether to call fail\_json in case of a non-zero RC. Default False
Kw close\_fds
See documentation for subprocess.Popen(). Default True
Kw executable
See documentation for subprocess.Popen(). Default None
Kw data
If given, information to write to the stdin of the command
Kw binary\_data
If False, append a newline to the data. Default False
Kw path\_prefix
If given, additional path to find the command in. This adds to the PATH environment variable so helper commands in the same directory can also be found
Kw cwd
If given, working directory to run the command inside
Kw use\_unsafe\_shell
See `args` parameter. Default False
Kw prompt\_regex
Regex string (not a compiled regex) which can be used to detect prompts in the stdout which would otherwise cause the execution to hang (especially if no input data is specified)
Kw environ\_update
dictionary to *update* os.environ with
Kw umask
Umask to be used when running the command. Default None
Kw encoding
Since we return native strings, on python3 we need to know the encoding to use to transform from bytes to text. If you want to always get bytes back, use encoding=None. The default is “utf-8”. This does not affect transformation of strings given as args.
Kw errors
Since we return native strings, on python3 we need to transform stdout and stderr from bytes to text. If the bytes are undecodable in the `encoding` specified, then use this error handler to deal with them. The default is `surrogate_or_strict` which means that the bytes will be decoded using the surrogateescape error handler if available (available on all python3 versions we support) otherwise a UnicodeError traceback will be raised. This does not affect transformations of strings given as args.
Kw expand\_user\_and\_vars
When `use_unsafe_shell=False` this argument dictates whether `~` is expanded in paths and environment variables are expanded before running the command. When `True` a string such as `$SHELL` will be expanded regardless of escaping. When `False` and `use_unsafe_shell=False` no path or variable expansion will be done.
Kw pass\_fds
When running on Python 3 this argument dictates which file descriptors should be passed to an underlying `Popen` constructor. On Python 2, this will set `close_fds` to False.
Kw before\_communicate\_callback
This function will be called after the `Popen` object is created, but before communicating with the process. (The `Popen` object will be passed to the callback as the first argument.)
Kw ignore\_invalid\_cwd
This flag indicates whether an invalid `cwd` (non-existent or not a directory) should be ignored or should raise an exception.
Returns
A 3-tuple of return code (integer), stdout (native string), and stderr (native string). On python2, stdout and stderr are both byte strings. On python3, stdout and stderr are text strings converted according to the encoding and errors parameters. If you want byte strings on python3, use encoding=None to turn decoding to text off.
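A rough sketch of typical `run_command()` usage inside a module, assuming `module` is an `AnsibleModule` instance; the command and error handling are illustrative:
```
def list_directory(module, path):
    # get_bin_path() resolves the executable; required=True makes it
    # call fail_json() if 'ls' cannot be found.
    ls = module.get_bin_path('ls', required=True)

    # Passing a list runs the command with shell=False (no shell parsing).
    rc, stdout, stderr = module.run_command([ls, '-l', path])

    # check_rc defaults to False, so handle the return code ourselves.
    if rc != 0:
        module.fail_json(msg='ls failed', rc=rc, stderr=stderr)

    return stdout
```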
sha1(*filename*)
Return SHA1 hex digest of local file using digest\_from\_file().
sha256(*filename*)
Return SHA-256 hex digest of local file using digest\_from\_file().
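Since `md5()` cannot be used on FIPS-140-2 compliant systems, the SHA helpers are the safer choice. A small sketch, assuming `module` is an `AnsibleModule` instance and `path` is a local file path:
```
def file_checksums(module, path):
    # sha1() and sha256() wrap digest_from_file(), returning a hex
    # digest, or None if the file is not present.
    return {
        'sha1': module.sha1(path),
        'sha256': module.sha256(path),
    }
```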
Basic
-----
To use this functionality, include `import ansible.module_utils.basic` in your module.
`ansible.module_utils.basic.get_all_subclasses(cls)`
**Deprecated**: Use ansible.module\_utils.common.\_utils.get\_all\_subclasses instead
`ansible.module_utils.basic.get_platform()`
**Deprecated** Use [`platform.system()`](https://docs.python.org/3/library/platform.html#platform.system "(in Python v3.10)") directly.
Returns
Name of the platform the module is running on in a native string
Returns a native string that labels the platform (“Linux”, “Solaris”, etc). Currently, this is the result of calling [`platform.system()`](https://docs.python.org/3/library/platform.html#platform.system "(in Python v3.10)").
`ansible.module_utils.basic.heuristic_log_sanitize(data, no_log_values=None)`
Remove strings that look like passwords from log messages
`ansible.module_utils.basic.load_platform_subclass(cls, *args, **kwargs)`
**Deprecated**: Use ansible.module\_utils.common.sys\_info.get\_platform\_subclass instead
Argument Spec
-------------
Classes and functions for validating parameters against an argument spec.
### ArgumentSpecValidator
`class ansible.module_utils.common.arg_spec.ArgumentSpecValidator(argument_spec, mutually_exclusive=None, required_together=None, required_one_of=None, required_if=None, required_by=None)`
Argument spec validation class
Creates a validator based on the `argument_spec` that can be used to validate a number of parameters using the [`validate()`](#ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate "ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate") method.
Parameters
* **argument\_spec** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*,* [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")*]*) – Specification of valid parameters and their type. May include nested argument specs.
* **mutually\_exclusive** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*] or* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*]**]*) – List or list of lists of terms that should not be provided together.
* **required\_together** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*]**]*) – List of lists of terms that are required together.
* **required\_one\_of** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*]**]*) – List of lists of terms, one of which in each list is required.
* **required\_if** ([list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")) – List of lists of `[parameter, value, [parameters]]` where one of `[parameters]` is required if `parameter == value`.
* **required\_by** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*,* [list](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*]**]*) – Dictionary of parameter names that contain a list of parameters required by each key in the dictionary.
`validate(parameters, *args, **kwargs)`
Validate `parameters` against argument spec.
Error messages in the [`ValidationResult`](#ansible.module_utils.common.arg_spec.ValidationResult "ansible.module_utils.common.arg_spec.ValidationResult") may contain no\_log values and should be sanitized with [`sanitize_keys()`](#ansible.module_utils.common.parameters.sanitize_keys "ansible.module_utils.common.parameters.sanitize_keys") before logging or displaying.
Parameters
**parameters** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")*[*[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*,* [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")*]*) – Parameters to validate against the argument spec
Returns
[`ValidationResult`](#ansible.module_utils.common.arg_spec.ValidationResult "ansible.module_utils.common.arg_spec.ValidationResult") containing validated parameters.
Simple Example
```
import sys

from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

argument_spec = {
    'name': {'type': 'str'},
    'age': {'type': 'int'},
}

parameters = {
    'name': 'bo',
    'age': '42',
}

validator = ArgumentSpecValidator(argument_spec)
result = validator.validate(parameters)

if result.error_messages:
    sys.exit("Validation failed: {0}".format(", ".join(result.error_messages)))

valid_params = result.validated_parameters
```
### ValidationResult
`class ansible.module_utils.common.arg_spec.ValidationResult(parameters)`
Result of argument spec validation.
This is the object returned by [`ArgumentSpecValidator.validate()`](#ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate "ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate") containing the validated parameters and any errors.
Parameters
**parameters** ([dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)")) – Terms to be validated and coerced to the correct type.
`errors`
[`AnsibleValidationErrorMultiple`](#ansible.module_utils.errors.AnsibleValidationErrorMultiple "ansible.module_utils.errors.AnsibleValidationErrorMultiple") containing all [`AnsibleValidationError`](#ansible.module_utils.errors.AnsibleValidationError "ansible.module_utils.errors.AnsibleValidationError") objects if there were any failures during validation.
`property validated_parameters`
Validated and coerced parameters.
`property unsupported_parameters`
[`set`](https://docs.python.org/3/library/stdtypes.html#set "(in Python v3.10)") of unsupported parameter names.
`property error_messages`
[`list`](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)") of all error messages from each exception in [`errors`](#ansible.module_utils.common.arg_spec.ValidationResult.errors "ansible.module_utils.common.arg_spec.ValidationResult.errors").
### Parameters
`ansible.module_utils.common.parameters.DEFAULT_TYPE_VALIDATORS`
[`dict`](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.10)") of type names, such as `'str'`, and the default function used to check that type, [`check_type_str()`](#ansible.module_utils.common.validation.check_type_str "ansible.module_utils.common.validation.check_type_str") in this case.
`ansible.module_utils.common.parameters.env_fallback(*args, **kwargs)`
Load value from environment variable
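`env_fallback` is typically referenced from a parameter's `fallback` entry in an argument spec, so a value can default to an environment variable. A minimal sketch; the parameter and variable names are hypothetical:
```
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.parameters import env_fallback

module = AnsibleModule(
    argument_spec={
        # If 'api_token' is not supplied, fall back to the API_TOKEN
        # environment variable (both names are hypothetical).
        'api_token': {
            'type': 'str',
            'no_log': True,
            'fallback': (env_fallback, ['API_TOKEN']),
        },
    },
)
```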
`ansible.module_utils.common.parameters.remove_values(value, no_log_strings)`
Remove strings in `no_log_strings` from value.
If value is a container type, strings are removed recursively from its contents as well.
Use of `deferred_removals` exists, rather than a pure recursive solution, because of the potential to hit the maximum recursion depth when dealing with large amounts of data (see [issue #24560](https://github.com/ansible/ansible/issues/24560)).
`ansible.module_utils.common.parameters.sanitize_keys(obj, no_log_strings, ignore_keys=frozenset({}))`
Sanitize the keys in a container object by removing `no_log` values from key names.
This is a companion function to the [`remove_values()`](#ansible.module_utils.common.parameters.remove_values "ansible.module_utils.common.parameters.remove_values") function. Similar to that function, we make use of `deferred_removals` to avoid hitting maximum recursion depth in cases of large data structures.
Parameters
* **obj** – The container object to sanitize. Non-container objects are returned unmodified.
* **no\_log\_strings** – A set of string values we do not want logged.
* **ignore\_keys** – A set of string values of keys to not sanitize.
Returns
An object with sanitized keys.
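A minimal sketch of the two scrubbing helpers together; the secret value and keys are illustrative:
```
from ansible.module_utils.common.parameters import remove_values, sanitize_keys

no_log_values = {'hunter2'}
result = {'msg': 'the password is hunter2', 'hunter2_meta': 'kept'}

# remove_values() scrubs matching strings from values, recursing into
# containers via deferred removals rather than plain recursion.
print(remove_values(result, no_log_values))

# sanitize_keys() scrubs matching strings from the keys instead.
print(sanitize_keys(result, no_log_values))
```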
### Validation
Standalone functions for validating various parameter types.
`ansible.module_utils.common.validation.check_missing_parameters(parameters, required_parameters=None)`
This is for checking for required params when we cannot check via the argspec because we need more information than is simply given in the argspec.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if any required parameters are missing
Parameters
* **parameters** – Dictionary of parameters
* **required\_parameters** – List of parameters to look for in the given parameters.
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_mutually_exclusive(terms, parameters, options_context=None)`
Check mutually exclusive terms against argument parameters
Accepts a single list or list of lists that are groups of terms that should be mutually exclusive with one another
Parameters
* **terms** – List of mutually exclusive parameters
* **parameters** – Dictionary of parameters
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_required_arguments(argument_spec, parameters, options_context=None)`
Check all parameters in argument\_spec and return a list of parameters that are required but not present in parameters.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails
Parameters
* **argument\_spec** – Argument spec dictionary containing all parameters and their specification
* **parameters** – Dictionary of parameters
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_required_by(requirements, parameters, options_context=None)`
For each key in requirements, check the corresponding list to see if they exist in parameters.
Accepts a single string or list of values for each key.
Parameters
* **requirements** – Dictionary of requirements
* **parameters** – Dictionary of parameters
Returns
Empty dictionary or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_required_if(requirements, parameters, options_context=None)`
Check parameters that are conditionally required
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails
Parameters
**requirements** – List of lists specifying a parameter, value, parameters required when the given parameter is the specified value, and optionally a boolean indicating any or all parameters are required.
Example
```
required_if=[
['state', 'present', ('path',), True],
['someint', 99, ('bool_param', 'string_param')],
]
```
Parameters
**parameters** – Dictionary of parameters
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails. The results attribute of the exception contains a list of dictionaries. Each dictionary is the result of evaluating each item in requirements. Each return dictionary contains the following keys:
key missing
List of parameters that are required but missing
key requires
’any’ or ‘all’
key parameter
Parameter name that has the requirement
key value
Original value of the parameter
key requirements
Original required parameters
Example
```
[
{
'parameter': 'someint',
'value': 99,
'requirements': ('bool_param', 'string_param'),
'missing': ['string_param'],
'requires': 'all',
}
]
```
`ansible.module_utils.common.validation.check_required_one_of(terms, parameters, options_context=None)`
Check each list of terms to ensure at least one exists in the given module parameters
Accepts a list of lists or tuples
Parameters
* **terms** – List of lists of terms to check. For each list of terms, at least one is required.
* **parameters** – Dictionary of parameters
* **options\_context** – List of strings of parent key names if `terms` are in a sub spec.
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_required_together(terms, parameters, options_context=None)`
Check each list of terms to ensure every parameter in each list exists in the given parameters.
Accepts a list of lists or tuples.
Parameters
* **terms** – List of lists of terms to check. Each list should include parameters that are all required when at least one is specified in the parameters.
* **parameters** – Dictionary of parameters
Returns
Empty list or raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if the check fails.
`ansible.module_utils.common.validation.check_type_bits(value)`
Convert a human-readable string bits value to bits in integer.
Example: `check_type_bits('1Mb')` returns integer 1048576.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert the value.
`ansible.module_utils.common.validation.check_type_bool(value)`
Verify that the value is a bool or convert it to a bool and return it.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert to a bool
Parameters
**value** – String, int, or float to convert to bool. Valid booleans include: ‘1’, ‘on’, 1, ‘0’, 0, ‘n’, ‘f’, ‘false’, ‘true’, ‘y’, ‘t’, ‘yes’, ‘no’, ‘off’
Returns
Boolean True or False
`ansible.module_utils.common.validation.check_type_bytes(value)`
Convert a human-readable string value to bytes
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert the value
`ansible.module_utils.common.validation.check_type_dict(value)`
Verify that value is a dict or convert it to a dict and return it.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert to a dict
Parameters
**value** – Dict or string to convert to a dict. Accepts `k1=v1, k2=v2`.
Returns
value converted to a dictionary
`ansible.module_utils.common.validation.check_type_float(value)`
Verify that value is a float or convert it to a float and return it
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert to a float
Parameters
**value** – float, int, str, or bytes to verify or convert and return.
Returns
float of given value.
`ansible.module_utils.common.validation.check_type_int(value)`
Verify that the value is an integer, or convert it to an integer, and return it
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert to an int
Parameters
**value** – String or int to convert or verify
Returns
int of given value
`ansible.module_utils.common.validation.check_type_jsonarg(value)`
Return a jsonified string. Sometimes the controller turns a json string into a dict/list, so transform it back into json here.
Raises [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert the value
`ansible.module_utils.common.validation.check_type_list(value)`
Verify that the value is a list or convert to a list
A comma separated string will be split into a list. Raises a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") if unable to convert to a list.
Parameters
**value** – Value to validate or convert to a list
Returns
Original value if it is already a list, single item list if a float, int, or string without commas, or a multi-item list if a comma-delimited string.
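A short sketch of how several of these checkers coerce values or raise `TypeError`; the inputs are illustrative:
```
from ansible.module_utils.common.validation import (
    check_type_bool,
    check_type_int,
    check_type_list,
)

# Each checker returns the coerced value or raises TypeError.
assert check_type_bool('yes') is True
assert check_type_int('42') == 42

# Comma-separated strings split into lists; scalars become one-item lists.
assert check_type_list('a,b,c') == ['a', 'b', 'c']
assert check_type_list(5) == [5]

try:
    check_type_int('not a number')
except TypeError as exc:
    print('validation failed: {0}'.format(exc))
```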
`ansible.module_utils.common.validation.check_type_path(value)`
Verify the provided value is a string or convert it to a string, then return the expanded path
`ansible.module_utils.common.validation.check_type_raw(value)`
Returns the raw value
`ansible.module_utils.common.validation.check_type_str(value, allow_conversion=True, param=None, prefix='')`
Verify that the value is a string or convert to a string.
Since unexpected changes can sometimes happen when converting to a string, `allow_conversion` controls whether the value will be converted, or a TypeError raised if the value is not a string and would be converted.
Parameters
* **value** – Value to validate or convert to a string
* **allow\_conversion** – Whether to convert the string and return it or raise a TypeError
Returns
Original value if it is a string, the value converted to a string if allow\_conversion=True, or raises a TypeError if allow\_conversion=False.
`ansible.module_utils.common.validation.count_terms(terms, parameters)`
Count the number of occurrences of a key in a given dictionary
Parameters
* **terms** – String or iterable of values to check
* **parameters** – Dictionary of parameters
Returns
An integer that is the number of occurrences of the terms values in the provided dictionary.
Errors
------
`exception ansible.module_utils.errors.AnsibleFallbackNotFound`
Fallback validator was not found
`exception ansible.module_utils.errors.AnsibleValidationError(message)`
Single argument spec validation error
`error_message`
The error message passed in when the exception was raised.
`property msg`
The error message passed in when the exception was raised.
`exception ansible.module_utils.errors.AnsibleValidationErrorMultiple(errors=None)`
Multiple argument spec validation errors
`errors`
[`list`](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)") of [`AnsibleValidationError`](#ansible.module_utils.errors.AnsibleValidationError "ansible.module_utils.errors.AnsibleValidationError") objects
`property msg`
The first message from the first error in `errors`.
`property messages`
[`list`](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)") of each error message in `errors`.
`append(error)`
Append a new error to `self.errors`.
Only [`AnsibleValidationError`](#ansible.module_utils.errors.AnsibleValidationError "ansible.module_utils.errors.AnsibleValidationError") should be added.
`extend(errors)`
Append each item in `errors` to `self.errors`. Only [`AnsibleValidationError`](#ansible.module_utils.errors.AnsibleValidationError "ansible.module_utils.errors.AnsibleValidationError") should be added.
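A small sketch of collecting several validation errors and reading them back through the documented properties:
```
from ansible.module_utils.errors import (
    AnsibleValidationError,
    AnsibleValidationErrorMultiple,
)

errors = AnsibleValidationErrorMultiple()
errors.append(AnsibleValidationError('name is required'))
errors.append(AnsibleValidationError('age must be an integer'))

# 'messages' lists every error message; 'msg' is only the first one.
print(errors.messages)
print(errors.msg)
```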
`exception ansible.module_utils.errors.AliasError(message)`
Error handling aliases
`exception ansible.module_utils.errors.ArgumentTypeError(message)`
Error with parameter type
`exception ansible.module_utils.errors.ArgumentValueError(message)`
Error with parameter value
`exception ansible.module_utils.errors.ElementError(message)`
Error when validating elements
`exception ansible.module_utils.errors.MutuallyExclusiveError(message)`
Mutually exclusive parameters were supplied
`exception ansible.module_utils.errors.NoLogError(message)`
Error converting no\_log values
`exception ansible.module_utils.errors.RequiredByError(message)`
Error with parameters that are required by other parameters
`exception ansible.module_utils.errors.RequiredDefaultError(message)`
A required parameter was assigned a default value
`exception ansible.module_utils.errors.RequiredError(message)`
Missing a required parameter
`exception ansible.module_utils.errors.RequiredIfError(message)`
Error with conditionally required parameters
`exception ansible.module_utils.errors.RequiredOneOfError(message)`
Error with parameters where at least one is required
`exception ansible.module_utils.errors.RequiredTogetherError(message)`
Error with parameters that are required together
`exception ansible.module_utils.errors.SubParameterTypeError(message)`
Incorrect type for subparameter
`exception ansible.module_utils.errors.UnsupportedError(message)`
Unsupported parameters were supplied
Ansible Configuration Settings
==============================
Ansible supports several sources for configuring its behavior, including an ini file named `ansible.cfg`, environment variables, command-line options, playbook keywords, and variables. See [Controlling how Ansible behaves: precedence rules](general_precedence#general-precedence-rules) for details on the relative precedence of each source.
The `ansible-config` utility allows users to see all the configuration settings available, their defaults, how to set them and where their current value comes from. See [ansible-config](../cli/ansible-config#ansible-config) for more information.
The configuration file
----------------------
Changes can be made and used in a configuration file which will be searched for in the following order:
* `ANSIBLE_CONFIG` (environment variable if set)
* `ansible.cfg` (in the current directory)
* `~/.ansible.cfg` (in the home directory)
* `/etc/ansible/ansible.cfg`
Ansible will process the above list and use the first file found; all others are ignored.
Note
The configuration file is one variant of an INI format. Both the hash sign (`#`) and semicolon (`;`) are allowed as comment markers when the comment starts the line. However, if the comment is inline with regular values, only the semicolon is allowed to introduce the comment. For instance:
```
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
```
### Avoiding security risks with `ansible.cfg` in the current directory
If Ansible were to load `ansible.cfg` from a world-writable current working directory, it would create a serious security risk. Another user could place their own config file there, designed to make Ansible run malicious code both locally and remotely, possibly with elevated privileges. For this reason, Ansible will not automatically load a config file from the current working directory if the directory is world-writable.
If you depend on using Ansible with a config file in the current working directory, the best way to avoid this problem is to restrict access to your Ansible directories to particular user(s) and/or group(s). If your Ansible directories live on a filesystem which has to emulate Unix permissions, like Vagrant or Windows Subsystem for Linux (WSL), you may, at first, not know how you can fix this as `chmod`, `chown`, and `chgrp` might not work there. In most of those cases, the correct fix is to modify the mount options of the filesystem so the files and directories are readable and writable by the users and groups running Ansible but closed to others. For more details on the correct settings, see:
* for Vagrant, the [Vagrant documentation](https://www.vagrantup.com/docs/synced-folders/) covers synced folder permissions.
* for WSL, the [WSL docs](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#set-wsl-launch-settings) and this [Microsoft blog post](https://blogs.msdn.microsoft.com/commandline/2018/01/12/chmod-chown-wsl-improvements/) cover mount options.
If you absolutely depend on storing your Ansible config in a world-writable current working directory, you can explicitly specify the config file via the [`ANSIBLE_CONFIG`](#envvar-ANSIBLE_CONFIG) environment variable. Please take appropriate steps to mitigate the security concerns above before doing so.
### Relative paths for configuration
You can specify a relative path for many configuration options. In most of those cases the path used will be relative to the `ansible.cfg` file used for the current execution. If you need a path relative to your current working directory (CWD) you can use the `{{CWD}}` macro to specify it. We do not recommend this approach, as using your CWD as the root of relative paths can be a security risk. For example: `cd /tmp; secureinfo=./newrootpassword ansible-playbook ~/safestuff/change_root_pwd.yml`.
Common Options
--------------
This is a copy of the options available in our release. Your local install might have extra options due to additional plugins; you can use the command line utility mentioned above (`ansible-config`) to browse through them.
### ACTION\_WARNINGS
Description
By default Ansible will issue a warning when a warning is received from a task action (module or action plugin). These warnings can be silenced by adjusting this setting to False.
Type
boolean
Default
True
Version Added
2.5
Ini
Section
[defaults]
Key
action\_warnings
Environment
Variable
[`ANSIBLE_ACTION_WARNINGS`](#envvar-ANSIBLE_ACTION_WARNINGS)
### AGNOSTIC\_BECOME\_PROMPT
Description
Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
Type
boolean
Default
True
Version Added
2.5
Ini
Section
[privilege\_escalation]
Key
agnostic\_become\_prompt
Environment
Variable
[`ANSIBLE_AGNOSTIC_BECOME_PROMPT`](#envvar-ANSIBLE_AGNOSTIC_BECOME_PROMPT)
### ALLOW\_WORLD\_READABLE\_TMPFILES
Description
This setting has been moved to the individual shell plugins as a plugin option [Shell Plugins](../plugins/shell#shell-plugins). The existing configuration settings are still accepted with the shell plugin adding additional options, like variables. This message will be removed in 2.14.
Type
boolean
Default
False
Deprecated in
2.14
Deprecated detail
moved to shell plugins
Deprecated alternatives
world\_readable\_tmp
### ANSIBLE\_CONNECTION\_PATH
Description
Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. If null, ansible will start with the same directory as the ansible script.
Type
path
Default
None
Version Added
2.8
Ini
Section
[persistent\_connection]
Key
ansible\_connection\_path
Environment
Variable
[`ANSIBLE_CONNECTION_PATH`](#envvar-ANSIBLE_CONNECTION_PATH)
### ANSIBLE\_COW\_ACCEPTLIST
Description
Whitelist of cowsay templates that are ‘safe’ to use; set to an empty list to enable all installed templates.
Type
list
Default
[‘bud-frogs’, ‘bunny’, ‘cheese’, ‘daemon’, ‘default’, ‘dragon’, ‘elephant-in-snake’, ‘elephant’, ‘eyes’, ‘hellokitty’, ‘kitty’, ‘luke-koala’, ‘meow’, ‘milk’, ‘moofasa’, ‘moose’, ‘ren’, ‘sheep’, ‘small’, ‘stegosaurus’, ‘stimpy’, ‘supermilker’, ‘three-eyes’, ‘turkey’, ‘turtle’, ‘tux’, ‘udder’, ‘vader-koala’, ‘vader’, ‘www’]
Ini
* Section
[defaults]
Key
cow\_whitelist
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
cowsay\_enabled\_stencils
* Section
[defaults]
Key
cowsay\_enabled\_stencils
Version Added
2.11
Environment
* Variable
[`ANSIBLE_COW_ACCEPTLIST`](#envvar-ANSIBLE_COW_ACCEPTLIST)
Version Added
2.11
* Variable
[`ANSIBLE_COW_WHITELIST`](#envvar-ANSIBLE_COW_WHITELIST)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_COW\_ACCEPTLIST
### ANSIBLE\_COW\_PATH
Description
Specify a custom cowsay path or swap in your cowsay implementation of choice
Type
string
Default
None
Ini
Section
[defaults]
Key
cowpath
Environment
Variable
[`ANSIBLE_COW_PATH`](#envvar-ANSIBLE_COW_PATH)
### ANSIBLE\_COW\_SELECTION
Description
This allows you to choose a specific cowsay stencil for the banners or use ‘random’ to cycle through them.
Default
default
Ini
Section
[defaults]
Key
cow\_selection
Environment
Variable
[`ANSIBLE_COW_SELECTION`](#envvar-ANSIBLE_COW_SELECTION)
### ANSIBLE\_FORCE\_COLOR
Description
This option forces color mode even when running without a TTY or the “nocolor” setting is True.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
force\_color
Environment
Variable
[`ANSIBLE_FORCE_COLOR`](#envvar-ANSIBLE_FORCE_COLOR)
### ANSIBLE\_NOCOLOR
Description
This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
nocolor
Environment
* Variable
[`ANSIBLE_NOCOLOR`](#envvar-ANSIBLE_NOCOLOR)
* Variable
[`NO_COLOR`](#envvar-NO_COLOR)
Version Added
2.11
### ANSIBLE\_NOCOWS
Description
If you have cowsay installed but want to avoid the ‘cows’ (why????), use this.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
nocows
Environment
Variable
[`ANSIBLE_NOCOWS`](#envvar-ANSIBLE_NOCOWS)
### ANSIBLE\_PIPELINING
Description
Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled. However this conflicts with privilege escalation (become). For example, when using ‘sudo:’ operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts, which is why it is disabled by default. This option is disabled if `ANSIBLE_KEEP_REMOTE_FILES` is enabled. This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
Type
boolean
Default
False
Ini
* Section
[connection]
Key
pipelining
* Section
[defaults]
Key
pipelining
Environment
Variable
[`ANSIBLE_PIPELINING`](#envvar-ANSIBLE_PIPELINING)
### ANY\_ERRORS\_FATAL
Description
Sets the default value for the any\_errors\_fatal keyword. If True, task failures will be considered fatal errors.
Type
boolean
Default
False
Version Added
2.4
Ini
Section
[defaults]
Key
any\_errors\_fatal
Environment
Variable
[`ANSIBLE_ANY_ERRORS_FATAL`](#envvar-ANSIBLE_ANY_ERRORS_FATAL)
### BECOME\_ALLOW\_SAME\_USER
Description
This setting controls if become is skipped when the remote user and become user are the same, for example root sudo'ing to root.
Type
boolean
Default
False
Ini
Section
[privilege\_escalation]
Key
become\_allow\_same\_user
Environment
Variable
[`ANSIBLE_BECOME_ALLOW_SAME_USER`](#envvar-ANSIBLE_BECOME_ALLOW_SAME_USER)
### BECOME\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Become Plugins.
Type
pathspec
Default
~/.ansible/plugins/become:/usr/share/ansible/plugins/become
Version Added
2.8
Ini
Section
[defaults]
Key
become\_plugins
Environment
Variable
[`ANSIBLE_BECOME_PLUGINS`](#envvar-ANSIBLE_BECOME_PLUGINS)
### CACHE\_PLUGIN
Description
Chooses which cache plugin to use, the default ‘memory’ is ephemeral.
Default
memory
Ini
Section
[defaults]
Key
fact\_caching
Environment
Variable
[`ANSIBLE_CACHE_PLUGIN`](#envvar-ANSIBLE_CACHE_PLUGIN)
### CACHE\_PLUGIN\_CONNECTION
Description
Defines connection or path information for the cache plugin
Default
None
Ini
Section
[defaults]
Key
fact\_caching\_connection
Environment
Variable
[`ANSIBLE_CACHE_PLUGIN_CONNECTION`](#envvar-ANSIBLE_CACHE_PLUGIN_CONNECTION)
### CACHE\_PLUGIN\_PREFIX
Description
Prefix to use for cache plugin files/tables
Default
ansible\_facts
Ini
Section
[defaults]
Key
fact\_caching\_prefix
Environment
Variable
[`ANSIBLE_CACHE_PLUGIN_PREFIX`](#envvar-ANSIBLE_CACHE_PLUGIN_PREFIX)
### CACHE\_PLUGIN\_TIMEOUT
Description
Expiration timeout for the cache plugin data
Type
integer
Default
86400
Ini
Section
[defaults]
Key
fact\_caching\_timeout
Environment
Variable
[`ANSIBLE_CACHE_PLUGIN_TIMEOUT`](#envvar-ANSIBLE_CACHE_PLUGIN_TIMEOUT)
### CALLABLE\_ACCEPT\_LIST
Description
Whitelist of callable methods to be made available to template evaluation
Type
list
Default
[]
Ini
* Section
[defaults]
Key
callable\_whitelist
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
callable\_enabled
* Section
[defaults]
Key
callable\_enabled
Version Added
2.11
Environment
* Variable
[`ANSIBLE_CALLABLE_ENABLED`](#envvar-ANSIBLE_CALLABLE_ENABLED)
Version Added
2.11
* Variable
[`ANSIBLE_CALLABLE_WHITELIST`](#envvar-ANSIBLE_CALLABLE_WHITELIST)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_CALLABLE\_ENABLED
### CALLBACKS\_ENABLED
Description
List of enabled callbacks. Not all callbacks need enabling, but many of those shipped with Ansible do, as we don’t want them activated by default.
Type
list
Default
[]
Ini
* Section
[defaults]
Key
callback\_whitelist
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
callbacks\_enabled
* Section
[defaults]
Key
callbacks\_enabled
Version Added
2.11
Environment
* Variable
[`ANSIBLE_CALLBACK_WHITELIST`](#envvar-ANSIBLE_CALLBACK_WHITELIST)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_CALLBACKS\_ENABLED
* Variable
[`ANSIBLE_CALLBACKS_ENABLED`](#envvar-ANSIBLE_CALLBACKS_ENABLED)
Version Added
2.11
### COLLECTIONS\_ON\_ANSIBLE\_VERSION\_MISMATCH
Description
When a collection is loaded that does not support the running Ansible version (via the collection metadata key `requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore` skips the warning entirely, while setting it to `error` will immediately halt Ansible execution.
Default
warning
Choices
* error
* warning
* ignore
Ini
Section
[defaults]
Key
collections\_on\_ansible\_version\_mismatch
Environment
Variable
[`ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH`](#envvar-ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH)
### COLLECTIONS\_PATHS
Description
Colon separated paths in which Ansible will search for collections content. Collections must be in nested *subdirectories*, not directly in these directories. For example, if `COLLECTIONS_PATHS` includes `~/.ansible/collections`, and you want to add `my.collection` to that directory, it must be saved as `~/.ansible/collections/ansible_collections/my/collection`.
Type
pathspec
Default
~/.ansible/collections:/usr/share/ansible/collections
Ini
* Section
[defaults]
Key
collections\_paths
* Section
[defaults]
Key
collections\_path
Version Added
2.10
Environment
* Variable
[`ANSIBLE_COLLECTIONS_PATH`](#envvar-ANSIBLE_COLLECTIONS_PATH)
Version Added
2.10
* Variable
[`ANSIBLE_COLLECTIONS_PATHS`](#envvar-ANSIBLE_COLLECTIONS_PATHS)
### COLLECTIONS\_SCAN\_SYS\_PATH
Description
A boolean to enable or disable scanning the sys.path for installed collections
Type
boolean
Default
True
Ini
Section
[defaults]
Key
collections\_scan\_sys\_path
Environment
Variable
[`ANSIBLE_COLLECTIONS_SCAN_SYS_PATH`](#envvar-ANSIBLE_COLLECTIONS_SCAN_SYS_PATH)
### COLOR\_CHANGED
Description
Defines the color to use on ‘Changed’ task status
Default
yellow
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
changed
Environment
Variable
[`ANSIBLE_COLOR_CHANGED`](#envvar-ANSIBLE_COLOR_CHANGED)
### COLOR\_CONSOLE\_PROMPT
Description
Defines the default color to use for ansible-console
Default
white
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Version Added
2.7
Ini
Section
[colors]
Key
console\_prompt
Environment
Variable
[`ANSIBLE_COLOR_CONSOLE_PROMPT`](#envvar-ANSIBLE_COLOR_CONSOLE_PROMPT)
### COLOR\_DEBUG
Description
Defines the color to use when emitting debug messages
Default
dark gray
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
debug
Environment
Variable
[`ANSIBLE_COLOR_DEBUG`](#envvar-ANSIBLE_COLOR_DEBUG)
### COLOR\_DEPRECATE
Description
Defines the color to use when emitting deprecation messages
Default
purple
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
deprecate
Environment
Variable
[`ANSIBLE_COLOR_DEPRECATE`](#envvar-ANSIBLE_COLOR_DEPRECATE)
### COLOR\_DIFF\_ADD
Description
Defines the color to use when showing added lines in diffs
Default
green
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
diff\_add
Environment
Variable
[`ANSIBLE_COLOR_DIFF_ADD`](#envvar-ANSIBLE_COLOR_DIFF_ADD)
### COLOR\_DIFF\_LINES
Description
Defines the color to use when showing diffs
Default
cyan
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
diff\_lines
Environment
Variable
[`ANSIBLE_COLOR_DIFF_LINES`](#envvar-ANSIBLE_COLOR_DIFF_LINES)
### COLOR\_DIFF\_REMOVE
Description
Defines the color to use when showing removed lines in diffs
Default
red
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
diff\_remove
Environment
Variable
[`ANSIBLE_COLOR_DIFF_REMOVE`](#envvar-ANSIBLE_COLOR_DIFF_REMOVE)
### COLOR\_ERROR
Description
Defines the color to use when emitting error messages
Default
red
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
error
Environment
Variable
[`ANSIBLE_COLOR_ERROR`](#envvar-ANSIBLE_COLOR_ERROR)
### COLOR\_HIGHLIGHT
Description
Defines the color to use for highlighting
Default
white
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
highlight
Environment
Variable
[`ANSIBLE_COLOR_HIGHLIGHT`](#envvar-ANSIBLE_COLOR_HIGHLIGHT)
### COLOR\_OK
Description
Defines the color to use when showing ‘OK’ task status
Default
green
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
ok
Environment
Variable
[`ANSIBLE_COLOR_OK`](#envvar-ANSIBLE_COLOR_OK)
### COLOR\_SKIP
Description
Defines the color to use when showing ‘Skipped’ task status
Default
cyan
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
skip
Environment
Variable
[`ANSIBLE_COLOR_SKIP`](#envvar-ANSIBLE_COLOR_SKIP)
### COLOR\_UNREACHABLE
Description
Defines the color to use on ‘Unreachable’ status
Default
bright red
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
unreachable
Environment
Variable
[`ANSIBLE_COLOR_UNREACHABLE`](#envvar-ANSIBLE_COLOR_UNREACHABLE)
### COLOR\_VERBOSE
Description
Defines the color to use when emitting verbose messages, that is, those shown with ‘-v’s.
Default
blue
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
verbose
Environment
Variable
[`ANSIBLE_COLOR_VERBOSE`](#envvar-ANSIBLE_COLOR_VERBOSE)
### COLOR\_WARN
Description
Defines the color to use when emitting warning messages
Default
bright purple
Choices
* black
* bright gray
* blue
* white
* green
* bright blue
* cyan
* bright green
* red
* bright cyan
* purple
* bright red
* yellow
* bright purple
* dark gray
* bright yellow
* magenta
* bright magenta
* normal
Ini
Section
[colors]
Key
warn
Environment
Variable
[`ANSIBLE_COLOR_WARN`](#envvar-ANSIBLE_COLOR_WARN)
### COMMAND\_WARNINGS
Description
Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module. These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option `warn`. As of version 2.11, this is disabled by default.
Type
boolean
Default
False
Version Added
1.8
Ini
Section
[defaults]
Key
command\_warnings
Environment
Variable
[`ANSIBLE_COMMAND_WARNINGS`](#envvar-ANSIBLE_COMMAND_WARNINGS)
Deprecated in
2.14
Deprecated detail
the command warnings feature is being removed
### CONDITIONAL\_BARE\_VARS
Description
With this setting on (True), a bare conditional ‘var’ is evaluated directly, while ‘var.subkey’ goes through the Jinja2 parser; as a side effect, ‘false’ strings in ‘var’ get evaluated as booleans. With this setting off they both evaluate the same way, but in cases in which ‘var’ was ‘false’ (a string) it will no longer be evaluated as a boolean. This setting now defaults to ‘False’, and the setting itself is expected to be deprecated and removed after 2.12.
Type
boolean
Default
False
Version Added
2.8
Ini
Section
[defaults]
Key
conditional\_bare\_variables
Environment
Variable
[`ANSIBLE_CONDITIONAL_BARE_VARS`](#envvar-ANSIBLE_CONDITIONAL_BARE_VARS)
### CONNECTION\_FACTS\_MODULES
Description
Which modules to run during a play’s fact gathering stage based on connection
Type
dict
Default
{‘asa’: ‘ansible.legacy.asa\_facts’, ‘cisco.asa.asa’: ‘cisco.asa.asa\_facts’, ‘eos’: ‘ansible.legacy.eos\_facts’, ‘arista.eos.eos’: ‘arista.eos.eos\_facts’, ‘frr’: ‘ansible.legacy.frr\_facts’, ‘frr.frr.frr’: ‘frr.frr.frr\_facts’, ‘ios’: ‘ansible.legacy.ios\_facts’, ‘cisco.ios.ios’: ‘cisco.ios.ios\_facts’, ‘iosxr’: ‘ansible.legacy.iosxr\_facts’, ‘cisco.iosxr.iosxr’: ‘cisco.iosxr.iosxr\_facts’, ‘junos’: ‘ansible.legacy.junos\_facts’, ‘junipernetworks.junos.junos’: ‘junipernetworks.junos.junos\_facts’, ‘nxos’: ‘ansible.legacy.nxos\_facts’, ‘cisco.nxos.nxos’: ‘cisco.nxos.nxos\_facts’, ‘vyos’: ‘ansible.legacy.vyos\_facts’, ‘vyos.vyos.vyos’: ‘vyos.vyos.vyos\_facts’, ‘exos’: ‘ansible.legacy.exos\_facts’, ‘extreme.exos.exos’: ‘extreme.exos.exos\_facts’, ‘slxos’: ‘ansible.legacy.slxos\_facts’, ‘extreme.slxos.slxos’: ‘extreme.slxos.slxos\_facts’, ‘voss’: ‘ansible.legacy.voss\_facts’, ‘extreme.voss.voss’: ‘extreme.voss.voss\_facts’, ‘ironware’: ‘ansible.legacy.ironware\_facts’, ‘community.network.ironware’: ‘community.network.ironware\_facts’}
### CONTROLLER\_PYTHON\_WARNING
Description
Toggle to control showing warnings related to running a Python version older than Python 3.8 on the controller
Type
boolean
Default
True
Ini
Section
[defaults]
Key
controller\_python\_warning
Environment
Variable
[`ANSIBLE_CONTROLLER_PYTHON_WARNING`](#envvar-ANSIBLE_CONTROLLER_PYTHON_WARNING)
### COVERAGE\_REMOTE\_OUTPUT
Description
Sets the output directory on the remote host to generate coverage reports to. Currently only used for remote coverage on PowerShell modules. This is for internal use only.
Type
str
Version Added
2.9
Environment
Variable
[`_ANSIBLE_COVERAGE_REMOTE_OUTPUT`](#envvar-_ANSIBLE_COVERAGE_REMOTE_OUTPUT)
Variables
name
`_ansible_coverage_remote_output`
### COVERAGE\_REMOTE\_PATHS
Description
A list of paths for files on the Ansible controller to run coverage for when executing on the remote host. Only files that match the path glob will have its coverage collected. Multiple path globs can be specified and are separated by `:`. Currently only used for remote coverage on PowerShell modules. This is for internal use only.
Type
str
Default
*
Version Added
2.9
Environment
Variable
[`_ANSIBLE_COVERAGE_REMOTE_PATH_FILTER`](#envvar-_ANSIBLE_COVERAGE_REMOTE_PATH_FILTER)
### DEFAULT\_ACTION\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Action Plugins.
Type
pathspec
Default
~/.ansible/plugins/action:/usr/share/ansible/plugins/action
Ini
Section
[defaults]
Key
action\_plugins
Environment
Variable
[`ANSIBLE_ACTION_PLUGINS`](#envvar-ANSIBLE_ACTION_PLUGINS)
### DEFAULT\_ALLOW\_UNSAFE\_LOOKUPS
Description
When enabled, this option allows lookup plugins (whether used in variables as `{{lookup('foo')}}` or as a loop as with\_foo) to return data that is not marked ‘unsafe’. By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language, as this could represent a security risk. This option is provided to allow for backwards-compatibility; however, users should first consider adding allow\_unsafe=True to any lookups which may be expected to contain data which may be run through the templating engine later
Type
boolean
Default
False
Version Added
2.2.3
Ini
Section
[defaults]
Key
allow\_unsafe\_lookups
### DEFAULT\_ASK\_PASS
Description
This controls whether an Ansible playbook should prompt for a login password. If using SSH keys for authentication, you probably do not need to change this setting.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
ask\_pass
Environment
Variable
[`ANSIBLE_ASK_PASS`](#envvar-ANSIBLE_ASK_PASS)
### DEFAULT\_ASK\_VAULT\_PASS
Description
This controls whether an Ansible playbook should prompt for a vault password.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
ask\_vault\_pass
Environment
Variable
[`ANSIBLE_ASK_VAULT_PASS`](#envvar-ANSIBLE_ASK_VAULT_PASS)
### DEFAULT\_BECOME
Description
Toggles the use of privilege escalation, allowing you to ‘become’ another user after login.
Type
boolean
Default
False
Ini
Section
[privilege\_escalation]
Key
become
Environment
Variable
[`ANSIBLE_BECOME`](#envvar-ANSIBLE_BECOME)
### DEFAULT\_BECOME\_ASK\_PASS
Description
Toggle to prompt for privilege escalation password.
Type
boolean
Default
False
Ini
Section
[privilege\_escalation]
Key
become\_ask\_pass
Environment
Variable
[`ANSIBLE_BECOME_ASK_PASS`](#envvar-ANSIBLE_BECOME_ASK_PASS)
### DEFAULT\_BECOME\_EXE
Description
Executable to use for privilege escalation; otherwise Ansible will depend on PATH.
Default
None
Ini
Section
[privilege\_escalation]
Key
become\_exe
Environment
Variable
[`ANSIBLE_BECOME_EXE`](#envvar-ANSIBLE_BECOME_EXE)
### DEFAULT\_BECOME\_FLAGS
Description
Flags to pass to the privilege escalation executable.
Default
Ini
Section
[privilege\_escalation]
Key
become\_flags
Environment
Variable
[`ANSIBLE_BECOME_FLAGS`](#envvar-ANSIBLE_BECOME_FLAGS)
### DEFAULT\_BECOME\_METHOD
Description
Privilege escalation method to use when `become` is enabled.
Default
sudo
Ini
Section
[privilege\_escalation]
Key
become\_method
Environment
Variable
[`ANSIBLE_BECOME_METHOD`](#envvar-ANSIBLE_BECOME_METHOD)
### DEFAULT\_BECOME\_USER
Description
The user your login/remote user ‘becomes’ when using privilege escalation; most systems will use ‘root’ when no user is specified.
Default
root
Ini
Section
[privilege\_escalation]
Key
become\_user
Environment
Variable
[`ANSIBLE_BECOME_USER`](#envvar-ANSIBLE_BECOME_USER)
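The privilege escalation entries above all live in the `[privilege_escalation]` ini section; a minimal sketch with illustrative values:

```ini
# ansible.cfg -- illustrative values only
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
```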
### DEFAULT\_CACHE\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Cache Plugins.
Type
pathspec
Default
~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
Ini
Section
[defaults]
Key
cache\_plugins
Environment
Variable
[`ANSIBLE_CACHE_PLUGINS`](#envvar-ANSIBLE_CACHE_PLUGINS)
### DEFAULT\_CALLBACK\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Callback Plugins.
Type
pathspec
Default
~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
Ini
Section
[defaults]
Key
callback\_plugins
Environment
Variable
[`ANSIBLE_CALLBACK_PLUGINS`](#envvar-ANSIBLE_CALLBACK_PLUGINS)
### DEFAULT\_CLICONF\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Cliconf Plugins.
Type
pathspec
Default
~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
Ini
Section
[defaults]
Key
cliconf\_plugins
Environment
Variable
[`ANSIBLE_CLICONF_PLUGINS`](#envvar-ANSIBLE_CLICONF_PLUGINS)
### DEFAULT\_CONNECTION\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Connection Plugins.
Type
pathspec
Default
~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
Ini
Section
[defaults]
Key
connection\_plugins
Environment
Variable
[`ANSIBLE_CONNECTION_PLUGINS`](#envvar-ANSIBLE_CONNECTION_PLUGINS)
### DEFAULT\_DEBUG
Description
Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. Debug output can also include secret information despite no\_log settings being enabled, which means debug mode should not be used in production.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
debug
Environment
Variable
[`ANSIBLE_DEBUG`](#envvar-ANSIBLE_DEBUG)
### DEFAULT\_EXECUTABLE
Description
This indicates the command used to spawn a shell for Ansible’s execution needs on a target. Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is.
Default
/bin/sh
Ini
Section
[defaults]
Key
executable
Environment
Variable
[`ANSIBLE_EXECUTABLE`](#envvar-ANSIBLE_EXECUTABLE)
### DEFAULT\_FACT\_PATH
Description
This option allows you to globally configure a custom path for ‘local\_facts’ for the implied `ansible.builtin.setup` task when using fact gathering. If not set, it will fall back to the default from the `ansible.builtin.setup` module: `/etc/ansible/facts.d`. This does **not** affect user defined tasks that use the `ansible.builtin.setup` module.
Type
string
Default
None
Ini
Section
[defaults]
Key
fact\_path
Environment
Variable
[`ANSIBLE_FACT_PATH`](#envvar-ANSIBLE_FACT_PATH)
### DEFAULT\_FILTER\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
Type
pathspec
Default
~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
Ini
Section
[defaults]
Key
filter\_plugins
Environment
Variable
[`ANSIBLE_FILTER_PLUGINS`](#envvar-ANSIBLE_FILTER_PLUGINS)
### DEFAULT\_FORCE\_HANDLERS
Description
This option controls if notified handlers run on a host even if a failure occurs on that host. When false, the handlers will not run if a failure has occurred on a host. This can also be set per play or on the command line. See Handlers and Failure for more details.
Type
boolean
Default
False
Version Added
1.9.1
Ini
Section
[defaults]
Key
force\_handlers
Environment
Variable
[`ANSIBLE_FORCE_HANDLERS`](#envvar-ANSIBLE_FORCE_HANDLERS)
### DEFAULT\_FORKS
Description
Maximum number of forks Ansible will use to execute tasks on target hosts.
Type
integer
Default
5
Ini
Section
[defaults]
Key
forks
Environment
Variable
[`ANSIBLE_FORKS`](#envvar-ANSIBLE_FORKS)
### DEFAULT\_GATHER\_SUBSET
Description
Set the `gather_subset` option for the `ansible.builtin.setup` task in the implicit fact gathering. See the module documentation for specifics. It does **not** apply to user defined `ansible.builtin.setup` tasks.
Type
list
Default
[‘all’]
Version Added
2.1
Ini
Section
[defaults]
Key
gather\_subset
Environment
Variable
[`ANSIBLE_GATHER_SUBSET`](#envvar-ANSIBLE_GATHER_SUBSET)
### DEFAULT\_GATHER\_TIMEOUT
Description
Set the timeout in seconds for the implicit fact gathering. It does **not** apply to user defined `ansible.builtin.setup` tasks.
Type
integer
Default
10
Ini
Section
[defaults]
Key
gather\_timeout
Environment
Variable
[`ANSIBLE_GATHER_TIMEOUT`](#envvar-ANSIBLE_GATHER_TIMEOUT)
### DEFAULT\_GATHERING
Description
This setting controls the default policy of fact gathering (facts discovered about remote systems). When ‘implicit’ (the default), the cache plugin will be ignored and facts will be gathered per play unless ‘gather\_facts: False’ is set. When ‘explicit’ the inverse is true, facts will not be gathered unless directly requested in the play. The ‘smart’ value means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time. Both ‘smart’ and ‘explicit’ will use the cache plugin.
Default
implicit
Choices
* smart
* explicit
* implicit
Version Added
1.6
Ini
Section
[defaults]
Key
gathering
Environment
Variable
[`ANSIBLE_GATHERING`](#envvar-ANSIBLE_GATHERING)
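The implicit fact gathering settings above combine naturally in one block; the subset value here is illustrative (see the setup module documentation for valid subsets):

```ini
# ansible.cfg -- illustrative values only
[defaults]
# Gather facts once per host across the playbook run.
gathering = smart
# Skip the (slow) hardware facts; this subset spec is illustrative.
gather_subset = !hardware
gather_timeout = 10
```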
### DEFAULT\_HANDLER\_INCLUDES\_STATIC
Description
Since 2.0, `ansible.builtin.include` can be ‘dynamic’; this setting (if True) forces any include appearing in a `handlers` section to be ‘static’.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
handler\_includes\_static
Environment
Variable
[`ANSIBLE_HANDLER_INCLUDES_STATIC`](#envvar-ANSIBLE_HANDLER_INCLUDES_STATIC)
Deprecated in
2.12
Deprecated detail
include itself is deprecated and this setting will not matter in the future
Deprecated alternatives
none as its already built into the decision between include\_tasks and import\_tasks
### DEFAULT\_HASH\_BEHAVIOUR
Description
This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible. This does not affect variables whose values are scalars (integers, strings) or arrays. **WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non-portable, leading to continual confusion and misuse. Don’t change this setting unless you think you have an absolute need for it. We recommend avoiding reusing variable names and relying on the `combine` filter and `vars` and `varnames` lookups to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much complexity has been introduced into the data structures and plays. For some uses you can also look into custom vars\_plugins to merge on input, even substituting the default `host_group_vars` that is in charge of parsing the `host_vars/` and `group_vars/` directories. Most users of this setting are only interested in inventory scope, but the setting itself affects all sources and makes debugging even harder. All playbooks and roles in the official examples repos assume the default for this setting. Changing the setting to `merge` applies across variable sources, but many sources will internally still overwrite the variables. For example, `include_vars` will dedupe variables internally before updating Ansible, with ‘last defined’ overwriting previous definitions in the same file. It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it. New projects should **avoid ‘merge’**.
Type
string
Default
replace
Choices
* replace
Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
* merge
Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
Ini
Section
[defaults]
Key
hash\_behaviour
Environment
Variable
[`ANSIBLE_HASH_BEHAVIOUR`](#envvar-ANSIBLE_HASH_BEHAVIOUR)
### DEFAULT\_HOST\_LIST
Description
Comma separated list of Ansible inventory sources
Type
pathlist
Default
/etc/ansible/hosts
Ini
Section
[defaults]
Key
inventory
Environment
Variable
[`ANSIBLE_INVENTORY`](#envvar-ANSIBLE_INVENTORY)
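Because the type is `pathlist`, several inventory sources can be supplied in one comma-separated value; the paths here are hypothetical:

```ini
# ansible.cfg -- hypothetical paths
[defaults]
inventory = /etc/ansible/hosts,~/inventories/staging.yml
```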
### DEFAULT\_HTTPAPI\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for HttpApi Plugins.
Type
pathspec
Default
~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
Ini
Section
[defaults]
Key
httpapi\_plugins
Environment
Variable
[`ANSIBLE_HTTPAPI_PLUGINS`](#envvar-ANSIBLE_HTTPAPI_PLUGINS)
### DEFAULT\_INTERNAL\_POLL\_INTERVAL
Description
This sets the interval (in seconds) of Ansible internal processes polling each other. Lower values improve performance with large playbooks at the expense of extra CPU load. Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern. The default corresponds to the value hardcoded in Ansible <= 2.1.
Type
float
Default
0.001
Version Added
2.2
Ini
Section
[defaults]
Key
internal\_poll\_interval
### DEFAULT\_INVENTORY\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Inventory Plugins.
Type
pathspec
Default
~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
Ini
Section
[defaults]
Key
inventory\_plugins
Environment
Variable
[`ANSIBLE_INVENTORY_PLUGINS`](#envvar-ANSIBLE_INVENTORY_PLUGINS)
### DEFAULT\_JINJA2\_EXTENSIONS
Description
This is a developer-specific feature that allows enabling additional Jinja2 extensions. See the Jinja2 documentation for details. If you do not know what these do, you probably don’t need to change this setting :)
Default
[]
Ini
Section
[defaults]
Key
jinja2\_extensions
Environment
Variable
[`ANSIBLE_JINJA2_EXTENSIONS`](#envvar-ANSIBLE_JINJA2_EXTENSIONS)
### DEFAULT\_JINJA2\_NATIVE
Description
This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
Type
boolean
Default
False
Version Added
2.7
Ini
Section
[defaults]
Key
jinja2\_native
Environment
Variable
[`ANSIBLE_JINJA2_NATIVE`](#envvar-ANSIBLE_JINJA2_NATIVE)
### DEFAULT\_KEEP\_REMOTE\_FILES
Description
Enables/disables the cleaning up of the temporary files Ansible uses to execute tasks on the remote host. If this option is enabled it will disable `ANSIBLE_PIPELINING`.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
keep\_remote\_files
Environment
Variable
[`ANSIBLE_KEEP_REMOTE_FILES`](#envvar-ANSIBLE_KEEP_REMOTE_FILES)
### DEFAULT\_LIBVIRT\_LXC\_NOSECLABEL
Description
This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux.
Type
boolean
Default
False
Version Added
2.1
Ini
Section
[selinux]
Key
libvirt\_lxc\_noseclabel
Environment
* Variable
[`ANSIBLE_LIBVIRT_LXC_NOSECLABEL`](#envvar-ANSIBLE_LIBVIRT_LXC_NOSECLABEL)
* Variable
[`LIBVIRT_LXC_NOSECLABEL`](#envvar-LIBVIRT_LXC_NOSECLABEL)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_LIBVIRT_LXC_NOSECLABEL` environment variable
### DEFAULT\_LOAD\_CALLBACK\_PLUGINS
Description
Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for `ansible-playbook`.
Type
boolean
Default
False
Version Added
1.8
Ini
Section
[defaults]
Key
bin\_ansible\_callbacks
Environment
Variable
[`ANSIBLE_LOAD_CALLBACK_PLUGINS`](#envvar-ANSIBLE_LOAD_CALLBACK_PLUGINS)
### DEFAULT\_LOCAL\_TMP
Description
Temporary directory for Ansible to use on the controller.
Type
tmppath
Default
~/.ansible/tmp
Ini
Section
[defaults]
Key
local\_tmp
Environment
Variable
[`ANSIBLE_LOCAL_TEMP`](#envvar-ANSIBLE_LOCAL_TEMP)
### DEFAULT\_LOG\_FILTER
Description
List of logger names to filter out of the log file
Type
list
Default
[]
Ini
Section
[defaults]
Key
log\_filter
Environment
Variable
[`ANSIBLE_LOG_FILTER`](#envvar-ANSIBLE_LOG_FILTER)
### DEFAULT\_LOG\_PATH
Description
File to which Ansible will log on the controller. When empty, logging is disabled.
Type
path
Default
None
Ini
Section
[defaults]
Key
log\_path
Environment
Variable
[`ANSIBLE_LOG_PATH`](#envvar-ANSIBLE_LOG_PATH)
### DEFAULT\_LOOKUP\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Lookup Plugins.
Type
pathspec
Default
~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
Ini
Section
[defaults]
Key
lookup\_plugins
Environment
Variable
[`ANSIBLE_LOOKUP_PLUGINS`](#envvar-ANSIBLE_LOOKUP_PLUGINS)
### DEFAULT\_MANAGED\_STR
Description
Sets the macro for the ‘ansible\_managed’ variable available for the `ansible.builtin.template` and `ansible.windows.win_template` modules. This is only relevant for those two modules.
Default
Ansible managed
Ini
Section
[defaults]
Key
ansible\_managed
### DEFAULT\_MODULE\_ARGS
Description
This sets the default arguments to pass to the `ansible` adhoc binary if no `-a` is specified.
Default
Ini
Section
[defaults]
Key
module\_args
Environment
Variable
[`ANSIBLE_MODULE_ARGS`](#envvar-ANSIBLE_MODULE_ARGS)
### DEFAULT\_MODULE\_COMPRESSION
Description
Compression scheme to use when transferring Python modules to the target.
Default
ZIP\_DEFLATED
Ini
Section
[defaults]
Key
module\_compression
### DEFAULT\_MODULE\_NAME
Description
Module to use with the `ansible` AdHoc command, if none is specified via `-m`.
Default
command
Ini
Section
[defaults]
Key
module\_name
### DEFAULT\_MODULE\_PATH
Description
Colon separated paths in which Ansible will search for Modules.
Type
pathspec
Default
~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
Ini
Section
[defaults]
Key
library
Environment
Variable
[`ANSIBLE_LIBRARY`](#envvar-ANSIBLE_LIBRARY)
### DEFAULT\_MODULE\_UTILS\_PATH
Description
Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
Type
pathspec
Default
~/.ansible/plugins/module\_utils:/usr/share/ansible/plugins/module\_utils
Ini
Section
[defaults]
Key
module\_utils
Environment
Variable
[`ANSIBLE_MODULE_UTILS`](#envvar-ANSIBLE_MODULE_UTILS)
### DEFAULT\_NETCONF\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Netconf Plugins.
Type
pathspec
Default
~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
Ini
Section
[defaults]
Key
netconf\_plugins
Environment
Variable
[`ANSIBLE_NETCONF_PLUGINS`](#envvar-ANSIBLE_NETCONF_PLUGINS)
### DEFAULT\_NO\_LOG
Description
Toggle Ansible’s display and logging of task details, mainly used to avoid security disclosures.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
no\_log
Environment
Variable
[`ANSIBLE_NO_LOG`](#envvar-ANSIBLE_NO_LOG)
### DEFAULT\_NO\_TARGET\_SYSLOG
Description
Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer style PowerShell modules from writing to the event log.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
no\_target\_syslog
Environment
Variable
[`ANSIBLE_NO_TARGET_SYSLOG`](#envvar-ANSIBLE_NO_TARGET_SYSLOG)
Variables
name
`ansible_no_target_syslog`
Version Added
2.10
### DEFAULT\_NULL\_REPRESENTATION
Description
What templating should return as a ‘null’ value. When not set it will let Jinja2 decide.
Type
none
Default
None
Ini
Section
[defaults]
Key
null\_representation
Environment
Variable
[`ANSIBLE_NULL_REPRESENTATION`](#envvar-ANSIBLE_NULL_REPRESENTATION)
### DEFAULT\_POLL\_INTERVAL
Description
For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed.
Type
integer
Default
15
Ini
Section
[defaults]
Key
poll\_interval
Environment
Variable
[`ANSIBLE_POLL_INTERVAL`](#envvar-ANSIBLE_POLL_INTERVAL)
### DEFAULT\_PRIVATE\_KEY\_FILE
Description
For connections using a certificate or key file to authenticate rather than an agent or passwords, you can set the default value here to avoid re-specifying --private-key with every invocation.
Type
path
Default
None
Ini
Section
[defaults]
Key
private\_key\_file
Environment
Variable
[`ANSIBLE_PRIVATE_KEY_FILE`](#envvar-ANSIBLE_PRIVATE_KEY_FILE)
### DEFAULT\_PRIVATE\_ROLE\_VARS
Description
Makes role variables inaccessible from other roles. This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
private\_role\_vars
Environment
Variable
[`ANSIBLE_PRIVATE_ROLE_VARS`](#envvar-ANSIBLE_PRIVATE_ROLE_VARS)
### DEFAULT\_REMOTE\_PORT
Description
Port to use in remote connections; when blank, it will use the connection plugin default.
Type
integer
Default
None
Ini
Section
[defaults]
Key
remote\_port
Environment
Variable
[`ANSIBLE_REMOTE_PORT`](#envvar-ANSIBLE_REMOTE_PORT)
### DEFAULT\_REMOTE\_USER
Description
Sets the login user for the target machines. When blank, it uses the connection plugin’s default, normally the user currently executing Ansible.
Default
None
Ini
Section
[defaults]
Key
remote\_user
Environment
Variable
[`ANSIBLE_REMOTE_USER`](#envvar-ANSIBLE_REMOTE_USER)
### DEFAULT\_ROLES\_PATH
Description
Colon separated paths in which Ansible will search for Roles.
Type
pathspec
Default
~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
Ini
Section
[defaults]
Key
roles\_path
Environment
Variable
[`ANSIBLE_ROLES_PATH`](#envvar-ANSIBLE_ROLES_PATH)
### DEFAULT\_SELINUX\_SPECIAL\_FS
Description
Some filesystems do not support safe operations and/or return inconsistent errors; this setting makes Ansible ‘tolerate’ those in the list without causing fatal errors. Data corruption may occur and writes are not always verified when a filesystem is in the list.
Type
list
Default
fuse, nfs, vboxsf, ramfs, 9p, vfat
Ini
Section
[selinux]
Key
special\_context\_filesystems
Environment
Variable
[`ANSIBLE_SELINUX_SPECIAL_FS`](#envvar-ANSIBLE_SELINUX_SPECIAL_FS)
Version Added
2.9
### DEFAULT\_STDOUT\_CALLBACK
Description
Set the main callback used to display Ansible output; you can only have one at a time. You can have many other callbacks, but just one can be in charge of stdout.
Default
default
Ini
Section
[defaults]
Key
stdout\_callback
Environment
Variable
[`ANSIBLE_STDOUT_CALLBACK`](#envvar-ANSIBLE_STDOUT_CALLBACK)
### DEFAULT\_STRATEGY
Description
Set the default strategy used for plays.
Default
linear
Version Added
2.3
Ini
Section
[defaults]
Key
strategy
Environment
Variable
[`ANSIBLE_STRATEGY`](#envvar-ANSIBLE_STRATEGY)
### DEFAULT\_STRATEGY\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Strategy Plugins.
Type
pathspec
Default
~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
Ini
Section
[defaults]
Key
strategy\_plugins
Environment
Variable
[`ANSIBLE_STRATEGY_PLUGINS`](#envvar-ANSIBLE_STRATEGY_PLUGINS)
### DEFAULT\_SU
Description
Toggle the use of “su” for tasks.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
su
Environment
Variable
[`ANSIBLE_SU`](#envvar-ANSIBLE_SU)
### DEFAULT\_SYSLOG\_FACILITY
Description
Syslog facility to use when Ansible logs to the remote target
Default
LOG\_USER
Ini
Section
[defaults]
Key
syslog\_facility
Environment
Variable
[`ANSIBLE_SYSLOG_FACILITY`](#envvar-ANSIBLE_SYSLOG_FACILITY)
### DEFAULT\_TASK\_INCLUDES\_STATIC
Description
The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
Type
boolean
Default
False
Version Added
2.1
Ini
Section
[defaults]
Key
task\_includes\_static
Environment
Variable
[`ANSIBLE_TASK_INCLUDES_STATIC`](#envvar-ANSIBLE_TASK_INCLUDES_STATIC)
Deprecated in
2.12
Deprecated detail
include itself is deprecated and this setting will not matter in the future
Deprecated alternatives
None, as its already built into the decision between include\_tasks and import\_tasks
### DEFAULT\_TERMINAL\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Terminal Plugins.
Type
pathspec
Default
~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
Ini
Section
[defaults]
Key
terminal\_plugins
Environment
Variable
[`ANSIBLE_TERMINAL_PLUGINS`](#envvar-ANSIBLE_TERMINAL_PLUGINS)
### DEFAULT\_TEST\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
Type
pathspec
Default
~/.ansible/plugins/test:/usr/share/ansible/plugins/test
Ini
Section
[defaults]
Key
test\_plugins
Environment
Variable
[`ANSIBLE_TEST_PLUGINS`](#envvar-ANSIBLE_TEST_PLUGINS)
### DEFAULT\_TIMEOUT
Description
This is the default timeout for connection plugins to use.
Type
integer
Default
10
Ini
Section
[defaults]
Key
timeout
Environment
Variable
[`ANSIBLE_TIMEOUT`](#envvar-ANSIBLE_TIMEOUT)
### DEFAULT\_TRANSPORT
Description
Default connection plugin to use; the ‘smart’ option will toggle between ‘ssh’ and ‘paramiko’ depending on the controller OS and ssh versions.
Default
smart
Ini
Section
[defaults]
Key
transport
Environment
Variable
[`ANSIBLE_TRANSPORT`](#envvar-ANSIBLE_TRANSPORT)
### DEFAULT\_UNDEFINED\_VAR\_BEHAVIOR
Description
When True, this causes Ansible templating to fail steps that reference variable names that are likely typos. Otherwise, any ‘{{ template\_expression }}’ that contains undefined variables will be rendered in a template or ansible action line exactly as written.
Type
boolean
Default
True
Version Added
1.3
Ini
Section
[defaults]
Key
error\_on\_undefined\_vars
Environment
Variable
[`ANSIBLE_ERROR_ON_UNDEFINED_VARS`](#envvar-ANSIBLE_ERROR_ON_UNDEFINED_VARS)
### DEFAULT\_VARS\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Vars Plugins.
Type
pathspec
Default
~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
Ini
Section
[defaults]
Key
vars\_plugins
Environment
Variable
[`ANSIBLE_VARS_PLUGINS`](#envvar-ANSIBLE_VARS_PLUGINS)
### DEFAULT\_VAULT\_ENCRYPT\_IDENTITY
Description
The vault\_id to use for encrypting by default. If multiple vault\_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.
Default
None
Ini
Section
[defaults]
Key
vault\_encrypt\_identity
Environment
Variable
[`ANSIBLE_VAULT_ENCRYPT_IDENTITY`](#envvar-ANSIBLE_VAULT_ENCRYPT_IDENTITY)
### DEFAULT\_VAULT\_ID\_MATCH
Description
If true, decrypting vaults with a vault id will only try the password from the matching vault-id.
Default
False
Ini
Section
[defaults]
Key
vault\_id\_match
Environment
Variable
[`ANSIBLE_VAULT_ID_MATCH`](#envvar-ANSIBLE_VAULT_ID_MATCH)
### DEFAULT\_VAULT\_IDENTITY
Description
The label to use for the default vault id in cases where a vault id label is not provided.
Default
default
Ini
Section
[defaults]
Key
vault\_identity
Environment
Variable
[`ANSIBLE_VAULT_IDENTITY`](#envvar-ANSIBLE_VAULT_IDENTITY)
### DEFAULT\_VAULT\_IDENTITY\_LIST
Description
A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.
Type
list
Default
[]
Ini
Section
[defaults]
Key
vault\_identity\_list
Environment
Variable
[`ANSIBLE_VAULT_IDENTITY_LIST`](#envvar-ANSIBLE_VAULT_IDENTITY_LIST)
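The vault entries above compose into one configuration; a sketch using the `label@source` form, where the labels and password file paths are hypothetical:

```ini
# ansible.cfg -- labels and paths are hypothetical
[defaults]
vault_identity_list = dev@~/.vault_pass_dev.txt, prod@prompt
# Only try the password whose label matches the vaulted data's vault id.
vault_id_match = True
vault_encrypt_identity = dev
```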
### DEFAULT\_VAULT\_PASSWORD\_FILE
Description
The vault password file to use. Equivalent to --vault-password-file or --vault-id.
Type
path
Default
None
Ini
Section
[defaults]
Key
vault\_password\_file
Environment
Variable
[`ANSIBLE_VAULT_PASSWORD_FILE`](#envvar-ANSIBLE_VAULT_PASSWORD_FILE)
### DEFAULT\_VERBOSITY
Description
Sets the default verbosity, equivalent to the number of `-v` passed in the command line.
Type
integer
Default
0
Ini
Section
[defaults]
Key
verbosity
Environment
Variable
[`ANSIBLE_VERBOSITY`](#envvar-ANSIBLE_VERBOSITY)
### DEPRECATION\_WARNINGS
Description
Toggle to control the showing of deprecation warnings
Type
boolean
Default
True
Ini
Section
[defaults]
Key
deprecation\_warnings
Environment
Variable
[`ANSIBLE_DEPRECATION_WARNINGS`](#envvar-ANSIBLE_DEPRECATION_WARNINGS)
### DEVEL\_WARNING
Description
Toggle to control showing warnings related to running devel
Type
boolean
Default
True
Ini
Section
[defaults]
Key
devel\_warning
Environment
Variable
[`ANSIBLE_DEVEL_WARNING`](#envvar-ANSIBLE_DEVEL_WARNING)
### DIFF\_ALWAYS
Description
Configuration toggle to tell modules to show differences when in ‘changed’ status, equivalent to `--diff`.
Type
bool
Default
False
Ini
Section
[diff]
Key
always
Environment
Variable
[`ANSIBLE_DIFF_ALWAYS`](#envvar-ANSIBLE_DIFF_ALWAYS)
### DIFF\_CONTEXT
Description
How many lines of context to show when displaying the differences between files.
Type
integer
Default
3
Ini
Section
[diff]
Key
context
Environment
Variable
[`ANSIBLE_DIFF_CONTEXT`](#envvar-ANSIBLE_DIFF_CONTEXT)
### DISPLAY\_ARGS\_TO\_STDOUT
Description
Normally `ansible-playbook` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn’t then `ansible-playbook` uses the task’s action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. If you set this variable to True in the config then `ansible-playbook` will also include the task’s arguments in the header. This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed. If you set this to True you should be sure that you have secured your environment’s stdout (no one can shoulder surf your screen and you aren’t saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the `no_log: True` parameter to tasks which have sensitive values. See How do I keep secret data in my playbook? for more information.
Type
boolean
Default
False
Version Added
2.1
Ini
Section
[defaults]
Key
display\_args\_to\_stdout
Environment
Variable
[`ANSIBLE_DISPLAY_ARGS_TO_STDOUT`](#envvar-ANSIBLE_DISPLAY_ARGS_TO_STDOUT)
### DISPLAY\_SKIPPED\_HOSTS
Description
Toggle to control displaying skipped task/host entries in a task in the default callback
Type
boolean
Default
True
Ini
Section
[defaults]
Key
display\_skipped\_hosts
Environment
* Variable
[`ANSIBLE_DISPLAY_SKIPPED_HOSTS`](#envvar-ANSIBLE_DISPLAY_SKIPPED_HOSTS)
* Variable
[`DISPLAY_SKIPPED_HOSTS`](#envvar-DISPLAY_SKIPPED_HOSTS)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_DISPLAY_SKIPPED_HOSTS` environment variable
### DOC\_FRAGMENT\_PLUGIN\_PATH
Description
Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
Type
pathspec
Default
~/.ansible/plugins/doc\_fragments:/usr/share/ansible/plugins/doc\_fragments
Ini
Section
[defaults]
Key
doc\_fragment\_plugins
Environment
Variable
[`ANSIBLE_DOC_FRAGMENT_PLUGINS`](#envvar-ANSIBLE_DOC_FRAGMENT_PLUGINS)
### DOCSITE\_ROOT\_URL
Description
Root docsite URL used to generate docs URLs in warning/error text; must be an absolute URL with valid scheme and trailing slash.
Default
<https://docs.ansible.com/ansible-core/>
Version Added
2.8
Ini
Section
[defaults]
Key
docsite\_root\_url
### DUPLICATE\_YAML\_DICT\_KEY
Description
By default Ansible will issue a warning when a duplicate dict key is encountered in YAML. These warnings can be silenced by adjusting this setting to ‘ignore’.
Type
string
Default
warn
Choices
* warn
* error
* ignore
Version Added
2.9
Ini
Section
[defaults]
Key
duplicate\_dict\_key
Environment
Variable
[`ANSIBLE_DUPLICATE_YAML_DICT_KEY`](#envvar-ANSIBLE_DUPLICATE_YAML_DICT_KEY)
### ENABLE\_TASK\_DEBUGGER
Description
Whether or not to enable the task debugger; this was previously done as a strategy plugin. Now all strategy plugins can inherit this behavior. The debugger defaults to activating when a task is failed or unreachable. Use the debugger keyword for more flexibility.
Type
boolean
Default
False
Version Added
2.5
Ini
Section
[defaults]
Key
enable\_task\_debugger
Environment
Variable
[`ANSIBLE_ENABLE_TASK_DEBUGGER`](#envvar-ANSIBLE_ENABLE_TASK_DEBUGGER)
### ERROR\_ON\_MISSING\_HANDLER
Description
Toggle to allow missing handlers to become a warning instead of an error when notifying.
Type
boolean
Default
True
Ini
Section
[defaults]
Key
error\_on\_missing\_handler
Environment
Variable
[`ANSIBLE_ERROR_ON_MISSING_HANDLER`](#envvar-ANSIBLE_ERROR_ON_MISSING_HANDLER)
### FACTS\_MODULES
Description
Which modules to run during a play’s fact gathering stage. Using the default of ‘smart’ will try to figure it out based on connection type.
Type
list
Default
[‘smart’]
Ini
Section
[defaults]
Key
facts\_modules
Environment
Variable
[`ANSIBLE_FACTS_MODULES`](#envvar-ANSIBLE_FACTS_MODULES)
Variables
name
`ansible_facts_modules`
### GALAXY\_CACHE\_DIR
Description
The directory that stores cached responses from a Galaxy server. This is only used by the `ansible-galaxy collection install` and `download` commands. Cache files inside this dir will be ignored if they are world writable.
Type
path
Default
~/.ansible/galaxy\_cache
Version Added
2.11
Ini
Section
[galaxy]
Key
cache\_dir
Environment
Variable
[`ANSIBLE_GALAXY_CACHE_DIR`](#envvar-ANSIBLE_GALAXY_CACHE_DIR)
### GALAXY\_DISPLAY\_PROGRESS
Description
Some steps in `ansible-galaxy` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. This config option controls whether the display wheel is shown or not. The default is to show the display wheel if stdout has a tty.
Type
bool
Default
None
Version Added
2.10
Ini
Section
[galaxy]
Key
display\_progress
Environment
Variable
[`ANSIBLE_GALAXY_DISPLAY_PROGRESS`](#envvar-ANSIBLE_GALAXY_DISPLAY_PROGRESS)
### GALAXY\_IGNORE\_CERTS
Description
If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate.
Type
boolean
Default
False
Ini
Section
[galaxy]
Key
ignore\_certs
Environment
Variable
[`ANSIBLE_GALAXY_IGNORE`](#envvar-ANSIBLE_GALAXY_IGNORE)
### GALAXY\_ROLE\_SKELETON
Description
Role or collection skeleton directory to use as a template for the `init` action in `ansible-galaxy`, same as `--role-skeleton`.
Type
path
Default
None
Ini
Section
[galaxy]
Key
role\_skeleton
Environment
Variable
[`ANSIBLE_GALAXY_ROLE_SKELETON`](#envvar-ANSIBLE_GALAXY_ROLE_SKELETON)
### GALAXY\_ROLE\_SKELETON\_IGNORE
Description
Patterns of files to ignore inside a Galaxy role or collection skeleton directory.
Type
list
Default
[‘^.git$’, ‘^.\*/.git\_keep$’]
Ini
Section
[galaxy]
Key
role\_skeleton\_ignore
Environment
Variable
[`ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE`](#envvar-ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE)
### GALAXY\_SERVER
Description
URL to prepend when roles don’t specify the full URI; assume they are referencing this server as the source.
Default
<https://galaxy.ansible.com>
Ini
Section
[galaxy]
Key
server
Environment
Variable
[`ANSIBLE_GALAXY_SERVER`](#envvar-ANSIBLE_GALAXY_SERVER)
### GALAXY\_SERVER\_LIST
Description
A list of Galaxy servers to use when installing a collection. The value corresponds to the config ini header `[galaxy_server.{{item}}]` which defines the server details. See [Configuring the ansible-galaxy client](../user_guide/collections_using#galaxy-server-config) for more details on how to define a Galaxy server. The order of servers in this list is used as the order in which a collection is resolved. Setting this config option will ignore the [GALAXY\_SERVER](#galaxy-server) config option.
Type
list
Version Added
2.9
Ini
Section
[galaxy]
Key
server\_list
Environment
Variable
[`ANSIBLE_GALAXY_SERVER_LIST`](#envvar-ANSIBLE_GALAXY_SERVER_LIST)
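Each name in `server_list` points at its own `[galaxy_server.<name>]` ini section, as the description notes; a sketch with hypothetical server names and URLs:

```ini
# ansible.cfg -- server names, URLs, and token are hypothetical
[galaxy]
server_list = release_galaxy, my_hub

[galaxy_server.release_galaxy]
url = https://galaxy.ansible.com/

[galaxy_server.my_hub]
url = https://hub.example.com/api/galaxy/
token = changeme
```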
### GALAXY\_TOKEN\_PATH
Description
Local path to galaxy access token file
Type
path
Default
~/.ansible/galaxy\_token
Version Added
2.9
Ini
Section
[galaxy]
Key
token\_path
Environment
Variable
[`ANSIBLE_GALAXY_TOKEN_PATH`](#envvar-ANSIBLE_GALAXY_TOKEN_PATH)
### HOST\_KEY\_CHECKING
Description
Set this to “False” if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host
Type
boolean
Default
True
Ini
Section
[defaults]
Key
host\_key\_checking
Environment
Variable
[`ANSIBLE_HOST_KEY_CHECKING`](#envvar-ANSIBLE_HOST_KEY_CHECKING)
### HOST\_PATTERN\_MISMATCH
Description
This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or just ignore it.
Default
warning
Choices
* warning
* error
* ignore
Version Added
2.8
Ini
Section
[inventory]
Key
host\_pattern\_mismatch
Environment
Variable
[`ANSIBLE_HOST_PATTERN_MISMATCH`](#envvar-ANSIBLE_HOST_PATTERN_MISMATCH)
### INJECT\_FACTS\_AS\_VARS
Description
Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace. Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
Type
boolean
Default
True
Version Added
2.5
Ini
Section
[defaults]
Key
inject\_facts\_as\_vars
Environment
Variable
[`ANSIBLE_INJECT_FACT_VARS`](#envvar-ANSIBLE_INJECT_FACT_VARS)
### INTERPRETER\_PYTHON
Description
Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are `auto`, `auto_silent`, and `auto_legacy` (the default). All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting `auto_silent`. The default value of `auto_legacy` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to `/usr/bin/python`, will use that interpreter if present (and issue a warning that the default behavior will change to that of `auto` in a future Ansible release).
Default
auto\_legacy
Version Added
2.8
Ini
Section
[defaults]
Key
interpreter\_python
Environment
Variable
[`ANSIBLE_PYTHON_INTERPRETER`](#envvar-ANSIBLE_PYTHON_INTERPRETER)
Variables
name
`ansible_python_interpreter`
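To pin the interpreter and silence the discovery warning described above, set the path explicitly; the path below is illustrative and must exist on the targets:

```ini
# ansible.cfg -- path is illustrative
[defaults]
interpreter_python = /usr/bin/python3
```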
### INTERPRETER\_PYTHON\_DISTRO\_MAP
Default
{‘centos’: {‘6’: ‘/usr/bin/python’, ‘8’: ‘/usr/libexec/platform-python’}, ‘debian’: {‘8’: ‘/usr/bin/python’, ‘10’: ‘/usr/bin/python3’}, ‘fedora’: {‘23’: ‘/usr/bin/python3’}, ‘oracle’: {‘6’: ‘/usr/bin/python’, ‘8’: ‘/usr/libexec/platform-python’}, ‘redhat’: {‘6’: ‘/usr/bin/python’, ‘8’: ‘/usr/libexec/platform-python’}, ‘rhel’: {‘6’: ‘/usr/bin/python’, ‘8’: ‘/usr/libexec/platform-python’}, ‘ubuntu’: {‘14’: ‘/usr/bin/python’, ‘16’: ‘/usr/bin/python3’}}
Version Added
2.8
### INTERPRETER\_PYTHON\_FALLBACK
Default
[‘/usr/bin/python’, ‘python3.9’, ‘python3.8’, ‘python3.7’, ‘python3.6’, ‘python3.5’, ‘python2.7’, ‘python2.6’, ‘/usr/libexec/platform-python’, ‘/usr/bin/python3’, ‘python’]
Version Added
2.8
### INVALID\_TASK\_ATTRIBUTE\_FAILED
Description
If ‘false’, invalid attributes for a task will result in warnings instead of errors
Type
boolean
Default
True
Version Added
2.7
Ini
Section
[defaults]
Key
invalid\_task\_attribute\_failed
Environment
Variable
[`ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED`](#envvar-ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED)
### INVENTORY\_ANY\_UNPARSED\_IS\_FAILED
Description
If ‘true’, it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning.
Type
boolean
Default
False
Version Added
2.7
Ini
Section
[inventory]
Key
any\_unparsed\_is\_failed
Environment
Variable
[`ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED`](#envvar-ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED)
### INVENTORY\_CACHE\_ENABLED
Description
Toggle to turn on inventory caching
Type
bool
Default
False
Ini
Section
[inventory]
Key
cache
Environment
Variable
[`ANSIBLE_INVENTORY_CACHE`](#envvar-ANSIBLE_INVENTORY_CACHE)
### INVENTORY\_CACHE\_PLUGIN
Description
The plugin for caching inventory. If INVENTORY\_CACHE\_PLUGIN is not provided CACHE\_PLUGIN can be used instead.
Ini
Section
[inventory]
Key
cache\_plugin
Environment
Variable
[`ANSIBLE_INVENTORY_CACHE_PLUGIN`](#envvar-ANSIBLE_INVENTORY_CACHE_PLUGIN)
### INVENTORY\_CACHE\_PLUGIN\_CONNECTION
Description
The inventory cache connection. If INVENTORY\_CACHE\_PLUGIN\_CONNECTION is not provided CACHE\_PLUGIN\_CONNECTION can be used instead.
Ini
Section
[inventory]
Key
cache\_connection
Environment
Variable
[`ANSIBLE_INVENTORY_CACHE_CONNECTION`](#envvar-ANSIBLE_INVENTORY_CACHE_CONNECTION)
### INVENTORY\_CACHE\_PLUGIN\_PREFIX
Description
The table prefix for the cache plugin. If INVENTORY\_CACHE\_PLUGIN\_PREFIX is not provided CACHE\_PLUGIN\_PREFIX can be used instead.
Default
ansible\_facts
Ini
Section
[inventory]
Key
cache\_prefix
Environment
Variable
[`ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX`](#envvar-ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX)
### INVENTORY\_CACHE\_TIMEOUT
Description
Expiration timeout for the inventory cache plugin data. If INVENTORY\_CACHE\_TIMEOUT is not provided CACHE\_TIMEOUT can be used instead.
Default
3600
Ini
Section
[inventory]
Key
cache\_timeout
Environment
Variable
[`ANSIBLE_INVENTORY_CACHE_TIMEOUT`](#envvar-ANSIBLE_INVENTORY_CACHE_TIMEOUT)
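The inventory cache settings above work together; a sketch using the documented keys, with the `jsonfile` plugin chosen purely for illustration:

```ini
# ansible.cfg -- plugin choice and path are illustrative
[inventory]
cache = True
cache_plugin = jsonfile
cache_connection = ~/.ansible/inventory_cache
cache_prefix = ansible_facts
cache_timeout = 3600
```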
### INVENTORY\_ENABLED
Description
List of enabled inventory plugins; it also determines the order in which they are used.
Type
list
Default
[‘host\_list’, ‘script’, ‘auto’, ‘yaml’, ‘ini’, ‘toml’]
Ini
Section
[inventory]
Key
enable\_plugins
Environment
Variable
[`ANSIBLE_INVENTORY_ENABLED`](#envvar-ANSIBLE_INVENTORY_ENABLED)
### INVENTORY\_EXPORT
Description
Controls if ansible-inventory will accurately reflect Ansible’s view into inventory or if it is optimized for exporting.
Type
bool
Default
False
Ini
Section
[inventory]
Key
export
Environment
Variable
[`ANSIBLE_INVENTORY_EXPORT`](#envvar-ANSIBLE_INVENTORY_EXPORT)
### INVENTORY\_IGNORE\_EXTS
Description
List of extensions to ignore when using a directory as an inventory source
Type
list
Default
{{(REJECT\_EXTS + (‘.orig’, ‘.ini’, ‘.cfg’, ‘.retry’))}}
Ini
* Section
[defaults]
Key
inventory\_ignore\_extensions
* Section
[inventory]
Key
ignore\_extensions
Environment
Variable
[`ANSIBLE_INVENTORY_IGNORE`](#envvar-ANSIBLE_INVENTORY_IGNORE)
### INVENTORY\_IGNORE\_PATTERNS
Description
List of patterns to ignore when using a directory as an inventory source
Type
list
Default
[]
Ini
* Section
[defaults]
Key
inventory\_ignore\_patterns
* Section
[inventory]
Key
ignore\_patterns
Environment
Variable
[`ANSIBLE_INVENTORY_IGNORE_REGEX`](#envvar-ANSIBLE_INVENTORY_IGNORE_REGEX)
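Both ignore settings accept either ini section listed above; a sketch under `[inventory]` with illustrative values:

```ini
# ansible.cfg -- extension list and pattern are illustrative
[inventory]
ignore_extensions = .orig, .ini, .cfg, .retry
ignore_patterns = ^secret_.*
```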
### INVENTORY\_UNPARSED\_IS\_FAILED
Description
If ‘true’, it is a fatal error if every single potential inventory source fails to parse; otherwise this situation will only attract a warning.
Type
bool
Default
False
Ini
Section
[inventory]
Key
unparsed\_is\_failed
Environment
Variable
[`ANSIBLE_INVENTORY_UNPARSED_FAILED`](#envvar-ANSIBLE_INVENTORY_UNPARSED_FAILED)
### LOCALHOST\_WARNING
Description
By default Ansible will issue a warning when there are no hosts in the inventory. These warnings can be silenced by adjusting this setting to False.
Type
boolean
Default
True
Version Added
2.6
Ini
Section
[defaults]
Key
localhost\_warning
Environment
Variable
[`ANSIBLE_LOCALHOST_WARNING`](#envvar-ANSIBLE_LOCALHOST_WARNING)
### MAX\_FILE\_SIZE\_FOR\_DIFF
Description
Maximum size of files to be considered for diff display
Type
int
Default
104448
Ini
Section
[defaults]
Key
max\_diff\_size
Environment
Variable
[`ANSIBLE_MAX_DIFF_SIZE`](#envvar-ANSIBLE_MAX_DIFF_SIZE)
### MODULE\_IGNORE\_EXTS
Description
List of extensions to ignore when looking for modules to load. This is for rejecting script and binary module fallback extensions.
Type
list
Default
{{(REJECT\_EXTS + (‘.yaml’, ‘.yml’, ‘.ini’))}}
Ini
Section
[defaults]
Key
module\_ignore\_exts
Environment
Variable
[`ANSIBLE_MODULE_IGNORE_EXTS`](#envvar-ANSIBLE_MODULE_IGNORE_EXTS)
### NETCONF\_SSH\_CONFIG
Description
This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set to a custom ssh configuration file path from which to read the bastion/jump host settings.
Default
None
Ini
Section
[netconf\_connection]
Key
ssh\_config
Environment
Variable
[`ANSIBLE_NETCONF_SSH_CONFIG`](#envvar-ANSIBLE_NETCONF_SSH_CONFIG)
### NETWORK\_GROUP\_MODULES
Type
list
Default
[‘eos’, ‘nxos’, ‘ios’, ‘iosxr’, ‘junos’, ‘enos’, ‘ce’, ‘vyos’, ‘sros’, ‘dellos9’, ‘dellos10’, ‘dellos6’, ‘asa’, ‘aruba’, ‘aireos’, ‘bigip’, ‘ironware’, ‘onyx’, ‘netconf’, ‘exos’, ‘voss’, ‘slxos’]
Ini
Section
[defaults]
Key
network\_group\_modules
Environment
* Variable
[`ANSIBLE_NETWORK_GROUP_MODULES`](#envvar-ANSIBLE_NETWORK_GROUP_MODULES)
* Variable
[`NETWORK_GROUP_MODULES`](#envvar-NETWORK_GROUP_MODULES)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_NETWORK_GROUP_MODULES` environment variable
### OLD\_PLUGIN\_CACHE\_CLEARING
Description
Previously, Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly ‘sticky’. This setting allows a return to that behaviour.
Type
boolean
Default
False
Version Added
2.8
Ini
Section
[defaults]
Key
old\_plugin\_cache\_clear
Environment
Variable
[`ANSIBLE_OLD_PLUGIN_CACHE_CLEAR`](#envvar-ANSIBLE_OLD_PLUGIN_CACHE_CLEAR)
### PARAMIKO\_HOST\_KEY\_AUTO\_ADD
Type
boolean
Default
False
Ini
Section
[paramiko\_connection]
Key
host\_key\_auto\_add
Environment
Variable
[`ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD`](#envvar-ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD)
### PARAMIKO\_LOOK\_FOR\_KEYS
Type
boolean
Default
True
Ini
Section
[paramiko\_connection]
Key
look\_for\_keys
Environment
Variable
[`ANSIBLE_PARAMIKO_LOOK_FOR_KEYS`](#envvar-ANSIBLE_PARAMIKO_LOOK_FOR_KEYS)
### PERSISTENT\_COMMAND\_TIMEOUT
Description
This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
Type
int
Default
30
Ini
Section
[persistent\_connection]
Key
command\_timeout
Environment
Variable
[`ANSIBLE_PERSISTENT_COMMAND_TIMEOUT`](#envvar-ANSIBLE_PERSISTENT_COMMAND_TIMEOUT)
### PERSISTENT\_CONNECT\_RETRY\_TIMEOUT
Description
This controls the retry timeout for persistent connection to connect to the local domain socket.
Type
integer
Default
15
Ini
Section
[persistent\_connection]
Key
connect\_retry\_timeout
Environment
Variable
[`ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT`](#envvar-ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT)
### PERSISTENT\_CONNECT\_TIMEOUT
Description
This controls how long the persistent connection will remain idle before it is destroyed.
Type
integer
Default
30
Ini
Section
[persistent\_connection]
Key
connect\_timeout
Environment
Variable
[`ANSIBLE_PERSISTENT_CONNECT_TIMEOUT`](#envvar-ANSIBLE_PERSISTENT_CONNECT_TIMEOUT)
### PERSISTENT\_CONTROL\_PATH\_DIR
Description
Path to socket to be used by the connection persistence system.
Type
path
Default
~/.ansible/pc
Ini
Section
[persistent\_connection]
Key
control\_path\_dir
Environment
Variable
[`ANSIBLE_PERSISTENT_CONTROL_PATH_DIR`](#envvar-ANSIBLE_PERSISTENT_CONTROL_PATH_DIR)
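The persistent connection entries above share one ini section; the values in this sketch simply restate the documented defaults:

```ini
# ansible.cfg -- values restate the documented defaults
[persistent_connection]
command_timeout = 30
connect_retry_timeout = 15
connect_timeout = 30
control_path_dir = ~/.ansible/pc
```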
### PLAYBOOK\_DIR
Description
A number of non-playbook CLIs have a `--playbook-dir` argument; this sets the default value for it.
Type
path
Version Added
2.9
Ini
Section
[defaults]
Key
playbook\_dir
Environment
Variable
[`ANSIBLE_PLAYBOOK_DIR`](#envvar-ANSIBLE_PLAYBOOK_DIR)
### PLAYBOOK\_VARS\_ROOT
Description
This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host\_vars/group\_vars. The `top` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. The `bottom` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. The `all` option examines from the first parent to the current playbook.
Default
top
Choices
* top
* bottom
* all
Version Added
2.4.1
Ini
Section
[defaults]
Key
playbook\_vars\_root
Environment
Variable
[`ANSIBLE_PLAYBOOK_VARS_ROOT`](#envvar-ANSIBLE_PLAYBOOK_VARS_ROOT)
### PLUGIN\_FILTERS\_CFG
Description
A path to configuration for filtering which plugins installed on the system are allowed to be used. See [Rejecting modules](../user_guide/plugin_filtering_config#plugin-filtering-config) for details of the filter file’s format. The default is /etc/ansible/plugin\_filters.yml
Type
path
Default
None
Version Added
2.5.0
Ini
* Section
[default]
Key
plugin\_filters\_cfg
Deprecated in
2.12
Deprecated detail
specifying “plugin\_filters\_cfg” under the “default” section is deprecated
Deprecated alternatives
the “defaults” section instead
* Section
[defaults]
Key
plugin\_filters\_cfg
### PYTHON\_MODULE\_RLIMIT\_NOFILE
Description
Attempts to set RLIMIT\_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See <https://bugs.python.org/issue11284>). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits.
Default
0
Version Added
2.8
Ini
Section
[defaults]
Key
python\_module\_rlimit\_nofile
Environment
Variable
[`ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE`](#envvar-ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE)
Variables
name
`ansible_python_module_rlimit_nofile`
### RETRY\_FILES\_ENABLED
Description
This controls whether a failed Ansible playbook should create a .retry file.
Type
bool
Default
False
Ini
Section
[defaults]
Key
retry\_files\_enabled
Environment
Variable
[`ANSIBLE_RETRY_FILES_ENABLED`](#envvar-ANSIBLE_RETRY_FILES_ENABLED)
### RETRY\_FILES\_SAVE\_PATH
Description
This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. This file will be overwritten after each run with the list of failed hosts from all plays.
Type
path
Default
None
Ini
Section
[defaults]
Key
retry\_files\_save\_path
Environment
Variable
[`ANSIBLE_RETRY_FILES_SAVE_PATH`](#envvar-ANSIBLE_RETRY_FILES_SAVE_PATH)
### RUN\_VARS\_PLUGINS
Description
This setting can be used to optimize vars\_plugin usage depending on the user’s inventory size and play selection. Setting to `demand` will run vars\_plugins relative to inventory sources anytime vars are ‘demanded’ by tasks. Setting to `start` will run vars\_plugins relative to inventory sources after importing that inventory source.
Type
str
Default
demand
Choices
* demand
* start
Version Added
2.10
Ini
Section
[defaults]
Key
run\_vars\_plugins
Environment
Variable
[`ANSIBLE_RUN_VARS_PLUGINS`](#envvar-ANSIBLE_RUN_VARS_PLUGINS)
### SHOW\_CUSTOM\_STATS
Description
This adds the custom stats set via the set\_stats plugin to the default output
Type
bool
Default
False
Ini
Section
[defaults]
Key
show\_custom\_stats
Environment
Variable
[`ANSIBLE_SHOW_CUSTOM_STATS`](#envvar-ANSIBLE_SHOW_CUSTOM_STATS)
### STRING\_CONVERSION\_ACTION
Description
Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as ‘1.00’, “[‘a’, ‘b’,]”, and ‘yes’, ‘y’, etc. will be converted by the YAML parser unless fully quoted. Valid options are ‘error’, ‘warn’, and ‘ignore’. Since 2.8, this option defaults to ‘warn’ but will change to ‘error’ in 2.12.
Type
string
Default
warn
Version Added
2.8
Ini
Section
[defaults]
Key
string\_conversion\_action
Environment
Variable
[`ANSIBLE_STRING_CONVERSION_ACTION`](#envvar-ANSIBLE_STRING_CONVERSION_ACTION)
### STRING\_TYPE\_FILTERS
Description
This list of filters avoids ‘type conversion’ when templating variables. Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
Type
list
Default
[‘string’, ‘to\_json’, ‘to\_nice\_json’, ‘to\_yaml’, ‘to\_nice\_yaml’, ‘ppretty’, ‘json’]
Ini
Section
[jinja2]
Key
dont\_type\_filters
Environment
Variable
[`ANSIBLE_STRING_TYPE_FILTERS`](#envvar-ANSIBLE_STRING_TYPE_FILTERS)
### SYSTEM\_WARNINGS
Description
Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts). These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
Type
boolean
Default
True
Ini
Section
[defaults]
Key
system\_warnings
Environment
Variable
[`ANSIBLE_SYSTEM_WARNINGS`](#envvar-ANSIBLE_SYSTEM_WARNINGS)
### TAGS\_RUN
Description
Default list of tags to run in your plays; Skip Tags has precedence.
Type
list
Default
[]
Version Added
2.5
Ini
Section
[tags]
Key
run
Environment
Variable
[`ANSIBLE_RUN_TAGS`](#envvar-ANSIBLE_RUN_TAGS)
### TAGS\_SKIP
Description
Default list of tags to skip in your plays; has precedence over Run Tags.
Type
list
Default
[]
Version Added
2.5
Ini
Section
[tags]
Key
skip
Environment
Variable
[`ANSIBLE_SKIP_TAGS`](#envvar-ANSIBLE_SKIP_TAGS)
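Run and skip tags can be preconfigured rather than passed as `--tags`/`--skip-tags` on each invocation; the tag names here are hypothetical:

```ini
# ansible.cfg -- tag names are hypothetical
[tags]
run = configuration, packages
skip = debug
```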
### TASK\_DEBUGGER\_IGNORE\_ERRORS
Description
This option defines whether the task debugger will be invoked on a failed task when ignore\_errors=True is specified. True specifies that the debugger will honor ignore\_errors, False will not honor ignore\_errors.
Type
boolean
Default
True
Version Added
2.7
Ini
Section
[defaults]
Key
task\_debugger\_ignore\_errors
Environment
Variable
[`ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS`](#envvar-ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS)
### TASK\_TIMEOUT
Description
Set the maximum time (in seconds) that a task can run for. If set to 0 (the default) there is no timeout.
Type
integer
Default
0
Version Added
2.10
Ini
Section
[defaults]
Key
task\_timeout
Environment
Variable
[`ANSIBLE_TASK_TIMEOUT`](#envvar-ANSIBLE_TASK_TIMEOUT)
### TRANSFORM\_INVALID\_GROUP\_CHARS
Description
Make Ansible transform invalid characters in group names supplied by inventory sources. If ‘never’ it will allow for the group name but warn about the issue. When ‘ignore’, it does the same as ‘never’, without issuing a warning. When ‘always’ it will replace any invalid characters with ‘\_’ (underscore) and warn the user. When ‘silently’, it does the same as ‘always’, without issuing a warning.
Type
string
Default
never
Choices
* always
* never
* ignore
* silently
Version Added
2.8
Ini
Section
[defaults]
Key
force\_valid\_group\_names
Environment
Variable
[`ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS`](#envvar-ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS)
### USE\_PERSISTENT\_CONNECTIONS
Description
Toggles the use of persistence for connections.
Type
boolean
Default
False
Ini
Section
[defaults]
Key
use\_persistent\_connections
Environment
Variable
[`ANSIBLE_USE_PERSISTENT_CONNECTIONS`](#envvar-ANSIBLE_USE_PERSISTENT_CONNECTIONS)
### VARIABLE\_PLUGINS\_ENABLED
Description
Whitelist for variable plugins that require it.
Type
list
Default
[‘host\_group\_vars’]
Version Added
2.10
Ini
Section
[defaults]
Key
vars\_plugins\_enabled
Environment
Variable
[`ANSIBLE_VARS_ENABLED`](#envvar-ANSIBLE_VARS_ENABLED)
### VARIABLE\_PRECEDENCE
Description
Allows changing the group variable precedence merge order.
Type
list
Default
[‘all\_inventory’, ‘groups\_inventory’, ‘all\_plugins\_inventory’, ‘all\_plugins\_play’, ‘groups\_plugins\_inventory’, ‘groups\_plugins\_play’]
Version Added
2.4
Ini
Section
[defaults]
Key
precedence
Environment
Variable
[`ANSIBLE_PRECEDENCE`](#envvar-ANSIBLE_PRECEDENCE)
### VERBOSE\_TO\_STDERR
Description
Force ‘verbose’ option to use stderr instead of stdout
Type
bool
Default
False
Version Added
2.8
Ini
Section
[defaults]
Key
verbose\_to\_stderr
Environment
Variable
[`ANSIBLE_VERBOSE_TO_STDERR`](#envvar-ANSIBLE_VERBOSE_TO_STDERR)
### WIN\_ASYNC\_STARTUP\_TIMEOUT
Description
For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. The task will only start to be timed against its async\_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here.
Type
integer
Default
5
Version Added
2.10
Ini
Section
[defaults]
Key
win\_async\_startup\_timeout
Environment
Variable
[`ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT`](#envvar-ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT)
Variables
name
`ansible_win_async_startup_timeout`
### WORKER\_SHUTDOWN\_POLL\_COUNT
Description
The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. After this limit is reached any worker processes still running will be terminated. This is for internal use only.
Type
integer
Default
0
Version Added
2.10
Environment
Variable
[`ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT`](#envvar-ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT)
### WORKER\_SHUTDOWN\_POLL\_DELAY
Description
The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. This is for internal use only.
Type
float
Default
0.1
Version Added
2.10
Environment
Variable
[`ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY`](#envvar-ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY)
### YAML\_FILENAME\_EXTENSIONS
Description
Check all of these extensions when looking for ‘variable’ files which should be YAML or JSON or vaulted versions of these. This affects vars\_files, include\_vars, inventory and vars plugins among others.
Type
list
Default
[‘.yml’, ‘.yaml’, ‘.json’]
Ini
Section
[defaults]
Key
yaml\_valid\_extensions
Environment
Variable
[`ANSIBLE_YAML_FILENAME_EXT`](#envvar-ANSIBLE_YAML_FILENAME_EXT)
Environment Variables
---------------------
`ANSIBLE_CONFIG`
Override the default ansible config file
`ANSIBLE_CONNECTION_PATH`
Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. If null, ansible will start with the same directory as the ansible script.
See also [ANSIBLE\_CONNECTION\_PATH](#ansible-connection-path)
`ANSIBLE_COW_SELECTION`
This allows you to choose a specific cowsay stencil for the banners or use ‘random’ to cycle through them.
See also [ANSIBLE\_COW\_SELECTION](#ansible-cow-selection)
`ANSIBLE_COW_WHITELIST`
White list of cowsay templates that are ‘safe’ to use; set to an empty list if you want to enable all installed templates.
See also [ANSIBLE\_COW\_ACCEPTLIST](#ansible-cow-acceptlist)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_COW\_ACCEPTLIST
`ANSIBLE_COW_ACCEPTLIST`
White list of cowsay templates that are ‘safe’ to use; set to an empty list if you want to enable all installed templates.
See also [ANSIBLE\_COW\_ACCEPTLIST](#ansible-cow-acceptlist)
Version Added
2.11
`ANSIBLE_FORCE_COLOR`
This option forces color mode even when running without a TTY or the “nocolor” setting is True.
See also [ANSIBLE\_FORCE\_COLOR](#ansible-force-color)
`ANSIBLE_NOCOLOR`
This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
See also [ANSIBLE\_NOCOLOR](#ansible-nocolor)
`NO_COLOR`
This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
See also [ANSIBLE\_NOCOLOR](#ansible-nocolor)
Version Added
2.11
`ANSIBLE_NOCOWS`
If you have cowsay installed but want to avoid the ‘cows’ (why????), use this.
See also [ANSIBLE\_NOCOWS](#ansible-nocows)
`ANSIBLE_COW_PATH`
Specify a custom cowsay path or swap in your cowsay implementation of choice
See also [ANSIBLE\_COW\_PATH](#ansible-cow-path)
`ANSIBLE_PIPELINING`
Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled. However, this conflicts with privilege escalation (become). For example, when using ‘sudo:’ operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts, which is why it is disabled by default. This option is disabled if `ANSIBLE_KEEP_REMOTE_FILES` is enabled. This is a global option; each connection plugin can override it either by having more specific options or by not supporting pipelining at all.
See also [ANSIBLE\_PIPELINING](#ansible-pipelining)
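As a quick example, pipelining can be switched on for a single run through this environment variable (the playbook name is illustrative):
```
ANSIBLE_PIPELINING=true ansible-playbook site.yml
```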
`ANSIBLE_ANY_ERRORS_FATAL`
Sets the default value for the any\_errors\_fatal keyword. If True, task failures will be considered fatal errors.
See also [ANY\_ERRORS\_FATAL](#any-errors-fatal)
`ANSIBLE_BECOME_ALLOW_SAME_USER`
This setting controls if become is skipped when the remote user and the become user are the same, for example root ‘sudo’ to root.
See also [BECOME\_ALLOW\_SAME\_USER](#become-allow-same-user)
`ANSIBLE_AGNOSTIC_BECOME_PROMPT`
Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
See also [AGNOSTIC\_BECOME\_PROMPT](#agnostic-become-prompt)
`ANSIBLE_CACHE_PLUGIN`
Chooses which cache plugin to use, the default ‘memory’ is ephemeral.
See also [CACHE\_PLUGIN](#cache-plugin)
`ANSIBLE_CACHE_PLUGIN_CONNECTION`
Defines connection or path information for the cache plugin
See also [CACHE\_PLUGIN\_CONNECTION](#cache-plugin-connection)
`ANSIBLE_CACHE_PLUGIN_PREFIX`
Prefix to use for cache plugin files/tables
See also [CACHE\_PLUGIN\_PREFIX](#cache-plugin-prefix)
`ANSIBLE_CACHE_PLUGIN_TIMEOUT`
Expiration timeout for the cache plugin data
See also [CACHE\_PLUGIN\_TIMEOUT](#cache-plugin-timeout)
`ANSIBLE_COLLECTIONS_SCAN_SYS_PATH`
A boolean to enable or disable scanning the sys.path for installed collections
See also [COLLECTIONS\_SCAN\_SYS\_PATH](#collections-scan-sys-path)
`ANSIBLE_COLLECTIONS_PATHS`
Colon separated paths in which Ansible will search for collections content. Collections must be in nested *subdirectories*, not directly in these directories. For example, if `COLLECTIONS_PATHS` includes `~/.ansible/collections`, and you want to add `my.collection` to that directory, it must be saved as `~/.ansible/collections/ansible_collections/my/collection`.
See also [COLLECTIONS\_PATHS](#collections-paths)
`ANSIBLE_COLLECTIONS_PATH`
Colon separated paths in which Ansible will search for collections content. Collections must be in nested *subdirectories*, not directly in these directories. For example, if `COLLECTIONS_PATHS` includes `~/.ansible/collections`, and you want to add `my.collection` to that directory, it must be saved as `~/.ansible/collections/ansible_collections/my/collection`.
See also [COLLECTIONS\_PATHS](#collections-paths)
Version Added
2.10
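To get the required nested layout without creating it by hand, `ansible-galaxy` can install into one of these paths directly; a sketch (the collection name is illustrative):
```
ansible-galaxy collection install my.collection -p ~/.ansible/collections
```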
`ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH`
When a collection is loaded that does not support the running Ansible version (via the collection metadata key `requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore` skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
See also [COLLECTIONS\_ON\_ANSIBLE\_VERSION\_MISMATCH](#collections-on-ansible-version-mismatch)
`ANSIBLE_COLOR_CHANGED`
Defines the color to use on ‘Changed’ task status
See also [COLOR\_CHANGED](#color-changed)
`ANSIBLE_COLOR_CONSOLE_PROMPT`
Defines the default color to use for ansible-console
See also [COLOR\_CONSOLE\_PROMPT](#color-console-prompt)
`ANSIBLE_COLOR_DEBUG`
Defines the color to use when emitting debug messages
See also [COLOR\_DEBUG](#color-debug)
`ANSIBLE_COLOR_DEPRECATE`
Defines the color to use when emitting deprecation messages
See also [COLOR\_DEPRECATE](#color-deprecate)
`ANSIBLE_COLOR_DIFF_ADD`
Defines the color to use when showing added lines in diffs
See also [COLOR\_DIFF\_ADD](#color-diff-add)
`ANSIBLE_COLOR_DIFF_LINES`
Defines the color to use when showing diffs
See also [COLOR\_DIFF\_LINES](#color-diff-lines)
`ANSIBLE_COLOR_DIFF_REMOVE`
Defines the color to use when showing removed lines in diffs
See also [COLOR\_DIFF\_REMOVE](#color-diff-remove)
`ANSIBLE_COLOR_ERROR`
Defines the color to use when emitting error messages
See also [COLOR\_ERROR](#color-error)
`ANSIBLE_COLOR_HIGHLIGHT`
Defines the color to use for highlighting
See also [COLOR\_HIGHLIGHT](#color-highlight)
`ANSIBLE_COLOR_OK`
Defines the color to use when showing ‘OK’ task status
See also [COLOR\_OK](#color-ok)
`ANSIBLE_COLOR_SKIP`
Defines the color to use when showing ‘Skipped’ task status
See also [COLOR\_SKIP](#color-skip)
`ANSIBLE_COLOR_UNREACHABLE`
Defines the color to use on ‘Unreachable’ status
See also [COLOR\_UNREACHABLE](#color-unreachable)
`ANSIBLE_COLOR_VERBOSE`
Defines the color to use when emitting verbose messages. i.e those that show with ‘-v’s.
See also [COLOR\_VERBOSE](#color-verbose)
`ANSIBLE_COLOR_WARN`
Defines the color to use when emitting warning messages
See also [COLOR\_WARN](#color-warn)
`ANSIBLE_CONDITIONAL_BARE_VARS`
With this setting on (True), running conditional evaluation of ‘var’ is treated differently than ‘var.subkey’, as the first is evaluated directly while the second goes through the Jinja2 parser. But ‘false’ strings in ‘var’ get evaluated as booleans. With this setting off they both evaluate the same, but in cases in which ‘var’ was ‘false’ (a string) it won’t get evaluated as a boolean anymore. Currently this setting defaults to ‘True’ but will soon change to ‘False’, and the setting itself will be removed in the future. Expect that this setting eventually will be deprecated after 2.12.
See also [CONDITIONAL\_BARE\_VARS](#conditional-bare-vars)
`_ANSIBLE_COVERAGE_REMOTE_OUTPUT`
Sets the output directory on the remote host to generate coverage reports to. Currently only used for remote coverage on PowerShell modules. This is for internal use only.
See also [COVERAGE\_REMOTE\_OUTPUT](#coverage-remote-output)
`_ANSIBLE_COVERAGE_REMOTE_PATH_FILTER`
A list of paths for files on the Ansible controller to run coverage for when executing on the remote host. Only files that match the path glob will have their coverage collected. Multiple path globs can be specified and are separated by `:`. Currently only used for remote coverage on PowerShell modules. This is for internal use only.
See also [COVERAGE\_REMOTE\_PATHS](#coverage-remote-paths)
`ANSIBLE_ACTION_WARNINGS`
By default Ansible will issue a warning when a warning is received from a task action (module or action plugin). These warnings can be silenced by adjusting this setting to False.
See also [ACTION\_WARNINGS](#action-warnings)
`ANSIBLE_COMMAND_WARNINGS`
Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module. These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option `warn`. As of version 2.11, this is disabled by default.
See also [COMMAND\_WARNINGS](#command-warnings)
`ANSIBLE_LOCALHOST_WARNING`
By default Ansible will issue a warning when there are no hosts in the inventory. These warnings can be silenced by adjusting this setting to False.
See also [LOCALHOST\_WARNING](#localhost-warning)
`ANSIBLE_DOC_FRAGMENT_PLUGINS`
Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
See also [DOC\_FRAGMENT\_PLUGIN\_PATH](#doc-fragment-plugin-path)
`ANSIBLE_ACTION_PLUGINS`
Colon separated paths in which Ansible will search for Action Plugins.
See also [DEFAULT\_ACTION\_PLUGIN\_PATH](#default-action-plugin-path)
`ANSIBLE_ASK_PASS`
This controls whether an Ansible playbook should prompt for a login password. If using SSH keys for authentication, you probably do not need to change this setting.
See also [DEFAULT\_ASK\_PASS](#default-ask-pass)
`ANSIBLE_ASK_VAULT_PASS`
This controls whether an Ansible playbook should prompt for a vault password.
See also [DEFAULT\_ASK\_VAULT\_PASS](#default-ask-vault-pass)
`ANSIBLE_BECOME`
Toggles the use of privilege escalation, allowing you to ‘become’ another user after login.
See also [DEFAULT\_BECOME](#default-become)
`ANSIBLE_BECOME_ASK_PASS`
Toggle to prompt for privilege escalation password.
See also [DEFAULT\_BECOME\_ASK\_PASS](#default-become-ask-pass)
`ANSIBLE_BECOME_METHOD`
Privilege escalation method to use when `become` is enabled.
See also [DEFAULT\_BECOME\_METHOD](#default-become-method)
`ANSIBLE_BECOME_EXE`
executable to use for privilege escalation, otherwise Ansible will depend on PATH
See also [DEFAULT\_BECOME\_EXE](#default-become-exe)
`ANSIBLE_BECOME_FLAGS`
Flags to pass to the privilege escalation executable.
See also [DEFAULT\_BECOME\_FLAGS](#default-become-flags)
`ANSIBLE_BECOME_PLUGINS`
Colon separated paths in which Ansible will search for Become Plugins.
See also [BECOME\_PLUGIN\_PATH](#become-plugin-path)
`ANSIBLE_BECOME_USER`
The user your login/remote user ‘becomes’ when using privilege escalation; most systems will use ‘root’ when no user is specified.
See also [DEFAULT\_BECOME\_USER](#default-become-user)
`ANSIBLE_CACHE_PLUGINS`
Colon separated paths in which Ansible will search for Cache Plugins.
See also [DEFAULT\_CACHE\_PLUGIN\_PATH](#default-cache-plugin-path)
`ANSIBLE_CALLABLE_WHITELIST`
Whitelist of callable methods to be made available to template evaluation
See also [CALLABLE\_ACCEPT\_LIST](#callable-accept-list)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_CALLABLE\_ENABLED
`ANSIBLE_CALLABLE_ENABLED`
Whitelist of callable methods to be made available to template evaluation
See also [CALLABLE\_ACCEPT\_LIST](#callable-accept-list)
Version Added
2.11
`ANSIBLE_CONTROLLER_PYTHON_WARNING`
Toggle to control showing warnings related to running a Python version older than Python 3.8 on the controller
See also [CONTROLLER\_PYTHON\_WARNING](#controller-python-warning)
`ANSIBLE_CALLBACK_PLUGINS`
Colon separated paths in which Ansible will search for Callback Plugins.
See also [DEFAULT\_CALLBACK\_PLUGIN\_PATH](#default-callback-plugin-path)
`ANSIBLE_CALLBACK_WHITELIST`
List of enabled callbacks. Not all callbacks need enabling, but many of those shipped with Ansible do, as we don’t want them activated by default.
See also [CALLBACKS\_ENABLED](#callbacks-enabled)
Deprecated in
2.15
Deprecated detail
normalizing names to new standard
Deprecated alternatives
ANSIBLE\_CALLBACKS\_ENABLED
`ANSIBLE_CALLBACKS_ENABLED`
List of enabled callbacks. Not all callbacks need enabling, but many of those shipped with Ansible do, as we don’t want them activated by default.
See also [CALLBACKS\_ENABLED](#callbacks-enabled)
Version Added
2.11
`ANSIBLE_CLICONF_PLUGINS`
Colon separated paths in which Ansible will search for Cliconf Plugins.
See also [DEFAULT\_CLICONF\_PLUGIN\_PATH](#default-cliconf-plugin-path)
`ANSIBLE_CONNECTION_PLUGINS`
Colon separated paths in which Ansible will search for Connection Plugins.
See also [DEFAULT\_CONNECTION\_PLUGIN\_PATH](#default-connection-plugin-path)
`ANSIBLE_DEBUG`
Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. Debug output can also include secret information despite no\_log settings being enabled, which means debug mode should not be used in production.
See also [DEFAULT\_DEBUG](#default-debug)
`ANSIBLE_EXECUTABLE`
This indicates the command to use to spawn a shell under for Ansible’s execution needs on a target. Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is.
See also [DEFAULT\_EXECUTABLE](#default-executable)
`ANSIBLE_FACT_PATH`
This option allows you to globally configure a custom path for ‘local\_facts’ for the implied M(ansible.builtin.setup) task when using fact gathering. If not set, it will fall back to the default from the M(ansible.builtin.setup) module: `/etc/ansible/facts.d`. This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module.
See also [DEFAULT\_FACT\_PATH](#default-fact-path)
`ANSIBLE_FILTER_PLUGINS`
Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
See also [DEFAULT\_FILTER\_PLUGIN\_PATH](#default-filter-plugin-path)
`ANSIBLE_FORCE_HANDLERS`
This option controls if notified handlers run on a host even if a failure occurs on that host. When false, the handlers will not run if a failure has occurred on a host. This can also be set per play or on the command line. See Handlers and Failure for more details.
See also [DEFAULT\_FORCE\_HANDLERS](#default-force-handlers)
`ANSIBLE_FORKS`
Maximum number of forks Ansible will use to execute tasks on target hosts.
See also [DEFAULT\_FORKS](#default-forks)
`ANSIBLE_GATHERING`
This setting controls the default policy of fact gathering (facts discovered about remote systems). When ‘implicit’ (the default), the cache plugin will be ignored and facts will be gathered per play unless ‘gather\_facts: False’ is set. When ‘explicit’ the inverse is true: facts will not be gathered unless directly requested in the play. The ‘smart’ value means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save fact gathering time. Both ‘smart’ and ‘explicit’ will use the cache plugin.
See also [DEFAULT\_GATHERING](#default-gathering)
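For example, a play can opt out of the implicit gathering and collect facts explicitly only where needed; a minimal sketch:
```
- hosts: all
  gather_facts: false  # skip the implicit fact gathering for this play
  tasks:
    - name: Gather facts explicitly, only when this task runs
      ansible.builtin.setup:
```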
`ANSIBLE_GATHER_SUBSET`
Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering. See the module documentation for specifics. It does **not** apply to user defined M(ansible.builtin.setup) tasks.
See also [DEFAULT\_GATHER\_SUBSET](#default-gather-subset)
`ANSIBLE_GATHER_TIMEOUT`
Set the timeout in seconds for the implicit fact gathering. It does **not** apply to user defined M(ansible.builtin.setup) tasks.
See also [DEFAULT\_GATHER\_TIMEOUT](#default-gather-timeout)
`ANSIBLE_HANDLER_INCLUDES_STATIC`
Since 2.0, M(ansible.builtin.include) can be ‘dynamic’; this setting (if True) forces includes that appear in a `handlers` section to be ‘static’.
See also [DEFAULT\_HANDLER\_INCLUDES\_STATIC](#default-handler-includes-static)
`ANSIBLE_HASH_BEHAVIOUR`
This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible. This does not affect variables whose values are scalars (integers, strings) or arrays. **WARNING**: changing this setting is not recommended, as it is fragile and makes your content (plays, roles, collections) non-portable, leading to continual confusion and misuse. Don’t change this setting unless you think you have an absolute need for it. We recommend avoiding reusing variable names and relying on the `combine` filter and the `vars` and `varnames` lookups to create merged versions of the individual variables. In our experience this is rarely really needed and is a sign that too much complexity has been introduced into the data structures and plays. For some uses you can also look into custom vars\_plugins to merge on input, even substituting the default `host_group_vars` that is in charge of parsing the `host_vars/` and `group_vars/` directories. Most users of this setting are only interested in inventory scope, but the setting itself affects all sources and makes debugging even harder. All playbooks and roles in the official examples repos assume the default for this setting. Changing the setting to `merge` applies across variable sources, but many sources will internally still overwrite the variables. For example, `include_vars` will dedupe variables internally before updating Ansible, with ‘last defined’ overwriting previous definitions in the same file. The Ansible project recommends you **avoid ‘merge’ for new projects**. It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
See also [DEFAULT\_HASH\_BEHAVIOUR](#default-hash-behaviour)
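The recommended alternative to hash merging can be expressed directly with the `combine` filter; a minimal sketch with illustrative variable names:
```
- hosts: localhost
  vars:
    base_settings: {port: 80, debug: false}
    override_settings: {debug: true}
  tasks:
    - name: Build the merged dictionary explicitly instead of relying on hash merging
      ansible.builtin.debug:
        msg: "{{ base_settings | combine(override_settings) }}"
```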
`ANSIBLE_INVENTORY`
Comma separated list of Ansible inventory sources
See also [DEFAULT\_HOST\_LIST](#default-host-list)
`ANSIBLE_HTTPAPI_PLUGINS`
Colon separated paths in which Ansible will search for HttpApi Plugins.
See also [DEFAULT\_HTTPAPI\_PLUGIN\_PATH](#default-httpapi-plugin-path)
`ANSIBLE_INVENTORY_PLUGINS`
Colon separated paths in which Ansible will search for Inventory Plugins.
See also [DEFAULT\_INVENTORY\_PLUGIN\_PATH](#default-inventory-plugin-path)
`ANSIBLE_JINJA2_EXTENSIONS`
This is a developer-specific feature that allows enabling additional Jinja2 extensions. See the Jinja2 documentation for details. If you do not know what these do, you probably don’t need to change this setting :)
See also [DEFAULT\_JINJA2\_EXTENSIONS](#default-jinja2-extensions)
`ANSIBLE_JINJA2_NATIVE`
This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
See also [DEFAULT\_JINJA2\_NATIVE](#default-jinja2-native)
`ANSIBLE_KEEP_REMOTE_FILES`
Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote. If this option is enabled it will disable `ANSIBLE_PIPELINING`.
See also [DEFAULT\_KEEP\_REMOTE\_FILES](#default-keep-remote-files)
`LIBVIRT_LXC_NOSECLABEL`
This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux.
See also [DEFAULT\_LIBVIRT\_LXC\_NOSECLABEL](#default-libvirt-lxc-noseclabel)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_LIBVIRT_LXC_NOSECLABEL` environment variable
`ANSIBLE_LIBVIRT_LXC_NOSECLABEL`
This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux.
See also [DEFAULT\_LIBVIRT\_LXC\_NOSECLABEL](#default-libvirt-lxc-noseclabel)
`ANSIBLE_LOAD_CALLBACK_PLUGINS`
Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for `ansible-playbook`.
See also [DEFAULT\_LOAD\_CALLBACK\_PLUGINS](#default-load-callback-plugins)
`ANSIBLE_LOCAL_TEMP`
Temporary directory for Ansible to use on the controller.
See also [DEFAULT\_LOCAL\_TMP](#default-local-tmp)
`ANSIBLE_LOG_PATH`
File to which Ansible will log on the controller. When empty logging is disabled.
See also [DEFAULT\_LOG\_PATH](#default-log-path)
`ANSIBLE_LOG_FILTER`
List of logger names to filter out of the log file
See also [DEFAULT\_LOG\_FILTER](#default-log-filter)
`ANSIBLE_LOOKUP_PLUGINS`
Colon separated paths in which Ansible will search for Lookup Plugins.
See also [DEFAULT\_LOOKUP\_PLUGIN\_PATH](#default-lookup-plugin-path)
`ANSIBLE_MODULE_ARGS`
This sets the default arguments to pass to the `ansible` adhoc binary if no `-a` is specified.
See also [DEFAULT\_MODULE\_ARGS](#default-module-args)
`ANSIBLE_LIBRARY`
Colon separated paths in which Ansible will search for Modules.
See also [DEFAULT\_MODULE\_PATH](#default-module-path)
`ANSIBLE_MODULE_UTILS`
Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
See also [DEFAULT\_MODULE\_UTILS\_PATH](#default-module-utils-path)
`ANSIBLE_NETCONF_PLUGINS`
Colon separated paths in which Ansible will search for Netconf Plugins.
See also [DEFAULT\_NETCONF\_PLUGIN\_PATH](#default-netconf-plugin-path)
`ANSIBLE_NO_LOG`
Toggle Ansible’s display and logging of task details, mainly used to avoid security disclosures.
See also [DEFAULT\_NO\_LOG](#default-no-log)
`ANSIBLE_NO_TARGET_SYSLOG`
Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer style PowerShell modules from writing to the event log.
See also [DEFAULT\_NO\_TARGET\_SYSLOG](#default-no-target-syslog)
`ANSIBLE_NULL_REPRESENTATION`
What templating should return as a ‘null’ value. When not set it will let Jinja2 decide.
See also [DEFAULT\_NULL\_REPRESENTATION](#default-null-representation)
`ANSIBLE_POLL_INTERVAL`
For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when something may have completed.
See also [DEFAULT\_POLL\_INTERVAL](#default-poll-interval)
`ANSIBLE_PRIVATE_KEY_FILE`
Option for connections using a certificate or key file to authenticate, rather than an agent or passwords; you can set the default value here to avoid re-specifying --private-key with every invocation.
See also [DEFAULT\_PRIVATE\_KEY\_FILE](#default-private-key-file)
`ANSIBLE_PRIVATE_ROLE_VARS`
Makes role variables inaccessible from other roles. This was introduced as a way to reset role variables to default values if a role is used more than once in a playbook.
See also [DEFAULT\_PRIVATE\_ROLE\_VARS](#default-private-role-vars)
`ANSIBLE_REMOTE_PORT`
Port to use in remote connections, when blank it will use the connection plugin default.
See also [DEFAULT\_REMOTE\_PORT](#default-remote-port)
`ANSIBLE_REMOTE_USER`
Sets the login user for the target machines. When blank it uses the connection plugin’s default, normally the user currently executing Ansible.
See also [DEFAULT\_REMOTE\_USER](#default-remote-user)
`ANSIBLE_ROLES_PATH`
Colon separated paths in which Ansible will search for Roles.
See also [DEFAULT\_ROLES\_PATH](#default-roles-path)
`ANSIBLE_SELINUX_SPECIAL_FS`
Some filesystems do not support safe operations and/or return inconsistent errors; this setting makes Ansible ‘tolerate’ those in the list without causing fatal errors. Data corruption may occur and writes are not always verified when a filesystem is in the list.
See also [DEFAULT\_SELINUX\_SPECIAL\_FS](#default-selinux-special-fs)
Version Added
2.9
`ANSIBLE_STDOUT_CALLBACK`
Set the main callback used to display Ansible output; you can only have one at a time. You can have many other callbacks, but just one can be in charge of stdout.
See also [DEFAULT\_STDOUT\_CALLBACK](#default-stdout-callback)
`ANSIBLE_ENABLE_TASK_DEBUGGER`
Whether or not to enable the task debugger; this previously was done as a strategy plugin. Now all strategy plugins can inherit this behavior. The debugger defaults to activating when a task is failed on unreachable. Use the debugger keyword for more flexibility.
See also [ENABLE\_TASK\_DEBUGGER](#enable-task-debugger)
`ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS`
This option defines whether the task debugger will be invoked on a failed task when ignore\_errors=True is specified. True specifies that the debugger will honor ignore\_errors; False will not honor ignore\_errors.
See also [TASK\_DEBUGGER\_IGNORE\_ERRORS](#task-debugger-ignore-errors)
`ANSIBLE_STRATEGY`
Set the default strategy used for plays.
See also [DEFAULT\_STRATEGY](#default-strategy)
`ANSIBLE_STRATEGY_PLUGINS`
Colon separated paths in which Ansible will search for Strategy Plugins.
See also [DEFAULT\_STRATEGY\_PLUGIN\_PATH](#default-strategy-plugin-path)
`ANSIBLE_SU`
Toggle the use of “su” for tasks.
See also [DEFAULT\_SU](#default-su)
`ANSIBLE_SYSLOG_FACILITY`
Syslog facility to use when Ansible logs to the remote target
See also [DEFAULT\_SYSLOG\_FACILITY](#default-syslog-facility)
`ANSIBLE_TASK_INCLUDES_STATIC`
The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
See also [DEFAULT\_TASK\_INCLUDES\_STATIC](#default-task-includes-static)
`ANSIBLE_TERMINAL_PLUGINS`
Colon separated paths in which Ansible will search for Terminal Plugins.
See also [DEFAULT\_TERMINAL\_PLUGIN\_PATH](#default-terminal-plugin-path)
`ANSIBLE_TEST_PLUGINS`
Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
See also [DEFAULT\_TEST\_PLUGIN\_PATH](#default-test-plugin-path)
`ANSIBLE_TIMEOUT`
This is the default timeout for connection plugins to use.
See also [DEFAULT\_TIMEOUT](#default-timeout)
`ANSIBLE_TRANSPORT`
Default connection plugin to use; the ‘smart’ option will toggle between ‘ssh’ and ‘paramiko’ depending on the controller OS and ssh versions.
See also [DEFAULT\_TRANSPORT](#default-transport)
`ANSIBLE_ERROR_ON_UNDEFINED_VARS`
When True, this causes ansible templating to fail steps that reference variable names that are likely typoed. Otherwise, any ‘{{ template\_expression }}’ that contains undefined variables will be rendered in a template or ansible action line exactly as written.
See also [DEFAULT\_UNDEFINED\_VAR\_BEHAVIOR](#default-undefined-var-behavior)
`ANSIBLE_VARS_PLUGINS`
Colon separated paths in which Ansible will search for Vars Plugins.
See also [DEFAULT\_VARS\_PLUGIN\_PATH](#default-vars-plugin-path)
`ANSIBLE_VAULT_ID_MATCH`
If true, decrypting vaults with a vault id will only try the password from the matching vault-id
See also [DEFAULT\_VAULT\_ID\_MATCH](#default-vault-id-match)
`ANSIBLE_VAULT_IDENTITY`
The label to use for the default vault id label in cases where a vault id label is not provided
See also [DEFAULT\_VAULT\_IDENTITY](#default-vault-identity)
`ANSIBLE_VAULT_ENCRYPT_IDENTITY`
The vault\_id to use for encrypting by default. If multiple vault\_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.
See also [DEFAULT\_VAULT\_ENCRYPT\_IDENTITY](#default-vault-encrypt-identity)
`ANSIBLE_VAULT_IDENTITY_LIST`
A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.
See also [DEFAULT\_VAULT\_IDENTITY\_LIST](#default-vault-identity-list)
`ANSIBLE_VAULT_PASSWORD_FILE`
The vault password file to use. Equivalent to --vault-password-file or --vault-id.
See also [DEFAULT\_VAULT\_PASSWORD\_FILE](#default-vault-password-file)
`ANSIBLE_VERBOSITY`
Sets the default verbosity, equivalent to the number of `-v` passed in the command line.
See also [DEFAULT\_VERBOSITY](#default-verbosity)
`ANSIBLE_DEPRECATION_WARNINGS`
Toggle to control the showing of deprecation warnings
See also [DEPRECATION\_WARNINGS](#deprecation-warnings)
`ANSIBLE_DEVEL_WARNING`
Toggle to control showing warnings related to running devel
See also [DEVEL\_WARNING](#devel-warning)
`ANSIBLE_DIFF_ALWAYS`
Configuration toggle to tell modules to show differences when in ‘changed’ status, equivalent to `--diff`.
See also [DIFF\_ALWAYS](#diff-always)
`ANSIBLE_DIFF_CONTEXT`
How many lines of context to show when displaying the differences between files.
See also [DIFF\_CONTEXT](#diff-context)
`ANSIBLE_DISPLAY_ARGS_TO_STDOUT`
Normally `ansible-playbook` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn’t, then `ansible-playbook` uses the task’s action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. If you set this variable to True in the config then `ansible-playbook` will also include the task’s arguments in the header. This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed. If you set this to True you should be sure that you have secured your environment’s stdout (no one can shoulder surf your screen and you aren’t saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the `no_log: True` parameter to tasks which have sensitive values. See How do I keep secret data in my playbook? for more information.
See also [DISPLAY\_ARGS\_TO\_STDOUT](#display-args-to-stdout)
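For reference, the `no_log` approach mentioned above looks like this in a task; a hedged sketch with illustrative names:
```
- name: Create a service account without echoing its arguments
  ansible.builtin.user:
    name: svc_deploy
    password: "{{ vaulted_password }}"
  no_log: true  # keeps task arguments and results out of output and logs
```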
`DISPLAY_SKIPPED_HOSTS`
Toggle to control displaying skipped task/host entries in a task in the default callback
See also [DISPLAY\_SKIPPED\_HOSTS](#display-skipped-hosts)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_DISPLAY_SKIPPED_HOSTS` environment variable
`ANSIBLE_DISPLAY_SKIPPED_HOSTS`
Toggle to control displaying skipped task/host entries in a task in the default callback
See also [DISPLAY\_SKIPPED\_HOSTS](#display-skipped-hosts)
`ANSIBLE_DUPLICATE_YAML_DICT_KEY`
By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.These warnings can be silenced by adjusting this setting to False.
See also [DUPLICATE\_YAML\_DICT\_KEY](#duplicate-yaml-dict-key)
`ANSIBLE_ERROR_ON_MISSING_HANDLER`
Toggle to allow missing handlers to become a warning instead of an error when notifying.
See also [ERROR\_ON\_MISSING\_HANDLER](#error-on-missing-handler)
`ANSIBLE_FACTS_MODULES`
Which modules to run during a play’s fact gathering stage; using the default of ‘smart’ will try to figure it out based on connection type.
See also [FACTS\_MODULES](#facts-modules)
`ANSIBLE_GALAXY_IGNORE`
If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate.
See also [GALAXY\_IGNORE\_CERTS](#galaxy-ignore-certs)
`ANSIBLE_GALAXY_ROLE_SKELETON`
Role or collection skeleton directory to use as a template for the `init` action in `ansible-galaxy`, same as `--role-skeleton`.
See also [GALAXY\_ROLE\_SKELETON](#galaxy-role-skeleton)
`ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE`
patterns of files to ignore inside a Galaxy role or collection skeleton directory
See also [GALAXY\_ROLE\_SKELETON\_IGNORE](#galaxy-role-skeleton-ignore)
`ANSIBLE_GALAXY_SERVER`
URL to prepend when roles don’t specify the full URI, assume they are referencing this server as the source.
See also [GALAXY\_SERVER](#galaxy-server)
`ANSIBLE_GALAXY_SERVER_LIST`
A list of Galaxy servers to use when installing a collection. The value corresponds to the config ini header `[galaxy_server.{{item}}]` which defines the server details. See [Configuring the ansible-galaxy client](../user_guide/collections_using#galaxy-server-config) for more details on how to define a Galaxy server. The order of servers in this list is used as the order in which a collection is resolved. Setting this config option will ignore the [GALAXY\_SERVER](#galaxy-server) config option.
See also [GALAXY\_SERVER\_LIST](#galaxy-server-list)
`ANSIBLE_GALAXY_TOKEN_PATH`
Local path to galaxy access token file
See also [GALAXY\_TOKEN\_PATH](#galaxy-token-path)
`ANSIBLE_GALAXY_DISPLAY_PROGRESS`
Some steps in `ansible-galaxy` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. This config option controls whether the display wheel is shown or not. The default is to show the display wheel if stdout has a tty.
See also [GALAXY\_DISPLAY\_PROGRESS](#galaxy-display-progress)
`ANSIBLE_GALAXY_CACHE_DIR`
The directory that stores cached responses from a Galaxy server. This is only used by the `ansible-galaxy collection install` and `download` commands. Cache files inside this dir will be ignored if they are world writable.
See also [GALAXY\_CACHE\_DIR](#galaxy-cache-dir)
`ANSIBLE_HOST_KEY_CHECKING`
Set this to “False” if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host
See also [HOST\_KEY\_CHECKING](#host-key-checking)
`ANSIBLE_HOST_PATTERN_MISMATCH`
This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
See also [HOST\_PATTERN\_MISMATCH](#host-pattern-mismatch)
`ANSIBLE_PYTHON_INTERPRETER`
Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are `auto`, `auto_silent`, and `auto_legacy` (the default). All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting `auto_silent`. The default value of `auto_legacy` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to `/usr/bin/python`, will use that interpreter if present (and issue a warning that the default behavior will change to that of `auto` in a future Ansible release).
See also [INTERPRETER\_PYTHON](#interpreter-python)
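Rather than relying on discovery, the interpreter can be pinned through the `ansible_python_interpreter` variable, for example in a YAML inventory; a minimal sketch:
```
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3  # pin instead of automatic discovery
```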
`ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS`
Make ansible transform invalid characters in group names supplied by inventory sources. If ‘never’ it will allow for the group name but warn about the issue. When ‘ignore’, it does the same as ‘never’, without issuing a warning. When ‘always’ it will replace any invalid characters with ‘\_’ (underscore) and warn the user. When ‘silently’, it does the same as ‘always’, without issuing a warning.
See also [TRANSFORM\_INVALID\_GROUP\_CHARS](#transform-invalid-group-chars)
`ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED`
If ‘false’, invalid attributes for a task will result in warnings instead of errors
See also [INVALID\_TASK\_ATTRIBUTE\_FAILED](#invalid-task-attribute-failed)
`ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED`
If ‘true’, it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning.
See also [INVENTORY\_ANY\_UNPARSED\_IS\_FAILED](#inventory-any-unparsed-is-failed)
`ANSIBLE_INVENTORY_CACHE`
Toggle to turn on inventory caching
See also [INVENTORY\_CACHE\_ENABLED](#inventory-cache-enabled)
`ANSIBLE_INVENTORY_CACHE_PLUGIN`
The plugin for caching inventory. If INVENTORY\_CACHE\_PLUGIN is not provided CACHE\_PLUGIN can be used instead.
See also [INVENTORY\_CACHE\_PLUGIN](#inventory-cache-plugin)
`ANSIBLE_INVENTORY_CACHE_CONNECTION`
The inventory cache connection. If INVENTORY\_CACHE\_PLUGIN\_CONNECTION is not provided CACHE\_PLUGIN\_CONNECTION can be used instead.
See also [INVENTORY\_CACHE\_PLUGIN\_CONNECTION](#inventory-cache-plugin-connection)
`ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX`
The table prefix for the cache plugin. If INVENTORY\_CACHE\_PLUGIN\_PREFIX is not provided CACHE\_PLUGIN\_PREFIX can be used instead.
See also [INVENTORY\_CACHE\_PLUGIN\_PREFIX](#inventory-cache-plugin-prefix)
`ANSIBLE_INVENTORY_CACHE_TIMEOUT`
Expiration timeout for the inventory cache plugin data. If INVENTORY\_CACHE\_TIMEOUT is not provided CACHE\_TIMEOUT can be used instead.
See also [INVENTORY\_CACHE\_TIMEOUT](#inventory-cache-timeout)
`ANSIBLE_INVENTORY_ENABLED`
List of enabled inventory plugins, it also determines the order in which they are used.
See also [INVENTORY\_ENABLED](#inventory-enabled)
`ANSIBLE_INVENTORY_EXPORT`
Controls if ansible-inventory will accurately reflect Ansible’s view into inventory or if it is optimized for exporting.
See also [INVENTORY\_EXPORT](#inventory-export)
`ANSIBLE_INVENTORY_IGNORE`
List of extensions to ignore when using a directory as an inventory source
See also [INVENTORY\_IGNORE\_EXTS](#inventory-ignore-exts)
`ANSIBLE_INVENTORY_IGNORE_REGEX`
List of patterns to ignore when using a directory as an inventory source
See also [INVENTORY\_IGNORE\_PATTERNS](#inventory-ignore-patterns)
`ANSIBLE_INVENTORY_UNPARSED_FAILED`
If ‘true’ it is a fatal error if every single potential inventory source fails to parse; otherwise this situation will only attract a warning.
See also [INVENTORY\_UNPARSED\_IS\_FAILED](#inventory-unparsed-is-failed)
`ANSIBLE_MAX_DIFF_SIZE`
Maximum size of files to be considered for diff display
See also [MAX\_FILE\_SIZE\_FOR\_DIFF](#max-file-size-for-diff)
`NETWORK_GROUP_MODULES`
See also [NETWORK\_GROUP\_MODULES](#network-group-modules)
Deprecated in
2.12
Deprecated detail
environment variables without `ANSIBLE_` prefix are deprecated
Deprecated alternatives
the `ANSIBLE_NETWORK_GROUP_MODULES` environment variable
`ANSIBLE_NETWORK_GROUP_MODULES`
See also [NETWORK\_GROUP\_MODULES](#network-group-modules)
`ANSIBLE_INJECT_FACT_VARS`
Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace. Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
See also [INJECT\_FACTS\_AS\_VARS](#inject-facts-as-vars)
`ANSIBLE_MODULE_IGNORE_EXTS`
List of extensions to ignore when looking for modules to load. This is for rejecting script and binary module fallback extensions.
See also [MODULE\_IGNORE\_EXTS](#module-ignore-exts)
`ANSIBLE_OLD_PLUGIN_CACHE_CLEAR`
Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly ‘sticky’. This setting allows you to return to that behaviour.
See also [OLD\_PLUGIN\_CACHE\_CLEARING](#old-plugin-cache-clearing)
`ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD`
See also [PARAMIKO\_HOST\_KEY\_AUTO\_ADD](#paramiko-host-key-auto-add)
`ANSIBLE_PARAMIKO_LOOK_FOR_KEYS`
See also [PARAMIKO\_LOOK\_FOR\_KEYS](#paramiko-look-for-keys)
`ANSIBLE_PERSISTENT_CONTROL_PATH_DIR`
Path to socket to be used by the connection persistence system.
See also [PERSISTENT\_CONTROL\_PATH\_DIR](#persistent-control-path-dir)
`ANSIBLE_PERSISTENT_CONNECT_TIMEOUT`
This controls how long the persistent connection will remain idle before it is destroyed.
See also [PERSISTENT\_CONNECT\_TIMEOUT](#persistent-connect-timeout)
`ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT`
This controls the retry timeout for persistent connection to connect to the local domain socket.
See also [PERSISTENT\_CONNECT\_RETRY\_TIMEOUT](#persistent-connect-retry-timeout)
`ANSIBLE_PERSISTENT_COMMAND_TIMEOUT`
This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
See also [PERSISTENT\_COMMAND\_TIMEOUT](#persistent-command-timeout)
`ANSIBLE_PLAYBOOK_DIR`
A number of non-playbook CLIs have a `--playbook-dir` argument; this sets the default value for it.
See also [PLAYBOOK\_DIR](#playbook-dir)
`ANSIBLE_PLAYBOOK_VARS_ROOT`
This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host\_vars/group\_vars. The `top` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. The `bottom` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. The `all` option examines from the first parent to the current playbook.
See also [PLAYBOOK\_VARS\_ROOT](#playbook-vars-root)
`ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE`
Attempts to set RLIMIT\_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. See <https://bugs.python.org/issue11284>). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits.
See also [PYTHON\_MODULE\_RLIMIT\_NOFILE](#python-module-rlimit-nofile)
`ANSIBLE_RETRY_FILES_ENABLED`
This controls whether a failed Ansible playbook should create a .retry file.
See also [RETRY\_FILES\_ENABLED](#retry-files-enabled)
`ANSIBLE_RETRY_FILES_SAVE_PATH`
This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. This file will be overwritten after each run with the list of failed hosts from all plays.
See also [RETRY\_FILES\_SAVE\_PATH](#retry-files-save-path)
`ANSIBLE_RUN_VARS_PLUGINS`
This setting can be used to optimize vars\_plugin usage depending on the user’s inventory size and play selection. Setting to `demand` will run vars\_plugins relative to inventory sources anytime vars are ‘demanded’ by tasks. Setting to `start` will run vars\_plugins relative to inventory sources after importing that inventory source.
See also [RUN\_VARS\_PLUGINS](#run-vars-plugins)
`ANSIBLE_SHOW_CUSTOM_STATS`
This adds the custom stats set via the set\_stats plugin to the default output
See also [SHOW\_CUSTOM\_STATS](#show-custom-stats)
`ANSIBLE_STRING_TYPE_FILTERS`
This list of filters avoids ‘type conversion’ when templating variables. Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
See also [STRING\_TYPE\_FILTERS](#string-type-filters)
`ANSIBLE_SYSTEM_WARNINGS`
Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts). These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
See also [SYSTEM\_WARNINGS](#system-warnings)
`ANSIBLE_RUN_TAGS`
default list of tags to run in your plays, Skip Tags has precedence.
See also [TAGS\_RUN](#tags-run)
`ANSIBLE_SKIP_TAGS`
default list of tags to skip in your plays, has precedence over Run Tags
See also [TAGS\_SKIP](#tags-skip)
`ANSIBLE_TASK_TIMEOUT`
Set the maximum time (in seconds) that a task can run for. If set to 0 (the default) there is no timeout.
See also [TASK\_TIMEOUT](#task-timeout)
`ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT`
The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly. After this limit is reached any worker processes still running will be terminated. This is for internal use only.
See also [WORKER\_SHUTDOWN\_POLL\_COUNT](#worker-shutdown-poll-count)
`ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY`
The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly. This is for internal use only.
See also [WORKER\_SHUTDOWN\_POLL\_DELAY](#worker-shutdown-poll-delay)
`ANSIBLE_USE_PERSISTENT_CONNECTIONS`
Toggles the use of persistence for connections.
See also [USE\_PERSISTENT\_CONNECTIONS](#use-persistent-connections)
`ANSIBLE_VARS_ENABLED`
Whitelist for variable plugins that require it.
See also [VARIABLE\_PLUGINS\_ENABLED](#variable-plugins-enabled)
`ANSIBLE_PRECEDENCE`
Allows changing the group variable precedence merge order.
See also [VARIABLE\_PRECEDENCE](#variable-precedence)
`ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT`
For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load. This is not the total time an async command can run for, but is a separate timeout to wait for an async command to start. The task will only start to be timed against its async\_timeout once it has connected to the pipe, so the overall maximum duration the task can take will be extended by the amount specified here.
See also [WIN\_ASYNC\_STARTUP\_TIMEOUT](#win-async-startup-timeout)
`ANSIBLE_YAML_FILENAME_EXT`
Check all of these extensions when looking for ‘variable’ files which should be YAML or JSON or vaulted versions of these. This affects vars\_files, include\_vars, inventory and vars plugins among others.
See also [YAML\_FILENAME\_EXTENSIONS](#yaml-filename-extensions)
`ANSIBLE_NETCONF_SSH_CONFIG`
This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set to a custom ssh configuration file path from which to read the bastion/jump host settings.
See also [NETCONF\_SSH\_CONFIG](#netconf-ssh-config)
`ANSIBLE_STRING_CONVERSION_ACTION`
Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as ‘1.00’, “[‘a’, ‘b’,]”, and ‘yes’, ‘y’, etc. will be converted by the YAML parser unless fully quoted. Valid options are ‘error’, ‘warn’, and ‘ignore’. Since 2.8, this option defaults to ‘warn’ but will change to ‘error’ in 2.12.
See also [STRING\_CONVERSION\_ACTION](#string-conversion-action)
`ANSIBLE_VERBOSE_TO_STDERR`
Force ‘verbose’ option to use stderr instead of stdout
See also [VERBOSE\_TO\_STDERR](#verbose-to-stderr)
YAML Syntax
===========
This page provides a basic overview of correct YAML syntax, which is how Ansible playbooks (our configuration management language) are expressed.
We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON. Further, there are libraries available in most programming languages for working with YAML.
You may also wish to read [Working with playbooks](../user_guide/playbooks#working-with-playbooks) at the same time to see how this is used in practice.
YAML Basics
-----------
For Ansible, nearly every YAML file starts with a list. Each item in the list is a list of key/value pairs, commonly called a “hash” or a “dictionary”. So, we need to know how to write lists and dictionaries in YAML.
There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally begin with `---` and end with `...`. This is part of the YAML format and indicates the start and end of a document.
All members of a list are lines beginning at the same indentation level starting with a `"- "` (a dash and a space):
```
---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango
...
```
A dictionary is represented in a simple `key: value` form (the colon must be followed by a space):
```
# An employee record
martin:
name: Martin D'vloper
job: Developer
skill: Elite
```
More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both:
```
# Employee records
- martin:
name: Martin D'vloper
job: Developer
skills:
- python
- perl
- pascal
- tabitha:
name: Tabitha Bitumen
job: Developer
skills:
- lisp
- fortran
- erlang
```
Dictionaries and lists can also be represented in an abbreviated form if you really want to:
```
---
martin: {name: Martin D'vloper, job: Developer, skill: Elite}
['Apple', 'Orange', 'Strawberry', 'Mango']
```
These are called “Flow collections”.
Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:
```
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
```
Use lowercase ‘true’ or ‘false’ for boolean values in dictionaries if you want to be compatible with default yamllint options.
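For example, the first two flags from the block above in the yamllint-friendly form:
```
create_key: true
needs_agent: false
```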
Values can span multiple lines using `|` or `>`. Spanning multiple lines using a “Literal Block Scalar” `|` will include the newlines and any trailing spaces. Using a “Folded Block Scalar” `>` will fold newlines to spaces; it’s used to make what would otherwise be a very long line easier to read and edit. In either case the indentation will be ignored. Examples are:
```
include_newlines: |
exactly as you see
will appear these three
lines of poetry
fold_newlines: >
this is really a
single line of text
despite appearances
```
While in the above `>` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept:
```
fold_some_newlines: >
a
b
c
d
e
f
same_as: "a b\nc d\n e\nf\n"
```
Let’s combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but will give you a feel for the format:
```
---
# An employee record
name: Martin D'vloper
job: Developer
skill: Elite
employed: True
foods:
- Apple
- Orange
- Strawberry
- Mango
languages:
perl: Elite
python: Elite
pascal: Lame
education: |
4 GCSEs
3 A-Levels
BSc in the Internet of Things
```
That’s all you really need to know about YAML to start writing `Ansible` playbooks.
Gotchas
-------
While you can put just about anything into an unquoted scalar, there are some exceptions. A colon followed by a space (or newline) `": "` is an indicator for a mapping. A space followed by the pound sign `" #"` starts a comment.
Because of this, the following is going to result in a YAML syntax error:
```
foo: somebody said I should put a colon here: so I did
windows_drive: c:
```
…but this will work:
```
windows_path: c:\windows
```
You will want to quote hash values using colons followed by a space or the end of the line:
```
foo: 'somebody said I should put a colon here: so I did'
windows_drive: 'c:'
```
…and then the colon will be preserved.
Alternatively, you can use double quotes:
```
foo: "somebody said I should put a colon here: so I did"
windows_drive: "c:"
```
The difference between single quotes and double quotes is that in double quotes you can use escapes:
```
foo: "a \t TAB and a \n NEWLINE"
```
The list of allowed escapes can be found in the YAML Specification under “Escape Sequences” (YAML 1.1) or “Escape Characters” (YAML 1.2).
The following is invalid YAML:
```
foo: "an escaped \' single quote"
```
Further, Ansible uses “{{ var }}” for variables. If a value after a colon starts with a “{“, YAML will think it is a dictionary, so you must quote it, like so:
```
foo: "{{ variable }}"
```
If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things:
```
foo: "{{ variable }}/additional/string/literal"
foo2: "{{ variable }}\\backslashes\\are\\also\\special\\characters"
foo3: "even if it's just a string literal it must all be quoted"
```
Not valid:
```
foo: "E:\\path\\"rest\\of\\path
```
In addition to `'` and `"` there are a number of characters that are special (or reserved) and cannot be used as the first character of an unquoted scalar: ``[] {} > | * & ! % # ` @ ,``.
You should also be aware of `? : -`. In YAML, they are allowed at the beginning of a string if a non-space character follows, but YAML processor implementations differ, so it’s better to use quotes.
In Flow Collections, the rules are a bit more strict:
```
a scalar in block mapping: this } is [ all , valid
flow mapping: { key: "you { should [ use , quotes here" }
```
Boolean conversion is helpful, but this can be a problem when you want a literal `yes` or other boolean values as a string. In these cases just use quotes:
```
non_boolean: "yes"
other_string: "False"
```
YAML converts certain strings into floating-point values, such as the string `1.0`. If you need to specify a version number (in a requirements.yml file, for example), you will need to quote the value if it looks like a floating-point value:
```
version: "1.0"
```
See also
[Working with playbooks](../user_guide/playbooks#working-with-playbooks)
Learn what playbooks can do and how to write/run them.
[YAMLLint](http://yamllint.com/)
YAML Lint (online) helps you debug YAML syntax if you are having problems
[GitHub examples directory](https://github.com/ansible/ansible-examples)
Complete playbook files from the github project source
[Wikipedia YAML syntax reference](https://en.wikipedia.org/wiki/YAML)
A good guide to YAML syntax
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
[irc.libera.chat](https://libera.chat/)
#yaml for YAML specific questions
[YAML 1.1 Specification](https://yaml.org/spec/1.1/)
The Specification for YAML 1.1, which PyYAML and libyaml are currently implementing
[YAML 1.2 Specification](https://yaml.org/spec/1.2/spec.html)
For completeness, YAML 1.2 is the successor of 1.1
Controlling how Ansible behaves: precedence rules
=================================================
To give you maximum flexibility in managing your environments, Ansible offers many ways to control how Ansible behaves: how it connects to managed nodes, how it works once it has connected. If you use Ansible to manage a large number of servers, network devices, and cloud resources, you may define Ansible behavior in several different places and pass that information to Ansible in several different ways. This flexibility is convenient, but it can backfire if you do not understand the precedence rules.
These precedence rules apply to any setting that can be defined in multiple ways (by configuration settings, command-line options, playbook keywords, variables).
* [Configuration settings](#configuration-settings)
* [Command-line options](#command-line-options)
* [Playbook keywords](#playbook-keywords)
* [Variables](#variables)
+ [Variable scope: how long is a value available?](#variable-scope-how-long-is-a-value-available)
* [Using `-e` extra variables at the command line](#using-e-extra-variables-at-the-command-line)
Precedence categories
---------------------
Ansible offers four sources for controlling its behavior. In order of precedence from lowest (most easily overridden) to highest (overrides all others), the categories are:
* Configuration settings
* Command-line options
* Playbook keywords
* Variables
Each category overrides any information from all lower-precedence categories. For example, a playbook keyword will override any configuration setting.
Within each precedence category, specific rules apply. However, generally speaking, ‘last defined’ wins and overrides any previous definitions.
### Configuration settings
[Configuration settings](config#ansible-configuration-settings) include both values from the `ansible.cfg` file and environment variables. Within this category, values set in configuration files have lower precedence. Ansible uses the first `ansible.cfg` file it finds, ignoring all others. Ansible searches for `ansible.cfg` in these locations in order:
* `ANSIBLE_CONFIG` (environment variable if set)
* `ansible.cfg` (in the current directory)
* `~/.ansible.cfg` (in the home directory)
* `/etc/ansible/ansible.cfg`
Environment variables have a higher precedence than entries in `ansible.cfg`. If you have environment variables set on your control node, they override the settings in whichever `ansible.cfg` file Ansible loads. The value of any given environment variable follows normal shell precedence: the last value defined overwrites previous values.
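For example, an environment variable set just for one invocation overrides the same setting from any `ansible.cfg` that Ansible loads (the playbook name is illustrative):
```
ANSIBLE_FORKS=10 ansible-playbook site.yml
```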
### Command-line options
Any command-line option will override any configuration setting.
When you type something directly at the command line, you may feel that your hand-crafted values should override all others, but Ansible does not work that way. Command-line options have low precedence - they override configuration only. They do not override playbook keywords, variables from inventory or variables from playbooks.
You can override all other settings from all other sources in all other precedence categories at the command line by [Using -e extra variables at the command line](#general-precedence-extra-vars), but that is not a command-line option; it is a way of passing a [variable](#general-precedence-variables).
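As a sketch, this passes a connection variable from the command line, overriding inventory and playbook values for it (the playbook and user names are illustrative):
```
ansible-playbook site.yml -e "ansible_user=carol"
```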
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this [ad hoc task](../user_guide/intro_adhoc#intro-adhoc) will connect as `carol`, not as `mike`:
```
ansible -u mike -m ping myhost -u carol
```
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2:
```
ansible -i /path/inventory1 -i /path/inventory2 -m ping all
```
The help for each [command-line tool](../user_guide/command_line_tools#command-line-tools) lists available options for that tool.
### Playbook keywords
Any [playbook keyword](playbooks_keywords#playbook-keywords) will override any command-line option and any configuration setting.
Within playbook keywords, precedence follows the structure of the playbook itself: the more specific wins over the more general:
* play (most general)
* blocks/includes/imports/roles (optional and can contain tasks and each other)
* tasks (most specific)
A simple example:
```
- hosts: all
connection: ssh
tasks:
- name: This task uses ssh.
ping:
- name: This task uses paramiko.
connection: paramiko
ping:
```
In this example, the `connection` keyword is set to `ssh` at the play level. The first task inherits that value, and connects using `ssh`. The second task inherits that value, overrides it, and connects using `paramiko`. The same logic applies to blocks and roles as well. All tasks, blocks, and roles within a play inherit play-level keywords; any task, block, or role can override any keyword by defining a different value for that keyword within the task, block, or role.
Remember that these are KEYWORDS, not variables. Both playbooks and variable files are defined in YAML but they have different significance. Playbooks are the command or ‘state description’ structure for Ansible, variables are data we use to help make playbooks more dynamic.
### Variables
Any variable will override any playbook keyword, any command-line option, and any configuration setting.
Variables that have equivalent playbook keywords, command-line options, and configuration settings are known as [Connection variables](special_variables#connection-variables). Originally designed for connection parameters, this category has expanded to include other core variables like the temporary directory and the python interpreter.
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in [inventory](../user_guide/intro_inventory#intro-inventory). You can define variables for tasks and plays in `vars:` blocks in [playbooks](../user_guide/playbooks_intro#about-playbooks). However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of [variable precedence](../user_guide/playbooks_variables#ansible-variable-precedence) as any other variables.
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role:
```
- hosts: cloud
gather_facts: false
become: yes
vars:
ansible_become_user: admin
tasks:
- name: This task uses admin as the become user.
dnf:
name: some-service
state: latest
- block:
- name: This task uses service-admin as the become user.
# a task to configure the new service
- name: This task also uses service-admin as the become user, defined in the block.
# second task to configure the service
vars:
ansible_become_user: service-admin
- name: This task (outside of the block) uses admin as the become user again.
service:
name: some-service
state: restarted
```
#### Variable scope: how long is a value available?
Variable values set in a playbook exist only within the playbook object that defines them. These ‘playbook object scope’ variables are not available to subsequent objects, including other plays.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like [set\_fact](../collections/ansible/builtin/set_fact_module#set-fact-module) and [include\_vars](../collections/ansible/builtin/include_vars_module#include-vars-module), are available to all plays. These ‘host scope’ variables are also available via the `hostvars[]` dictionary.
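A hedged sketch of host scope (host names and the variable are hypothetical): a fact set on one host in one play remains readable from a later play through `hostvars[]`, while a play-level `vars:` entry would not survive past its play:
```
- hosts: app01
  tasks:
    - name: Create a host-scoped variable on app01
      set_fact:
        app_port: 8080

- hosts: web01
  tasks:
    - name: A later play can still read the value through hostvars[]
      debug:
        msg: "app01 listens on {{ hostvars['app01']['app_port'] }}"
```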
### Using `-e` extra variables at the command line
To override all other settings in all other categories, you can use extra variables: `--extra-vars` or `-e` at the command line. Values passed with `-e` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as `brian` not as `carol`:
```
ansible -u carol -e 'ansible_user=brian' -a whoami all
```
You must specify both the variable name and the value with `--extra-vars`.
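You can pass the variable either as a `key=value` pair or as a JSON/YAML structure; a hedged sketch (the playbook name is an example):
```
ansible-playbook site.yml -e 'ansible_user=brian'
ansible-playbook site.yml -e '{"ansible_user": "brian"}'
```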
Logging Ansible output
======================
By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
* To save Ansible output in a single log on the control node, set the `log_path` [configuration file setting](../installation_guide/intro_configuration#intro-configuration). You may also want to set `display_args_to_stdout`, which helps to differentiate similar tasks by including variable values in the Ansible output. A minimal config sketch follows this list.
* To save Ansible output in separate logs, one on each managed node, set the `no_target_syslog` and `syslog_facility` [configuration file settings](../installation_guide/intro_configuration#intro-configuration).
* To save Ansible output to a secure database, use AWX or [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform). You can then review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
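For the first option, a hedged `ansible.cfg` sketch (the log path is an example):
```
# ansible.cfg on the control node
[defaults]
log_path = /var/log/ansible.log
# Optional: include module arguments in output to tell similar tasks apart
display_args_to_stdout = True
```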
Protecting sensitive data with `no_log`
---------------------------------------
If you save Ansible output to a log, you expose any secret data in your Ansible output, such as passwords and user names. To keep sensitive values out of your logs, mark tasks that expose them with the `no_log: True` attribute. However, the `no_log` attribute does not affect debugging output, so be careful not to debug playbooks in a production environment. See [How do I keep secret data in my playbook?](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#keep-secret-data) for an example.
Tests
=====
[Tests](http://jinja.pocoo.org/docs/dev/templates/#tests) in Jinja are a way of evaluating template expressions and returning True or False. Jinja ships with many of these. See [builtin tests](http://jinja.palletsprojects.com/templates/#builtin-tests) in the official Jinja template documentation.
The main difference between tests and filters is that Jinja tests are used for comparisons, whereas filters are used for data manipulation; they have different applications in Jinja. Tests can also be used in list processing filters, like `map()` and `select()`, to choose items in the list.
Like all templating, tests always execute on the Ansible controller, **not** on the target of a task, as they test local data.
In addition to those Jinja2 tests, Ansible supplies a few more and users can easily create their own.
* [Test syntax](#test-syntax)
* [Testing strings](#testing-strings)
* [Vault](#vault)
* [Testing truthiness](#testing-truthiness)
* [Comparing versions](#comparing-versions)
* [Set theory tests](#set-theory-tests)
* [Testing if a list contains a value](#testing-if-a-list-contains-a-value)
* [Testing if a list value is True](#testing-if-a-list-value-is-true)
* [Testing paths](#testing-paths)
* [Testing size formats](#testing-size-formats)
+ [Human readable](#human-readable)
+ [Human to bytes](#human-to-bytes)
* [Testing task results](#testing-task-results)
Test syntax
-----------
[Test syntax](http://jinja.pocoo.org/docs/dev/templates/#tests) varies from [filter syntax](http://jinja.pocoo.org/docs/dev/templates/#filters) (`variable | filter`). Historically Ansible has registered tests as both jinja tests and jinja filters, allowing for them to be referenced using filter syntax.
As of Ansible 2.5, using a jinja test as a filter will generate a warning.
The syntax for using a jinja test is as follows:
```
variable is test_name
```
Such as:
```
result is failed
```
Testing strings
---------------
To match strings against a substring or a regular expression, use the `match`, `search` or `regex` tests:
```
vars:
url: "http://example.com/users/foo/resources/bar"
tasks:
- debug:
msg: "matched pattern 1"
when: url is match("http://example.com/users/.*/resources/")
- debug:
msg: "matched pattern 2"
when: url is search("/users/.*/resources/.*")
- debug:
msg: "matched pattern 3"
when: url is search("/users/")
- debug:
msg: "matched pattern 4"
when: url is regex("example.com/\w+/foo")
```
`match` succeeds if it finds the pattern at the beginning of the string, while `search` succeeds if it finds the pattern anywhere within the string. By default, `regex` works like `search`, but `regex` can be configured to perform other tests as well by passing the `match_type` keyword argument. In particular, `match_type` determines the `re` method that gets used to perform the search. The full list can be found in the relevant Python documentation [here](https://docs.python.org/3/library/re.html#regular-expression-objects).
All of the string tests also take optional `ignorecase` and `multiline` arguments. These correspond to `re.I` and `re.M` from Python’s `re` library, respectively.
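A hedged sketch of those options, reusing the `url` variable from the example above (`match_type` values map to `re` object methods such as `fullmatch`):
```
- debug:
    msg: "the whole string matches"
  when: url is regex("http://example\.com/users/.*/resources/.*", match_type="fullmatch")
- debug:
    msg: "matched, ignoring case"
  when: url is search("/USERS/", ignorecase=True)
```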
Vault
-----
New in version 2.10.
You can test whether a variable is an inline single vault encrypted value using the `vault_encrypted` test.
```
vars:
variable: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
61323931353866666336306139373937316366366138656131323863373866376666353364373761
3539633234313836346435323766306164626134376564330a373530313635343535343133316133
36643666306434616266376434363239346433643238336464643566386135356334303736353136
6565633133366366360a326566323363363936613664616364623437336130623133343530333739
3039
tasks:
- debug:
msg: '{{ (variable is vault_encrypted) | ternary("Vault encrypted", "Not vault encrypted") }}'
```
Testing truthiness
------------------
New in version 2.10.
As of Ansible 2.10, you can now perform Python-like truthy and falsy checks.
```
- debug:
msg: "Truthy"
when: value is truthy
vars:
value: "some string"
- debug:
msg: "Falsy"
when: value is falsy
vars:
value: ""
```
Additionally, the `truthy` and `falsy` tests accept an optional parameter called `convert_bool` that will attempt to convert boolean indicators to actual booleans.
```
- debug:
msg: "Truthy"
when: value is truthy(convert_bool=True)
vars:
value: "yes"
- debug:
msg: "Falsy"
when: value is falsy(convert_bool=True)
vars:
value: "off"
```
Comparing versions
------------------
New in version 1.6.
Note
In 2.5 `version_compare` was renamed to `version`
To compare a version number, such as checking if the `ansible_facts['distribution_version']` version is greater than or equal to ‘12.04’, you can use the `version` test.
The `version` test can also be used to evaluate the `ansible_facts['distribution_version']`:
```
{{ ansible_facts['distribution_version'] is version('12.04', '>=') }}
```
If `ansible_facts['distribution_version']` is greater than or equal to 12.04, this test returns True, otherwise False.
The `version` test accepts the following operators:
```
<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne
```
This test also accepts a third parameter, `strict`, which defines whether strict version parsing as defined by `distutils.version.StrictVersion` should be used. The default is `False` (using `distutils.version.LooseVersion`); `True` enables strict version parsing:
```
{{ sample_version_var is version('1.0', operator='lt', strict=True) }}
```
As of Ansible 2.11 the `version` test accepts a `version_type` parameter which is mutually exclusive with `strict`, and accepts the following values:
```
loose, strict, semver, semantic
```
To compare a semantic version using `version_type`:
```
{{ sample_semver_var is version('2.0.0-rc.1+build.123', 'lt', version_type='semver') }}
```
When using `version` in a playbook or role, don’t use `{{ }}` as described in the [FAQ](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#when-should-i-use-also-how-to-interpolate-variables-or-dynamic-variable-names):
```
vars:
my_version: 1.2.3
tasks:
- debug:
msg: "my_version is higher than 1.0.0"
when: my_version is version('1.0.0', '>')
```
Set theory tests
----------------
New in version 2.1.
Note
In 2.5 `issubset` and `issuperset` were renamed to `subset` and `superset`
To see if a list includes or is included by another list, you can use ‘subset’ and ‘superset’:
```
vars:
a: [1,2,3,4,5]
b: [2,3]
tasks:
- debug:
msg: "A includes B"
when: a is superset(b)
- debug:
msg: "B is included in A"
when: b is subset(a)
```
Testing if a list contains a value
----------------------------------
New in version 2.8.
Ansible includes a `contains` test which operates similarly to, but in reverse of, the Jinja2-provided `in` test. The `contains` test is designed to work with the `select`, `reject`, `selectattr`, and `rejectattr` filters:
```
vars:
lacp_groups:
- master: lacp0
network: 10.65.100.0/24
gateway: 10.65.100.1
dns4:
- 10.65.100.10
- 10.65.100.11
interfaces:
- em1
- em2
- master: lacp1
network: 10.65.120.0/24
gateway: 10.65.120.1
dns4:
- 10.65.100.10
- 10.65.100.11
interfaces:
- em3
- em4
tasks:
- debug:
msg: "{{ (lacp_groups|selectattr('interfaces', 'contains', 'em1')|first).master }}"
```
New in version 2.4.
Testing if a list value is True
-------------------------------
You can use `any` and `all` to check if any or all elements in a list are true or not:
```
vars:
mylist:
- 1
- "{{ 3 == 3 }}"
- True
myotherlist:
- False
- True
tasks:
- debug:
msg: "all are true!"
when: mylist is all
- debug:
msg: "at least one is true"
when: myotherlist is any
```
Testing paths
-------------
Note
In 2.5 the following tests were renamed to remove the `is_` prefix
The following tests can provide information about a path on the controller:
```
- debug:
msg: "path is a directory"
when: mypath is directory
- debug:
msg: "path is a file"
when: mypath is file
- debug:
msg: "path is a symlink"
when: mypath is link
- debug:
msg: "path already exists"
when: mypath is exists
- debug:
msg: "path is {{ (mypath is abs)|ternary('absolute','relative')}}"
- debug:
msg: "path is the same file as path2"
when: mypath is same_file(path2)
- debug:
msg: "path is a mount"
when: mypath is mount
```
Testing size formats
--------------------
The `human_readable` and `human_to_bytes` functions let you test your playbooks to make sure you are using the right size format in your tasks, providing byte values to computers and human-readable formats to people.
### Human readable
Asserts whether the given string is human readable or not.
For example:
```
- name: "Human Readable"
assert:
that:
- '"1.00 Bytes" == 1|human_readable'
- '"1.00 bits" == 1|human_readable(isbits=True)'
- '"10.00 KB" == 10240|human_readable'
- '"97.66 MB" == 102400000|human_readable'
- '"0.10 GB" == 102400000|human_readable(unit="G")'
- '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")'
```
This would result in:
```
{ "changed": false, "msg": "All assertions passed" }
```
### Human to bytes
Returns the given string in the Bytes format.
For example:
```
- name: "Human to Bytes"
assert:
that:
- "{{'0'|human_to_bytes}} == 0"
- "{{'0.1'|human_to_bytes}} == 0"
- "{{'0.9'|human_to_bytes}} == 1"
- "{{'1'|human_to_bytes}} == 1"
- "{{'10.00 KB'|human_to_bytes}} == 10240"
- "{{ '11 MB'|human_to_bytes}} == 11534336"
- "{{ '1.1 GB'|human_to_bytes}} == 1181116006"
- "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240"
```
This would result in:
```
{ "changed": false, "msg": "All assertions passed" }
```
Testing task results
--------------------
The following tasks are illustrative of the tests meant to check the status of tasks:
```
tasks:
- shell: /usr/bin/foo
register: result
ignore_errors: True
- debug:
msg: "it failed"
when: result is failed
# in most cases you'll want a handler, but if you want to do something right now, this is nice
- debug:
msg: "it changed"
when: result is changed
- debug:
msg: "it succeeded in Ansible >= 2.1"
when: result is succeeded
- debug:
msg: "it succeeded"
when: result is success
- debug:
msg: "it was skipped"
when: result is skipped
```
Note
From 2.1, you can also use success, failure, change, and skip so that the grammar matches, for those who need to be strict about it.
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Validating tasks: check mode and diff mode
==========================================
Ansible provides two modes of execution that validate tasks: check mode and diff mode. These modes can be used separately or together. They are useful when you are creating or editing a playbook or role and you want to know what it will do. In check mode, Ansible runs without making any changes on remote systems. Modules that support check mode report the changes they would have made. Modules that do not support check mode report nothing and do nothing. In diff mode, Ansible provides before-and-after comparisons. Modules that support diff mode display detailed information. You can combine check mode and diff mode for detailed validation of your playbook or role.
* [Using check mode](#using-check-mode)
+ [Enforcing or preventing check mode on tasks](#enforcing-or-preventing-check-mode-on-tasks)
+ [Skipping tasks or ignoring errors in check mode](#skipping-tasks-or-ignoring-errors-in-check-mode)
* [Using diff mode](#using-diff-mode)
+ [Enforcing or preventing diff mode on tasks](#enforcing-or-preventing-diff-mode-on-tasks)
Using check mode
----------------
Check mode is just a simulation. It will not generate output for tasks that use [conditionals based on registered variables](playbooks_conditionals#conditionals-registered-vars) (results of prior tasks). However, it is great for validating configuration management playbooks that run on one node at a time. To run a playbook in check mode:
```
ansible-playbook foo.yml --check
```
### Enforcing or preventing check mode on tasks
New in version 2.2.
If you want certain tasks to run in check mode always, or never, regardless of whether you run the playbook with or without `--check`, you can add the `check_mode` option to those tasks:
* To force a task to run in check mode, even when the playbook is called without `--check`, set `check_mode: yes`.
* To force a task to run in normal mode and make changes to the system, even when the playbook is called with `--check`, set `check_mode: no`.
For example:
```
tasks:
- name: This task will always make changes to the system
ansible.builtin.command: /something/to/run --even-in-check-mode
check_mode: no
- name: This task will never make changes to the system
ansible.builtin.lineinfile:
line: "important config"
dest: /path/to/myconfig.conf
state: present
check_mode: yes
register: changes_to_important_config
```
Running single tasks with `check_mode: yes` can be useful for testing Ansible modules, either to test the module itself or to test the conditions under which a module would make changes. You can register variables (see [Conditionals](playbooks_conditionals#playbooks-conditionals)) on these tasks for even more detail on the potential changes.
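For example, a hedged follow-up task could inspect the register from the example above to report what check mode would have changed:
```
- name: Report whether the config task would have changed anything
  ansible.builtin.debug:
    var: changes_to_important_config.changed
```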
Note
Prior to version 2.2 only the equivalent of `check_mode: no` existed. The notation for that was `always_run: yes`.
### Skipping tasks or ignoring errors in check mode
New in version 2.1.
If you want to skip a task or ignore errors on a task when you run Ansible in check mode, you can use a boolean magic variable `ansible_check_mode`, which is set to `True` when Ansible runs in check mode. For example:
```
tasks:
- name: This task will be skipped in check mode
ansible.builtin.git:
repo: ssh://[email protected]/mylogin/hello.git
dest: /home/mylogin/hello
when: not ansible_check_mode
- name: This task will ignore errors in check mode
ansible.builtin.git:
repo: ssh://[email protected]/mylogin/hello.git
dest: /home/mylogin/hello
ignore_errors: "{{ ansible_check_mode }}"
```
Using diff mode
---------------
The `--diff` option for ansible-playbook can be used alone or with `--check`. When you run in diff mode, any module that supports diff mode reports the changes made or, if used with `--check`, the changes that would have been made. Diff mode is most common in modules that manipulate files (for example, the template module) but other modules might also show ‘before and after’ information (for example, the user module).
Diff mode produces a large amount of output, so it is best used when checking a single host at a time. For example:
```
ansible-playbook foo.yml --check --diff --limit foo.example.com
```
New in version 2.4.
### Enforcing or preventing diff mode on tasks
Because the `--diff` option can reveal sensitive information, you can disable it for a task by specifying `diff: no`. For example:
```
tasks:
- name: This task will not report a diff when the file changes
ansible.builtin.template:
src: secret.conf.j2
dest: /etc/secret.conf
owner: root
group: root
mode: '0600'
diff: no
```
Advanced playbooks features
===========================
This page is obsolete. Refer to the [main User Guide index page](index#user-guide-index) for links to all playbook-related topics. Please update any links you may have made directly to this page.
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
* [What is WinRM?](#what-is-winrm)
* [Authentication Options](#authentication-options)
+ [Basic](#basic)
+ [Certificate](#certificate)
- [Generate a Certificate](#generate-a-certificate)
- [Import a Certificate to the Certificate Store](#import-a-certificate-to-the-certificate-store)
- [Mapping a Certificate to an Account](#mapping-a-certificate-to-an-account)
+ [NTLM](#ntlm)
+ [Kerberos](#kerberos)
- [Installing the Kerberos Library](#installing-the-kerberos-library)
- [Configuring Host Kerberos](#configuring-host-kerberos)
- [Automatic Kerberos Ticket Management](#automatic-kerberos-ticket-management)
- [Manual Kerberos Ticket Management](#manual-kerberos-ticket-management)
- [Troubleshooting Kerberos](#troubleshooting-kerberos)
+ [CredSSP](#credssp)
- [Installing CredSSP Library](#installing-credssp-library)
- [CredSSP and TLS 1.2](#credssp-and-tls-1-2)
- [Set CredSSP Certificate](#set-credssp-certificate)
* [Non-Administrator Accounts](#non-administrator-accounts)
* [WinRM Encryption](#winrm-encryption)
* [Inventory Options](#inventory-options)
* [IPv6 Addresses](#ipv6-addresses)
* [HTTPS Certificate Validation](#https-certificate-validation)
* [TLS 1.2 Support](#tls-1-2-support)
* [Limitations](#limitations)
What is WinRM?
--------------
WinRM is a management protocol used by Windows to remotely communicate with another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is included in all recent Windows operating systems. Since Windows Server 2012, WinRM has been enabled by default, but in most cases extra configuration is required to use WinRM with Ansible.
Ansible uses the [pywinrm](https://github.com/diyan/pywinrm) package to communicate with Windows servers over WinRM. It is not installed by default with the Ansible package, but can be installed by running the following:
```
pip install "pywinrm>=0.3.0"
```
Note
On distributions with multiple Python versions, use pip2 or pip2.x, where x matches the Python minor version Ansible is running under.
Warning
Using the `winrm` or `psrp` connection plugins in Ansible on MacOS in the latest releases typically fails. This is a known problem that occurs deep within the Python stack and cannot be changed by Ansible. The only workaround today is to set the environment variable `no_proxy=*` and avoid using Kerberos auth.
Authentication Options
----------------------
When connecting to a Windows host, there are several different options that can be used when authenticating with an account. The authentication type may be set on inventory hosts or groups with the `ansible_winrm_transport` variable.
The following matrix is a high level overview of the options:
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
| --- | --- | --- | --- | --- |
| Basic | Yes | No | No | No |
| Certificate | Yes | No | No | No |
| Kerberos | No | Yes | Yes | Yes |
| NTLM | Yes | Yes | No | Yes |
| CredSSP | Yes | Yes | Yes | Yes |
### Basic
Basic authentication is one of the simplest authentication options to use, but is also the most insecure. This is because the username and password are simply base64 encoded, and if a secure channel is not in use (for example, HTTPS) then they can be decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
```
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
```
Basic authentication is not enabled by default on a Windows host but can be enabled by running the following in PowerShell:
```
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
```
### Certificate
Certificate authentication uses certificates as keys similar to SSH key pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
```
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
```
Certificate authentication is not enabled by default on a Windows host but can be enabled by running the following in PowerShell:
```
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
```
Note
Encrypted private keys cannot be used as the urllib3 library that is used by Ansible for WinRM does not support this functionality.
#### Generate a Certificate
A certificate must be generated before it can be mapped to a local user. This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the `New-SelfSignedCertificate` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be the best option to use when running in a domain environment. For more information, see the [Active Directory Certificate Services documentation](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)).
Note
Using the PowerShell cmdlet `New-SelfSignedCertificate` to generate a certificate for authentication only works when being generated from a Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to extract the private key from the PFX certificate to a PEM file for Ansible to use.
To generate a certificate with `OpenSSL`:
```
# Set the name of the local user that will have the key mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
```
To generate a certificate with `New-SelfSignedCertificate`:
```
# Set the name of the local user that will have the key mapped
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
```
Note
To convert the PFX file to a private key that pywinrm can use, run the following command with OpenSSL `openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:`
#### Import a Certificate to the Certificate Store
Once a certificate has been generated, the issuing certificate needs to be imported into the `Trusted Root Certificate Authorities` of the `LocalMachine` store, and the client certificate public key must be present in the `Trusted People` folder of the `LocalMachine` store. For this example, both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
```
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
```
Note
If using ADCS to generate the certificate, then the issuing certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
```
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
```
#### Mapping a Certificate to an Account
Once the certificate has been imported, map it to the local user account:
```
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint which in the case of a self generated cert
# is the public key thumbprint, additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
```
Once this is complete, the hostvar `ansible_winrm_cert_pem` should be set to the path of the public key and the `ansible_winrm_cert_key_pem` variable should be set to the path of the private key.
### NTLM
NTLM is an older authentication mechanism used by Microsoft that can support both local and domain accounts. NTLM is enabled by default on the WinRM service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than `Basic` authentication. If running in a domain environment, `Kerberos` should be used instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
```
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
```
### Kerberos
Kerberos is the recommended authentication option to use when running in a domain environment. Kerberos supports features like credential delegation and message encryption over HTTP and is one of the more secure options that is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be used properly.
The following example shows host vars configured for Kerberos authentication:
```
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
```
As of Ansible version 2.3, the Kerberos ticket will be created based on `ansible_user` and `ansible_password`. If running on an older version of Ansible or when `ansible_winrm_kinit_mode` is `manual`, a Kerberos ticket must already be obtained. See below for more details.
There are some extra host variables that can be set:
```
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
```
#### Installing the Kerberos Library
Some system dependencies must be installed prior to using Kerberos. The script below lists the dependencies based on the distribution:
```
# Via Yum (RHEL/Centos/Fedora)
yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Via Pkg (FreeBSD)
sudo pkg install security/krb5
# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Via Pacman (Arch Linux)
pacman -S krb5
```
Once the dependencies have been installed, the `python-kerberos` wrapper can be installed using `pip`:
```
pip install pywinrm[kerberos]
```
Note
While Ansible has supported Kerberos auth through `pywinrm` for some time, optional features or more secure options may only be available in newer versions of the `pywinrm` and/or `pykerberos` libraries. It is recommended you upgrade each version to the latest available to resolve any warnings or errors. This can be done through tools like `pip` or a system package manager like `dnf`, `yum`, `apt` but the package names and versions available may differ between tools.
#### Configuring Host Kerberos
Once the dependencies have been installed, Kerberos needs to be configured so that it can communicate with a domain. This configuration is done through the `/etc/krb5.conf` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
```
[realms]
```
Add the full domain name and the fully qualified domain names of the primary and secondary Active Directory domain controllers. It should look something like this:
```
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
```
In the section that starts with:
```
[domain_realm]
```
Add a line like the following for each domain that Ansible needs access to:
```
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
```
You can configure other settings in this file such as the default domain. See [krb5.conf](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html) for more details.
#### Automatic Kerberos Ticket Management
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets when both `ansible_user` and `ansible_password` are specified for a host. In this process, a new ticket is created in a temporary credential cache for each host. This is done before each task executes to minimize the chance of ticket expiration. The temporary credential caches are deleted after each task completes and will not interfere with the default credential cache.
To disable automatic ticket management, set `ansible_winrm_kinit_mode=manual` via the inventory.
Automatic ticket management requires a standard `kinit` binary on the control host system path. To specify a different location or binary name, set the `ansible_winrm_kinit_cmd` hostvar to the fully qualified path to a MIT krbv5 `kinit`-compatible binary.
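A hedged inventory sketch of these two host variables (the group name and binary path are examples):
```
[windows:vars]
ansible_winrm_kinit_mode=managed
ansible_winrm_kinit_cmd=/usr/local/krb5/bin/kinit
```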
#### Manual Kerberos Ticket Management
To manually manage Kerberos tickets, the `kinit` binary is used. To obtain a new ticket the following command is used:
```
kinit [email protected]
```
Note
The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
```
klist
```
To destroy all the tickets that have been acquired, use the following command:
```
kdestroy
```
#### Troubleshooting Kerberos
Kerberos is reliant on a properly-configured environment to work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* The forward and reverse DNS lookups are working properly in the domain. To test this, ping the Windows host by name and then use the IP address returned with `nslookup`. The same name should be returned when using `nslookup` on the IP address.
* The Ansible host’s clock is synchronized with the domain controller. Kerberos is time sensitive, and a little clock drift can cause the ticket generation process to fail.
* Ensure that the fully qualified domain name for the domain is configured in the `krb5.conf` file. To check this, run:
```
kinit -C [email protected]
klist
```
If the domain name returned by `klist` is different from the one requested, an alias is being used. The `krb5.conf` file needs to be updated so that the fully qualified domain name is used and not an alias.
* If the default kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called `pykerberos` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve `pykerberos` installation issues, ensure the system dependencies for Kerberos have been met (see: [Installing the Kerberos Library](#installing-the-kerberos-library)), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of Python Kerberos library package.
### CredSSP
CredSSP authentication is a newer authentication protocol that allows credential delegation. This is achieved by encrypting the username and password after authentication has succeeded and sending that to the server using the CredSSP protocol.
Because the username and password are sent to the server to be used for double hop authentication, ensure that the hosts that the Windows host communicates with are not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
```
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
```
There are some extra host variables that can be set as shown below:
```
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
```
CredSSP authentication is not enabled by default on a Windows host, but can be enabled by running the following in PowerShell:
```
Enable-WSManCredSSP -Role Server -Force
```
#### Installing CredSSP Library
The `requests-credssp` wrapper can be installed using `pip`:
```
pip install pywinrm[credssp]
```
#### CredSSP and TLS 1.2
By default the `requests-credssp` library is configured to authenticate over the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012 and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended for Server 2008 R2 and Windows 7).
* Set `ansible_winrm_credssp_disable_tlsv1_2=True` in the inventory to run over TLS 1.0. This is the only option when connecting to Windows Server 2008, which has no way of supporting TLS 1.2.
See [TLS 1.2 Support](#winrm-tls12) for more information on how to enable TLS 1.2 on the Windows host.
#### Set CredSSP Certificate
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The `CertificateThumbprint` option under the WinRM service configuration can be used to specify the thumbprint of another certificate.
Note
This certificate configuration is independent of the WinRM listener certificate. With CredSSP, message transport still occurs over the WinRM listener, but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP:
```
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
```
Non-Administrator Accounts
--------------------------
WinRM is configured by default to only allow connections from accounts in the local `Administrators` group. This can be changed by running:
```
winrm configSDDL default
```
This will display an ACL editor, where new users or groups may be added. To run commands over WinRM, users and groups must have at least the `Read` and `Execute` permissions enabled.
While non-administrative accounts can be used with WinRM, most typical server administration tasks require some level of administrative access, so the utility is usually limited.
WinRM Encryption
----------------
By default WinRM will fail to work when running over an unencrypted channel. The WinRM protocol considers the channel to be encrypted if using TLS over HTTP (HTTPS) or using message level encryption. Using WinRM with TLS is the recommended option as it works with all authentication options, but requires a certificate to be created and used on the WinRM listener.
The `ConfigureRemotingForAnsible.ps1` script creates a self-signed certificate and creates the listener with that certificate. If in a domain environment, ADCS can also create a certificate for the host that is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication option is `NTLM`, `Kerberos` or `CredSSP`. These protocols will encrypt the WinRM payload with their own encryption method before sending it to the server. The message-level encryption is not used when running over HTTPS because the encryption uses the more secure TLS protocol instead. If both transport and message encryption is required, set `ansible_winrm_message_encryption=always` in the host vars.
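A hedged host vars sketch combining transport encryption with forced message-level encryption:
```
ansible_connection: winrm
ansible_winrm_scheme: https
ansible_winrm_transport: kerberos
ansible_winrm_message_encryption: always
```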
Note
Message encryption over HTTP requires pywinrm>=0.3.0.
A last resort is to disable the encryption requirement on the Windows host. This should only be used for development and debugging purposes, as anything sent from Ansible can be viewed, manipulated and also the remote session can completely be taken over by anyone on the same network. To disable the encryption requirement:
```
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
```
Note
Do not disable the encryption check unless it is absolutely required. Doing so could allow sensitive information like credentials and files to be intercepted by others on the network.
Inventory Options
-----------------
Ansible’s Windows support relies on a few standard variables to indicate the username, password, and connection type of the remote hosts. These variables are most easily set up in the inventory, but can be set at the `host_vars`/`group_vars` level.
When setting up the inventory, the following variables are required:
```
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line via --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
```
Using the variables above, Ansible will connect to the Windows host with Basic authentication through HTTPS. If `ansible_user` has a UPN value like `[email protected]` then the authentication option will automatically attempt to use Kerberos unless `ansible_winrm_transport` has been set to something other than `kerberos`.
The following custom inventory variables are also supported for additional configuration of WinRM connections:
* `ansible_port`: The port WinRM will run over, HTTPS is `5986` which is the default while HTTP is `5985`
* `ansible_winrm_scheme`: Specify the connection scheme (`http` or `https`) to use for the WinRM connection. Ansible uses `https` by default unless `ansible_port` is `5985`
* `ansible_winrm_path`: Specify an alternate path to the WinRM endpoint, Ansible uses `/wsman` by default
* `ansible_winrm_realm`: Specify the realm to use for Kerberos authentication. If `ansible_user` contains `@`, Ansible will use the part of the username after `@` by default
* `ansible_winrm_transport`: Specify one or more authentication transport options as a comma-separated list. By default, Ansible will use `kerberos, basic` if the `kerberos` module is installed and a realm is defined, otherwise it will be `plaintext`
* `ansible_winrm_server_cert_validation`: Specify the server certificate validation mode (`ignore` or `validate`). Ansible defaults to `validate` on Python 2.7.9 and higher, which will result in certificate validation errors against the Windows self-signed certificates. Unless verifiable certificates have been configured on the WinRM listeners, this should be set to `ignore`
* `ansible_winrm_operation_timeout_sec`: Increase the default timeout for WinRM operations, Ansible uses `20` by default
* `ansible_winrm_read_timeout_sec`: Increase the WinRM read timeout, Ansible uses `30` by default. Useful if there are intermittent network issues and read timeout errors keep occurring
* `ansible_winrm_message_encryption`: Specify the message encryption operation (`auto`, `always`, `never`) to use, Ansible uses `auto` by default. `auto` means message encryption is only used when `ansible_winrm_scheme` is `http` and `ansible_winrm_transport` supports message encryption. `always` means message encryption will always be used and `never` means message encryption will never be used
* `ansible_winrm_ca_trust_path`: Used to specify a different cacert container than the one used in the `certifi` module. See the HTTPS Certificate Validation section for more details.
* `ansible_winrm_send_cbt`: When using `ntlm` or `kerberos` over HTTPS, the authentication library will try to send channel binding tokens to mitigate against man in the middle attacks. This flag controls whether these bindings will be sent or not (default: `yes`).
* `ansible_winrm_*`: Any additional keyword arguments supported by `winrm.Protocol` may be provided in place of `*`
In addition, there are also specific variables that need to be set for each authentication option. See the section on authentication above for more information.
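As a hedged example pulling several of these together (the group name and values are illustrative):
```
[windows-server:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
ansible_winrm_read_timeout_sec=70
```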
Note
Ansible 2.0 has deprecated the “ssh” from `ansible_ssh_user`, `ansible_ssh_pass`, `ansible_ssh_host`, and `ansible_ssh_port` to become `ansible_user`, `ansible_password`, `ansible_host`, and `ansible_port`. If using a version of Ansible prior to 2.0, the older style (`ansible_ssh_*`) should be used instead. The shorter variables are ignored, without warning, in older versions of Ansible.
Note
`ansible_winrm_message_encryption` is different from transport encryption done over TLS. The WinRM payload is still encrypted with TLS when run over HTTPS, even if `ansible_winrm_message_encryption=never`.
IPv6 Addresses
--------------
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option is normally set in an inventory. Ansible will attempt to parse the address using the [ipaddress](https://docs.python.org/3/library/ipaddress.html) package and pass to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you would an IPv4 address or hostname:
```
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
```
Note
The ipaddress library is only included by default in Python 3.x. To use IPv6 addresses in Python 2.7, make sure to run `pip install ipaddress` which installs a backported package.
HTTPS Certificate Validation
----------------------------
As part of the TLS protocol, the certificate is validated to ensure the host matches the subject and the client trusts the issuer of the server certificate. When using a self-signed certificate or setting `ansible_winrm_server_cert_validation: ignore` these security mechanisms are bypassed. While self signed certificates will always need the `ignore` flag, certificates that have been issued from a certificate authority can still be validated.
One of the more common ways of setting up a HTTPS listener in a domain environment is to use Active Directory Certificate Service (AD CS). AD CS is used to generate signed certificates from a Certificate Signing Request (CSR). If the WinRM HTTPS listener is using a certificate that has been signed by another authority, like AD CS, then Ansible can be set up to trust that issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer certificate of the CA can be exported as a PEM encoded certificate. This certificate can then be copied locally to the Ansible controller and used as a source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single or multiple issuer certificates and each entry is contained on a new line. To then use the custom CA chain as part of the validation process, set `ansible_winrm_ca_trust_path` to the path of the file. If this variable is not set, the default CA chain is used instead which is located in the install path of the Python package [certifi](https://github.com/certifi/python-certifi).
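A hedged host vars sketch (the PEM path is an example):
```
ansible_winrm_server_cert_validation: validate
ansible_winrm_ca_trust_path: /etc/pki/ansible/winrm-ca-chain.pem
```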
Note
Each HTTP call is done by the Python requests library which does not use the systems built-in certificate store as a trust authority. Certificate validation will fail if the server’s certificate issuer is only added to the system’s truststore.
TLS 1.2 Support
---------------
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol is used to encrypt the WinRM messages. TLS will automatically attempt to negotiate the best protocol and cipher suite that is available to both the client and the server. If a match cannot be found then Ansible will error out with a message similar to:
```
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
```
Commonly this is when the Windows host has not been configured to support TLS v1.2, but it could also mean the Ansible controller has an older OpenSSL version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled manually.
Note
There is a bug with the TLS 1.2 patch for Server 2008 which will stop Ansible from connecting to the Windows host. This means that Server 2008 cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following command on the Ansible controller:
```
openssl s_client -connect <hostname>:5986
```
The output will contain information about the TLS session and the `Protocol` line will display the version that was negotiated:
```
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
```
If the host is returning `TLSv1` then it should be configured so that TLS v1.2 is enabled. You can do this by running the following PowerShell script:
```
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
```
The below Ansible tasks can also be used to enable TLS v1.2:
```
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
```
There are other ways to configure the TLS protocols as well as the cipher suites that are offered by the Windows host. One tool that can give you a GUI to manage these settings is [IIS Crypto](https://www.nartac.com/Products/IISCrypto/) from Nartac Software.
Limitations
-----------
Due to the design of the WinRM protocol, there are a few limitations when using WinRM that can cause issues when creating playbooks for Ansible. These include:
* Credentials are not delegated for most authentication types, which causes authentication errors when accessing network resources or installing certain programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or because they access forbidden Windows API like WUA over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent certain commands or executables from running.
* You cannot run a process that interacts with `DPAPI`, which is used by some installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set `ansible_winrm_transport` to `credssp` or `kerberos` (with `ansible_winrm_kerberos_delegation=true`) to bypass the double hop issue and access network resources
* Use `become` to bypass all WinRM restrictions and run a command as it would locally. Unlike using an authentication transport like `credssp`, this will also remove the non-interactive restriction and API restrictions like WUA and DPAPI
* Use a scheduled task to run a command which can be created with the `win_scheduled_task` module. Like `become`, this bypasses all WinRM restrictions but can only run a command and not modules.
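As a hedged sketch of the `become` approach (the command is illustrative; `runas` is the Windows become method, and becoming `SYSTEM` requires no password):
```
- name: Run a command free of WinRM's non-interactive and DPAPI restrictions
  win_command: whoami.exe
  become: yes
  become_method: runas
  become_user: SYSTEM
```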
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[List of Windows Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_windows_modules.html#windows-modules "(in Ansible v2.9)")
Windows specific module list, all implemented in PowerShell
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Blocks Blocks
======
Blocks create logical groups of tasks. Blocks also offer ways to handle task errors, similar to exception handling in many programming languages.
* [Grouping tasks with blocks](#grouping-tasks-with-blocks)
* [Handling errors with blocks](#handling-errors-with-blocks)
Grouping tasks with blocks
--------------------------
All tasks in a block inherit directives applied at the block level. Most of what you can apply to a single task (with the exception of loops) can be applied at the block level, so blocks make it much easier to set data or directives common to the tasks. The directive does not affect the block itself, it is only inherited by the tasks enclosed by a block. For example, a `when` statement is applied to the tasks within a block, not to the block itself.
Block example with named tasks inside the block
```
tasks:
- name: Install, configure, and start Apache
block:
- name: Install httpd and memcached
ansible.builtin.yum:
name:
- httpd
- memcached
state: present
- name: Apply the foo config template
ansible.builtin.template:
src: templates/src.j2
dest: /etc/foo.conf
- name: Start service bar and enable it
ansible.builtin.service:
name: bar
state: started
enabled: True
when: ansible_facts['distribution'] == 'CentOS'
become: true
become_user: root
ignore_errors: yes
```
In the example above, the ‘when’ condition will be evaluated before Ansible runs each of the three tasks in the block. All three tasks also inherit the privilege escalation directives, running as the root user. Finally, `ignore_errors: yes` ensures that Ansible continues to execute the playbook even if some of the tasks fail.
Names for blocks have been available since Ansible 2.3. We recommend using names in all tasks, within blocks or elsewhere, for better visibility into the tasks being executed when you run the playbook.
Handling errors with blocks
---------------------------
You can control how Ansible responds to task errors using blocks with `rescue` and `always` sections.
Rescue blocks specify tasks to run when an earlier task in a block fails. This approach is similar to exception handling in many programming languages. Ansible only runs rescue blocks after a task returns a ‘failed’ state. Bad task definitions and unreachable hosts will not trigger the rescue block.
Block error handling example
```
tasks:
- name: Handle the error
block:
- name: Print a message
ansible.builtin.debug:
msg: 'I execute normally'
- name: Force a failure
ansible.builtin.command: /bin/false
- name: Never print this
ansible.builtin.debug:
msg: 'I never execute, due to the above task failing, :-('
rescue:
- name: Print when errors
ansible.builtin.debug:
msg: 'I caught an error, can do stuff here to fix it, :-)'
```
You can also add an `always` section to a block. Tasks in the `always` section run no matter what the task status of the previous block is.
Block with always section
```
- name: Always do X
block:
- name: Print a message
ansible.builtin.debug:
msg: 'I execute normally'
- name: Force a failure
ansible.builtin.command: /bin/false
- name: Never print this
ansible.builtin.debug:
msg: 'I never execute :-('
always:
- name: Always do this
ansible.builtin.debug:
msg: "This always executes, :-)"
```
Together, these elements offer complex error handling.
Block with all sections
```
- name: Attempt and graceful roll back demo
block:
- name: Print a message
ansible.builtin.debug:
msg: 'I execute normally'
- name: Force a failure
ansible.builtin.command: /bin/false
- name: Never print this
ansible.builtin.debug:
msg: 'I never execute, due to the above task failing, :-('
rescue:
- name: Print when errors
ansible.builtin.debug:
msg: 'I caught an error'
- name: Force a failure in middle of recovery! >:-)
ansible.builtin.command: /bin/false
- name: Never print this
ansible.builtin.debug:
msg: 'I also never execute :-('
always:
- name: Always do this
ansible.builtin.debug:
msg: "This always executes"
```
The tasks in the `block` execute normally. If any tasks in the block return `failed`, the `rescue` section executes tasks to recover from the error. The `always` section runs regardless of the results of the `block` and `rescue` sections.
If an error occurs in the block and the rescue task succeeds, Ansible reverts the failed status of the original task for the run and continues to run the play as if the original task had succeeded. The rescued task is considered successful, and does not trigger `max_fail_percentage` or `any_errors_fatal` configurations. However, Ansible still reports a failure in the playbook statistics.
You can use blocks with `flush_handlers` in a rescue task to ensure that all handlers run even if an error occurs:
Block run handlers in error handling
```
tasks:
- name: Attempt and graceful roll back demo
block:
- name: Print a message
ansible.builtin.debug:
msg: 'I execute normally'
changed_when: yes
notify: run me even after an error
- name: Force a failure
ansible.builtin.command: /bin/false
rescue:
- name: Make sure all handlers run
meta: flush_handlers
handlers:
- name: Run me even after an error
ansible.builtin.debug:
msg: 'This handler runs even on error'
```
New in version 2.1.
Ansible provides a couple of variables for tasks in the `rescue` portion of a block (a usage sketch follows the definitions below):
ansible\_failed\_task
The task that returned ‘failed’ and triggered the rescue. For example, to get the name use `ansible_failed_task.name`.
ansible\_failed\_result
The captured return result of the failed task that triggered the rescue. This would equate to having used this var in the `register` keyword.
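As a minimal sketch (the task names are illustrative), both variables can be used inside a `rescue` section like this:
```
tasks:
  - name: Demonstrate the rescue variables
    block:
      - name: Force a failure
        ansible.builtin.command: /bin/false
    rescue:
      - name: Report which task failed and why
        ansible.builtin.debug:
          msg: "Task '{{ ansible_failed_task.name }}' failed: {{ ansible_failed_result.msg | default('no message') }}"
```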
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Desired State Configuration Desired State Configuration
===========================
* [What is Desired State Configuration?](#what-is-desired-state-configuration)
* [Host Requirements](#host-requirements)
* [Why Use DSC?](#why-use-dsc)
* [How to Use DSC?](#how-to-use-dsc)
+ [Property Types](#property-types)
- [PSCredential](#pscredential)
- [CimInstance Type](#ciminstance-type)
- [HashTable Type](#hashtable-type)
- [Arrays](#arrays)
- [DateTime](#datetime)
+ [Run As Another User](#run-as-another-user)
* [Custom DSC Resources](#custom-dsc-resources)
+ [Finding Custom DSC Resources](#finding-custom-dsc-resources)
+ [Installing a Custom Resource](#installing-a-custom-resource)
* [Examples](#examples)
+ [Extract a zip file](#extract-a-zip-file)
+ [Create a directory](#create-a-directory)
+ [Interact with Azure](#interact-with-azure)
+ [Setup IIS Website](#setup-iis-website)
What is Desired State Configuration?
------------------------------------
Desired State Configuration, or DSC, is a tool built into PowerShell that can be used to define a Windows host setup through code. The overall purpose of DSC is the same as Ansible's; it is just executed in a different manner. Since Ansible 2.4, the `win_dsc` module has been added and can be used to leverage existing DSC resources when interacting with a Windows host.
More details on DSC can be viewed at [DSC Overview](https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview).
Host Requirements
-----------------
To use the `win_dsc` module, a Windows host must have PowerShell v5.0 or newer installed. All supported hosts, except for Windows Server 2008 (non-R2), can be upgraded to PowerShell v5.
Once the PowerShell requirements have been met, using DSC is as simple as creating a task with the `win_dsc` module.
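For instance, a minimal sketch of such a task using the built-in `File` resource (the path is illustrative):
```
- name: Ensure a directory exists with the File DSC resource
  win_dsc:
    resource_name: File
    DestinationPath: C:\temp\example
    Type: Directory
    Ensure: Present
```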
Why Use DSC?
------------
DSC and Ansible modules have a common goal which is to define and ensure the state of a resource. Because of this, resources like the DSC [File resource](https://docs.microsoft.com/en-us/powershell/scripting/dsc/reference/resources/windows/fileresource) and Ansible `win_file` can be used to achieve the same result. Deciding which to use depends on the scenario.
Reasons for using an Ansible module over a DSC resource:
* The host does not support PowerShell v5.0, or it cannot easily be upgraded
* The DSC resource does not offer a feature present in an Ansible module. For example win\_regedit can manage the `REG_NONE` property type, while the DSC `Registry` resource cannot
* DSC resources have limited check mode support, while some Ansible modules have better checks
* DSC resources do not support diff mode, while some Ansible modules do
* Custom resources require further installation steps to be run on the host beforehand, while Ansible modules are built-in to Ansible
* There are bugs in a DSC resource where an Ansible module works
Reasons for using a DSC resource over an Ansible module:
* The Ansible module does not support a feature present in a DSC resource
* There is no Ansible module available
* There are bugs in an existing Ansible module
In the end, it doesn’t matter whether the task is performed with DSC or an Ansible module; what matters is that the task is performed correctly and the playbooks are still readable. If you have more experience with DSC than with Ansible and it does the job, just use DSC for that task.
How to Use DSC?
---------------
The `win_dsc` module accepts free-form options that change according to the resource it is managing. A list of built in resources can be found at [resources](https://docs.microsoft.com/en-us/powershell/scripting/dsc/resources/resources).
Using the [Registry](https://docs.microsoft.com/en-us/powershell/scripting/dsc/reference/resources/windows/registryresource) resource as an example, this is the DSC definition as documented by Microsoft:
```
Registry [string] #ResourceName
{
Key = [string]
ValueName = [string]
[ Ensure = [string] { Enable | Disable } ]
[ Force = [bool] ]
[ Hex = [bool] ]
[ DependsOn = [string[]] ]
[ ValueData = [string[]] ]
[ ValueType = [string] { Binary | Dword | ExpandString | MultiString | Qword | String } ]
}
```
When defining the task, `resource_name` must be set to the DSC resource being used - in this case the `resource_name` should be set to `Registry`. The `module_version` can refer to a specific version of the DSC resource installed; if left blank it will default to the latest version. The other options are parameters that are used to define the resource, such as `Key` and `ValueName`. While the options in the task are not case sensitive, keeping the case as-is is recommended because it makes it easier to distinguish DSC resource options from Ansible’s `win_dsc` options.
This is what the Ansible task version of the above DSC Registry resource would look like:
```
- name: Use win_dsc module with the Registry DSC resource
win_dsc:
resource_name: Registry
Ensure: Present
Key: HKEY_LOCAL_MACHINE\SOFTWARE\ExampleKey
ValueName: TestValue
ValueData: TestData
```
Starting in Ansible 2.8, the `win_dsc` module automatically validates the input options from Ansible with the DSC definition. This means Ansible will fail if the option name is incorrect, a mandatory option is not set, or the value is not a valid choice. When running Ansible with a verbosity level of 3 or more (`-vvv`), the return value will contain the possible invocation options based on the `resource_name` specified. Here is an example of the invocation output for the above `Registry` task:
```
changed: [2016] => {
"changed": true,
"invocation": {
"module_args": {
"DependsOn": null,
"Ensure": "Present",
"Force": null,
"Hex": null,
"Key": "HKEY_LOCAL_MACHINE\\SOFTWARE\\ExampleKey",
"PsDscRunAsCredential_password": null,
"PsDscRunAsCredential_username": null,
"ValueData": [
"TestData"
],
"ValueName": "TestValue",
"ValueType": null,
"module_version": "latest",
"resource_name": "Registry"
}
},
"module_version": "1.1",
"reboot_required": false,
"verbose_set": [
"Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ResourceSet,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.",
"An LCM method call arrived from computer SERVER2016 with user sid S-1-5-21-3088887838-4058132883-1884671576-1105.",
"[SERVER2016]: LCM: [ Start Set ] [[Registry]DirectResourceAccess]",
"[SERVER2016]: [[Registry]DirectResourceAccess] (SET) Create registry key 'HKLM:\\SOFTWARE\\ExampleKey'",
"[SERVER2016]: [[Registry]DirectResourceAccess] (SET) Set registry key value 'HKLM:\\SOFTWARE\\ExampleKey\\TestValue' to 'TestData' of type 'String'",
"[SERVER2016]: LCM: [ End Set ] [[Registry]DirectResourceAccess] in 0.1930 seconds.",
"[SERVER2016]: LCM: [ End Set ] in 0.2720 seconds.",
"Operation 'Invoke CimMethod' complete.",
"Time taken for configuration job to complete is 0.402 seconds"
],
"verbose_test": [
"Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ResourceTest,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.",
"An LCM method call arrived from computer SERVER2016 with user sid S-1-5-21-3088887838-4058132883-1884671576-1105.",
"[SERVER2016]: LCM: [ Start Test ] [[Registry]DirectResourceAccess]",
"[SERVER2016]: [[Registry]DirectResourceAccess] Registry key 'HKLM:\\SOFTWARE\\ExampleKey' does not exist",
"[SERVER2016]: LCM: [ End Test ] [[Registry]DirectResourceAccess] False in 0.2510 seconds.",
"[SERVER2016]: LCM: [ End Set ] in 0.3310 seconds.",
"Operation 'Invoke CimMethod' complete.",
"Time taken for configuration job to complete is 0.475 seconds"
]
}
```
The `invocation.module_args` key shows the actual values that were set as well as other possible values that were not set. Unfortunately, this will not show the default value for a DSC property, only what was set from the Ansible task. Any `*_password` option will be masked in the output for security reasons. If there are any other sensitive module options, set `no_log: True` on the task to stop all task output from being logged.
### Property Types
Each DSC resource property has a type associated with it. Ansible will try to convert the defined options to the correct type during execution. For simple types like `[string]` and `[bool]` this is a simple operation, but complex types like `[PSCredential]` and arrays (like `[string[]]`) require certain rules.
#### PSCredential
A `[PSCredential]` object is used to store credentials in a secure way, but Ansible has no way to serialize this over JSON. To set a DSC PSCredential property, the definition of that parameter should have two entries that are suffixed with `_username` and `_password` for the username and password respectively. For example:
```
PsDscRunAsCredential_username: '{{ ansible_user }}'
PsDscRunAsCredential_password: '{{ ansible_password }}'
SourceCredential_username: AdminUser
SourceCredential_password: PasswordForAdminUser
```
Note
On versions of Ansible older than 2.8, you should set `no_log: yes` on the task definition in Ansible to ensure any credentials used are not stored in any log file or console output.
A `[PSCredential]` is defined with `EmbeddedInstance("MSFT_Credential")` in a DSC resource MOF definition.
#### CimInstance Type
A `[CimInstance]` object is used by DSC to store a dictionary object based on a custom class defined by that resource. Defining a value that takes in a `[CimInstance]` in YAML is the same as defining a dictionary in YAML. For example, to define a `[CimInstance]` value in Ansible:
```
# [CimInstance]AuthenticationInfo == MSFT_xWebAuthenticationInformation
AuthenticationInfo:
Anonymous: no
Basic: yes
Digest: no
Windows: yes
```
In the above example, the CIM instance is a representation of the class [MSFT\_xWebAuthenticationInformation](https://github.com/dsccommunity/xWebAdministration/blob/master/source/DSCResources/MSFT_xWebSite/MSFT_xWebSite.schema.mof). This class accepts four boolean variables, `Anonymous`, `Basic`, `Digest`, and `Windows`. The keys to use in a `[CimInstance]` depend on the class it represents. Please read through the documentation of the resource to determine the keys that can be used and the types of each key value. The class definition is typically located in the `<resource name>.schema.mof`.
#### HashTable Type
A `[HashTable]` object is also a dictionary but does not have a strict set of keys that can or need to be defined. Like a `[CimInstance]`, define it like a normal dictionary value in YAML. A `[HashTable]` is defined with `EmbeddedInstance("MSFT_KeyValuePair")` in a DSC resource MOF definition.
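For example, a hypothetical `[HashTable]` property could be set with arbitrary key-value pairs (the property and key names here are illustrative):
```
# [HashTable] == MSFT_KeyValuePair (property name is illustrative)
ExampleHashTable:
  Key1: Value1
  Key2: Value2
```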
#### Arrays
Simple type arrays like `[string[]]` or `[UInt32[]]` are defined as a list or as a comma-separated string, which is then cast to the correct type. Using a list is recommended because the values are not manually parsed by the `win_dsc` module before being passed to the DSC engine. For example, to define a simple type array in Ansible:
```
# [string[]]
ValueData: entry1, entry2, entry3
ValueData:
- entry1
- entry2
- entry3
# [UInt32[]]
ReturnCode: 0,3010
ReturnCode:
- 0
- 3010
```
Complex type arrays like `[CimInstance[]]` (array of dicts), can be defined like this example:
```
# [CimInstance[]]BindingInfo == MSFT_xWebBindingInformation
BindingInfo:
- Protocol: https
Port: 443
CertificateStoreName: My
CertificateThumbprint: C676A89018C4D5902353545343634F35E6B3A659
HostName: DSCTest
IPAddress: '*'
SSLFlags: 1
- Protocol: http
Port: 80
IPAddress: '*'
```
The above example is an array with two values of the class [MSFT\_xWebBindingInformation](https://github.com/dsccommunity/xWebAdministration/blob/master/source/DSCResources/MSFT_xWebSite/MSFT_xWebSite.schema.mof). When defining a `[CimInstance[]]`, be sure to read the resource documentation to find out what keys to use in the definition.
#### DateTime
A `[DateTime]` object is a DateTime string representing the date and time in the [ISO 8601](https://www.w3.org/TR/NOTE-datetime) date time format. The value for a `[DateTime]` field should be quoted in YAML to ensure the string is properly serialized to the Windows host. Here is an example of how to define a `[DateTime]` value in Ansible:
```
# As UTC-0 (No timezone)
DateTime: '2019-02-22T13:57:31.2311892+00:00'
# As UTC+4
DateTime: '2019-02-22T17:57:31.2311892+04:00'
# As UTC-4
DateTime: '2019-02-22T09:57:31.2311892-04:00'
```
All the values above are equal to a UTC date time of February 22nd, 2019 at 1:57pm, 31 seconds, and 0.2311892 fractional seconds.
### Run As Another User
By default, DSC runs each resource as the SYSTEM account and not the account that Ansible uses to run the module. This means that resources that are dynamically loaded based on a user profile, like the `HKEY_CURRENT_USER` registry hive, will be loaded under the `SYSTEM` profile. The parameter `PsDscRunAsCredential` can be set for every DSC resource to force the DSC engine to run under a different account. As `PsDscRunAsCredential` has a type of `PSCredential`, it is defined with the `_username` and `_password` suffixes.
Using the Registry resource type as an example, this is how to define a task to access the `HKEY_CURRENT_USER` hive of the Ansible user:
```
- name: Use win_dsc with PsDscRunAsCredential to run as a different user
win_dsc:
resource_name: Registry
Ensure: Present
Key: HKEY_CURRENT_USER\ExampleKey
ValueName: TestValue
ValueData: TestData
PsDscRunAsCredential_username: '{{ ansible_user }}'
PsDscRunAsCredential_password: '{{ ansible_password }}'
no_log: yes
```
Custom DSC Resources
--------------------
DSC resources are not limited to the built-in options from Microsoft. Custom modules can be installed to manage other resources that are not usually available.
### Finding Custom DSC Resources
You can use the [PSGallery](https://www.powershellgallery.com/) to find custom resources, along with documentation on how to install them on a Windows host.
The `Find-DscResource` cmdlet can also be used to find custom resources. For example:
```
# Find all DSC resources in the configured repositories
Find-DscResource
# Find all DSC resources that relate to SQL
Find-DscResource -ModuleName "*sql*"
```
Note
DSC resources developed by Microsoft whose names start with `x` are experimental and come with no support.
### Installing a Custom Resource
There are three ways that a DSC resource can be installed on a host:
* Manually with the `Install-Module` cmdlet
* Using the `win_psmodule` Ansible module
* Saving the module manually and copying it to another host
This is an example of installing the `xWebAdministration` resources using `win_psmodule`:
```
- name: Install xWebAdministration DSC resource
win_psmodule:
name: xWebAdministration
state: present
```
Once installed, the win\_dsc module will be able to use the resource by referencing it with the `resource_name` option.
The first two methods above only work when the host has access to the internet. When a host does not have internet access, the module must first be installed using the methods above on another host with internet access and then copied across. To save a module to a local filepath, the following PowerShell cmdlet can be run:
```
Save-Module -Name xWebAdministration -Path C:\temp
```
This will create a folder called `xWebAdministration` in `C:\temp`, which can be copied to any host. For PowerShell to see this offline resource, it must be copied to a directory set in the `PSModulePath` environment variable. In most cases the path `C:\Program Files\WindowsPowerShell\Modules` is set through this variable, but the `win_path` module can be used to add different paths.
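For example, a hedged sketch that uses `win_path` to add the save location above to the machine-level `PSModulePath` (the directory is illustrative):
```
- name: Add C:\temp to the machine-level PSModulePath
  win_path:
    name: PSModulePath
    elements: C:\temp
    scope: machine
    state: present
```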
Examples
--------
### Extract a zip file
```
- name: Extract a zip file
win_dsc:
resource_name: Archive
Destination: C:\temp\output
Path: C:\temp\zip.zip
Ensure: Present
```
### Create a directory
```
- name: Create file with some text
win_dsc:
resource_name: File
DestinationPath: C:\temp\file
Contents: |
Hello
World
Ensure: Present
Type: File
- name: Create directory that is hidden is set with the System attribute
win_dsc:
resource_name: File
DestinationPath: C:\temp\hidden-directory
Attributes: Hidden,System
Ensure: Present
Type: Directory
```
### Interact with Azure
```
- name: Install xAzure DSC resources
win_psmodule:
name: xAzure
state: present
- name: Create virtual machine in Azure
win_dsc:
resource_name: xAzureVM
ImageName: a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201409.01-en.us-127GB.vhd
Name: DSCHOST01
ServiceName: ServiceName
StorageAccountName: StorageAccountName
InstanceSize: Medium
Windows: yes
Ensure: Present
Credential_username: '{{ ansible_user }}'
Credential_password: '{{ ansible_password }}'
```
### Setup IIS Website
```
- name: Install xWebAdministration module
win_psmodule:
name: xWebAdministration
state: present
- name: Install IIS features that are required
win_dsc:
resource_name: WindowsFeature
Name: '{{ item }}'
Ensure: Present
loop:
- Web-Server
- Web-Asp-Net45
- name: Setup web content
win_dsc:
resource_name: File
DestinationPath: C:\inetpub\IISSite\index.html
Type: File
Contents: |
<html>
<head><title>IIS Site</title></head>
<body>This is the body</body>
</html>
Ensure: present
- name: Create new website
win_dsc:
resource_name: xWebsite
Name: NewIISSite
State: Started
PhysicalPath: C:\inetpub\IISSite\index.html
BindingInfo:
- Protocol: https
Port: 8443
CertificateStoreName: My
CertificateThumbprint: C676A89018C4D5902353545343634F35E6B3A659
HostName: DSCTest
IPAddress: '*'
SSLFlags: 1
- Protocol: http
Port: 8080
IPAddress: '*'
AuthenticationInfo:
Anonymous: no
Basic: yes
Digest: no
Windows: yes
```
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[List of Windows Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_windows_modules.html#windows-modules "(in Ansible v2.9)")
Windows specific module list, all implemented in PowerShell
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Working with dynamic inventory Working with dynamic inventory
==============================
* [Inventory script example: Cobbler](#inventory-script-example-cobbler)
* [Inventory script example: OpenStack](#inventory-script-example-openstack)
+ [Explicit use of OpenStack inventory script](#explicit-use-of-openstack-inventory-script)
+ [Implicit use of OpenStack inventory script](#implicit-use-of-openstack-inventory-script)
+ [Refreshing the cache](#refreshing-the-cache)
* [Other inventory scripts](#other-inventory-scripts)
* [Using inventory directories and multiple inventory sources](#using-inventory-directories-and-multiple-inventory-sources)
* [Static groups of dynamic groups](#static-groups-of-dynamic-groups)
If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in [How to build your inventory](intro_inventory#inventory) will not serve your needs. You may need to track hosts from multiple sources: cloud providers, LDAP, [Cobbler](https://cobbler.github.io), and/or enterprise CMDB systems.
Ansible integrates all of these options through a dynamic external inventory system. Ansible supports two ways to connect with external inventory: [Inventory Plugins](../plugins/inventory#inventory-plugins) and `inventory scripts`.
Inventory plugins take advantage of the most recent updates to the Ansible core code. We recommend plugins over scripts for dynamic inventory. You can [write your own plugin](https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html#developing-inventory) to connect to additional dynamic inventory sources.
You can still use inventory scripts if you choose. When we implemented inventory plugins, we ensured backwards compatibility through the script inventory plugin. The examples below illustrate how to use inventory scripts.
If you prefer a GUI for handling dynamic inventory, the inventory database on AWX or [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform) syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor. With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs.
Inventory script example: Cobbler
---------------------------------
Ansible integrates seamlessly with [Cobbler](https://cobbler.github.io), a Linux installation server originally written by Michael DeHaan and now led by James Cammarata, who works for Ansible.
While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that can represent data for multiple configuration management systems (even at the same time) and serve as a ‘lightweight CMDB’.
To tie your Ansible inventory to Cobbler, copy [this script](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/cobbler.py) to `/etc/ansible` and `chmod +x` the file. Run `cobblerd` any time you use Ansible and use the `-i` command line option (for example, `-i /etc/ansible/cobbler.py`) to communicate with Cobbler using Cobbler’s XMLRPC API.
Add a `cobbler.ini` file in `/etc/ansible` so Ansible knows where the Cobbler server is and some cache improvements can be used. For example:
```
[cobbler]
# Set Cobbler's hostname or IP address
host = http://127.0.0.1/cobbler_api
# API calls to Cobbler can be slow. For this reason, we cache the results of an API
# call. Set this to the path you want cache files to be written to. Two files
# will be written to this directory:
# - ansible-cobbler.cache
# - ansible-cobbler.index
cache_path = /tmp
# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
cache_max_age = 900
```
First test the script by running `/etc/ansible/cobbler.py` directly. You should see some JSON data output, but it may not have anything in it just yet.
Let’s explore what this does. In Cobbler, assume a scenario somewhat like the following:
```
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
```
In the example above, the system ‘foo.example.com’ is addressable by Ansible directly, but is also addressable when using the group names ‘webserver’ or ‘atlanta’. Since Ansible uses SSH, it contacts system ‘foo’ over ‘foo.example.com’ only, never just ‘foo’. Similarly, if you tried “ansible foo”, it would not find the system… but “ansible ‘foo\*’” would, because the system’s DNS name starts with ‘foo’.
The script provides more than host and group info. In addition, as a bonus, when the ‘setup’ module is run (which happens automatically when using playbooks), the variables ‘a’, ‘b’, and ‘c’ will all be auto-populated in the templates:
```
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
```
Which could be executed just like this:
```
ansible webserver -m setup
ansible webserver -m template -a "src=/tmp/motd.j2 dest=/etc/motd"
```
Note
The name ‘webserver’ came from Cobbler, as did the variables for the config file. You can still pass in your own variables like normal in Ansible, but variables from the external inventory script will override any that have the same name.
So, with the template above (`motd.j2`), this results in the following data being written to `/etc/motd` for system ‘foo’:
```
Welcome, I am templated with a value of a=2, b=3, and c=4
```
And on system ‘bar’ (bar.example.com):
```
Welcome, I am templated with a value of a=2, b=3, and c=5
```
And technically, though there is no major good reason to do it, this also works:
```
ansible webserver -m ansible.builtin.shell -a "echo {{ a }}"
```
So, in other words, you can use those variables in arguments/actions as well.
Inventory script example: OpenStack
-----------------------------------
If you use an OpenStack-based cloud, instead of manually maintaining your own inventory file, you can use the `openstack_inventory.py` dynamic inventory to pull information about your compute instances directly from OpenStack.
You can download the latest version of the OpenStack inventory script [here](https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py).
You can use the inventory script explicitly (by passing the `-i openstack_inventory.py` argument to Ansible) or implicitly (by placing the script at `/etc/ansible/hosts`).
### Explicit use of OpenStack inventory script
Download the latest version of the OpenStack dynamic inventory script and make it executable:
```
wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py
chmod +x openstack_inventory.py
```
Note
Do not name it `openstack.py`. This name will conflict with imports from openstacksdk.
Source an OpenStack RC file:
```
source openstack.rc
```
Note
An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to [Set environment variables using the OpenStack RC file](https://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html).
You can confirm the file has been successfully sourced by running a simple command, such as `nova list` and ensuring it returns no errors.
Note
The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to [Install the OpenStack command-line clients](https://docs.openstack.org/user-guide/common/cli_install_openstack_command_line_clients.html).
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected:
```
./openstack_inventory.py --list
```
After a few moments you should see some JSON output with information about your compute instances.
Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack_inventory.py` script as an inventory file, as illustrated below:
```
ansible -i openstack_inventory.py all -m ansible.builtin.ping
```
### Implicit use of OpenStack inventory script
Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`:
```
wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack_inventory.py
chmod +x openstack_inventory.py
sudo cp openstack_inventory.py /etc/ansible/hosts
```
Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`:
```
wget https://raw.githubusercontent.com/openstack/ansible-collections-openstack/master/scripts/inventory/openstack.yml
vi openstack.yml
sudo cp openstack.yml /etc/ansible/
```
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected:
```
/etc/ansible/hosts --list
```
After a few moments you should see some JSON output with information about your compute instances.
### Refreshing the cache
Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack\_inventory.py (or hosts) script with the `--refresh` parameter:
```
./openstack_inventory.py --refresh --list
```
Other inventory scripts
-----------------------
In Ansible 2.10 and later, inventory scripts moved to their associated collections. Many are now in the [community.general scripts/inventory directory](https://github.com/ansible-collections/community.general/tree/main/scripts/inventory). We recommend you use [Inventory Plugins](../plugins/inventory#inventory-plugins) instead.
Using inventory directories and multiple inventory sources
----------------------------------------------------------
If the location given to `-i` in Ansible is a directory (or is so configured in `ansible.cfg`), Ansible can use multiple inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory sources in the same ansible run. Instant hybrid cloud!
In an inventory directory, executable files are treated as dynamic inventory sources and most other files as static sources. Files which end with any of the following are ignored:
```
~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo
```
You can replace this list with your own selection by configuring an `inventory_ignore_extensions` list in `ansible.cfg`, or setting the [`ANSIBLE_INVENTORY_IGNORE`](../reference_appendices/config#envvar-ANSIBLE_INVENTORY_IGNORE) environment variable. The value in either case must be a comma-separated list of patterns, as shown above.
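For example, a sketch of an `ansible.cfg` entry that replaces the default list (the extra `.md` pattern is illustrative):
```
[defaults]
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .md
```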
Any `group_vars` and `host_vars` subdirectories in an inventory directory are interpreted as expected, making inventory directories a powerful way to organize different sets of configurations. See [Using multiple inventory sources](intro_inventory#using-multiple-inventory-sources) for more information.
Static groups of dynamic groups
-------------------------------
When defining groups of groups in the static inventory file, the child groups must also be defined in the static inventory file, otherwise ansible returns an error. If you want to define a static group of dynamic child groups, define the dynamic groups as empty in the static inventory file. For example:
```
[tag_Name_staging_foo]
[tag_Name_staging_bar]
[staging:children]
tag_Name_staging_foo
tag_Name_staging_bar
```
See also
[How to build your inventory](intro_inventory#intro-inventory)
All about static inventory files
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Getting Started Getting Started
===============
Now that you have read the [installation guide](../installation_guide/intro_installation#installation-guide) and installed Ansible on a control node, you are ready to learn how Ansible works. A basic Ansible command or playbook:
* selects machines to execute against from inventory
* connects to those machines (or network devices, or other managed nodes), usually over SSH
* copies one or more modules to the remote machines and starts execution there
Ansible can do much more, but you should understand the most common use case before exploring all the powerful configuration, deployment, and orchestration features of Ansible. This page illustrates the basic process with a simple inventory and an ad hoc command. Once you understand how Ansible works, you can read more details about [ad hoc commands](intro_adhoc#intro-adhoc), organize your infrastructure with [inventory](intro_inventory#intro-inventory), and harness the full power of Ansible with [playbooks](playbooks_intro#playbooks-intro).
* [Selecting machines from inventory](#selecting-machines-from-inventory)
+ [Action: create a basic inventory](#action-create-a-basic-inventory)
+ [Beyond the basics](#beyond-the-basics)
* [Connecting to remote nodes](#connecting-to-remote-nodes)
+ [Action: check your SSH connections](#action-check-your-ssh-connections)
+ [Beyond the basics](#id1)
* [Copying and executing modules](#copying-and-executing-modules)
+ [Action: run your first Ansible commands](#action-run-your-first-ansible-commands)
+ [Action: Run your first playbook](#action-run-your-first-playbook)
+ [Beyond the basics](#id2)
* [Resources](#resources)
* [Next steps](#next-steps)
Selecting machines from inventory
---------------------------------
Ansible reads information about which machines you want to manage from your inventory. Although you can pass an IP address to an ad hoc command, you need inventory to take advantage of the full flexibility and repeatability of Ansible.
### Action: create a basic inventory
For this basic inventory, edit (or create) `/etc/ansible/hosts` and add a few remote systems to it. For this example, use either IP addresses or FQDNs:
```
192.0.2.50
aserver.example.org
bserver.example.org
```
### Beyond the basics
Your inventory can store much more than IPs and FQDNs. You can create [aliases](intro_inventory#inventory-aliases), set variable values for a single host with [host vars](intro_inventory#host-variables), or set variable values for multiple hosts with [group vars](intro_inventory#group-variables).
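As a sketch (all names and values are illustrative), an INI-style inventory combining an alias, host variables, and a group variable might look like this:
```
web1 ansible_host=192.0.2.50 ansible_port=2222

[atlanta]
aserver.example.org
bserver.example.org

[atlanta:vars]
ntp_server=ntp.atlanta.example.org
```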
Connecting to remote nodes
--------------------------
Ansible communicates with remote machines over the [SSH protocol](https://www.ssh.com/ssh/protocol/). By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
### Action: check your SSH connections
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the `authorized_keys` file on those systems.
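Assuming standard OpenSSH tooling is available on the control node, one way to do this (the host name is illustrative) is:
```
ssh-copy-id aserver.example.org
```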
### Beyond the basics
You can override the default remote user name in several ways, including:
* passing the `-u` parameter at the command line
* setting user information in your inventory file
* setting user information in your configuration file
* setting environment variables
See [Controlling how Ansible behaves: precedence rules](../reference_appendices/general_precedence#general-precedence-rules) for details on the (sometimes unintuitive) precedence of each method of passing user information. You can read more about connections in [Connection methods and details](connection_details#connections).
Copying and executing modules
-----------------------------
Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.
### Action: run your first Ansible commands
Use the ping module to ping all the nodes in your inventory:
```
$ ansible all -m ping
```
You should see output for each host in your inventory, similar to this:
```
aserver.example.org | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
```
You can use `-u` to specify the user to connect as; by default, Ansible uses SSH and connects as the current user.
Now run a live command on all of your nodes:
```
$ ansible all -a "/bin/echo hello"
```
You should see output for each host in your inventory, similar to this:
```
aserver.example.org | CHANGED | rc=0 >>
hello
```
### Action: Run your first playbook
Playbooks are used to pull together tasks into reusable units.
Ansible does not store playbooks for you; they are simply YAML documents that you store and manage, passing them to Ansible to run as needed.
In a directory of your choice you can create your first playbook in a file called mytask.yml:
```
---
- name: My task
hosts: all
tasks:
- name: Leaving a mark
command: "touch /tmp/ansible_was_here"
```
You can run this command as follows:
```
$ ansible-playbook mytask.yml
```
and you may see output like this:
```
PLAY [My task] **************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************
ok: [aserver.example.org]
ok: [bserver.example.org]
fatal: [192.0.2.50]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.0.2.50 port 22: No route to host", "unreachable": true}
TASK [Leaving a mark] *******************************************************************************************************************
[WARNING]: Consider using the file module with state=touch rather than running 'touch'. If you need to use command because file is
insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [aserver.example.org]
changed: [bserver.example.org]
PLAY RECAP ******************************************************************************************************************************
aserver.example.org : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
bserver.example.org : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.0.2.50 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
Read on to learn more about controlling which nodes your playbooks execute on, more sophisticated tasks, and the meaning of the output.
### Beyond the basics
By default Ansible uses SFTP to transfer files. If the machine or device you want to manage does not support SFTP, you can switch to SCP mode in [Configuring Ansible](../installation_guide/intro_configuration#intro-configuration). The files are placed in a temporary directory and executed from there.
If you need privilege escalation (sudo and similar) to run a command, pass the `become` flags:
```
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root (sudo is default method)
$ ansible all -m ping -u bruce --become
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --become --become-user batman
```
You can read more about privilege escalation in [Understanding privilege escalation: become](become#become).
Congratulations! You have contacted your nodes using Ansible. You used a basic inventory file and an ad hoc command to direct Ansible to connect to specific remote nodes, copy a module file there and execute it, and return output. You have a fully working infrastructure.
Resources
---------
* [Product Demos](https://github.com/ansible/product-demos)
* [Katacoda](https://katacoda.com/rhel-labs)
* [Workshops](https://github.com/ansible/workshops)
* [Ansible Examples](https://github.com/ansible/ansible-examples)
* [Ansible Baseline](https://github.com/ansible/ansible-baseline)
Next steps
----------
Next you can read about more real-world cases in [Introduction to ad hoc commands](intro_adhoc#intro-adhoc), explore what you can do with different modules, or read about the Ansible [Working with playbooks](playbooks#working-with-playbooks) language. Ansible is not just about running commands, it also has powerful configuration management and deployment features.
See also
[How to build your inventory](intro_inventory#intro-inventory)
More information about inventory
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of basic commands
[Working with playbooks](playbooks#working-with-playbooks)
Learning Ansible’s configuration management language
[Ansible Demos](https://github.com/ansible/product-demos)
Demonstrations of different Ansible usecases
[RHEL Labs](https://katacoda.com/rhel-labs)
Labs to provide further knowledge on different topics
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Introduction to ad hoc commands Introduction to ad hoc commands
===============================
An Ansible ad hoc command uses the `/usr/bin/ansible` command-line tool to automate a single task on one or more managed nodes. ad hoc commands are quick and easy, but they are not reusable. So why learn about ad hoc commands first? ad hoc commands demonstrate the simplicity and power of Ansible. The concepts you learn here will port over directly to the playbook language. Before reading and executing these examples, please read [How to build your inventory](intro_inventory#intro-inventory).
* [Why use ad hoc commands?](#why-use-ad-hoc-commands)
* [Use cases for ad hoc tasks](#use-cases-for-ad-hoc-tasks)
+ [Rebooting servers](#rebooting-servers)
+ [Managing files](#managing-files)
+ [Managing packages](#managing-packages)
+ [Managing users and groups](#managing-users-and-groups)
+ [Managing services](#managing-services)
+ [Gathering facts](#gathering-facts)
Why use ad hoc commands?
------------------------
ad hoc commands are great for tasks you repeat rarely. For example, if you want to power off all the machines in your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook. An ad hoc command looks like this:
```
$ ansible [pattern] -m [module] -a "[module options]"
```
You can learn more about [patterns](intro_patterns#intro-patterns) and [modules](modules#working-with-modules) on other pages.
Use cases for ad hoc tasks
--------------------------
ad hoc tasks can be used to reboot servers, copy files, manage packages and users, and much more. You can use any Ansible module in an ad hoc task. ad hoc tasks, like playbooks, use a declarative model, calculating and executing the actions required to reach a specified final state. They achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.
### Rebooting servers
The default module for the `ansible` command-line utility is the [ansible.builtin.command module](../collections/ansible/builtin/command_module#command-module). You can use an ad hoc task to call the command module and reboot all web servers in Atlanta, 10 at a time. Before Ansible can do this, you must have all servers in Atlanta listed in a group called [atlanta] in your inventory, and you must have working SSH credentials for each machine in that group. To reboot all the servers in the [atlanta] group:
```
$ ansible atlanta -a "/sbin/reboot"
```
By default Ansible uses only 5 simultaneous processes. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. To reboot the [atlanta] servers with 10 parallel forks:
```
$ ansible atlanta -a "/sbin/reboot" -f 10
```
/usr/bin/ansible will default to running from your user account. To connect as a different user:
```
$ ansible atlanta -a "/sbin/reboot" -f 10 -u username
```
Rebooting probably requires privilege escalation. You can connect to the server as `username` and run the command as the `root` user by using the [become](become#become) keyword:
```
$ ansible atlanta -a "/sbin/reboot" -f 10 -u username --become [--ask-become-pass]
```
If you add `--ask-become-pass` or `-K`, Ansible prompts you for the password to use for privilege escalation (sudo/su/pfexec/doas/etc).
Note
The [command module](../collections/ansible/builtin/command_module#command-module) does not support extended shell syntax like piping and redirects (although shell variables will always work). If your command requires shell-specific syntax, use the `shell` module instead. Read more about the differences on the [Working With Modules](modules#working-with-modules) page.
So far all our examples have used the default ‘command’ module. To use a different module, pass `-m` for module name. For example, to use the [ansible.builtin.shell module](../collections/ansible/builtin/shell_module#shell-module):
```
$ ansible raleigh -m ansible.builtin.shell -a 'echo $TERM'
```
When running any command with the Ansible *ad hoc* CLI (as opposed to [Playbooks](playbooks#working-with-playbooks)), pay particular attention to shell quoting rules, so the local shell retains the variable and passes it to Ansible. For example, using double rather than single quotes in the above example would evaluate the variable on the box you were on.
### Managing files
An ad hoc task can harness the power of Ansible and SCP to transfer many files to multiple machines in parallel. To transfer a file directly to all servers in the [atlanta] group:
```
$ ansible atlanta -m ansible.builtin.copy -a "src=/etc/hosts dest=/tmp/hosts"
```
If you plan to repeat a task like this, use the [ansible.builtin.template](../collections/ansible/builtin/template_module#template-module) module in a playbook.
The [ansible.builtin.file](../collections/ansible/builtin/file_module#file-module) module allows changing ownership and permissions on files. These same options can be passed directly to the `copy` module as well:
```
$ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
```
The `file` module can also create directories, similar to `mkdir -p`:
```
$ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
```
As well as delete directories (recursively) and delete files:
```
$ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c state=absent"
```
### Managing packages
You might also use an ad hoc task to install, update, or remove packages on managed nodes using a package management module like yum. To ensure a package is installed without updating it:
```
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=present"
```
To ensure a specific version of a package is installed:
```
$ ansible webservers -m ansible.builtin.yum -a "name=acme-1.5 state=present"
```
To ensure a package is at the latest version:
```
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=latest"
```
To ensure a package is not installed:
```
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=absent"
```
Ansible has modules for managing packages under many platforms. If there is no module for your package manager, you can install packages using the command module or create a module for your package manager.
### Managing users and groups
You can create, manage, and remove user accounts on your managed nodes with ad hoc tasks:
```
$ ansible all -m ansible.builtin.user -a "name=foo password=<crypted password here>"
$ ansible all -m ansible.builtin.user -a "name=foo state=absent"
```
See the [ansible.builtin.user](../collections/ansible/builtin/user_module#user-module) module documentation for details on all of the available options, including how to manipulate groups and group membership.
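For example, to add a user to a supplementary group without removing existing memberships (the group name is illustrative):
```
$ ansible all -m ansible.builtin.user -a "name=foo groups=wheel append=yes"
```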
### Managing services
Ensure a service is started on all webservers:
```
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=started"
```
Alternatively, restart a service on all webservers:
```
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=restarted"
```
Ensure a service is stopped:
```
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=stopped"
```
### Gathering facts
Facts represent discovered variables about a system. You can use facts to implement conditional execution of tasks but also just to get ad hoc information about your systems. To see all facts:
```
$ ansible all -m ansible.builtin.setup
```
You can also filter this output to display only certain facts, see the [ansible.builtin.setup](../collections/ansible/builtin/setup_module#setup-module) module documentation for details.
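For example, to display only memory-related facts (the filter pattern is illustrative):
```
$ ansible all -m ansible.builtin.setup -a "filter=ansible_*_mb"
```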
Now that you understand the basic elements of Ansible execution, you are ready to learn to automate repetitive tasks using [Ansible Playbooks](playbooks_intro#playbooks-intro).
See also
[Configuring Ansible](../installation_guide/intro_configuration#intro-configuration)
All about the Ansible config file
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Working with playbooks](playbooks#working-with-playbooks)
Using Ansible for configuration management & deployment
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible User Guide User Guide
==========
Note
**Making Open Source More Inclusive**
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. We ask that you open an issue or pull request if you come upon a term that we have missed. For more details, see [our CTO Chris Wright’s message](https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language).
Welcome to the Ansible User Guide! This guide covers how to work with Ansible, including using the command line, working with inventory, interacting with data, writing tasks, plays, and playbooks; executing playbooks, and reference materials. This page outlines the most common situations and questions that bring readers to this section. If you prefer a traditional table of contents, you can find one at the bottom of the page.
Getting started
---------------
* I’d like an overview of how Ansible works. Where can I find:
+ a [quick video overview](https://docs.ansible.com/ansible/latest/user_guide/quickstart.html#quickstart-guide)
+ a [text introduction](intro_getting_started#intro-getting-started)
* I’m ready to learn about Ansible. What [Ansible concepts](basic_concepts#basic-concepts) do I need to learn?
* I want to use Ansible without writing a playbook. How do I use [ad hoc commands](intro_adhoc#intro-adhoc)?
Writing tasks, plays, and playbooks
-----------------------------------
* I’m writing my first playbook. What should I [know before I begin](playbooks_best_practices#playbooks-tips-and-tricks)?
* I have a specific use case for a task or play:
+ Executing tasks with elevated privileges or as a different user with [become](become#become)
+ Repeating a task once for each item in a list with [loops](playbooks_loops#playbooks-loops)
+ Executing tasks on a different machine with [delegation](playbooks_delegation#playbooks-delegation)
+ Running tasks only when certain conditions apply with [conditionals](playbooks_conditionals#playbooks-conditionals) and evaluating conditions with [tests](playbooks_tests#playbooks-tests)
+ Grouping a set of tasks together with [blocks](playbooks_blocks#playbooks-blocks)
+ Running tasks only when something has changed with [handlers](playbooks_handlers#handlers)
+ Changing the way Ansible [handles failures](playbooks_error_handling#playbooks-error-handling)
+ Setting remote [environment values](playbooks_environment#playbooks-environment)
* I want to leverage the power of re-usable Ansible artifacts. How do I create re-usable [files](playbooks_reuse#playbooks-reuse) and [roles](playbooks_reuse_roles#playbooks-reuse-roles)?
* I need to incorporate one file or playbook inside another. What is the difference between [including and importing](playbooks_reuse#dynamic-vs-static)?
* I want to run selected parts of my playbook. How do I add and use [tags](playbooks_tags#tags)?
Working with inventory
----------------------
* I have a list of servers and devices I want to automate. How do I create [inventory](intro_inventory#intro-inventory) to track them?
* I use cloud services and constantly have servers and devices starting and stopping. How do I track them using [dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory)?
* I want to automate specific sub-sets of my inventory. How do I use [patterns](intro_patterns#intro-patterns)?
Interacting with data
---------------------
* I want to use a single playbook against multiple systems with different attributes. How do I use [variables](playbooks_variables#playbooks-variables) to handle the differences?
* I want to retrieve data about my systems. How do I access [Ansible facts](playbooks_vars_facts#vars-and-facts)?
* I need to access sensitive data like passwords with Ansible. How can I protect that data with [Ansible vault](vault#vault)?
* I want to change the data I have, so I can use it in a task. How do I use [filters](playbooks_filters#playbooks-filters) to transform my data?
* I need to retrieve data from an external datastore. How do I use [lookups](playbooks_lookups#playbooks-lookups) to access databases and APIs?
* I want to ask playbook users to supply data. How do I get user input with [prompts](playbooks_prompts#playbooks-prompts)?
* I use certain modules frequently. How do I streamline my inventory and playbooks by [setting default values for module parameters](playbooks_module_defaults#module-defaults)?
Executing playbooks
-------------------
Once your playbook is ready to run, you may need to use these topics:
* Executing “dry run” playbooks with [check mode and diff](playbooks_checkmode#check-mode-dry)
* Running playbooks while troubleshooting with [start and step](playbooks_startnstep#playbooks-start-and-step)
* Correcting tasks during execution with the [Ansible debugger](playbooks_debugger#playbook-debugger)
* Controlling how my playbook executes with [strategies and more](playbooks_strategies#playbooks-strategies)
* Running tasks, plays, and playbooks [asynchronously](playbooks_async#playbooks-async)
Advanced features and reference
-------------------------------
* Using [advanced syntax](playbooks_advanced_syntax#playbooks-advanced-syntax)
* Manipulating [complex data](complex_data_manipulation#complex-data-manipulation)
* Using [plugins](../plugins/plugins#plugins-lookup)
* Using [playbook keywords](../reference_appendices/playbooks_keywords#playbook-keywords)
* Using [command-line tools](command_line_tools#command-line-tools)
* Rejecting [specific modules](plugin_filtering_config#plugin-filtering-config)
* Module [maintenance](modules_support#modules-support)
Traditional Table of Contents
-----------------------------
If you prefer to read the entire User Guide, here’s a list of the pages in order:
* [Ansible Quickstart Guide](https://docs.ansible.com/ansible/latest/user_guide/quickstart.html)
* [Ansible concepts](basic_concepts)
+ [Control node](basic_concepts#control-node)
+ [Managed nodes](basic_concepts#managed-nodes)
+ [Inventory](basic_concepts#inventory)
+ [Collections](basic_concepts#collections)
+ [Modules](basic_concepts#modules)
+ [Tasks](basic_concepts#tasks)
+ [Playbooks](basic_concepts#playbooks)
* [Getting Started](intro_getting_started)
+ [Selecting machines from inventory](intro_getting_started#selecting-machines-from-inventory)
+ [Connecting to remote nodes](intro_getting_started#connecting-to-remote-nodes)
+ [Copying and executing modules](intro_getting_started#copying-and-executing-modules)
+ [Resources](intro_getting_started#resources)
+ [Next steps](intro_getting_started#next-steps)
* [Introduction to ad hoc commands](intro_adhoc)
+ [Why use ad hoc commands?](intro_adhoc#why-use-ad-hoc-commands)
+ [Use cases for ad hoc tasks](intro_adhoc#use-cases-for-ad-hoc-tasks)
* [Working with playbooks](playbooks)
+ [Templating (Jinja2)](playbooks_templating)
+ [Advanced playbooks features](playbooks_special_topics)
+ [Playbook Example: Continuous Delivery and Rolling Upgrades](guide_rolling_upgrade)
* [Intro to playbooks](playbooks_intro)
+ [Playbook syntax](playbooks_intro#playbook-syntax)
+ [Playbook execution](playbooks_intro#playbook-execution)
+ [Ansible-Pull](playbooks_intro#ansible-pull)
+ [Verifying playbooks](playbooks_intro#verifying-playbooks)
* [Tips and tricks](playbooks_best_practices)
+ [General tips](playbooks_best_practices#general-tips)
+ [Playbook tips](playbooks_best_practices#playbook-tips)
+ [Inventory tips](playbooks_best_practices#inventory-tips)
+ [Execution tricks](playbooks_best_practices#execution-tricks)
* [Understanding privilege escalation: become](become)
+ [Using become](become#using-become)
+ [Risks and limitations of become](become#risks-and-limitations-of-become)
+ [Become and network automation](become#become-and-network-automation)
+ [Become and Windows](become#become-and-windows)
* [Loops](playbooks_loops)
+ [Comparing `loop` and `with_*`](playbooks_loops#comparing-loop-and-with)
+ [Standard loops](playbooks_loops#standard-loops)
+ [Registering variables with a loop](playbooks_loops#registering-variables-with-a-loop)
+ [Complex loops](playbooks_loops#complex-loops)
+ [Ensuring list input for `loop`: using `query` rather than `lookup`](playbooks_loops#ensuring-list-input-for-loop-using-query-rather-than-lookup)
+ [Adding controls to loops](playbooks_loops#adding-controls-to-loops)
+ [Migrating from with\_X to loop](playbooks_loops#migrating-from-with-x-to-loop)
* [Controlling where tasks run: delegation and local actions](playbooks_delegation)
+ [Tasks that cannot be delegated](playbooks_delegation#tasks-that-cannot-be-delegated)
+ [Delegating tasks](playbooks_delegation#delegating-tasks)
+ [Delegating facts](playbooks_delegation#delegating-facts)
+ [Local playbooks](playbooks_delegation#local-playbooks)
* [Conditionals](playbooks_conditionals)
+ [Basic conditionals with `when`](playbooks_conditionals#basic-conditionals-with-when)
+ [Commonly-used facts](playbooks_conditionals#commonly-used-facts)
* [Tests](playbooks_tests)
+ [Test syntax](playbooks_tests#test-syntax)
+ [Testing strings](playbooks_tests#testing-strings)
+ [Vault](playbooks_tests#vault)
+ [Testing truthiness](playbooks_tests#testing-truthiness)
+ [Comparing versions](playbooks_tests#comparing-versions)
+ [Set theory tests](playbooks_tests#set-theory-tests)
+ [Testing if a list contains a value](playbooks_tests#testing-if-a-list-contains-a-value)
+ [Testing if a list value is True](playbooks_tests#testing-if-a-list-value-is-true)
+ [Testing paths](playbooks_tests#testing-paths)
+ [Testing size formats](playbooks_tests#testing-size-formats)
+ [Testing task results](playbooks_tests#testing-task-results)
* [Blocks](playbooks_blocks)
+ [Grouping tasks with blocks](playbooks_blocks#grouping-tasks-with-blocks)
+ [Handling errors with blocks](playbooks_blocks#handling-errors-with-blocks)
* [Handlers: running operations on change](playbooks_handlers)
+ [Handler example](playbooks_handlers#handler-example)
+ [Controlling when handlers run](playbooks_handlers#controlling-when-handlers-run)
+ [Using variables with handlers](playbooks_handlers#using-variables-with-handlers)
* [Error handling in playbooks](playbooks_error_handling)
+ [Ignoring failed commands](playbooks_error_handling#ignoring-failed-commands)
+ [Ignoring unreachable host errors](playbooks_error_handling#ignoring-unreachable-host-errors)
+ [Resetting unreachable hosts](playbooks_error_handling#resetting-unreachable-hosts)
+ [Handlers and failure](playbooks_error_handling#handlers-and-failure)
+ [Defining failure](playbooks_error_handling#defining-failure)
+ [Defining “changed”](playbooks_error_handling#defining-changed)
+ [Ensuring success for command and shell](playbooks_error_handling#ensuring-success-for-command-and-shell)
+ [Aborting a play on all hosts](playbooks_error_handling#aborting-a-play-on-all-hosts)
+ [Controlling errors in blocks](playbooks_error_handling#controlling-errors-in-blocks)
* [Setting the remote environment](playbooks_environment)
+ [Setting the remote environment in a task](playbooks_environment#setting-the-remote-environment-in-a-task)
+ [Working with language-specific version managers](playbooks_environment#working-with-language-specific-version-managers)
* [Re-using Ansible artifacts](playbooks_reuse)
+ [Creating re-usable files and roles](playbooks_reuse#creating-re-usable-files-and-roles)
+ [Re-using playbooks](playbooks_reuse#re-using-playbooks)
+ [Re-using files and roles](playbooks_reuse#re-using-files-and-roles)
+ [Re-using tasks as handlers](playbooks_reuse#re-using-tasks-as-handlers)
* [Roles](playbooks_reuse_roles)
+ [Role directory structure](playbooks_reuse_roles#role-directory-structure)
+ [Storing and finding roles](playbooks_reuse_roles#storing-and-finding-roles)
+ [Using roles](playbooks_reuse_roles#using-roles)
+ [Role argument validation](playbooks_reuse_roles#role-argument-validation)
+ [Running a role multiple times in one playbook](playbooks_reuse_roles#running-a-role-multiple-times-in-one-playbook)
+ [Using role dependencies](playbooks_reuse_roles#using-role-dependencies)
+ [Embedding modules and plugins in roles](playbooks_reuse_roles#embedding-modules-and-plugins-in-roles)
+ [Sharing roles: Ansible Galaxy](playbooks_reuse_roles#sharing-roles-ansible-galaxy)
* [Including and importing](playbooks_reuse_includes)
* [Tags](playbooks_tags)
+ [Adding tags with the tags keyword](playbooks_tags#adding-tags-with-the-tags-keyword)
+ [Special tags: always and never](playbooks_tags#special-tags-always-and-never)
+ [Selecting or skipping tags when you run a playbook](playbooks_tags#selecting-or-skipping-tags-when-you-run-a-playbook)
* [How to build your inventory](intro_inventory)
+ [Inventory basics: formats, hosts, and groups](intro_inventory#inventory-basics-formats-hosts-and-groups)
+ [Adding variables to inventory](intro_inventory#adding-variables-to-inventory)
+ [Assigning a variable to one machine: host variables](intro_inventory#assigning-a-variable-to-one-machine-host-variables)
+ [Assigning a variable to many machines: group variables](intro_inventory#assigning-a-variable-to-many-machines-group-variables)
+ [Organizing host and group variables](intro_inventory#organizing-host-and-group-variables)
+ [How variables are merged](intro_inventory#how-variables-are-merged)
+ [Using multiple inventory sources](intro_inventory#using-multiple-inventory-sources)
+ [Connecting to hosts: behavioral inventory parameters](intro_inventory#connecting-to-hosts-behavioral-inventory-parameters)
+ [Inventory setup examples](intro_inventory#inventory-setup-examples)
* [Working with dynamic inventory](intro_dynamic_inventory)
+ [Inventory script example: Cobbler](intro_dynamic_inventory#inventory-script-example-cobbler)
+ [Inventory script example: OpenStack](intro_dynamic_inventory#inventory-script-example-openstack)
+ [Other inventory scripts](intro_dynamic_inventory#other-inventory-scripts)
+ [Using inventory directories and multiple inventory sources](intro_dynamic_inventory#using-inventory-directories-and-multiple-inventory-sources)
+ [Static groups of dynamic groups](intro_dynamic_inventory#static-groups-of-dynamic-groups)
* [Patterns: targeting hosts and groups](intro_patterns)
+ [Using patterns](intro_patterns#using-patterns)
+ [Common patterns](intro_patterns#common-patterns)
+ [Limitations of patterns](intro_patterns#limitations-of-patterns)
+ [Advanced pattern options](intro_patterns#advanced-pattern-options)
+ [Patterns and ansible-playbook flags](intro_patterns#patterns-and-ansible-playbook-flags)
* [Connection methods and details](connection_details)
+ [ControlPersist and paramiko](connection_details#controlpersist-and-paramiko)
+ [Setting a remote user](connection_details#setting-a-remote-user)
+ [Setting up SSH keys](connection_details#setting-up-ssh-keys)
+ [Running against localhost](connection_details#running-against-localhost)
+ [Managing host key checking](connection_details#managing-host-key-checking)
+ [Other connection methods](connection_details#other-connection-methods)
* [Working with command line tools](command_line_tools)
+ [ansible](../cli/ansible)
+ [ansible-config](../cli/ansible-config)
+ [ansible-console](../cli/ansible-console)
+ [ansible-doc](../cli/ansible-doc)
+ [ansible-galaxy](../cli/ansible-galaxy)
+ [ansible-inventory](../cli/ansible-inventory)
+ [ansible-playbook](../cli/ansible-playbook)
+ [ansible-pull](../cli/ansible-pull)
+ [ansible-vault](../cli/ansible-vault)
* [Using Variables](playbooks_variables)
+ [Creating valid variable names](playbooks_variables#creating-valid-variable-names)
+ [Simple variables](playbooks_variables#simple-variables)
+ [When to quote variables (a YAML gotcha)](playbooks_variables#when-to-quote-variables-a-yaml-gotcha)
+ [List variables](playbooks_variables#list-variables)
+ [Dictionary variables](playbooks_variables#dictionary-variables)
+ [Registering variables](playbooks_variables#registering-variables)
+ [Referencing nested variables](playbooks_variables#referencing-nested-variables)
+ [Transforming variables with Jinja2 filters](playbooks_variables#transforming-variables-with-jinja2-filters)
+ [Where to set variables](playbooks_variables#where-to-set-variables)
+ [Variable precedence: Where should I put a variable?](playbooks_variables#variable-precedence-where-should-i-put-a-variable)
+ [Using advanced variable syntax](playbooks_variables#using-advanced-variable-syntax)
* [Discovering variables: facts and magic variables](playbooks_vars_facts)
+ [Ansible facts](playbooks_vars_facts#ansible-facts)
+ [Information about Ansible: magic variables](playbooks_vars_facts#information-about-ansible-magic-variables)
* [Encrypting content with Ansible Vault](vault)
+ [Managing vault passwords](vault#managing-vault-passwords)
+ [Encrypting content with Ansible Vault](vault#id1)
+ [Using encrypted variables and files](vault#using-encrypted-variables-and-files)
+ [Configuring defaults for using encrypted content](vault#configuring-defaults-for-using-encrypted-content)
+ [When are encrypted files made visible?](vault#when-are-encrypted-files-made-visible)
+ [Speeding up Ansible Vault](vault#speeding-up-ansible-vault)
+ [Format of files encrypted with Ansible Vault](vault#format-of-files-encrypted-with-ansible-vault)
* [Using filters to manipulate data](playbooks_filters)
+ [Handling undefined variables](playbooks_filters#handling-undefined-variables)
+ [Defining different values for true/false/null (ternary)](playbooks_filters#defining-different-values-for-true-false-null-ternary)
+ [Managing data types](playbooks_filters#managing-data-types)
+ [Formatting data: YAML and JSON](playbooks_filters#formatting-data-yaml-and-json)
+ [Combining and selecting data](playbooks_filters#combining-and-selecting-data)
+ [Randomizing data](playbooks_filters#randomizing-data)
+ [Managing list variables](playbooks_filters#managing-list-variables)
+ [Selecting from sets or lists (set theory)](playbooks_filters#selecting-from-sets-or-lists-set-theory)
+ [Calculating numbers (math)](playbooks_filters#calculating-numbers-math)
+ [Managing network interactions](playbooks_filters#managing-network-interactions)
+ [Encrypting and checksumming strings and passwords](playbooks_filters#encrypting-and-checksumming-strings-and-passwords)
+ [Manipulating text](playbooks_filters#manipulating-text)
+ [Manipulating strings](playbooks_filters#manipulating-strings)
+ [Managing UUIDs](playbooks_filters#managing-uuids)
+ [Handling dates and times](playbooks_filters#handling-dates-and-times)
+ [Getting Kubernetes resource names](playbooks_filters#getting-kubernetes-resource-names)
* [Lookups](playbooks_lookups)
+ [Using lookups in variables](playbooks_lookups#using-lookups-in-variables)
* [Interactive input: prompts](playbooks_prompts)
+ [Encrypting values supplied by `vars_prompt`](playbooks_prompts#encrypting-values-supplied-by-vars-prompt)
+ [Allowing special characters in `vars_prompt` values](playbooks_prompts#allowing-special-characters-in-vars-prompt-values)
* [Module defaults](playbooks_module_defaults)
+ [Module defaults groups](playbooks_module_defaults#module-defaults-groups)
* [Validating tasks: check mode and diff mode](playbooks_checkmode)
+ [Using check mode](playbooks_checkmode#using-check-mode)
+ [Using diff mode](playbooks_checkmode#using-diff-mode)
* [Executing playbooks for troubleshooting](playbooks_startnstep)
+ [start-at-task](playbooks_startnstep#start-at-task)
+ [Step mode](playbooks_startnstep#step-mode)
* [Debugging tasks](playbooks_debugger)
+ [Enabling the debugger](playbooks_debugger#enabling-the-debugger)
+ [Resolving errors in the debugger](playbooks_debugger#resolving-errors-in-the-debugger)
+ [Available debug commands](playbooks_debugger#available-debug-commands)
+ [How the debugger interacts with the free strategy](playbooks_debugger#how-the-debugger-interacts-with-the-free-strategy)
* [Controlling playbook execution: strategies and more](playbooks_strategies)
+ [Selecting a strategy](playbooks_strategies#selecting-a-strategy)
+ [Setting the number of forks](playbooks_strategies#setting-the-number-of-forks)
+ [Using keywords to control execution](playbooks_strategies#using-keywords-to-control-execution)
* [Asynchronous actions and polling](playbooks_async)
+ [Asynchronous ad hoc tasks](playbooks_async#asynchronous-ad-hoc-tasks)
+ [Asynchronous playbook tasks](playbooks_async#asynchronous-playbook-tasks)
* [Advanced Syntax](playbooks_advanced_syntax)
+ [Unsafe or raw strings](playbooks_advanced_syntax#unsafe-or-raw-strings)
+ [YAML anchors and aliases: sharing variable values](playbooks_advanced_syntax#yaml-anchors-and-aliases-sharing-variable-values)
* [Data manipulation](complex_data_manipulation)
+ [Loops and list comprehensions](complex_data_manipulation#loops-and-list-comprehensions)
+ [Complex Type transformations](complex_data_manipulation#complex-type-transformations)
* [Rejecting modules](plugin_filtering_config)
* [Sample Ansible setup](sample_setup)
+ [Sample directory layout](sample_setup#sample-directory-layout)
+ [Alternative directory layout](sample_setup#alternative-directory-layout)
+ [Sample group and host variables](sample_setup#sample-group-and-host-variables)
+ [Sample playbooks organized by function](sample_setup#sample-playbooks-organized-by-function)
+ [Sample task and handler files in a function-based role](sample_setup#sample-task-and-handler-files-in-a-function-based-role)
+ [What the sample setup enables](sample_setup#what-the-sample-setup-enables)
+ [Organizing for deployment or configuration](sample_setup#organizing-for-deployment-or-configuration)
+ [Using local Ansible modules](sample_setup#using-local-ansible-modules)
* [Working With Modules](modules)
+ [Introduction to modules](modules_intro)
+ [Module Maintenance & Support](modules_support)
+ [Return Values](../reference_appendices/common_return_values)
* [Working With Plugins](../plugins/plugins)
+ [Action Plugins](../plugins/action)
+ [Become Plugins](../plugins/become)
+ [Cache Plugins](../plugins/cache)
+ [Callback Plugins](../plugins/callback)
+ [Cliconf Plugins](../plugins/cliconf)
+ [Connection Plugins](../plugins/connection)
+ [Httpapi Plugins](../plugins/httpapi)
+ [Inventory Plugins](../plugins/inventory)
+ [Lookup Plugins](../plugins/lookup)
+ [Netconf Plugins](../plugins/netconf)
+ [Shell Plugins](../plugins/shell)
+ [Strategy Plugins](../plugins/strategy)
+ [Vars Plugins](../plugins/vars)
+ [Using filters to manipulate data](playbooks_filters)
+ [Tests](playbooks_tests)
+ [Rejecting modules](plugin_filtering_config)
* [Playbook Keywords](../reference_appendices/playbooks_keywords)
+ [Play](../reference_appendices/playbooks_keywords#play)
+ [Role](../reference_appendices/playbooks_keywords#role)
+ [Block](../reference_appendices/playbooks_keywords#block)
+ [Task](../reference_appendices/playbooks_keywords#task)
* [Ansible and BSD](intro_bsd)
+ [Connecting to BSD nodes](intro_bsd#connecting-to-bsd-nodes)
+ [Bootstrapping BSD](intro_bsd#bootstrapping-bsd)
+ [Setting the Python interpreter](intro_bsd#setting-the-python-interpreter)
+ [Which modules are available?](intro_bsd#which-modules-are-available)
+ [Using BSD as the control node](intro_bsd#using-bsd-as-the-control-node)
+ [BSD facts](intro_bsd#bsd-facts)
+ [BSD efforts and contributions](intro_bsd#bsd-efforts-and-contributions)
* [Windows Guides](windows)
+ [Setting up a Windows Host](windows_setup)
+ [Windows Remote Management](windows_winrm)
+ [Using Ansible and Windows](windows_usage)
+ [Desired State Configuration](windows_dsc)
+ [Windows performance](windows_performance)
+ [Windows Frequently Asked Questions](windows_faq)
* [Using collections](collections_using)
+ [Installing collections](collections_using#installing-collections)
+ [Downloading collections](collections_using#downloading-collections)
+ [Listing collections](collections_using#listing-collections)
+ [Verifying collections](collections_using#verifying-collections)
+ [Using collections in a Playbook](collections_using#using-collections-in-a-playbook)
+ [Simplifying module names with the `collections` keyword](collections_using#simplifying-module-names-with-the-collections-keyword)
+ [Using a playbook from a collection](collections_using#using-a-playbook-from-a-collection)
Using Variables
===============
Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can define these variables in your playbooks, in your [inventory](intro_inventory#intro-inventory), in re-usable [files](playbooks_reuse#playbooks-reuse) or [roles](playbooks_reuse_roles#playbooks-reuse-roles), or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable.
After you create variables, either by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable, you can use those variables in module arguments, in [conditional “when” statements](playbooks_conditionals#playbooks-conditionals), in [templates](playbooks_templating#playbooks-templating), and in [loops](playbooks_loops#playbooks-loops). The [ansible-examples github repository](https://github.com/ansible/ansible-examples) contains many examples of using variables in Ansible.
Once you understand the concepts and examples on this page, read about [Ansible facts](playbooks_vars_facts#vars-and-facts), which are variables you retrieve from remote systems.
* [Creating valid variable names](#creating-valid-variable-names)
* [Simple variables](#simple-variables)
+ [Defining simple variables](#defining-simple-variables)
+ [Referencing simple variables](#referencing-simple-variables)
* [When to quote variables (a YAML gotcha)](#when-to-quote-variables-a-yaml-gotcha)
* [List variables](#list-variables)
+ [Defining variables as lists](#defining-variables-as-lists)
+ [Referencing list variables](#referencing-list-variables)
* [Dictionary variables](#dictionary-variables)
+ [Defining variables as key:value dictionaries](#defining-variables-as-key-value-dictionaries)
+ [Referencing key:value dictionary variables](#referencing-key-value-dictionary-variables)
* [Registering variables](#registering-variables)
* [Referencing nested variables](#referencing-nested-variables)
* [Transforming variables with Jinja2 filters](#transforming-variables-with-jinja2-filters)
* [Where to set variables](#where-to-set-variables)
+ [Defining variables in inventory](#defining-variables-in-inventory)
+ [Defining variables in a play](#defining-variables-in-a-play)
+ [Defining variables in included files and roles](#defining-variables-in-included-files-and-roles)
+ [Defining variables at runtime](#defining-variables-at-runtime)
- [key=value format](#key-value-format)
- [JSON string format](#json-string-format)
- [vars from a JSON or YAML file](#vars-from-a-json-or-yaml-file)
* [Variable precedence: Where should I put a variable?](#variable-precedence-where-should-i-put-a-variable)
+ [Understanding variable precedence](#understanding-variable-precedence)
+ [Scoping variables](#scoping-variables)
+ [Tips on where to set variables](#tips-on-where-to-set-variables)
* [Using advanced variable syntax](#using-advanced-variable-syntax)
Creating valid variable names
-----------------------------
Not all strings are valid Ansible variable names. A variable name can only include letters, numbers, and underscores. [Python keywords](https://docs.python.org/3/reference/lexical_analysis.html#keywords) or [playbook keywords](../reference_appendices/playbooks_keywords#playbook-keywords) are not valid variable names. A variable name cannot begin with a number.
Variable names can begin with an underscore. In many programming languages, variables that begin with an underscore are private. This is not true in Ansible. Variables that begin with an underscore are treated exactly the same as any other variable. Do not rely on this convention for privacy or security.
This table gives examples of valid and invalid variable names:
| Valid variable names | Not valid |
| --- | --- |
| `foo` | `*foo`, [Python keywords](https://docs.python.org/3/reference/lexical_analysis.html#keywords) such as `async` and `lambda` |
| `foo_env` | [playbook keywords](../reference_appendices/playbooks_keywords#playbook-keywords) such as `environment` |
| `foo_port` | `foo-port`, `foo port`, `foo.port` |
| `foo5`, `_foo` | `5foo`, `12` |
Simple variables
----------------
Simple variables combine a variable name with a single value. You can use this syntax (and the syntax for lists and dictionaries shown below) in a variety of places. For details about setting variables in inventory, in playbooks, in reusable files, in roles, or at the command line, see [Where to set variables](#setting-variables).
### Defining simple variables
You can define a simple variable using standard YAML syntax. For example:
```
remote_install_path: /opt/my_app_config
```
### Referencing simple variables
After you define a variable, use Jinja2 syntax to reference it. Jinja2 variables use double curly braces. For example, the expression `My amp goes to {{ max_amp_value }}` demonstrates the most basic form of variable substitution. You can use Jinja2 syntax in playbooks. For example:
```
ansible.builtin.template:
src: foo.cfg.j2
dest: '{{ remote_install_path }}/foo.cfg'
```
In this example, the variable defines the location of a file, which can vary from one system to another.
Note
Ansible allows Jinja2 loops and conditionals in [templates](playbooks_templating#playbooks-templating) but not in playbooks. You cannot create a loop of tasks. Ansible playbooks are pure machine-parseable YAML.
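For instance, a template file can loop over a list variable even though a playbook cannot. A minimal sketch, assuming a list variable `backends` is defined and the template is rendered with `ansible.builtin.template` (the file name is hypothetical):
```
# backends.conf.j2 -- hypothetical template file
{% for server in backends %}
server {{ server }}.example.com
{% endfor %}
```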
When to quote variables (a YAML gotcha)
---------------------------------------
If you start a value with `{{ foo }}`, you must quote the whole expression to create valid YAML syntax. If you do not quote the whole expression, the YAML parser cannot interpret the syntax - it might be a variable or it might be the start of a YAML dictionary. For guidance on writing YAML, see the [YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax) documentation.
If you use a variable without quotes like this:
```
- hosts: app_servers
vars:
app_path: {{ base_path }}/22
```
You will see: `ERROR! Syntax Error while loading YAML.` If you add quotes, Ansible works correctly:
```
- hosts: app_servers
vars:
app_path: "{{ base_path }}/22"
```
List variables
--------------
A list variable combines a variable name with multiple values. The values can be stored either as an itemized (block-style) list or inline in square brackets `[]`, separated by commas.
### Defining variables as lists
You can define variables with multiple values using YAML lists. For example:
```
region:
- northeast
- southeast
- midwest
```
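The same list can also be written in YAML's inline, bracketed form, which is equivalent:
```
region: [northeast, southeast, midwest]
```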
### Referencing list variables
When you use variables defined as a list (also called an array), you can use individual, specific fields from that list. The first item in a list is item 0, the second item is item 1. For example:
```
region: "{{ region[0] }}"
```
The value of this expression would be “northeast”.
Dictionary variables
--------------------
A dictionary stores the data in key-value pairs. Usually, dictionaries are used to store related data, such as the information contained in an ID or a user profile.
### Defining variables as key:value dictionaries
You can define more complex variables using YAML dictionaries. A YAML dictionary maps keys to values. For example:
```
foo:
field1: one
field2: two
```
### Referencing key:value dictionary variables
When you use variables defined as a key:value dictionary (also called a hash), you can use individual, specific fields from that dictionary using either bracket notation or dot notation:
```
foo['field1']
foo.field1
```
Both of these examples reference the same value (“one”). Bracket notation always works. Dot notation can cause problems because some keys collide with attributes and methods of python dictionaries. Use bracket notation if you use keys which start and end with two underscores (which are reserved for special meanings in python) or are any of the known public attributes:
`add`, `append`, `as_integer_ratio`, `bit_length`, `capitalize`, `center`, `clear`, `conjugate`, `copy`, `count`, `decode`, `denominator`, `difference`, `difference_update`, `discard`, `encode`, `endswith`, `expandtabs`, `extend`, `find`, `format`, `fromhex`, `fromkeys`, `get`, `has_key`, `hex`, `imag`, `index`, `insert`, `intersection`, `intersection_update`, `isalnum`, `isalpha`, `isdecimal`, `isdigit`, `isdisjoint`, `is_integer`, `islower`, `isnumeric`, `isspace`, `issubset`, `issuperset`, `istitle`, `isupper`, `items`, `iteritems`, `iterkeys`, `itervalues`, `join`, `keys`, `ljust`, `lower`, `lstrip`, `numerator`, `partition`, `pop`, `popitem`, `real`, `remove`, `replace`, `reverse`, `rfind`, `rindex`, `rjust`, `rpartition`, `rsplit`, `rstrip`, `setdefault`, `sort`, `split`, `splitlines`, `startswith`, `strip`, `swapcase`, `symmetric_difference`, `symmetric_difference_update`, `title`, `translate`, `union`, `update`, `upper`, `values`, `viewitems`, `viewkeys`, `viewvalues`, `zfill`.
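As an illustration (the variable and key names here are hypothetical), a key named `keys` collides with the dictionary method of the same name, so bracket notation is the safe way to read it:
```
vars:
  cert:
    keys: /etc/pki/server.key
tasks:
  - name: Bracket notation returns the stored value, not the dict method
    ansible.builtin.debug:
      msg: "{{ cert['keys'] }}"
```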
Registering variables
---------------------
You can create variables from the output of an Ansible task with the task keyword `register`. You can use registered variables in any later tasks in your play. For example:
```
- hosts: web_servers
tasks:
- name: Run a shell command and register its output as a variable
ansible.builtin.shell: /usr/bin/foo
register: foo_result
ignore_errors: true
- name: Run a shell command using output of the previous task
ansible.builtin.shell: /usr/bin/bar
when: foo_result.rc == 5
```
For more examples of using registered variables in conditions on later tasks, see [Conditionals](playbooks_conditionals#playbooks-conditionals). Registered variables may be simple variables, list variables, dictionary variables, or complex nested data structures. The documentation for each module includes a `RETURN` section describing the return values for that module. To see the values for a particular task, run your playbook with `-v`.
Registered variables are stored in memory. You cannot cache registered variables for use in future plays. Registered variables are only valid on the host for the rest of the current playbook run.
Registered variables are host-level variables. When you register a variable in a task with a loop, the registered variable contains a value for each item in the loop. The data structure placed in the variable during the loop will contain a `results` attribute, which is a list of all responses from the module. For a more in-depth example of how this works, see the [Loops](playbooks_loops#playbooks-loops) section on using register with a loop.
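A minimal sketch (task names and paths are hypothetical) of registering a loop's output and then iterating over the `results` list in a later task:
```
- name: Touch several files and register the loop output
  ansible.builtin.file:
    path: "/tmp/{{ item }}"
    state: touch
  loop:
    - alpha
    - beta
  register: touch_output

- name: Each entry in .results is one module response
  ansible.builtin.debug:
    msg: "{{ item.path }}"
  loop: "{{ touch_output.results }}"
```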
Note
If a task fails or is skipped, Ansible still registers a variable with a failure or skipped status, unless the task is skipped based on tags. See [Tags](playbooks_tags#tags) for information on adding and using tags.
Referencing nested variables
----------------------------
Many registered variables (and [facts](playbooks_vars_facts#vars-and-facts)) are nested YAML or JSON data structures. You cannot access values from these nested data structures with the simple `{{ foo }}` syntax. You must use either bracket notation or dot notation. For example, to reference an IP address from your facts using the bracket notation:
```
{{ ansible_facts["eth0"]["ipv4"]["address"] }}
```
To reference an IP address from your facts using the dot notation:
```
{{ ansible_facts.eth0.ipv4.address }}
```
Transforming variables with Jinja2 filters
------------------------------------------
Jinja2 filters let you transform the value of a variable within a template expression. For example, the `capitalize` filter capitalizes any value passed to it; the `to_yaml` and `to_json` filters change the format of your variable values. Jinja2 includes many [built-in filters](https://jinja.palletsprojects.com/templates/#builtin-filters) and Ansible supplies many more filters. To find more examples of filters, see [Using filters to manipulate data](playbooks_filters#playbooks-filters).
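A few illustrative expressions (the variable names are hypothetical):
```
{{ app_name | capitalize }}    # 'myapp' becomes 'Myapp'
{{ settings | to_json }}       # render a dictionary as a JSON string
{{ http_port | default(80) }}  # fall back to 80 if http_port is undefined
```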
Where to set variables
----------------------
You can define variables in a variety of places, such as in inventory, in playbooks, in reusable files, in roles, and at the command line. Ansible loads every possible variable it finds, then chooses the variable to apply based on [variable precedence rules](#ansible-variable-precedence).
### Defining variables in inventory
You can define different variables for each individual host, or set shared variables for a group of hosts in your inventory. For example, if all machines in the `[Boston]` group use ‘boston.ntp.example.com’ as an NTP server, you can set a group variable. The [How to build your inventory](intro_inventory#intro-inventory) page has details on setting [host variables](intro_inventory#host-variables) and [group variables](intro_inventory#group-variables) in inventory.
### Defining variables in a play
You can define variables directly in a playbook play:
```
- hosts: webservers
vars:
http_port: 80
```
When you define variables in a play, they are only visible to tasks executed in that play.
### Defining variables in included files and roles
You can define variables in reusable variables files and/or in reusable roles. When you define variables in reusable variable files, the sensitive variables are separated from playbooks. This separation enables you to store your playbooks in a source control software and even share the playbooks, without the risk of exposing passwords or other sensitive and personal data. For information about creating reusable files and roles, see [Re-using Ansible artifacts](playbooks_reuse#playbooks-reuse).
This example shows how you can include variables defined in an external file:
```
---
- hosts: all
remote_user: root
vars:
favcolor: blue
vars_files:
- /vars/external_vars.yml
tasks:
- name: This is just a placeholder
ansible.builtin.command: /bin/echo foo
```
The contents of each variables file is a simple YAML dictionary. For example:
```
---
# in the above example, this would be vars/external_vars.yml
somevar: somevalue
password: magic
```
Note
You can keep per-host and per-group variables in similar files. To learn about organizing your variables, see [Organizing host and group variables](intro_inventory#splitting-out-vars).
### Defining variables at runtime
You can define variables when you run your playbook by passing variables at the command line using the `--extra-vars` (or `-e`) argument. You can also request user input with a `vars_prompt` (see [Interactive input: prompts](playbooks_prompts#playbooks-prompts)). When you pass variables at the command line, use a single quoted string that contains one or more variables, in one of the formats below.
#### key=value format
Values passed in using the `key=value` syntax are interpreted as strings. Use the JSON format if you need to pass non-string values such as Booleans, integers, floats, lists, and so on.
```
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
```
#### JSON string format
```
ansible-playbook release.yml --extra-vars '{"version":"1.23.45","other_variable":"foo"}'
ansible-playbook arcade.yml --extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'
```
When passing variables with `--extra-vars`, you must escape quotes and other special characters appropriately for both your markup (for example, JSON), and for your shell:
```
ansible-playbook arcade.yml --extra-vars "{\"name\":\"Conan O\'Brien\"}"
ansible-playbook arcade.yml --extra-vars '{"name":"Conan O'\\\''Brien"}'
ansible-playbook script.yml --extra-vars "{\"dialog\":\"He said \\\"I just can\'t get enough of those single and double-quotes"\!"\\\"\"}"
```
If you have a lot of special characters, use a JSON or YAML file containing the variable definitions.
#### vars from a JSON or YAML file
```
ansible-playbook release.yml --extra-vars "@some_file.json"
```
Variable precedence: Where should I put a variable?
---------------------------------------------------
You can set multiple variables with the same name in many different places. When you do this, Ansible loads every possible variable it finds, then chooses the variable to apply based on variable precedence. In other words, the different variables will override each other in a certain order.
Teams and projects that agree on guidelines for defining variables (where to define certain types of variables) usually avoid variable precedence concerns. We suggest that you define each variable in one place: figure out where to define a variable, and keep it simple. For examples, see [Tips on where to set variables](#variable-examples).
Some behavioral parameters that you can set in variables you can also set in Ansible configuration, as command-line options, and using playbook keywords. For example, you can define the user Ansible uses to connect to remote devices as a variable with `ansible_user`, in a configuration file with `DEFAULT_REMOTE_USER`, as a command-line option with `-u`, and with the playbook keyword `remote_user`. If you define the same parameter in a variable and by another method, the variable overrides the other setting. This approach allows host-specific settings to override more general settings. For examples and more details on the precedence of these various settings, see [Controlling how Ansible behaves: precedence rules](../reference_appendices/general_precedence#general-precedence-rules).
### Understanding variable precedence
Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables override all other variables):
1. command line values (for example, `-u my_user`, these are not variables)
2. role defaults (defined in role/defaults/main.yml) [1](#id13)
3. inventory file or script group vars [2](#id14)
4. inventory group\_vars/all [3](#id15)
5. playbook group\_vars/all [3](#id15)
6. inventory group\_vars/\* [3](#id15)
7. playbook group\_vars/\* [3](#id15)
8. inventory file or script host vars [2](#id14)
9. inventory host\_vars/\* [3](#id15)
10. playbook host\_vars/\* [3](#id15)
11. host facts / cached set\_facts [4](#id16)
12. play vars
13. play vars\_prompt
14. play vars\_files
15. role vars (defined in role/vars/main.yml)
16. block vars (only for tasks in block)
17. task vars (only for the task)
18. include\_vars
19. set\_facts / registered vars
20. role (and include\_role) params
21. include params
22. extra vars (for example, `-e "user=my_user"`); these always win precedence
In general, Ansible gives precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the defaults folder inside a role are easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but explicit includes such as the vars directory or an `include_vars` task override inventory variables.
Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, `ansible_ssh_user` specified as a group\_var is overridden by `ansible_user` specified as a host\_var. For details about the precedence of variables set in inventory, see [How variables are merged](intro_inventory#how-we-merge).
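A minimal INI inventory sketch (host and group names are hypothetical) of a host var overriding a group var:
```
[webservers]
# host var: wins for web1
web1.example.com ansible_user=deploy

[webservers:vars]
# group var: applies to the rest of the group
ansible_user=admin
```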
#### Footnotes
`1`
Tasks in each role see their own role’s defaults. Tasks defined outside of a role see the last role’s defaults.
`2`
Variables defined in an inventory file or provided by dynamic inventory.
`3`
Includes vars added by ‘vars plugins’ as well as host\_vars and group\_vars which are added by the default vars plugin shipped with Ansible.
`4`
When created with the `set_fact` cacheable option, variables have high precedence in the play, but carry the same precedence as host facts when they come from the cache.
Note
Within any section, redefining a var overrides the previous instance. If multiple groups have the same variable, the last one loaded wins. If you define a variable twice in a play’s `vars:` section, the second one wins.
Note
The above describes the default configuration, `hash_behaviour=replace`; switch to `merge` to only partially overwrite.
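For example, a minimal `ansible.cfg` sketch that switches the setting:
```
[defaults]
hash_behaviour = merge
```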
### Scoping variables
You can decide where to set a variable based on the scope you want that value to have. Ansible has three main scopes:
* Global: this is set by config, environment variables and the command line
* Play: each play and its contained structures, vars entries (`vars`, `vars_files`, `vars_prompt`), and role defaults and vars.
* Host: variables directly associated to a host, like inventory, include\_vars, facts or registered task outputs
Inside a template, you automatically have access to all variables that are in scope for a host, plus any registered variables, facts, and magic variables.
### Tips on where to set variables
You should choose where to define a variable based on the kind of control you might want over values.
Set variables in inventory that deal with geography or behavior. Since groups are frequently the entity that maps roles onto hosts, you can often set variables on the group instead of defining them on a role. Remember: child groups override parent groups, and host variables override group variables. See [Defining variables in inventory](#define-variables-in-inventory) for details on setting host and group variables.
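For example, a minimal INI inventory sketch (group names are hypothetical) where the child group's value wins for its hosts:
```
[east:children]
boston

[east:vars]
ntp_server=east-time.example.com

[boston:vars]
# boston is a child of east, so this value wins for boston hosts
ntp_server=boston-time.example.com
```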
Set common defaults in a `group_vars/all` file. See [Organizing host and group variables](intro_inventory#splitting-out-vars) for details on how to organize host and group variables in your inventory. Group variables are generally placed alongside your inventory file, but they can also be returned by dynamic inventory (see [Working with dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory)) or defined in AWX or on [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform) from the UI or API:
```
---
# file: /etc/ansible/group_vars/all
# this is the site wide default
ntp_server: default-time.example.com
```
Set location-specific variables in `group_vars/my_location` files. All groups are children of the `all` group, so variables set here override those set in `group_vars/all`:
```
---
# file: /etc/ansible/group_vars/boston
ntp_server: boston-time.example.com
```
If one host used a different NTP server, you could set that in a host\_vars file, which would override the group variable:
```
---
# file: /etc/ansible/host_vars/xyz.boston.example.com
ntp_server: override.example.com
```
Set defaults in roles to avoid undefined-variable errors. If you share your roles, other users can rely on the reasonable defaults you added in the `roles/x/defaults/main.yml` file, or they can easily override those values in inventory or at the command line. See [Roles](playbooks_reuse_roles#playbooks-reuse-roles) for more info. For example:
```
---
# file: roles/x/defaults/main.yml
# if no other value is supplied in inventory or as a parameter, this value will be used
http_port: 80
```
Set variables in roles to ensure a value is used in that role, and is not overridden by inventory variables. If you are not sharing your role with others, you can define app-specific behaviors like ports this way, in `roles/x/vars/main.yml`. If you are sharing roles with others, putting variables here makes them harder to override, although they can still be overridden by passing a parameter to the role or setting a variable with `-e`:
```
---
# file: roles/x/vars/main.yml
# this will absolutely be used in this role
http_port: 80
```
Pass variables as parameters when you call roles for maximum clarity, flexibility, and visibility. This approach overrides any defaults that exist for a role. For example:
```
roles:
- role: apache
vars:
http_port: 8080
```
When you read this playbook it is clear that you have chosen to set a variable or override a default. You can also pass multiple values, which allows you to run the same role multiple times. See [Running a role multiple times in one playbook](playbooks_reuse_roles#run-role-twice) for more details. For example:
```
roles:
- role: app_user
vars:
myname: Ian
- role: app_user
vars:
myname: Terry
- role: app_user
vars:
myname: Graham
- role: app_user
vars:
myname: John
```
Variables set in one role are available to later roles. You can set variables in a `roles/common_settings/vars/main.yml` file and use them in other roles and elsewhere in your playbook:
```
roles:
- role: common_settings
- role: something
vars:
foo: 12
- role: something_else
```
Note
There are some protections in place to avoid the need to namespace variables. In this example, variables defined in ‘common\_settings’ are available to ‘something’ and ‘something\_else’ tasks, but tasks in ‘something’ have foo set at 12, even if ‘common\_settings’ sets foo to 20.
Instead of worrying about variable precedence, we encourage you to think about how easily or how often you want to override a variable when deciding where to set it. If you are not sure what other variables are defined, and you need a particular value, use `--extra-vars` (`-e`) to override all other variables.
Using advanced variable syntax
------------------------------
For information about advanced YAML syntax used to declare variables and have more control over the data placed in YAML files used by Ansible, see [Advanced Syntax](playbooks_advanced_syntax#playbooks-advanced-syntax).
See also
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using filters to manipulate data](playbooks_filters#playbooks-filters)
Jinja2 filters and their uses
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Special Variables](../reference_appendices/special_variables#special-variables)
List of special variables
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Sample Ansible setup
====================
You have learned about playbooks, inventory, roles, and variables. This section pulls all those elements together, outlining a sample setup for automating a web service. You can find more example playbooks illustrating these patterns in our [ansible-examples repository](https://github.com/ansible/ansible-examples). (Note: these may not use all of the features in the latest release, but they are still an excellent reference.)
The sample setup organizes playbooks, roles, inventory, and variables files by function, with tags at the play and task level for greater granularity and control. This is a powerful and flexible approach, but there are other ways to organize Ansible content. Your usage of Ansible should fit your needs, not ours, so feel free to modify this approach and organize your content as you see fit.
* [Sample directory layout](#sample-directory-layout)
* [Alternative directory layout](#alternative-directory-layout)
* [Sample group and host variables](#sample-group-and-host-variables)
* [Sample playbooks organized by function](#sample-playbooks-organized-by-function)
* [Sample task and handler files in a function-based role](#sample-task-and-handler-files-in-a-function-based-role)
* [What the sample setup enables](#what-the-sample-setup-enables)
* [Organizing for deployment or configuration](#organizing-for-deployment-or-configuration)
* [Using local Ansible modules](#using-local-ansible-modules)
Sample directory layout
-----------------------
This layout organizes most tasks in roles, with a single inventory file for each environment and a few playbooks in the top-level directory:
```
production # inventory file for production servers
staging # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # main playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
tasks/ # task files included from playbooks
webservers-extra.yml # <-- avoids confusing playbook with task files
roles/
common/ # this hierarchy represents a "role"
tasks/ #
main.yml # <-- tasks file can include smaller files if warranted
handlers/ #
main.yml # <-- handlers file
templates/ # <-- files for use with the template resource
ntp.conf.j2 # <------- templates end in .j2
files/ #
bar.txt # <-- files for use with the copy resource
foo.sh # <-- script files for use with the script resource
vars/ #
main.yml # <-- variables associated with this role
defaults/ #
main.yml # <-- default lower priority variables for this role
meta/ #
main.yml # <-- role dependencies
library/ # roles can also include custom modules
module_utils/ # roles can also include custom module_utils
lookup_plugins/ # or other types of plugins, like lookup in this case
webtier/ # same kind of structure as "common" was above, done for the webtier role
monitoring/ # ""
fooapp/ # ""
```
Alternative directory layout
----------------------------
Alternatively you can put each inventory file with its `group_vars`/`host_vars` in a separate directory. This is particularly useful if your `group_vars`/`host_vars` don’t have that much in common in different environments. The layout could look something like this:
```
inventories/
production/
hosts # inventory file for production servers
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
staging/
hosts # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
stagehost1.yml # here we assign variables to particular systems
stagehost2.yml
library/
module_utils/
filter_plugins/
site.yml
webservers.yml
dbservers.yml
roles/
common/
webtier/
monitoring/
fooapp/
```
This layout gives you more flexibility for larger environments, as well as a total separation of inventory variables between different environments. However, this approach is harder to maintain, because there are more files. For more information on organizing group and host variables, see [Organizing host and group variables](intro_inventory#splitting-out-vars).
Sample group and host variables
-------------------------------
These sample group and host variables files record the variable values that apply to each machine or group of machines. For instance, the data center in Atlanta has its own NTP servers, so when setting up ntp.conf, we should use them:
```
---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com
```
Similarly, the webservers have some configuration that does not apply to the database servers:
```
---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900
```
Default values, or values that are universally true, belong in a file called group\_vars/all:
```
---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
```
If necessary, you can define specific hardware variance in systems in a host\_vars file:
```
---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99
```
Again, if you are using [dynamic inventory](intro_dynamic_inventory#dynamic-inventory), Ansible creates many dynamic groups automatically. So a tag like “class:webserver” would automatically load variables from the file “group\_vars/ec2\_tag\_class\_webserver”.
Sample playbooks organized by function
--------------------------------------
With this setup, a single playbook can define all the infrastructure. The site.yml playbook imports two other playbooks, one for the webservers and one for the database servers:
```
---
# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml
```
The webservers.yml file, also at the top level, maps the configuration of the webservers group to the roles related to the webservers group:
```
---
# file: webservers.yml
- hosts: webservers
roles:
- common
- webtier
```
With this setup, you can configure your whole infrastructure by “running” site.yml, or run a subset by running webservers.yml. This is analogous to the Ansible `--limit` parameter but a little more explicit:
```
ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml
```
Sample task and handler files in a function-based role
------------------------------------------------------
Ansible loads any file called `main.yml` in a role sub-directory. This sample `tasks/main.yml` file is simple - it sets up NTP, but it could do more if we wanted:
```
---
# file: roles/common/tasks/main.yml
- name: be sure ntp is installed
yum:
name: ntp
state: present
tags: ntp
- name: be sure ntp is configured
template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
tags: ntp
- name: be sure ntpd is running and enabled
service:
name: ntpd
state: started
enabled: yes
tags: ntp
```
Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end of each play:
```
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
service:
name: ntpd
state: restarted
```
See [Roles](playbooks_reuse_roles#playbooks-reuse-roles) for more information.
What the sample setup enables
-----------------------------
The basic organizational structure described above enables a lot of different automation options. To reconfigure your entire infrastructure:
```
ansible-playbook -i production site.yml
```
To reconfigure NTP on everything:
```
ansible-playbook -i production site.yml --tags ntp
```
To reconfigure only the webservers:
```
ansible-playbook -i production webservers.yml
```
To reconfigure only the webservers in Boston:
```
ansible-playbook -i production webservers.yml --limit boston
```
To reconfigure only the first 10 webservers in Boston, and then the next 10:
```
ansible-playbook -i production webservers.yml --limit boston[0:9]
ansible-playbook -i production webservers.yml --limit boston[10:19]
```
The sample setup also supports basic ad hoc commands:
```
ansible boston -i production -m ping
ansible boston -i production -m command -a '/sbin/reboot'
```
To discover what tasks would run or what hostnames would be affected by a particular Ansible command:
```
# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks
# confirm what hostnames might be communicated with if I said "limit to boston"
ansible-playbook -i production webservers.yml --limit boston --list-hosts
```
Organizing for deployment or configuration
------------------------------------------
The sample setup models a typical configuration topology. When doing multi-tier deployments, there are going to be some additional playbooks that hop between tiers to roll out an application. In this case, ‘site.yml’ may be augmented by playbooks like ‘deploy\_exampledotcom.yml’ but the general concepts still apply. Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and keep the OS configuration in separate playbooks or roles from the app deployment.
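For example, site.yml might gain an extra import for the deployment playbook mentioned above (a sketch):
```
---
# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml
- import_playbook: deploy_exampledotcom.yml
```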
Consider “playbooks” as a sports metaphor – you can have one set of plays to use against all your infrastructure and situational plays that you use at different times and for different purposes.
Using local Ansible modules
---------------------------
If a playbook has a `./library` directory relative to its YAML file, this directory can be used to add Ansible modules that will automatically be in the Ansible module path. This is a great way to keep modules that go with a playbook together. This is shown in the directory structure example at the start of this section.
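A minimal layout might look like this (the module filename is hypothetical):
```
site.yml
library/
    my_custom_module.py    # automatically on the module path when running site.yml
```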
See also
[YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax)
Learn about YAML syntax
[Working with playbooks](playbooks#working-with-playbooks)
Review the basic playbook features
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
Learn how to extend Ansible by writing your own modules
[Patterns: targeting hosts and groups](intro_patterns#intro-patterns)
Learn about how to select hosts
[GitHub examples directory](https://github.com/ansible/ansible-examples)
Complete playbook files from the github project source
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
Working With Modules
====================
* [Introduction to modules](modules_intro)
* [Module Maintenance & Support](modules_support)
* [Return Values](../reference_appendices/common_return_values)
Ansible ships with a number of modules (called the ‘module library’) that can be executed directly on remote hosts or through [Playbooks](playbooks#working-with-playbooks).
Users can also write their own modules. These modules can control system resources, like services, packages, or files (anything really), or handle executing system commands.
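For example, modules can be invoked directly against an inventory pattern. A sketch, assuming a `webservers` group exists in your inventory:
```
# Check connectivity with the ping module
ansible webservers -m ansible.builtin.ping
# Ensure a service is running with the service module
ansible webservers -m ansible.builtin.service -a "name=ntpd state=started" --become
```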
See also
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of using modules in /usr/bin/ansible
[Intro to playbooks](playbooks_intro#playbooks-intro)
Introduction to using modules with /usr/bin/ansible-playbook
[Developing Ansible modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general)
How to write your own modules
[Python API](https://docs.ansible.com/ansible/latest/dev_guide/developing_api.html#developing-api)
Examples of using modules with the Python API
[Interpreter Discovery](../reference_appendices/interpreter_discovery#interpreter-discovery)
Configuring the right Python interpreter on target hosts
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Advanced Syntax
===============
The advanced YAML syntax examples on this page give you more control over the data placed in YAML files used by Ansible. You can find additional information about Python-specific YAML in the official [PyYAML Documentation](https://pyyaml.org/wiki/PyYAMLDocumentation#YAMLtagsandPythontypes).
* [Unsafe or raw strings](#unsafe-or-raw-strings)
* [YAML anchors and aliases: sharing variable values](#yaml-anchors-and-aliases-sharing-variable-values)
Unsafe or raw strings
---------------------
When handling values returned by lookup plugins, Ansible uses a data type called `unsafe` to block templating. Marking data as unsafe prevents malicious users from abusing Jinja2 templates to execute arbitrary code on target machines. The Ansible implementation ensures that unsafe values are never templated. It is more comprehensive than escaping Jinja2 with `{% raw %} ... {% endraw %}` tags.
You can use the same `unsafe` data type in variables you define, to prevent templating errors and information disclosure. You can mark values supplied by [vars\_prompts](playbooks_prompts#unsafe-prompts) as unsafe. You can also use `unsafe` in playbooks. The most common use cases include passwords that allow special characters like `{` or `%`, and JSON arguments that look like templates but should not be templated. For example:
```
---
mypassword: !unsafe 234%234{435lkj{{lkjsdf
```
In a playbook:
```
---
- hosts: all
  vars:
    my_unsafe_variable: !unsafe 'unsafe % value'
  tasks:
    ...
```
For complex variables such as hashes or arrays, use `!unsafe` on the individual elements:
```
---
my_unsafe_array:
- !unsafe 'unsafe element'
- 'safe element'
my_unsafe_hash:
unsafe_key: !unsafe 'unsafe value'
```
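Once marked, the value passes through the templating engine untouched. A quick way to confirm this, reusing the variable defined above (a sketch):
```
- name: Print the raw value without templating it
  ansible.builtin.debug:
    var: my_unsafe_variable
```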
YAML anchors and aliases: sharing variable values
-------------------------------------------------
[YAML anchors and aliases](https://yaml.org/spec/1.2/spec.html#id2765878) help you define, maintain, and use shared variable values in a flexible way. You define an anchor with `&`, then refer to it using an alias, denoted with `*`. Here’s an example that sets three values with an anchor, uses two of those values with an alias, and overrides the third value:
```
---
...
vars:
app1:
jvm: &jvm_opts
opts: '-Xms1G -Xmx2G'
port: 1000
path: /usr/lib/app1
app2:
jvm:
<<: *jvm_opts
path: /usr/lib/app2
...
```
Here, `app1` and `app2` share the values for `opts` and `port` through the anchor `&jvm_opts` and the alias `*jvm_opts`. The anchored mapping is pulled into `app2` with `<<`, the YAML [merge operator](https://yaml.org/type/merge.html), and the explicit `path` key then overrides the anchored value.
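After the merge, `app2` effectively resolves to the following values (an illustration of the result, not extra configuration):
```
app2:
  jvm:
    opts: '-Xms1G -Xmx2G'
    port: 1000
    path: /usr/lib/app2
```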
Anchors and aliases also let you share complex sets of variable values, including nested variables. If you have one variable value that includes another variable value, you can define them separately:
```
vars:
webapp_version: 1.0
webapp_custom_name: ToDo_App-1.0
```
This is inefficient and, at scale, means more maintenance. To incorporate the version value in the name, you can use an anchor on the `version` value and an alias in `custom_name`:
```
vars:
webapp:
version: &my_version 1.0
custom_name:
- "ToDo_App"
- *my_version
```
Now, you can re-use the value of `version` within the value of `custom_name` and use the output in a template:
```
---
- name: Using values nested inside dictionary
hosts: localhost
vars:
webapp:
version: &my_version 1.0
custom_name:
- "ToDo_App"
- *my_version
tasks:
- name: Using Anchor value
ansible.builtin.debug:
msg: My app is called "{{ webapp.custom_name | join('-') }}".
```
You’ve anchored the value of `version` with the `&my_version` anchor, and re-used it with the `*my_version` alias. Anchors and aliases let you access nested values inside dictionaries.
See also
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[Data manipulation](complex_data_manipulation)
Doing complex data manipulation in Ansible
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Ansible and BSD
===============
Managing BSD machines is different from managing other Unix-like machines. If you have managed nodes running BSD, review these topics.
* [Connecting to BSD nodes](#connecting-to-bsd-nodes)
* [Bootstrapping BSD](#bootstrapping-bsd)
* [Setting the Python interpreter](#setting-the-python-interpreter)
* [Which modules are available?](#which-modules-are-available)
* [Using BSD as the control node](#using-bsd-as-the-control-node)
* [BSD facts](#bsd-facts)
* [BSD efforts and contributions](#bsd-efforts-and-contributions)
Connecting to BSD nodes
-----------------------
Ansible connects to managed nodes using OpenSSH by default. This works on BSD if you use SSH keys for authentication. However, if you use SSH passwords for authentication, Ansible relies on sshpass. Most versions of sshpass do not deal well with BSD login prompts, so when using SSH passwords against BSD machines, use `paramiko` to connect instead of OpenSSH. You can do this in ansible.cfg globally or you can set it as an inventory/group/host variable. For example:
```
[freebsd]
mybsdhost1 ansible_connection=paramiko
```
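To set paramiko globally instead, a minimal ansible.cfg entry would be:
```
[defaults]
transport = paramiko
```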
Bootstrapping BSD
-----------------
Ansible is agentless by default; however, it requires Python on managed nodes. Only the [raw](../collections/ansible/builtin/raw_module#raw-module) module will operate without Python. Although this module can be used to bootstrap Ansible and install Python on BSD variants (see below), it is very limited, and Python is required to make full use of Ansible’s features.
The following example installs Python 2.7, which includes the json library required for full Ansible functionality. On your control machine you can execute the following for most versions of FreeBSD:
```
ansible -m raw -a "pkg install -y python27" mybsdhost1
```
Or for OpenBSD:
```
ansible -m raw -a "pkg_add python%3.7" mybsdhost1
```
Once this is done, you can use other Ansible modules apart from the `raw` module.
Note
This example demonstrates pkg on FreeBSD and pkg\_add on OpenBSD; however, you should be able to substitute the appropriate package tool for your BSD, and the package name may also differ. Refer to the package list or documentation of the BSD variant you are using for the exact Python package name you intend to install.
Setting the Python interpreter
------------------------------
To support a variety of Unix-like operating systems and distributions, Ansible cannot always rely on the existing environment or `env` variables to locate the correct Python binary. By default, modules point at `/usr/bin/python` as this is the most common location. On BSD variants, this path may differ, so it is advised to inform Ansible of the binary’s location, through the `ansible_python_interpreter` inventory variable. For example:
```
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python2.7
[openbsd:vars]
ansible_python_interpreter=/usr/local/bin/python3.7
```
If you use additional plugins beyond those bundled with Ansible, you can set similar variables for `bash`, `perl` or `ruby`, depending on how the plugin is written. For example:
```
[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python
ansible_perl_interpreter=/usr/bin/perl5
```
Which modules are available?
----------------------------
The majority of the core Ansible modules are written for a combination of Unix-like machines and other generic services, so most should function well on the BSDs with the obvious exception of those that are aimed at Linux-only technologies (such as LVG).
Using BSD as the control node
-----------------------------
Using BSD as the control machine is as simple as installing the Ansible package for your BSD variant or following the `pip` or ‘from source’ instructions.
BSD facts
---------
Ansible gathers facts from the BSDs in a similar manner to Linux machines, but since the data, names and structures can vary for network, disks and other devices, one should expect the output to be slightly different yet still familiar to a BSD administrator.
BSD efforts and contributions
-----------------------------
BSD support is important to us at Ansible. Even though the majority of our contributors use and target Linux we have an active BSD community and strive to be as BSD-friendly as possible. Please feel free to report any issues or incompatibilities you discover with BSD; pull requests with an included fix are also welcome!
See also
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of basic commands
[Working with playbooks](playbooks#working-with-playbooks)
Learning ansible’s configuration management language
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
How to write modules
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Using collections
=================
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. As modules move from the core Ansible repository into collections, the module documentation will move to the [collections pages](../collections/index#list-of-collections).
You can install and use collections through [Ansible Galaxy](https://galaxy.ansible.com).
* For details on how to *develop* collections see [Developing collections](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html#developing-collections).
* For the current development status of Collections and FAQ see [Ansible Collections Community Guide](https://github.com/ansible-collections/overview/blob/main/README.rst).
* [Installing collections](#installing-collections)
+ [Installing collections with `ansible-galaxy`](#installing-collections-with-ansible-galaxy)
+ [Installing an older version of a collection](#installing-an-older-version-of-a-collection)
+ [Installing a collection from a git repository](#installing-a-collection-from-a-git-repository)
+ [Install multiple collections with a requirements file](#install-multiple-collections-with-a-requirements-file)
+ [Downloading a collection for offline use](#downloading-a-collection-for-offline-use)
+ [Configuring the `ansible-galaxy` client](#configuring-the-ansible-galaxy-client)
* [Downloading collections](#downloading-collections)
* [Listing collections](#listing-collections)
* [Verifying collections](#verifying-collections)
+ [Verifying collections with `ansible-galaxy`](#verifying-collections-with-ansible-galaxy)
* [Using collections in a Playbook](#using-collections-in-a-playbook)
* [Simplifying module names with the `collections` keyword](#simplifying-module-names-with-the-collections-keyword)
+ [Using `collections` in roles](#using-collections-in-roles)
+ [Using `collections` in playbooks](#using-collections-in-playbooks)
* [Using a playbook from a collection](#using-a-playbook-from-a-collection)
Installing collections
----------------------
Note
If you install a collection manually as described in this paragraph, the collection will not be upgraded automatically when you upgrade the `ansible` package or `ansible-core`.
### Installing collections with `ansible-galaxy`
By default, `ansible-galaxy collection install` uses <https://galaxy.ansible.com> as the Galaxy server (as listed in the `ansible.cfg` file under [GALAXY\_SERVER](../reference_appendices/config#galaxy-server)). You do not need any further configuration.
See [Configuring the ansible-galaxy client](#galaxy-server-config) if you are using any other Galaxy server, such as Red Hat Automation Hub.
To install a collection hosted in Galaxy:
```
ansible-galaxy collection install my_namespace.my_collection
```
To upgrade a collection to the latest available version from the Galaxy server you can use the `--upgrade` option:
```
ansible-galaxy collection install my_namespace.my_collection --upgrade
```
You can also directly use the tarball from your build:
```
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
```
You can build and install a collection from a local source directory. The `ansible-galaxy` utility builds the collection using the `MANIFEST.json` or `galaxy.yml` metadata in the directory.
```
ansible-galaxy collection install /path/to/collection -p ./collections
```
You can also install multiple collections in a namespace directory.
```
ns/
├── collection1/
│ ├── MANIFEST.json
│ └── plugins/
└── collection2/
├── galaxy.yml
└── plugins/
```
```
ansible-galaxy collection install /path/to/ns -p ./collections
```
Note
The install command automatically appends the path `ansible_collections` to the one specified with the `-p` option unless the parent directory is already in a folder called `ansible_collections`.
When using the `-p` option to specify the install path, use one of the values configured in [COLLECTIONS\_PATHS](../reference_appendices/config#collections-paths), as this is where Ansible itself will expect to find collections. If you don’t specify a path, `ansible-galaxy collection install` installs the collection to the first path defined in [COLLECTIONS\_PATHS](../reference_appendices/config#collections-paths), which by default is `~/.ansible/collections`
You can also keep a collection adjacent to the current playbook, under a `collections/ansible_collections/` directory structure.
```
./
├── play.yml
├── collections/
│ └── ansible_collections/
│ └── my_namespace/
│ └── my_collection/<collection structure lives here>
```
See [Collection structure](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections_structure.html#collection-structure) for details on the collection directory structure.
### Installing an older version of a collection
You can only have one version of a collection installed at a time. By default `ansible-galaxy` installs the latest available version. If you want to install a specific version, you can add a version range identifier. For example, to install the 1.0.0-beta.1 version of the collection:
```
ansible-galaxy collection install my_namespace.my_collection:==1.0.0-beta.1
```
You can specify multiple range identifiers separated by `,`. Use single quotes so the shell passes the entire specifier, including `>`, `!`, and other operators, to ansible-galaxy unmodified. For example, to install the most recent version that is greater than or equal to 1.0.0 and less than 2.0.0:
```
ansible-galaxy collection install 'my_namespace.my_collection:>=1.0.0,<2.0.0'
```
Ansible will always install the most recent version that meets the range identifiers you specify. You can use the following range identifiers:
* `*`: The most recent version. This is the default.
* `!=`: Not equal to the version specified.
* `==`: Exactly the version specified.
* `>=`: Greater than or equal to the version specified.
* `>`: Greater than the version specified.
* `<=`: Less than or equal to the version specified.
* `<`: Less than the version specified.
Note
By default `ansible-galaxy` ignores pre-release versions. To install a pre-release version, you must use the `==` range identifier to require it explicitly.
### Installing a collection from a git repository
You can install a collection from a git repository instead of from Galaxy or Automation Hub. As a developer, installing from a git repository lets you review your collection before you create the tarball and publish the collection. As a user, installing from a git repository lets you use collections or versions that are not in Galaxy or Automation Hub yet.
The repository must contain a `galaxy.yml` or `MANIFEST.json` file. This file provides metadata such as the version number and namespace of the collection.
#### Installing a collection from a git repository at the command line
To install a collection from a git repository at the command line, use the URI of the repository instead of a collection name or path to a `tar.gz` file. Prefix the URI with `git+` (or with `git@` to use a private repository with ssh authentication). You can specify a branch, commit, or tag using the comma-separated [git commit-ish](https://git-scm.com/docs/gitglossary#def_commit-ish) syntax.
For example:
```
# Install a collection in a repository using the latest commit on the branch 'devel'
ansible-galaxy collection install git+https://github.com/organization/repo_name.git,devel
# Install a collection from a private github repository
ansible-galaxy collection install [email protected]:organization/repo_name.git
# Install a collection from a local git repository
ansible-galaxy collection install git+file:///home/user/path/to/repo_name.git
```
Warning
Embedding credentials into a git URI is not secure. Use safe authentication options to prevent your credentials from being exposed in logs or elsewhere.
* Use [SSH](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh) authentication
* Use [netrc](https://linux.die.net/man/5/netrc) authentication
* Use [http.extraHeader](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httpextraHeader) in your git configuration
* Use [url.<base>.pushInsteadOf](https://git-scm.com/docs/git-config#Documentation/git-config.txt-urlltbasegtpushInsteadOf) in your git configuration
#### Specifying the collection location within the git repository
When you install a collection from a git repository, Ansible uses the collection `galaxy.yml` or `MANIFEST.json` metadata file to build the collection. By default, Ansible searches two paths for collection `galaxy.yml` or `MANIFEST.json` metadata files:
* The top level of the repository.
* Each directory in the repository path (one level deep).
If a `galaxy.yml` or `MANIFEST.json` file exists in the top level of the repository, Ansible uses the collection metadata in that file to install an individual collection.
```
├── galaxy.yml
├── plugins/
│ ├── lookup/
│ ├── modules/
│ └── module_utils/
└─── README.md
```
If a `galaxy.yml` or `MANIFEST.json` file exists in one or more directories in the repository path (one level deep), Ansible installs each directory with a metadata file as a collection. For example, Ansible installs both collection1 and collection2 from this repository structure by default:
```
├── collection1
│ ├── docs/
│ ├── galaxy.yml
│ └── plugins/
│ ├── inventory/
│ └── modules/
└── collection2
├── docs/
├── galaxy.yml
├── plugins/
| ├── filter/
| └── modules/
└── roles/
```
If you have a different repository structure or only want to install a subset of collections, you can add a fragment to the end of your URI (before the optional comma-separated version) to indicate the location of the metadata file or files. The path should be a directory, not the metadata file itself. For example, to install only collection2 from the example repository with two collections:
```
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/collection2/
```
In some repositories, the main directory corresponds to the namespace:
```
namespace/
├── collectionA/
| ├── docs/
| ├── galaxy.yml
| ├── plugins/
| │ ├── README.md
| │ └── modules/
| ├── README.md
| └── roles/
└── collectionB/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── connection/
│ └── modules/
├── README.md
└── roles/
```
You can install all collections in this repository, or install one collection from a specific commit:
```
# Install all collections in the namespace
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/
# Install an individual collection using a specific commit
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/collectionA/,7b60ddc245bc416b72d8ea6ed7b799885110f5e5
```
### Install multiple collections with a requirements file
You can set up a `requirements.yml` file to install multiple collections in one command. This file is a YAML file in the format:
```
---
collections:
# With just the collection name
- my_namespace.my_collection
# With the collection name, version, and source options
- name: my_namespace.my_other_collection
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
```
You can specify four keys for each collection entry:
* `name`
* `version`
* `source`
* `type`
The `version` key uses the same range identifier format documented in [Installing an older version of a collection](#collections-older-version).
The `type` key can be set to `galaxy`, `url`, `file`, and `git`. If `type` is omitted, the `name` key is used to implicitly determine the source of the collection.
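For example, the `file` and `url` types point directly at a collection tarball. A sketch with hypothetical paths:
```
collections:
  # Install from a local tarball
  - name: /tmp/my_namespace-my_collection-1.0.0.tar.gz
    type: file
  # Install from a tarball hosted on a web server
  - name: https://example.com/my_namespace-my_collection-1.0.0.tar.gz
    type: url
```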
When you install a collection with `type: git`, the `version` key can refer to a branch or to a [git commit-ish](https://git-scm.com/docs/gitglossary#def_commit-ish) object (commit or tag). For example:
```
collections:
- name: https://github.com/organization/repo_name.git
type: git
version: devel
```
You can also add roles to a `requirements.yml` file, under the `roles` key. The values follow the same format as a requirements file used in older Ansible releases.
```
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.6
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.3
source: https://galaxy.ansible.com
```
To install both roles and collections at the same time with one command, run the following:
```
$ ansible-galaxy install -r requirements.yml
```
Running `ansible-galaxy collection install -r` or `ansible-galaxy role install -r` will install only collections or only roles, respectively.
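To process only one of the two content types from the same file:
```
ansible-galaxy collection install -r requirements.yml
ansible-galaxy role install -r requirements.yml
```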
Note
Installing both roles and collections from the same requirements file will not work when specifying a custom collection or role install path. In this scenario the collections will be skipped and the command will process each like `ansible-galaxy role install` would.
### Downloading a collection for offline use
To download the collection tarball from Galaxy for offline use:
1. Navigate to the collection page.
2. Click on Download tarball.
You may also need to manually download any dependent collections.
### Configuring the `ansible-galaxy` client
By default, `ansible-galaxy` uses <https://galaxy.ansible.com> as the Galaxy server (as listed in the `ansible.cfg` file under [GALAXY\_SERVER](../reference_appendices/config#galaxy-server)).
You can use either option below to configure `ansible-galaxy collection` to use other servers (such as Red Hat Automation Hub or a custom Galaxy server):
* Set the server list in the [GALAXY\_SERVER\_LIST](../reference_appendices/config#galaxy-server-list) configuration option in [The configuration file](../reference_appendices/config#ansible-configuration-settings-locations).
* Use the `--server` command line argument to limit to an individual server.
To configure a Galaxy server list in `ansible.cfg`:
1. Add the `server_list` option under the `[galaxy]` section to one or more server names.
2. Create a new section for each server name.
3. Set the `url` option for each server name.
4. Optionally, set the API token for each server name. Go to <https://galaxy.ansible.com/me/preferences> and click Show API key.
Note
The `url` option for each server name must end with a forward slash `/`. If you do not set the API token in your Galaxy server list, use the `--api-key` argument to pass in the token to the `ansible-galaxy collection publish` command.
For Automation Hub, you additionally need to:
1. Set the `auth_url` option for each server name.
2. Set the API token for each server name. Go to <https://cloud.redhat.com/ansible/automation-hub/token/> and click Get API token from the version dropdown to copy your API token.
The following example shows how to configure multiple servers:
```
[galaxy]
server_list = automation_hub, my_org_hub, release_galaxy, test_galaxy
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
[galaxy_server.my_org_hub]
url=https://automation.my_org/
username=my_user
password=my_pass
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=my_token
[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/
token=my_test_token
```
Note
You can use the `--server` command line argument to select an explicit Galaxy server in the `server_list` and the value of this argument should match the name of the server. To use a server not in the server list, set the value to the URL to access that server (all servers in the server list will be ignored). Also, you cannot use the `--api-key` argument for any of the predefined servers; you can only use `--api-key` if you did not define a server list or if you specify a URL in the `--server` argument.
**Galaxy server list configuration options**
The [GALAXY\_SERVER\_LIST](../reference_appendices/config#galaxy-server-list) option is a list of server identifiers in a prioritized order. When searching for a collection, the install process will search in that order, for example, `automation_hub` first, then `my_org_hub`, `release_galaxy`, and finally `test_galaxy` until the collection is found. The actual Galaxy instance is then defined under the section `[galaxy_server.{{ id }}]` where `{{ id }}` is the server identifier defined in the list. This section can then define the following keys:
* `url`: The URL of the Galaxy instance to connect to. Required.
* `token`: An API token key to use for authentication against the Galaxy instance. Mutually exclusive with `username`.
* `username`: The username to use for basic authentication against the Galaxy instance. Mutually exclusive with `token`.
* `password`: The password to use, in conjunction with `username`, for basic authentication.
* `auth_url`: The URL of a Keycloak server ‘token\_endpoint’ if using SSO authentication (for example, Automation Hub). Mutually exclusive with `username`. Requires `token`.
As well as defining these server options in the `ansible.cfg` file, you can also define them as environment variables. The environment variable is in the form `ANSIBLE_GALAXY_SERVER_{{ id }}_{{ key }}` where `{{ id }}` is the upper case form of the server identifier and `{{ key }}` is the key to define. For example I can define `token` for `release_galaxy` by setting `ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token`.
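For example, the token could be exported before publishing to that server (a sketch; the tarball name is hypothetical):
```
export ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token
ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz --server release_galaxy
```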
For operations that use only one Galaxy server (for example, the `publish`, `info`, or `install` commands), the `ansible-galaxy collection` command uses the first entry in the `server_list`, unless you pass in an explicit server with the `--server` argument.
Note
Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent collection. The install process will not search for a collection requirement in a different Galaxy instance.
Downloading collections
-----------------------
To download a collection and its dependencies for an offline install, run `ansible-galaxy collection download`. This downloads the collections specified and their dependencies to the specified folder and creates a `requirements.yml` file which can be used to install those collections on a host without access to a Galaxy server. All the collections are downloaded by default to the `./collections` folder.
Just like the `install` command, the collections are sourced based on the [configured galaxy server config](#galaxy-server-config). Even if a collection to download was specified by a URL or path to a tarball, the collection will be redownloaded from the configured Galaxy server.
Collections can be specified as one or multiple collections or with a `requirements.yml` file just like `ansible-galaxy collection install`.
To download a single collection and its dependencies:
```
ansible-galaxy collection download my_namespace.my_collection
```
To download a single collection at a specific version:
```
ansible-galaxy collection download my_namespace.my_collection:1.0.0
```
To download multiple collections, either specify them as command line arguments as shown above or use a requirements file in the format documented in [Install multiple collections with a requirements file](#collection-requirements-file).
```
ansible-galaxy collection download -r requirements.yml
```
You can also download a source collection directory. The collection is built with the mandatory `galaxy.yml` file.
```
ansible-galaxy collection download /path/to/collection
ansible-galaxy collection download git+file:///path/to/collection/.git
```
You can download multiple source collections from a single namespace by providing the path to the namespace.
```
ns/
├── collection1/
│ ├── galaxy.yml
│ └── plugins/
└── collection2/
├── galaxy.yml
└── plugins/
```
```
ansible-galaxy collection download /path/to/ns
```
All the collections are downloaded by default to the `./collections` folder but you can use `-p` or `--download-path` to specify another path:
```
ansible-galaxy collection download my_namespace.my_collection -p ~/offline-collections
```
Once you have downloaded the collections, the folder contains the collections specified, their dependencies, and a `requirements.yml` file. You can use this folder as is with `ansible-galaxy collection install` to install the collections on a host without access to a Galaxy or Automation Hub server.
```
# This must be run from the folder that contains the offline collections and requirements.yml file downloaded
# by the internet-connected host
cd ~/offline-collections
ansible-galaxy collection install -r requirements.yml
```
Listing collections
-------------------
To list installed collections, run `ansible-galaxy collection list`. This shows all of the installed collections found in the configured collections search paths. It will also show collections under development which contain a galaxy.yml file instead of a MANIFEST.json. The path where each collection is located is displayed, as well as version information. If no version information is available, a `*` is displayed for the version number.
```
# /home/astark/.ansible/collections/ansible_collections
Collection Version
-------------------------- -------
cisco.aci 0.0.5
cisco.mso 0.0.4
sandwiches.ham *
splunk.es 0.0.5
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------- -------
fortinet.fortios 1.0.6
pureport.pureport 0.0.8
sensu.sensu_go 1.3.0
```
Run with `-vvv` to display more detailed information.
To list a specific collection, pass a valid fully qualified collection name (FQCN) to the command `ansible-galaxy collection list`. All instances of the collection will be listed.
```
> ansible-galaxy collection list fortinet.fortios
# /home/astark/.ansible/collections/ansible_collections
Collection Version
---------------- -------
fortinet.fortios 1.0.1
# /usr/share/ansible/collections/ansible_collections
Collection Version
---------------- -------
fortinet.fortios 1.0.6
```
To search other paths for collections, use the `-p` option. Specify multiple search paths by separating them with a `:`. The list of paths specified on the command line will be added to the beginning of the configured collections search paths.
```
> ansible-galaxy collection list -p '/opt/ansible/collections:/etc/ansible/collections'
# /opt/ansible/collections/ansible_collections
Collection Version
--------------- -------
sandwiches.club 1.7.2
# /etc/ansible/collections/ansible_collections
Collection Version
-------------- -------
sandwiches.pbj 1.2.0
# /home/astark/.ansible/collections/ansible_collections
Collection Version
-------------------------- -------
cisco.aci 0.0.5
cisco.mso 0.0.4
fortinet.fortios 1.0.1
sandwiches.ham *
splunk.es 0.0.5
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------- -------
fortinet.fortios 1.0.6
pureport.pureport 0.0.8
sensu.sensu_go 1.3.0
```
Verifying collections
---------------------
### Verifying collections with `ansible-galaxy`
Once installed, you can verify that the content of the installed collection matches the content of the collection on the server. This feature expects that the collection is installed in one of the configured collection paths and that the collection exists on one of the configured galaxy servers.
```
ansible-galaxy collection verify my_namespace.my_collection
```
The output of the `ansible-galaxy collection verify` command is quiet if it is successful. If a collection has been modified, the altered files are listed under the collection name.
```
ansible-galaxy collection verify my_namespace.my_collection
Collection my_namespace.my_collection contains modified content in the following files:
my_namespace.my_collection
plugins/inventory/my_inventory.py
plugins/modules/my_module.py
```
You can use the `-vvv` flag to display additional information, such as the version and path of the installed collection, the URL of the remote collection used for validation, and successful verification output.
```
ansible-galaxy collection verify my_namespace.my_collection -vvv
...
Verifying 'my_namespace.my_collection:1.0.0'.
Installed collection found at '/path/to/ansible_collections/my_namespace/my_collection/'
Remote collection found at 'https://galaxy.ansible.com/download/my_namespace-my_collection-1.0.0.tar.gz'
Successfully verified that checksums for 'my_namespace.my_collection:1.0.0' match the remote collection
```
If you have a pre-release or non-latest version of a collection installed, you should include the specific version to verify. If the version is omitted, the installed collection is verified against the latest version available on the server.
```
ansible-galaxy collection verify my_namespace.my_collection:1.0.0
```
In addition to the `namespace.collection_name:version` format, you can provide the collections to verify in a `requirements.yml` file. Dependencies listed in `requirements.yml` are not included in the verify process and should be verified separately.
```
ansible-galaxy collection verify -r requirements.yml
```
Verifying against `tar.gz` files is not supported. If your `requirements.yml` contains paths to tar files or URLs for installation, you can use the `--ignore-errors` flag to ensure that all collections using the `namespace.name` format in the file are processed.
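For example:
```
ansible-galaxy collection verify -r requirements.yml --ignore-errors
```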
Using collections in a Playbook
-------------------------------
Once installed, you can reference a collection content by its fully qualified collection name (FQCN):
```
- hosts: all
tasks:
- my_namespace.my_collection.mymodule:
option1: value
```
This works for roles or any type of plugin distributed within the collection:
```
- hosts: all
tasks:
- import_role:
name: my_namespace.my_collection.role1
- my_namespace.my_collection.mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1")| my_namespace.my_collection.filter1 }}'
```
Simplifying module names with the `collections` keyword
-------------------------------------------------------
The `collections` keyword lets you define a list of collections that your role or playbook should search for unqualified module and action names. So you can use the `collections` keyword, then simply refer to modules and action plugins by their short-form names throughout that role or playbook.
Warning
If your playbook uses both the `collections` keyword and one or more roles, the roles do not inherit the collections set by the playbook. This is one of the reasons we recommend you always use FQCN. See below for roles details.
### Using `collections` in roles
Within a role, you can control which collections Ansible searches for the tasks inside the role using the `collections` keyword in the role’s `meta/main.yml`. Ansible will use the collections list defined inside the role even if the playbook that calls the role defines different collections in a separate `collections` keyword entry. Roles defined inside a collection always implicitly search their own collection first, so you don’t need to use the `collections` keyword to access modules, actions, or other roles contained in the same collection.
```
# myrole/meta/main.yml
collections:
- my_namespace.first_collection
- my_namespace.second_collection
- other_namespace.other_collection
```
### Using `collections` in playbooks
In a playbook, you can control the collections Ansible searches for modules and action plugins to execute. However, any roles you call in your playbook define their own collections search order; they do not inherit the calling playbook’s settings. This is true even if the role does not define its own `collections` keyword.
```
- hosts: all
collections:
- my_namespace.my_collection
tasks:
- import_role:
name: role1
- mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1")| my_namespace.my_collection.filter1 }}'
```
The `collections` keyword merely creates an ordered ‘search path’ for non-namespaced plugin and role references. It does not install content or otherwise change Ansible’s behavior around the loading of plugins or roles. Note that an FQCN is still required for plugins other than modules and action plugins (for example, lookups, filters, and tests).
Using a playbook from a collection
----------------------------------
New in version 2.11.
You can also distribute playbooks in your collection and invoke them using the same semantics you use for plugins:
```
ansible-playbook my_namespace.my_collection.playbook1 -i ./myinventory
```
From inside a playbook:
```
- import_playbook: my_namespace.my_collection.playbookX
```
A few recommendations when creating such playbooks: `hosts:` should be generic, or at least driven by a variable input.
```
- hosts: all # Use --limit or customized inventory to restrict hosts targeted
- hosts: localhost # For things you want to restrict to the controller
- hosts: '{{target|default("webservers")}}' # Assumes inventory provides a 'webservers' group, but can also use ``-e 'target=host1,host2'``
```
This will have an implied entry in the `collections:` keyword of `my_namespace.my_collection` just as with roles.
See also
[Developing collections](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html#developing-collections)
Develop or modify a collection.
[Collection Galaxy metadata structure](https://docs.ansible.com/ansible/latest/dev_guide/collections_galaxy_meta.html#collections-galaxy-meta)
Understand the collections metadata structure.
[Mailing List](https://groups.google.com/group/ansible-devel)
The development mailing list
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Ansible concepts
================
These concepts are common to all uses of Ansible. You need to understand them to use Ansible for any kind of automation. This basic introduction provides the background you need to follow the rest of the User Guide.
* [Control node](#control-node)
* [Managed nodes](#managed-nodes)
* [Inventory](#inventory)
* [Collections](#collections)
* [Modules](#modules)
* [Tasks](#tasks)
* [Playbooks](#playbooks)
Control node
------------
Any machine with Ansible installed. You can run Ansible commands and playbooks by invoking the `ansible` or `ansible-playbook` command from any control node. You can use any computer that has a Python installation as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Managed nodes
-------------
The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.
Inventory
---------
A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can specify information like IP address for each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. To learn more about inventory, see [the Working with Inventory](intro_inventory#intro-inventory) section.
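A minimal INI-style inventory might look like this (hostnames are placeholders):
```
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
```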
Collections
-----------
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. You can install and use collections through [Ansible Galaxy](https://galaxy.ansible.com). To learn more about collections, see [Using collections](collections_using#collections).
Modules
-------
The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. Starting in Ansible 2.10, modules are grouped in collections. For an idea of how many collections Ansible includes, take a look at the [Collection Index](../collections/index#list-of-collections).
Tasks
-----
The units of action in Ansible. You can execute a single task once with an ad hoc command.
Playbooks
---------
Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand. To learn more about playbooks, see [Intro to playbooks](playbooks_intro#about-playbooks).
Python3 in templates
====================
Ansible uses Jinja2 to leverage Python data types and standard functions in templates and variables. You can use these data types and standard functions to perform a rich set of operations on your data. However, if you use templates, you must be aware of differences between Python versions.
These topics help you design templates that work on both Python2 and Python3. They might also help if you are upgrading from Python2 to Python3. Upgrading within Python2 or Python3 does not usually introduce changes that affect Jinja2 templates.
Dictionary views
----------------
In Python2, the [`dict.keys()`](https://docs.python.org/3/library/stdtypes.html#dict.keys "(in Python v3.10)"), [`dict.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values "(in Python v3.10)"), and [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items "(in Python v3.10)") methods return a list. Jinja2 returns that to Ansible via a string representation that Ansible can turn back into a list.
In Python3, those methods return a [dictionary view](https://docs.python.org/3/library/stdtypes.html#dict-views "(in Python v3.10)") object. The string representation that Jinja2 returns for dictionary views cannot be parsed back into a list by Ansible. It is, however, easy to make this portable by using the `list` filter whenever using [`dict.keys()`](https://docs.python.org/3/library/stdtypes.html#dict.keys "(in Python v3.10)"), [`dict.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values "(in Python v3.10)"), or [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items "(in Python v3.10)"):
```
vars:
hosts:
testhost1: 127.0.0.2
testhost2: 127.0.0.3
tasks:
- debug:
msg: '{{ item }}'
# Only works with Python 2
#loop: "{{ hosts.keys() }}"
# Works with both Python 2 and Python 3
loop: "{{ hosts.keys() | list }}"
```
dict.iteritems()
----------------
Python2 dictionaries have [`iterkeys()`](https://docs.python.org/2/library/stdtypes.html#dict.iterkeys "(in Python v2.7)"), [`itervalues()`](https://docs.python.org/2/library/stdtypes.html#dict.itervalues "(in Python v2.7)"), and [`iteritems()`](https://docs.python.org/2/library/stdtypes.html#dict.iteritems "(in Python v2.7)") methods.
Python3 dictionaries do not have these methods. Use [`dict.keys()`](https://docs.python.org/3/library/stdtypes.html#dict.keys "(in Python v3.10)"), [`dict.values()`](https://docs.python.org/3/library/stdtypes.html#dict.values "(in Python v3.10)"), and [`dict.items()`](https://docs.python.org/3/library/stdtypes.html#dict.items "(in Python v3.10)") to make your playbooks and templates compatible with both Python2 and Python3:
```
vars:
hosts:
testhost1: 127.0.0.2
testhost2: 127.0.0.3
tasks:
- debug:
msg: '{{ item }}'
# Only works with Python 2
#loop: "{{ hosts.iteritems() }}"
# Works with both Python 2 and Python 3
loop: "{{ hosts.items() | list }}"
```
See also
* The [Dictionary views](#pb-py-compat-dict-views) entry for information on why the `list filter` is necessary here.
Understanding privilege escalation: become
==========================================
Ansible uses existing privilege escalation systems to execute tasks with root privileges or with another user’s permissions. Because this feature allows you to ‘become’ another user, different from the user that logged into the machine (remote user), we call it `become`. The `become` keyword leverages existing privilege escalation tools like `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
* [Using become](#using-become)
+ [Become directives](#become-directives)
+ [Become connection variables](#become-connection-variables)
+ [Become command-line options](#become-command-line-options)
* [Risks and limitations of become](#risks-and-limitations-of-become)
+ [Risks of becoming an unprivileged user](#risks-of-becoming-an-unprivileged-user)
+ [Not supported by all connection plugins](#not-supported-by-all-connection-plugins)
+ [Only one method may be enabled per host](#only-one-method-may-be-enabled-per-host)
+ [Privilege escalation must be general](#privilege-escalation-must-be-general)
+ [May not access environment variables populated by pamd\_systemd](#may-not-access-environment-variables-populated-by-pamd-systemd)
* [Become and network automation](#become-and-network-automation)
+ [Setting enable mode for all tasks](#setting-enable-mode-for-all-tasks)
- [Passwords for enable mode](#passwords-for-enable-mode)
+ [authorize and auth\_pass](#authorize-and-auth-pass)
* [Become and Windows](#become-and-windows)
+ [Administrative rights](#administrative-rights)
+ [Local service accounts](#local-service-accounts)
+ [Become without setting a password](#become-without-setting-a-password)
+ [Accounts without a password](#accounts-without-a-password)
+ [Become flags for Windows](#become-flags-for-windows)
+ [Limitations of become on Windows](#limitations-of-become-on-windows)
Using become
------------
You can control the use of `become` with play or task directives, connection variables, or at the command line. If you set privilege escalation properties in multiple ways, review the [general precedence rules](../reference_appendices/general_precedence#general-precedence-rules) to understand which settings will be used.
A full list of all become plugins that are included in Ansible can be found in the [Plugin List](../plugins/become#become-plugin-list).
### Become directives
You can set the directives that control `become` at the play or task level. You can override these by setting connection variables, which often differ from one host to another. These variables and directives are independent. For example, setting `become_user` does not set `become`.
become
set to `yes` to activate privilege escalation.
become\_user
set to user with desired privileges — the user you `become`, NOT the user you log in as. Does NOT imply `become: yes`, to allow it to be set at host level. Default value is `root`.
become\_method
(at play or task level) overrides the default method set in ansible.cfg, set to use any of the [Become Plugins](../plugins/become#become-plugins).
become\_flags
(at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to nologin. Added in Ansible 2.2.
For example, to manage a system service (which requires `root` privileges) when connected as a non-`root` user, you can use the default value of `become_user` (`root`):
```
- name: Ensure the httpd service is running
service:
name: httpd
state: started
become: yes
```
To run a command as the `apache` user:
```
- name: Run a command as the apache user
command: somecommand
become: yes
become_user: apache
```
To do something as the `nobody` user when the shell is nologin:
```
- name: Run a command as nobody
command: somecommand
become: yes
become_method: su
become_user: nobody
become_flags: '-s /bin/sh'
```
To specify a password for sudo, run `ansible-playbook` with `--ask-become-pass` (`-K` for short). If you run a playbook utilizing `become` and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with `CTRL-c`, then execute the playbook with `-K` and the appropriate password.
### Become connection variables
You can define different `become` options for each managed node or group. You can define these variables in inventory or use them as normal variables.
ansible\_become
overrides the `become` directive, decides if privilege escalation is used or not.
ansible\_become\_method
which privilege escalation method should be used
ansible\_become\_user
set the user you become through privilege escalation; does not imply `ansible_become: yes`
ansible\_become\_password
set the privilege escalation password. See [Using encrypted variables and files](vault#playbooks-vault) for details on how to avoid having secrets in plain text
ansible\_common\_remote\_group
determines if Ansible should try to `chgrp` its temporary files to a group if `setfacl` and `chown` both fail. See [Risks of becoming an unprivileged user](#risks-of-becoming-an-unprivileged-user) for more information. Added in version 2.10.
For example, if you want to run all tasks as `root` on a server named `webserver`, but you can only connect as the `manager` user, you could use an inventory entry like this:
```
webserver ansible_user=manager ansible_become=yes
```
Note
The variables defined above are generic for all become plugins but plugin specific ones can also be set instead. Please see the documentation for each plugin for a list of all options the plugin has and how they can be defined. A full list of become plugins in Ansible can be found at [Become Plugins](../plugins/become#become-plugins).
### Become command-line options
`--ask-become-pass, -K`
ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts.
`--become, -b`
run operations with become (no password implied)
`--become-method=BECOME_METHOD`
privilege escalation method to use (default=sudo), valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ]
`--become-user=BECOME_USER`
run operations as this user (default=root), does not imply --become/-b
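For example, to run a playbook with privilege escalation while prompting for the escalation password:
```
ansible-playbook site.yml --become --ask-become-pass
```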
Risks and limitations of become
-------------------------------
Although privilege escalation is mostly intuitive, there are a few limitations on how it works. Users should be aware of these to avoid surprises.
### Risks of becoming an unprivileged user
Ansible modules are executed on the remote machine by first substituting the parameters into the module file, then copying the file to the remote machine, and finally executing it there.
Everything is fine if the module file is executed without using `become`, when the `become_user` is root, or when the connection to the remote machine is made as root. In these cases Ansible creates the module file with permissions that only allow reading by the user and root, or only allow reading by the unprivileged user being switched to.
However, when both the connection user and the `become_user` are unprivileged, the module file is written as the user that Ansible connects as (the `remote_user`), but the file needs to be readable by the user Ansible is set to `become`. The details of how Ansible solves this can vary based on platform. However, on POSIX systems, Ansible solves this problem in the following way:
First, if **setfacl** is installed and available in the remote `PATH`, and the temporary directory on the remote host is mounted with POSIX.1e filesystem ACL support, Ansible will use POSIX ACLs to share the module file with the second unprivileged user.
Next, if POSIX ACLs are **not** available or **setfacl** could not be run, Ansible will attempt to change ownership of the module file using **chown** for systems which support doing so as an unprivileged user.
New in Ansible 2.11: at this point, Ansible will try **chmod +a**, which is a macOS-specific way of setting ACLs on files.
New in Ansible 2.10, if all of the above fails, Ansible will then check the value of the configuration setting `ansible_common_remote_group`. Many systems will allow a given user to change the group ownership of a file to a group the user is in. As a result, if the second unprivileged user (the `become_user`) has a UNIX group in common with the user Ansible is connected as (the `remote_user`), and if `ansible_common_remote_group` is defined to be that group, Ansible can try to change the group ownership of the module file to that group by using **chgrp**, thereby likely making it readable to the `become_user`.
At this point, if `ansible_common_remote_group` was defined and a **chgrp** was attempted and returned successfully, Ansible assumes (but, importantly, does not check) that the new group ownership is enough and does not fall back further. That is, Ansible **does not check** that the `become_user` does in fact share a group with the `remote_user`; so long as the command exits successfully, Ansible considers the result successful and does not proceed to check `allow_world_readable_tmpfiles` per below.
If `ansible_common_remote_group` is **not** set and the **chown** above failed, or if `ansible_common_remote_group` *is* set but the **chgrp** (or following group-permissions **chmod**) returned a non-successful exit code, Ansible will lastly check the value of `allow_world_readable_tmpfiles`. If this is set, Ansible will place the module file in a world-readable temporary directory, with world-readable permissions, to allow the `become_user` (and incidentally any other user on the system) to read the contents of the file. **If any of the parameters passed to the module are sensitive in nature, and you do not trust the remote machines, then this is a potential security risk.**
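To illustrate the **chgrp** fallback, here is a sketch of enabling it in inventory; `deployers` is a hypothetical group that both the `remote_user` and the `become_user` would need to share:
```
# group_vars/all.yml -- a sketch; 'deployers' is a hypothetical shared group
ansible_common_remote_group: deployers
```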
Once the module is done executing, Ansible deletes the temporary file.
Several ways exist to avoid the above logic flow entirely:
* Use `pipelining`. When pipelining is enabled, Ansible does not save the module to a temporary file on the client. Instead it pipes the module to the remote Python interpreter’s stdin. Pipelining does not work for Python modules involving file transfer (for example: [copy](../collections/ansible/builtin/copy_module#copy-module), [fetch](../collections/ansible/builtin/fetch_module#fetch-module), [template](../collections/ansible/builtin/template_module#template-module)), or for non-Python modules. A configuration sketch follows this list.
* Avoid becoming an unprivileged user. Temporary files are protected by UNIX file permissions when you `become` root or do not use `become`. In Ansible 2.1 and above, UNIX file permissions are also secure if you make the connection to the managed machine as root and then use `become` to access an unprivileged account.
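As a sketch, pipelining (the first option above) can be enabled in `ansible.cfg`:
```
# ansible.cfg -- a minimal sketch; pipelining avoids the temporary file entirely
[ssh_connection]
pipelining = True
```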
Warning
Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs are not POSIX.1e filesystem acls (they are NFSv4 ACLs instead). Ansible cannot use these ACLs to manage its temp file permissions so you may have to resort to `allow_world_readable_tmpfiles` if the remote machines use ZFS.
Changed in version 2.1.
Ansible makes it hard to unknowingly use `become` insecurely. Starting in Ansible 2.1, Ansible defaults to issuing an error if it cannot execute securely with `become`. If you cannot use pipelining or POSIX ACLs, must connect as an unprivileged user, must use `become` to execute as a different unprivileged user, and decide that your managed nodes are secure enough for the modules you want to run there to be world readable, you can turn on `allow_world_readable_tmpfiles` in the `ansible.cfg` file. Setting `allow_world_readable_tmpfiles` will change this from an error into a warning and allow the task to run as it did prior to 2.1.
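If you decide to accept that trade-off, a minimal `ansible.cfg` sketch would be:
```
# ansible.cfg -- only set this if you have decided the risk is acceptable
[defaults]
allow_world_readable_tmpfiles = True
```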
Changed in version 2.10.
Ansible 2.10 introduces the above-mentioned `ansible_common_remote_group` fallback. As mentioned above, if enabled, it is used when `remote_user` and `become_user` are both unprivileged users. Refer to the text above for details on when this fallback happens.
Warning
As mentioned above, if `ansible_common_remote_group` and `allow_world_readable_tmpfiles` are both enabled, it is unlikely that the world-readable fallback will ever trigger, and yet Ansible might still be unable to access the module file. This is because after the group ownership change is successful, Ansible does not fall back any further, and also does not do any check to ensure that the `become_user` is actually a member of the “common group”. This is a design decision motivated by the fact that such a check would require another round-trip connection to the remote machine, which is a time-expensive operation. Ansible does, however, emit a warning in this case.
### Not supported by all connection plugins
Privilege escalation methods must also be supported by the connection plugin used. Most connection plugins will warn if they do not support become. Some will just ignore it as they always run as root (jail, chroot, and so on).
### Only one method may be enabled per host
Methods cannot be chained. You cannot use `sudo /bin/su -` to become a user, you need to have privileges to run the command as that user in sudo or be able to su directly to it (the same for pbrun, pfexec or other supported methods).
### Privilege escalation must be general
You cannot limit privilege escalation permissions to certain commands. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have `/sbin/service` or `/bin/chmod` as the allowed commands, this will fail with Ansible, as those paths won’t match the temporary file that Ansible creates to run the module. If you have security rules that constrain your sudo/pbrun/doas environment to running specific command paths only, use Ansible from a special account that does not have this constraint, or use AWX or the [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform) to manage indirect access to SSH credentials.
### May not access environment variables populated by pam\_systemd
For most Linux distributions using `systemd` as their init, the default methods used by `become` do not open a new “session”, in the sense of systemd. Because the `pam_systemd` module will not fully initialize a new session, you might have surprises compared to a normal session opened through ssh: some environment variables set by `pam_systemd`, most notably `XDG_RUNTIME_DIR`, are not populated for the new user and are instead inherited or simply empty.
This might cause trouble when trying to invoke systemd commands that depend on `XDG_RUNTIME_DIR` to access the bus:
```
$ echo $XDG_RUNTIME_DIR
$ systemctl --user status
Failed to connect to bus: Permission denied
```
To force `become` to open a new systemd session that goes through `pam_systemd`, you can use `become_method: machinectl`.
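For example, a minimal sketch (the user `appuser` is hypothetical):
```
- hosts: all
  become: yes
  become_user: appuser          # hypothetical unprivileged user
  become_method: machinectl     # opens a full systemd session through pam_systemd
  tasks:
    - name: Verify that the user-level systemd bus is reachable
      ansible.builtin.command: systemctl --user status
      changed_when: false
```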
For more information, see [this systemd issue](https://github.com/systemd/systemd/issues/825#issuecomment-127917622).
Become and network automation
-----------------------------
As of version 2.6, Ansible supports `become` for privilege escalation (entering `enable` mode or privileged EXEC mode) on all Ansible-maintained network platforms that support `enable` mode. Using `become` replaces the `authorize` and `auth_pass` options in a `provider` dictionary.
You must set the connection type to either `connection: ansible.netcommon.network_cli` or `connection: ansible.netcommon.httpapi` to use `become` for privilege escalation on network devices. Check the [Platform Options](../network/user_guide/platform_index#platform-options) documentation for details.
You can use escalated privileges on only the specific tasks that need them, on an entire play, or on all plays. Adding `become: yes` and `become_method: enable` instructs Ansible to enter `enable` mode before executing the task, play, or playbook where those parameters are set.
If you see this error message, the task that generated it requires `enable` mode to succeed:
```
Invalid input (privileged mode required)
```
To set `enable` mode for a specific task, add `become` at the task level:
```
- name: Gather facts (eos)
arista.eos.eos_facts:
gather_subset:
- "!hardware"
become: yes
become_method: enable
```
To set `enable` mode for all tasks in a single play, add `become` at the play level:
```
- hosts: eos-switches
become: yes
become_method: enable
tasks:
- name: Gather facts (eos)
arista.eos.eos_facts:
gather_subset:
- "!hardware"
```
### Setting enable mode for all tasks
If you want all tasks in all plays to run in `enable` mode, the best way is to use `group_vars`:
**group\_vars/eos.yml**
```
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_user: myuser
ansible_become: yes
ansible_become_method: enable
```
#### Passwords for enable mode
If you need a password to enter `enable` mode, you can specify it in one of two ways:
* providing the [`--ask-become-pass`](../cli/ansible-playbook#cmdoption-ansible-playbook-K) command line option
* setting the `ansible_become_password` connection variable
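For example, a sketch that keeps the actual secret in a vaulted variable (the variable name `vault_enable_password` is hypothetical):
```
# group_vars/eos.yml -- a sketch; store the real value with Ansible Vault
ansible_become: yes
ansible_become_method: enable
ansible_become_password: "{{ vault_enable_password }}"
```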
Warning
As a reminder, passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see [Encrypting content with Ansible Vault](vault#vault).
### authorize and auth\_pass
Ansible still supports `enable` mode with `connection: local` for legacy network playbooks. To enter `enable` mode with `connection: local`, use the module options `authorize` and `auth_pass`:
```
- hosts: eos-switches
ansible_connection: local
tasks:
- name: Gather facts (eos)
eos_facts:
gather_subset:
- "!hardware"
provider:
authorize: yes
        auth_pass: "{{ secret_auth_pass }}"
```
We recommend updating your playbooks to use `become` for network-device `enable` mode consistently. The use of `authorize` and `provider` dictionaries will be deprecated in the future. Check the [Platform Options](../network/user_guide/platform_index#platform-options) and [Network modules](https://docs.ansible.com/ansible/2.9/modules/list_of_network_modules.html#network-modules "(in Ansible v2.9)") documentation for details.
Become and Windows
------------------
Since Ansible 2.3, `become` can be used on Windows hosts through the `runas` method. Become on Windows uses the same inventory setup and invocation arguments as `become` on a non-Windows host, so the setup and variable names are the same as what is defined in this document.
While `become` can be used to assume the identity of another user, there are other uses for it with Windows hosts. One important use is to bypass some of the limitations that are imposed when running on WinRM, such as constrained network delegation or accessing forbidden system calls like the WUA API. You can use `become` with the same user as `ansible_user` to bypass these limitations and run commands that are not normally accessible in a WinRM session.
### Administrative rights
Many tasks in Windows require administrative privileges to complete. When using the `runas` become method, Ansible will attempt to run the module with the full privileges that are available to the remote user. If it fails to elevate the user token, it will continue to use the limited token during execution.
A user must have the `SeDebugPrivilege` to run a become process with elevated privileges. This privilege is assigned to Administrators by default. If the debug privilege is not available, the become process will run with a limited set of privileges and groups.
To determine the type of token that Ansible was able to get, run the following task:
```
- name: Check my user name
ansible.windows.win_whoami:
become: yes
```
The output will look similar to this:
```
ok: [windows] => {
"account": {
"account_name": "vagrant-domain",
"domain_name": "DOMAIN",
"sid": "S-1-5-21-3088887838-4058132883-1884671576-1105",
"type": "User"
},
"authentication_package": "Kerberos",
"changed": false,
"dns_domain_name": "DOMAIN.LOCAL",
"groups": [
{
"account_name": "Administrators",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled",
"Owner"
],
"domain_name": "BUILTIN",
"sid": "S-1-5-32-544",
"type": "Alias"
},
{
"account_name": "INTERACTIVE",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled"
],
"domain_name": "NT AUTHORITY",
"sid": "S-1-5-4",
"type": "WellKnownGroup"
        }
],
"impersonation_level": "SecurityAnonymous",
"label": {
"account_name": "High Mandatory Level",
"domain_name": "Mandatory Label",
"sid": "S-1-16-12288",
"type": "Label"
},
"login_domain": "DOMAIN",
"login_time": "2018-11-18T20:35:01.9696884+00:00",
"logon_id": 114196830,
"logon_server": "DC01",
"logon_type": "Interactive",
"privileges": {
"SeBackupPrivilege": "disabled",
"SeChangeNotifyPrivilege": "enabled-by-default",
"SeCreateGlobalPrivilege": "enabled-by-default",
"SeCreatePagefilePrivilege": "disabled",
"SeCreateSymbolicLinkPrivilege": "disabled",
"SeDebugPrivilege": "enabled",
"SeDelegateSessionUserImpersonatePrivilege": "disabled",
"SeImpersonatePrivilege": "enabled-by-default",
"SeIncreaseBasePriorityPrivilege": "disabled",
"SeIncreaseQuotaPrivilege": "disabled",
"SeIncreaseWorkingSetPrivilege": "disabled",
"SeLoadDriverPrivilege": "disabled",
"SeManageVolumePrivilege": "disabled",
"SeProfileSingleProcessPrivilege": "disabled",
"SeRemoteShutdownPrivilege": "disabled",
"SeRestorePrivilege": "disabled",
"SeSecurityPrivilege": "disabled",
"SeShutdownPrivilege": "disabled",
"SeSystemEnvironmentPrivilege": "disabled",
"SeSystemProfilePrivilege": "disabled",
"SeSystemtimePrivilege": "disabled",
"SeTakeOwnershipPrivilege": "disabled",
"SeTimeZonePrivilege": "disabled",
"SeUndockPrivilege": "disabled"
},
"rights": [
"SeNetworkLogonRight",
"SeBatchLogonRight",
"SeInteractiveLogonRight",
"SeRemoteInteractiveLogonRight"
],
"token_type": "TokenPrimary",
"upn": "[email protected]",
"user_flags": []
}
```
Under the `label` key, the `account_name` entry determines whether the user has Administrative rights. Here are the labels that can be returned and what they represent:
* `Medium`: Ansible failed to get an elevated token and ran under a limited token. Only a subset of the privileges assigned to the user are available during the module execution and the user does not have administrative rights.
* `High`: An elevated token was used and all the privileges assigned to the user are available during the module execution.
* `System`: The `NT AUTHORITY\System` account is used and has the highest level of privileges available.
The output will also show the list of privileges that have been granted to the user. When the privilege value is `disabled`, the privilege is assigned to the logon token but has not been enabled. In most scenarios these privileges are automatically enabled when required.
If you are running a version of Ansible older than 2.5, or the normal `runas` escalation process fails, an elevated token can be retrieved by one of the following methods:
* Set the `become_user` to `System`, which has full control over the operating system.
* Grant `SeTcbPrivilege` to the user Ansible connects with on WinRM. `SeTcbPrivilege` is a high-level privilege that grants full control over the operating system. No user is given this privilege by default, and care should be taken if you grant this privilege to a user or group. For more information on this privilege, please see [Act as part of the operating system](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)). You can use the below task to set this privilege on a Windows host:
```
- name: grant the ansible user the SeTcbPrivilege right
ansible.windows.win_user_right:
name: SeTcbPrivilege
users: '{{ansible_user}}'
action: add
```
* Turn UAC off on the host and reboot before trying to become the user. UAC is a security feature designed to run accounts under the principle of least privilege. You can turn UAC off by running the following tasks:
```
- name: turn UAC off
win_regedit:
path: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system
name: EnableLUA
data: 0
type: dword
state: present
register: uac_result
- name: reboot after disabling UAC
win_reboot:
when: uac_result is changed
```
Note
Granting the `SeTcbPrivilege` or turning UAC off can create Windows security vulnerabilities, so take care when following these steps.
### Local service accounts
Prior to Ansible version 2.5, `become` only worked on Windows with a local or domain user account. Local service accounts like `System` or `NetworkService` could not be used as `become_user` in these older versions. This restriction has been lifted since the 2.5 release of Ansible. The three service accounts that can be set under `become_user` are:
* System
* NetworkService
* LocalService
Because local service accounts do not have passwords, the `ansible_become_password` parameter is not required and is ignored if specified.
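For example, a sketch of a task running as the `System` account with no password:
```
- name: Ensure the spooler service is started as SYSTEM   # a sketch
  ansible.windows.win_service:
    name: spooler
    state: started
  become: yes
  become_method: runas
  become_user: System
```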
### Become without setting a password
As of Ansible 2.8, `become` can be used to become a Windows local or domain account without requiring a password for that account. For this method to work, the following requirements must be met:
* The connection user has the `SeDebugPrivilege` privilege assigned
* The connection user is part of the `BUILTIN\Administrators` group
* The `become_user` has either the `SeBatchLogonRight` or `SeNetworkLogonRight` user right
Become without a password is achieved in one of two ways:
* Duplicating an existing logon session’s token if the account is already logged on
* Using S4U to generate a logon token that is valid on the remote host only
In the first scenario, the become process is spawned from another logon of that user account. This could be an existing RDP or console logon, but it is not guaranteed to exist at all times. This is similar to the `Run only when user is logged on` option for a Scheduled Task.
In the case where another logon of the become account does not exist, S4U is used to create a new logon and run the module through that. This is similar to the `Run whether user is logged on or not` with the `Do not store password` option for a Scheduled Task. In this scenario, the become process will not be able to access any network resources like a normal WinRM process.
To make a distinction between using become with no password and becoming an account that has no password, make sure to keep `ansible_become_password` undefined or set `ansible_become_password:` with no value.
Note
Because there are no guarantees that an existing token will exist for a user when Ansible runs, there is a high chance the become process will only have access to local resources. Use become with a password if the task needs to access network resources.
### Accounts without a password
Warning
As a general security best practice, you should avoid allowing accounts without passwords.
Ansible can be used to become a Windows account that does not have a password (like the `Guest` account). To become an account without a password, set up the variables like normal but set `ansible_become_password: ''`.
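For example, a sketch of the variables for the built-in `Guest` account:
```
# host_vars/windowshost.yml -- a sketch for a passwordless account
ansible_become: yes
ansible_become_method: runas
ansible_become_user: Guest
ansible_become_password: ''
```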
Before become can work on an account like this, the local policy [Accounts: Limit local account use of blank passwords to console logon only](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852174(v=ws.11)) must be disabled. This can either be done through a Group Policy Object (GPO) or with this Ansible task:
```
- name: allow blank password on become
ansible.windows.win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\Lsa
name: LimitBlankPasswordUse
data: 0
type: dword
state: present
```
Note
This is only for accounts that do not have a password. You still need to set the account’s password under `ansible_become_password` if the become\_user has a password.
### Become flags for Windows
Ansible 2.5 added the `become_flags` parameter to the `runas` become method. This parameter can be set using the `become_flags` task directive or set in Ansible’s configuration using `ansible_become_flags`. The two valid values that are initially supported for this parameter are `logon_type` and `logon_flags`.
Note
These flags should only be set when becoming a normal user account, not a local service account like LocalSystem.
The key `logon_type` sets the type of logon operation to perform. The value can be set to one of the following:
* `interactive`: The default logon type. The process will be run under a context that is the same as when running a process locally. This bypasses all WinRM restrictions and is the recommended method to use.
* `batch`: Runs the process under a batch context that is similar to a scheduled task with a password set. This should bypass most WinRM restrictions and is useful if the `become_user` is not allowed to log on interactively.
* `new_credentials`: Runs under the same credentials as the calling user, but outbound connections are run under the context of the `become_user` and `become_password`, similar to `runas.exe /netonly`. The `logon_flags` flag should also be set to `netcredentials_only`. Use this flag if the process needs to access a network resource (like an SMB share) using a different set of credentials.
* `network`: Runs the process under a network context without any cached credentials. This results in the same type of logon session as running a normal WinRM process without credential delegation, and operates under the same restrictions.
* `network_cleartext`: Like the `network` logon type, but instead caches the credentials so it can access network resources. This is the same type of logon session as running a normal WinRM process with credential delegation.
For more information, see [dwLogonType](https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-logonusera).
The `logon_flags` key specifies how Windows will log the user on when creating the new process. The value can be set to none or multiple of the following:
* `with_profile`: The default logon flag set. The process will load the user’s profile in the `HKEY_USERS` registry key to `HKEY_CURRENT_USER`.
* `netcredentials_only`: The process will use the same token as the caller but will use the `become_user` and `become_password` when accessing a remote resource. This is useful in inter-domain scenarios where there is no trust relationship, and should be used with the `new_credentials` `logon_type`.
By default, `logon_flags=with_profile` is set. If the profile should not be loaded, set `logon_flags=`; if the profile should be loaded together with `netcredentials_only`, set `logon_flags=with_profile,netcredentials_only`.
For more information, see [dwLogonFlags](https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-createprocesswithtokenw).
Here are some examples of how to use `become_flags` with Windows tasks:
```
- name: copy a file from a fileshare with custom credentials
ansible.windows.win_copy:
src: \\server\share\data\file.txt
dest: C:\temp\file.txt
remote_src: yes
vars:
ansible_become: yes
ansible_become_method: runas
ansible_become_user: DOMAIN\user
ansible_become_password: Password01
ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only
- name: run a command under a batch logon
ansible.windows.win_whoami:
become: yes
become_flags: logon_type=batch
- name: run a command and not load the user profile
  ansible.windows.win_whoami:
become: yes
become_flags: logon_flags=
```
### Limitations of become on Windows
* Running a task with `async` and `become` on Windows Server 2008, 2008 R2 and Windows 7 only works when using Ansible 2.7 or newer.
* By default, the become user logs on with an interactive session, so it must have the right to do so on the Windows host. If it does not inherit the `SeAllowLogOnLocally` privilege or inherits the `SeDenyLogOnLocally` privilege, the become process will fail. Either add the privilege or set the `logon_type` flag to change the logon type used.
* Prior to Ansible version 2.3, become only worked when `ansible_winrm_transport` was either `basic` or `credssp`. This restriction has been lifted since the 2.4 release of Ansible for all hosts except Windows Server 2008 (non R2 version).
* The Secondary Logon service `seclogon` must be running to use `ansible_become_method: runas`.
See also
[Mailing List](https://groups.google.com/forum/#!forum/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Introduction to modules
=======================
Modules (also referred to as “task plugins” or “library plugins”) are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values. In Ansible 2.10 and later, most modules are hosted in collections.
You can execute modules from the command line:
```
ansible webservers -m service -a "name=httpd state=started"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"
```
Each module supports taking arguments. Nearly all modules take `key=value` arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
From playbooks, Ansible modules are executed in a very similar way:
```
- name: reboot the servers
command: /sbin/reboot -t now
```
Another way to pass arguments to a module is to use YAML syntax, also called ‘complex args’:
```
- name: restart webserver
service:
name: httpd
state: restarted
```
All modules return JSON format data. This means modules can be written in any programming language. Modules should be idempotent, and should avoid making any changes if they detect that the current state matches the desired final state. When used in an Ansible playbook, modules can trigger ‘change events’ in the form of notifying [handlers](playbooks_handlers#handlers) to run additional tasks.
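For example, a sketch of a task whose change event notifies a handler (the template source and handler name are hypothetical):
```
- name: Update the web server configuration
  ansible.builtin.template:
    src: httpd.conf.j2                 # hypothetical template
    dest: /etc/httpd/conf/httpd.conf
  notify: restart httpd                # hypothetical handler name
```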
You can access the documentation for each module from the command line with the ansible-doc tool:
```
ansible-doc yum
```
For a list of all available modules, see the [Collection docs](../collections/index#list-of-collections), or run the following at a command prompt:
```
ansible-doc -l
```
See also
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of using modules in /usr/bin/ansible
[Working with playbooks](playbooks#working-with-playbooks)
Examples of using modules with /usr/bin/ansible-playbook
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
How to write your own modules
[Python API](https://docs.ansible.com/ansible/latest/dev_guide/developing_api.html#developing-api)
Examples of using modules with the Python API
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Loops
=====
Ansible offers the `loop`, `with_<lookup>`, and `until` keywords to execute a task multiple times. Examples of commonly-used loops include changing ownership on several files and/or directories with the [file module](../collections/ansible/builtin/file_module#file-module), creating multiple users with the [user module](../collections/ansible/builtin/user_module#user-module), and repeating a polling step until a certain result is reached.
Note
* We added `loop` in Ansible 2.5. It is not yet a full replacement for `with_<lookup>`, but we recommend it for most use cases.
* We have not deprecated the use of `with_<lookup>` - that syntax will still be valid for the foreseeable future.
* We are looking to improve `loop` syntax - watch this page and the [changelog](https://github.com/ansible/ansible/tree/devel/changelogs) for updates.
* [Comparing `loop` and `with_*`](#comparing-loop-and-with)
* [Standard loops](#standard-loops)
+ [Iterating over a simple list](#iterating-over-a-simple-list)
+ [Iterating over a list of hashes](#iterating-over-a-list-of-hashes)
+ [Iterating over a dictionary](#iterating-over-a-dictionary)
* [Registering variables with a loop](#registering-variables-with-a-loop)
* [Complex loops](#complex-loops)
+ [Iterating over nested lists](#iterating-over-nested-lists)
+ [Retrying a task until a condition is met](#retrying-a-task-until-a-condition-is-met)
+ [Looping over inventory](#looping-over-inventory)
* [Ensuring list input for `loop`: using `query` rather than `lookup`](#ensuring-list-input-for-loop-using-query-rather-than-lookup)
* [Adding controls to loops](#adding-controls-to-loops)
+ [Limiting loop output with `label`](#limiting-loop-output-with-label)
+ [Pausing within a loop](#pausing-within-a-loop)
+ [Tracking progress through a loop with `index_var`](#tracking-progress-through-a-loop-with-index-var)
+ [Defining inner and outer variable names with `loop_var`](#defining-inner-and-outer-variable-names-with-loop-var)
+ [Extended loop variables](#extended-loop-variables)
+ [Accessing the name of your loop\_var](#accessing-the-name-of-your-loop-var)
* [Migrating from with\_X to loop](#migrating-from-with-x-to-loop)
+ [with\_list](#with-list)
+ [with\_items](#with-items)
+ [with\_indexed\_items](#with-indexed-items)
+ [with\_flattened](#with-flattened)
+ [with\_together](#with-together)
+ [with\_dict](#with-dict)
+ [with\_sequence](#with-sequence)
+ [with\_subelements](#with-subelements)
+ [with\_nested/with\_cartesian](#with-nested-with-cartesian)
+ [with\_random\_choice](#with-random-choice)
Comparing `loop` and `with_*`
-----------------------------
* The `with_<lookup>` keywords rely on [Lookup Plugins](../plugins/lookup#lookup-plugins) - even `items` is a lookup.
* The `loop` keyword is equivalent to `with_list`, and is the best choice for simple loops.
* The `loop` keyword will not accept a string as input, see [Ensuring list input for loop: using query rather than lookup](#query-vs-lookup).
* Generally speaking, any use of `with_*` covered in [Migrating from with\_X to loop](#migrating-to-loop) can be updated to use `loop`.
* Be careful when changing `with_items` to `loop`, as `with_items` performed implicit single-level flattening. You may need to use `flatten(1)` with `loop` to match the exact outcome. For example, to get the same output as:
```
with_items:
- 1
- [2,3]
- 4
```
you would need:
```
loop: "{{ [1, [2,3] ,4] | flatten(1) }}"
```
* Any `with_*` statement that requires using `lookup` within a loop should not be converted to use the `loop` keyword. For example, instead of doing:
```
loop: "{{ lookup('fileglob', '*.txt', wantlist=True) }}"
```
it’s cleaner to keep:
```
with_fileglob: '*.txt'
```
Standard loops
--------------
### Iterating over a simple list
Repeated tasks can be written as standard loops over a simple list of strings. You can define the list directly in the task:
```
- name: Add several users
ansible.builtin.user:
name: "{{ item }}"
state: present
groups: "wheel"
loop:
- testuser1
- testuser2
```
You can define the list in a variables file, or in the ‘vars’ section of your play, then refer to the name of the list in the task:
```
loop: "{{ somelist }}"
```
Either of these examples would be the equivalent of:
```
- name: Add user testuser1
ansible.builtin.user:
name: "testuser1"
state: present
groups: "wheel"
- name: Add user testuser2
ansible.builtin.user:
name: "testuser2"
state: present
groups: "wheel"
```
You can pass a list directly to a parameter for some plugins. Most of the packaging modules, like [yum](../collections/ansible/builtin/yum_module#yum-module) and [apt](../collections/ansible/builtin/apt_module#apt-module), have this capability. When available, passing the list to a parameter is better than looping over the task. For example:
```
- name: Optimal yum
ansible.builtin.yum:
name: "{{ list_of_packages }}"
state: present
- name: Non-optimal yum, slower and may cause issues with interdependencies
ansible.builtin.yum:
name: "{{ item }}"
state: present
loop: "{{ list_of_packages }}"
```
Check the [module documentation](https://docs.ansible.com/ansible/2.9/modules/modules_by_category.html#modules-by-category "(in Ansible v2.9)") to see if you can pass a list to any particular module’s parameter(s).
### Iterating over a list of hashes
If you have a list of hashes, you can reference subkeys in a loop. For example:
```
- name: Add several users
ansible.builtin.user:
name: "{{ item.name }}"
state: present
groups: "{{ item.groups }}"
loop:
- { name: 'testuser1', groups: 'wheel' }
- { name: 'testuser2', groups: 'root' }
```
When combining [conditionals](playbooks_conditionals#playbooks-conditionals) with a loop, the `when:` statement is processed separately for each item. See [Basic conditionals with when](playbooks_conditionals#the-when-statement) for examples.
### Iterating over a dictionary
To loop over a dict, use the [dict2items](playbooks_filters#dict-filter):
```
- name: Using dict2items
ansible.builtin.debug:
msg: "{{ item.key }} - {{ item.value }}"
loop: "{{ tag_data | dict2items }}"
vars:
tag_data:
Environment: dev
Application: payment
```
Here, we are iterating over `tag_data` and printing the key and the value from it.
Registering variables with a loop
---------------------------------
You can register the output of a loop as a variable. For example:
```
- name: Register loop output as a variable
ansible.builtin.shell: "echo {{ item }}"
loop:
- "one"
- "two"
register: echo
```
When you use `register` with a loop, the data structure placed in the variable will contain a `results` attribute that is a list of all responses from the module. This differs from the data structure returned when using `register` without a loop:
```
{
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": "echo \"one\" ",
"delta": "0:00:00.003110",
"end": "2013-12-19 12:00:05.187153",
"invocation": {
"module_args": "echo \"one\"",
"module_name": "shell"
},
"item": "one",
"rc": 0,
"start": "2013-12-19 12:00:05.184043",
"stderr": "",
"stdout": "one"
},
{
"changed": true,
"cmd": "echo \"two\" ",
"delta": "0:00:00.002920",
"end": "2013-12-19 12:00:05.245502",
"invocation": {
"module_args": "echo \"two\"",
"module_name": "shell"
},
"item": "two",
"rc": 0,
"start": "2013-12-19 12:00:05.242582",
"stderr": "",
"stdout": "two"
}
]
}
```
Subsequent loops over the registered variable to inspect the results may look like:
```
- name: Fail if return code is not 0
ansible.builtin.fail:
msg: "The command ({{ item.cmd }}) did not have a 0 return code"
when: item.rc != 0
loop: "{{ echo.results }}"
```
During iteration, the result of the current item will be placed in the variable:
```
- name: Place the result of the current item in the variable
ansible.builtin.shell: echo "{{ item }}"
loop:
- one
- two
register: echo
changed_when: echo.stdout != "one"
```
Complex loops
-------------
### Iterating over nested lists
You can use Jinja2 expressions to iterate over complex lists. For example, a loop can combine nested lists:
```
- name: Give users access to multiple databases
community.mysql.mysql_user:
name: "{{ item[0] }}"
priv: "{{ item[1] }}.*:ALL"
append_privs: yes
password: "foo"
loop: "{{ ['alice', 'bob'] |product(['clientdb', 'employeedb', 'providerdb'])|list }}"
```
### Retrying a task until a condition is met
New in version 1.4.
You can use the `until` keyword to retry a task until a certain condition is met. Here’s an example:
```
- name: Retry a task until a certain condition is met
ansible.builtin.shell: /usr/bin/foo
register: result
until: result.stdout.find("all systems go") != -1
retries: 5
delay: 10
```
This task runs up to 5 times with a delay of 10 seconds between each attempt. If the result of any attempt has “all systems go” in its stdout, the task succeeds. The default value for “retries” is 3 and “delay” is 5.
To see the results of individual retries, run the play with `-vv`.
When you run a task with `until` and register the result as a variable, the registered variable will include a key called “attempts”, which records the number of retries for the task.
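For example, a sketch that reports the recorded attempt count for the task registered above:
```
- name: Show how many attempts the retried task took
  ansible.builtin.debug:
    msg: "Succeeded after {{ result.attempts }} attempt(s)"
```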
Note
You must set the `until` parameter if you want a task to retry. If `until` is not defined, the value for the `retries` parameter is forced to 1.
### Looping over inventory
To loop over your inventory, or just a subset of it, you can use a regular `loop` with the `ansible_play_batch` or `groups` variables:
```
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ groups['all'] }}"
- name: Show all the hosts in the current play
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ ansible_play_batch }}"
```
There is also a specific lookup plugin `inventory_hostnames` that can be used like this:
```
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all') }}"
- name: Show all the hosts matching the pattern, that is, all except the group www
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all:!www') }}"
```
More information on the patterns can be found in [Patterns: targeting hosts and groups](intro_patterns#intro-patterns).
Ensuring list input for `loop`: using `query` rather than `lookup`
------------------------------------------------------------------
The `loop` keyword requires a list as input, but the `lookup` keyword returns a string of comma-separated values by default. Ansible 2.5 introduced a new Jinja2 function named [query](../plugins/lookup#query) that always returns a list, offering a simpler interface and more predictable output from lookup plugins when using the `loop` keyword.
You can force `lookup` to return a list to `loop` by using `wantlist=True`, or you can use `query` instead.
These examples do the same thing:
```
loop: "{{ query('inventory_hostnames', 'all') }}"
loop: "{{ lookup('inventory_hostnames', 'all', wantlist=True) }}"
```
Adding controls to loops
------------------------
New in version 2.1.
The `loop_control` keyword lets you manage your loops in useful ways.
### Limiting loop output with `label`
New in version 2.2.
When looping over complex data structures, the console output of your task can be enormous. To limit the displayed output, use the `label` directive with `loop_control`:
```
- name: Create servers
digital_ocean:
name: "{{ item.name }}"
state: present
loop:
- name: server1
disks: 3gb
ram: 15Gb
network:
nic01: 100Gb
nic02: 10Gb
...
loop_control:
label: "{{ item.name }}"
```
The output of this task will display just the `name` field for each `item` instead of the entire contents of the multi-line `{{ item }}` variable.
Note
This is for making console output more readable, not protecting sensitive data. If there is sensitive data in `loop`, set `no_log: yes` on the task to prevent disclosure.
### Pausing within a loop
New in version 2.2.
To control the time (in seconds) between the execution of each item in a task loop, use the `pause` directive with `loop_control`:
```
# main.yml
- name: Create servers, pause 3s before creating next
community.digitalocean.digital_ocean:
name: "{{ item }}"
state: present
loop:
- server1
- server2
loop_control:
pause: 3
```
### Tracking progress through a loop with `index_var`
New in version 2.5.
To keep track of where you are in a loop, use the `index_var` directive with `loop_control`. This directive specifies a variable name to contain the current loop index:
```
- name: Count our fruit
ansible.builtin.debug:
msg: "{{ item }} with index {{ my_idx }}"
loop:
- apple
- banana
- pear
loop_control:
index_var: my_idx
```
Note
`index_var` is 0 indexed.
### Defining inner and outer variable names with `loop_var`
New in version 2.1.
You can nest two looping tasks using `include_tasks`. However, by default Ansible sets the loop variable `item` for each loop. This means the inner, nested loop will overwrite the value of `item` from the outer loop. You can specify the name of the variable for each loop using `loop_var` with `loop_control`:
```
# main.yml
- include_tasks: inner.yml
loop:
- 1
- 2
- 3
loop_control:
loop_var: outer_item
# inner.yml
- name: Print outer and inner items
ansible.builtin.debug:
msg: "outer item={{ outer_item }} inner item={{ item }}"
loop:
- a
- b
- c
```
Note
If Ansible detects that the current loop is using a variable which has already been defined, it will raise an error to fail the task.
### Extended loop variables
New in version 2.8.
As of Ansible 2.8 you can get extended loop information using the `extended` option to loop control. This option will expose the following information.
| Variable | Description |
| --- | --- |
| `ansible_loop.allitems` | The list of all items in the loop |
| `ansible_loop.index` | The current iteration of the loop. (1 indexed) |
| `ansible_loop.index0` | The current iteration of the loop. (0 indexed) |
| `ansible_loop.revindex` | The number of iterations from the end of the loop (1 indexed) |
| `ansible_loop.revindex0` | The number of iterations from the end of the loop (0 indexed) |
| `ansible_loop.first` | `True` if first iteration |
| `ansible_loop.last` | `True` if last iteration |
| `ansible_loop.length` | The number of items in the loop |
| `ansible_loop.previtem` | The item from the previous iteration of the loop. Undefined during the first iteration. |
| `ansible_loop.nextitem` | The item from the following iteration of the loop. Undefined during the last iteration. |
```
loop_control:
extended: yes
```
### Accessing the name of your loop\_var
New in version 2.8.
As of Ansible 2.8 you can get the name of the value provided to `loop_control.loop_var` using the `ansible_loop_var` variable.
For role authors writing roles that allow loops, instead of dictating the required `loop_var` value, you can gather the value with:
```
"{{ lookup('vars', ansible_loop_var) }}"
```
Migrating from with\_X to loop
------------------------------
In most cases, loops work best with the `loop` keyword instead of `with_X` style loops. The `loop` syntax is usually best expressed using filters instead of more complex use of `query` or `lookup`.
These examples show how to convert many common `with_` style loops to `loop` and filters.
### with\_list
`with_list` is directly replaced by `loop`.
```
- name: with_list
ansible.builtin.debug:
msg: "{{ item }}"
with_list:
- one
- two
- name: with_list -> loop
ansible.builtin.debug:
msg: "{{ item }}"
loop:
- one
- two
```
### with\_items
`with_items` is replaced by `loop` and the `flatten` filter.
```
- name: with_items
ansible.builtin.debug:
msg: "{{ item }}"
with_items: "{{ items }}"
- name: with_items -> loop
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ items|flatten(levels=1) }}"
```
### with\_indexed\_items
`with_indexed_items` is replaced by `loop`, the `flatten` filter and `loop_control.index_var`.
```
- name: with_indexed_items
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
with_indexed_items: "{{ items }}"
- name: with_indexed_items -> loop
ansible.builtin.debug:
msg: "{{ index }} - {{ item }}"
loop: "{{ items|flatten(levels=1) }}"
loop_control:
index_var: index
```
### with\_flattened
`with_flattened` is replaced by `loop` and the `flatten` filter.
```
- name: with_flattened
ansible.builtin.debug:
msg: "{{ item }}"
with_flattened: "{{ items }}"
- name: with_flattened -> loop
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ items|flatten }}"
```
### with\_together
`with_together` is replaced by `loop` and the `zip` filter.
```
- name: with_together
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
with_together:
- "{{ list_one }}"
- "{{ list_two }}"
- name: with_together -> loop
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
loop: "{{ list_one|zip(list_two)|list }}"
```
Another example with complex data
```
- name: with_together -> loop
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }} - {{ item.2 }}"
loop: "{{ data[0]|zip(*data[1:])|list }}"
vars:
data:
- ['a', 'b', 'c']
- ['d', 'e', 'f']
- ['g', 'h', 'i']
```
### with\_dict
`with_dict` can be substituted by `loop` and either the `dictsort` or `dict2items` filters.
```
- name: with_dict
ansible.builtin.debug:
msg: "{{ item.key }} - {{ item.value }}"
with_dict: "{{ dictionary }}"
- name: with_dict -> loop (option 1)
ansible.builtin.debug:
msg: "{{ item.key }} - {{ item.value }}"
loop: "{{ dictionary|dict2items }}"
- name: with_dict -> loop (option 2)
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
loop: "{{ dictionary|dictsort }}"
```
### with\_sequence
`with_sequence` is replaced by `loop` and the `range` function, and potentially the `format` filter.
```
- name: with_sequence
ansible.builtin.debug:
msg: "{{ item }}"
with_sequence: start=0 end=4 stride=2 format=testuser%02x
- name: with_sequence -> loop
ansible.builtin.debug:
msg: "{{ 'testuser%02x' | format(item) }}"
# range is exclusive of the end point
loop: "{{ range(0, 4 + 1, 2)|list }}"
```
### with\_subelements
`with_subelements` is replaced by `loop` and the `subelements` filter.
```
- name: with_subelements
ansible.builtin.debug:
msg: "{{ item.0.name }} - {{ item.1 }}"
with_subelements:
- "{{ users }}"
- mysql.hosts
- name: with_subelements -> loop
ansible.builtin.debug:
msg: "{{ item.0.name }} - {{ item.1 }}"
loop: "{{ users|subelements('mysql.hosts') }}"
```
### with\_nested/with\_cartesian
`with_nested` and `with_cartesian` are replaced by loop and the `product` filter.
```
- name: with_nested
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
with_nested:
- "{{ list_one }}"
- "{{ list_two }}"
- name: with_nested -> loop
ansible.builtin.debug:
msg: "{{ item.0 }} - {{ item.1 }}"
loop: "{{ list_one|product(list_two)|list }}"
```
### with\_random\_choice
`with_random_choice` is replaced by using the `random` filter, without needing `loop`.
```
- name: with_random_choice
ansible.builtin.debug:
msg: "{{ item }}"
with_random_choice: "{{ my_list }}"
- name: with_random_choice -> loop (No loop is needed here)
ansible.builtin.debug:
msg: "{{ my_list|random }}"
tags: random
```
See also
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Tags
====
If you have a large playbook, it may be useful to run only specific parts of it instead of running the entire playbook. You can do this with Ansible tags. Using tags to execute or skip selected tasks is a two-step process:
1. Add tags to your tasks, either individually or with tag inheritance from a block, play, role, or import.
2. Select or skip tags when you run your playbook.
* [Adding tags with the tags keyword](#adding-tags-with-the-tags-keyword)
+ [Adding tags to individual tasks](#adding-tags-to-individual-tasks)
+ [Adding tags to includes](#adding-tags-to-includes)
+ [Tag inheritance: adding tags to multiple tasks](#tag-inheritance-adding-tags-to-multiple-tasks)
- [Adding tags to blocks](#adding-tags-to-blocks)
- [Adding tags to plays](#adding-tags-to-plays)
- [Adding tags to roles](#adding-tags-to-roles)
- [Adding tags to imports](#adding-tags-to-imports)
- [Tag inheritance for includes: blocks and the `apply` keyword](#tag-inheritance-for-includes-blocks-and-the-apply-keyword)
* [Special tags: always and never](#special-tags-always-and-never)
* [Selecting or skipping tags when you run a playbook](#selecting-or-skipping-tags-when-you-run-a-playbook)
+ [Previewing the results of using tags](#previewing-the-results-of-using-tags)
+ [Selectively running tagged tasks in re-usable files](#selectively-running-tagged-tasks-in-re-usable-files)
+ [Configuring tags globally](#configuring-tags-globally)
Adding tags with the tags keyword
---------------------------------
You can add tags to a single task or include. You can also add tags to multiple tasks by defining them at the level of a block, play, role, or import. The keyword `tags` addresses all these use cases. The `tags` keyword always defines tags and adds them to tasks; it does not select or skip tasks for execution. You can only select or skip tasks based on tags at the command line when you run a playbook. See [Selecting or skipping tags when you run a playbook](#using-tags) for more details.
### Adding tags to individual tasks
At the simplest level, you can apply one or more tags to an individual task. You can add tags to tasks in playbooks, in task files, or within a role. Here is an example that tags two tasks with different tags:
```
tasks:
- name: Install the servers
ansible.builtin.yum:
name:
- httpd
- memcached
state: present
tags:
- packages
- webservers
- name: Configure the service
ansible.builtin.template:
src: templates/src.j2
dest: /etc/foo.conf
tags:
- configuration
```
You can apply the same tag to more than one individual task. This example tags several tasks with the same tag, “ntp”:
```
---
# file: roles/common/tasks/main.yml
- name: Install ntp
ansible.builtin.yum:
name: ntp
state: present
tags: ntp
- name: Configure ntp
ansible.builtin.template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
tags: ntp
- name: Enable and run ntpd
ansible.builtin.service:
name: ntpd
state: started
enabled: yes
tags: ntp
- name: Install NFS utils
ansible.builtin.yum:
name:
- nfs-utils
- nfs-util-lib
state: present
tags: filesharing
```
If you ran these four tasks in a playbook with `--tags ntp`, Ansible would run the three tasks tagged `ntp` and skip the one task that does not have that tag.
### Adding tags to includes
You can apply tags to dynamic includes in a playbook. As with tags on an individual task, tags on an `include_*` task apply only to the include itself, not to any tasks within the included file or role. If you add `mytag` to a dynamic include, then run that playbook with `--tags mytag`, Ansible runs the include itself, runs any tasks within the included file or role tagged with `mytag`, and skips any tasks within the included file or role without that tag. See [Selectively running tagged tasks in re-usable files](#selective-reuse) for more details.
You add tags to includes the same way you add tags to any other task:
```
---
# file: roles/common/tasks/main.yml
- name: Dynamic re-use of database tasks
include_tasks: db.yml
tags: db
```
You can add a tag only to the dynamic include of a role. In this example, the `foo` tag will *not* apply to tasks inside the `bar` role:
```
---
- hosts: webservers
tasks:
- name: Include the bar role
include_role:
name: bar
tags:
- foo
```
With plays, blocks, the `role` keyword, and static imports, Ansible applies tag inheritance, adding the tags you define to every task inside the play, block, role, or imported file. However, tag inheritance does *not* apply to dynamic re-use with `include_role` and `include_tasks`. With dynamic re-use (includes), the tags you define apply only to the include itself. If you need tag inheritance, use a static import. If you cannot use an import because the rest of your playbook uses includes, see [Tag inheritance for includes: blocks and the apply keyword](#apply-keyword) for ways to work around this behavior.
### Tag inheritance: adding tags to multiple tasks
If you want to apply the same tag or tags to multiple tasks without adding a `tags` line to every task, you can define the tags at the level of your play or block, or when you add a role or import a file. Ansible applies the tags down the dependency chain to all child tasks. With roles and imports, Ansible appends the tags set by the `roles` section or import to any tags set on individual tasks or blocks within the role or imported file. This is called tag inheritance. Tag inheritance is convenient, because you do not have to tag every task. However, the tags still apply to the tasks individually.
#### Adding tags to blocks
If you want to apply a tag to many, but not all, of the tasks in your play, use a [block](playbooks_blocks#playbooks-blocks) and define the tags at that level. For example, we could edit the NTP example shown above to use a block:
```
# myrole/tasks/main.yml
tasks:
- name: ntp tasks
tags: ntp
block:
- name: Install ntp
ansible.builtin.yum:
name: ntp
state: present
- name: Configure ntp
ansible.builtin.template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
- name: Enable and run ntpd
ansible.builtin.service:
name: ntpd
state: started
enabled: yes
- name: Install NFS utils
ansible.builtin.yum:
name:
- nfs-utils
- nfs-util-lib
state: present
tags: filesharing
```
#### Adding tags to plays
If all the tasks in a play should get the same tag, you can add the tag at the level of the play. For example, if you had a play with only the NTP tasks, you could tag the entire play:
```
- hosts: all
tags: ntp
tasks:
- name: Install ntp
ansible.builtin.yum:
name: ntp
state: present
- name: Configure ntp
ansible.builtin.template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
- name: Enable and run ntpd
ansible.builtin.service:
name: ntpd
state: started
enabled: yes
- hosts: fileservers
tags: filesharing
tasks:
...
```
#### Adding tags to roles
There are three ways to add tags to roles:
1. Add the same tag or tags to all tasks in the role by setting tags under `roles`. See examples in this section.
2. Add the same tag or tags to all tasks in the role by setting tags on a static `import_role` in your playbook. See examples in [Adding tags to imports](#tags-on-imports).
3. Add a tag or tags to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role. To select or skip tasks within the role, you must have tags set on individual tasks or blocks, use the dynamic `include_role` in your playbook, and add the same tag or tags to the include. When you use this approach, and then run your playbook with `--tags foo`, Ansible runs the include itself plus any tasks in the role that also have the tag `foo`. See [Adding tags to includes](#tags-on-includes) for details.
When you incorporate a role in your playbook statically with the `roles` keyword, Ansible adds any tags you define to all the tasks in the role. For example:
```
roles:
- role: webserver
vars:
port: 5000
tags: [ web, foo ]
```
or:
```
---
- hosts: webservers
roles:
- role: foo
tags:
- bar
- baz
# using YAML shorthand, this is equivalent to:
# - { role: foo, tags: ["bar", "baz"] }
```
#### Adding tags to imports
You can also apply a tag or tags to all the tasks imported by the static `import_role` and `import_tasks` statements:
```
---
- hosts: webservers
tasks:
- name: Import the foo role
import_role:
name: foo
tags:
- bar
- baz
- name: Import tasks from foo.yml
import_tasks: foo.yml
tags: [ web, foo ]
```
#### Tag inheritance for includes: blocks and the `apply` keyword
By default, Ansible does not apply [tag inheritance](#tag-inheritance) to dynamic re-use with `include_role` and `include_tasks`. If you add tags to an include, they apply only to the include itself, not to any tasks in the included file or role. This allows you to execute selected tasks within a role or task file - see [Selectively running tagged tasks in re-usable files](#selective-reuse) when you run your playbook.
If you want tag inheritance, you probably want to use imports. However, using both includes and imports in a single playbook can lead to difficult-to-diagnose bugs. For this reason, if your playbook uses `include_*` to re-use roles or tasks, and you need tag inheritance on one include, Ansible offers two workarounds. You can use the `apply` keyword:
```
- name: Apply the db tag to the include and to all tasks in db.yaml
include_tasks:
file: db.yml
# adds 'db' tag to tasks within db.yml
apply:
tags: db
# adds 'db' tag to this 'include_tasks' itself
tags: db
```
Or you can use a block:
```
- block:
- name: Include tasks from db.yml
include_tasks: db.yml
tags: db
```
Special tags: always and never
------------------------------
Ansible reserves two tag names for special behavior: always and never. If you assign the `always` tag to a task or play, Ansible will always run that task or play, unless you specifically skip it (`--skip-tags always`).
For example:
```
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "Always runs"
tags:
- always
- name: Print a message
ansible.builtin.debug:
msg: "runs when you use tag1"
tags:
- tag1
```
Warning
* Fact gathering is tagged with ‘always’ by default. It is only skipped if you apply a tag and then use a different tag in `--tags` or the same tag in `--skip-tags`.
Warning
* The role argument specification validation task is tagged with ‘always’ by default. This validation will be skipped if you use `--skip-tags always`.
New in version 2.5.
If you assign the `never` tag to a task or play, Ansible will skip that task or play unless you specifically request it (`--tags never`).
For example:
```
tasks:
- name: Run the rarely-used debug task
ansible.builtin.debug:
msg: '{{ showmevar }}'
tags: [ never, debug ]
```
The rarely-used debug task in the example above only runs when you specifically request the `debug` or `never` tags.
Selecting or skipping tags when you run a playbook
--------------------------------------------------
Once you have added tags to your tasks, includes, blocks, plays, roles, and imports, you can selectively execute or skip tasks based on their tags when you run [ansible-playbook](../cli/ansible-playbook#ansible-playbook). Ansible runs or skips all tasks with tags that match the tags you pass at the command line. If you have added a tag at the block or play level, with `roles`, or with an import, that tag applies to every task within the block, play, role, or imported role or file. If you have a role with lots of tags and you want to call subsets of the role at different times, either [use it with dynamic includes](#selective-reuse), or split the role into multiple roles.
[ansible-playbook](../cli/ansible-playbook#ansible-playbook) offers five tag-related command-line options:
* `--tags all` - run all tasks, ignore tags (default behavior)
* `--tags [tag1, tag2]` - run only tasks with either the tag `tag1` or the tag `tag2`
* `--skip-tags [tag3, tag4]` - run all tasks except those with either the tag `tag3` or the tag `tag4`
* `--tags tagged` - run only tasks with at least one tag
* `--tags untagged` - run only tasks with no tags
For example, to run only tasks and blocks tagged `configuration` and `packages` in a very long playbook:
```
ansible-playbook example.yml --tags "configuration,packages"
```
To run all tasks except those tagged `packages`:
```
ansible-playbook example.yml --skip-tags "packages"
```
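The remaining two options work the same way; for example, to preview them:
```
# Run only tasks that have at least one tag
ansible-playbook example.yml --tags tagged

# Run only tasks that have no tags at all
ansible-playbook example.yml --tags untagged
```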
### Previewing the results of using tags
When you run a role or playbook, you might not know or remember which tasks have which tags, or which tags exist at all. Ansible offers two command-line flags for [ansible-playbook](../cli/ansible-playbook#ansible-playbook) that help you manage tagged playbooks:
* `--list-tags` - generate a list of available tags
* `--list-tasks` - when used with `--tags tagname` or `--skip-tags tagname`, generate a preview of tagged tasks
For example, if you do not know whether the tag for configuration tasks is `config` or `conf` in a playbook, role, or tasks file, you can display all available tags without running any tasks:
```
ansible-playbook example.yml --list-tags
```
If you do not know which tasks have the tags `configuration` and `packages`, you can pass those tags and add `--list-tasks`. Ansible lists the tasks but does not execute any of them.
```
ansible-playbook example.yml --tags "configuration,packages" --list-tasks
```
These command-line flags have one limitation: they cannot show tags or tasks within dynamically included files or roles. See [Comparing includes and imports: dynamic and static re-use](playbooks_reuse#dynamic-vs-static) for more information on differences between static imports and dynamic includes.
### Selectively running tagged tasks in re-usable files
If you have a role or a tasks file with tags defined at the task or block level, you can selectively run or skip those tagged tasks in a playbook if you use a dynamic include instead of a static import. You must use the same tag on the included tasks and on the include statement itself. For example, you might create a file with some tagged and some untagged tasks:
```
# mixed.yml
- name: Run the task with no tags
  ansible.builtin.debug:
    msg: this task has no tags

- name: Run the tagged task
  ansible.builtin.debug:
    msg: this task is tagged with mytag
  tags: mytag

- block:
    - name: Run the first block task with mytag
      ...
    - name: Run the second block task with mytag
      ...
  tags:
    - mytag
```
And you might include the tasks file above in a playbook:
```
# myplaybook.yml
- hosts: all
  tasks:
    - name: Run tasks from mixed.yml
      include_tasks:
        file: mixed.yml
      tags: mytag
```
When you run the playbook with `ansible-playbook -i hosts myplaybook.yml --tags "mytag"`, Ansible skips the task with no tags, runs the tagged individual task, and runs the two tasks in the block.
### Configuring tags globally
If you run or skip certain tags by default, you can use the [TAGS\_RUN](../reference_appendices/config#tags-run) and [TAGS\_SKIP](../reference_appendices/config#tags-skip) options in Ansible configuration to set those defaults.
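For example, a minimal `ansible.cfg` sketch (assuming the `[tags]` section keys that back these two settings):
```
[tags]
# equivalent to passing --tags configuration,packages on every run
run = configuration,packages
# equivalent to passing --skip-tags debug on every run
skip = debug
```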
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Data manipulation
=================
In many cases, you need to perform complex operations on your variables. While Ansible is not recommended as a data processing/manipulation tool, you can use the existing Jinja2 templating in conjunction with the many added Ansible filters, lookups, and tests to do some very complex transformations.
Let’s start with a quick definition of each type of plugin:
* lookups: mainly used to query ‘external data’. In Ansible these were the primary part of loops using the `with_<lookup>` construct, but they can be used independently to return data for processing. They normally return a list due to their primary function in loops, as mentioned previously. Used with the `lookup` or `query` Jinja2 operators.
* filters: used to change/transform data, used with the `|` Jinja2 operator.
* tests: used to validate data, used with the `is` Jinja2 operator.
Loops and list comprehensions
-----------------------------
Most programming languages have loops (`for`, `while`, and so on) and list comprehensions to do transformations on lists including lists of objects. Jinja2 has a few filters that provide this functionality: `map`, `select`, `reject`, `selectattr`, `rejectattr`.
* map: a basic for loop that lets you change every item in a list; using the ‘attribute’ keyword, you can do the transformation based on attributes of the list elements.
* select/reject: a for loop with a condition that lets you create a subset of a list that matches (or does not match) the condition.
* selectattr/rejectattr: very similar to the above, but they use a specific attribute of the list elements for the conditional statement; see the sketch after this list.
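A minimal sketch of all three filter families (the `numbers` and `users` variables below are invented for the example):
```
- name: Show map, select, and selectattr side by side
  ansible.builtin.debug:
    msg:
      squared: "{{ numbers | map('pow', 2) | list }}"  # [1.0, 4.0, 9.0, 16.0]
      odds: "{{ numbers | select('odd') | list }}"     # [1, 3]
      admins: "{{ users | selectattr('admin') | map(attribute='name') | list }}"  # ['alice']
  vars:
    numbers: [1, 2, 3, 4]
    users:
      - { name: alice, admin: true }
      - { name: bob, admin: false }
```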
Use a loop to create exponential backoff for retries/until.
```
- name: Retry ping 10 times with exponential backoff delay
  ping:
  retries: 10
  delay: "{{ item | int }}"
  loop: "{{ range(1, 10) | map('pow', 2) }}"
```
### Extract keys from a dictionary matching elements from a list
The Python equivalent code would be:
```
chains = [1, 2]
for chain in chains:
    for config in chains_config[chain]['configs']:
        print(config['type'])
```
There are several ways to do this in Ansible; here is just one example:
Way to extract matching keys from a list of dictionaries
```
tasks:
  - name: Show extracted list of keys from a list of dictionaries
    ansible.builtin.debug:
      msg: "{{ chains | map('extract', chains_config) | map(attribute='configs') | flatten | map(attribute='type') | flatten }}"
    vars:
      chains: [1, 2]
      chains_config:
        1:
          foo: bar
          configs:
            - type: routed
              version: 0.1
            - type: bridged
              version: 0.2
        2:
          foo: baz
          configs:
            - type: routed
              version: 1.0
            - type: bridged
              version: 1.1
```
Results of debug task, a list with the extracted keys
```
ok: [localhost] => {
    "msg": [
        "routed",
        "bridged",
        "routed",
        "bridged"
    ]
}
```
Get the unique list of values of a variable that vary per host
```
vars:
  unique_value_list: "{{ groups['all'] | map('extract', hostvars, 'varname') | list | unique }}"
```
### Find mount point
In this case, we want to find the mount point for a given path across our machines. Since we already collect mount facts, we can use the following:
Use selectattr to filter mounts into a list that can then be sorted, selecting the last element
```
- hosts: all
  gather_facts: True
  vars:
    path: /var/lib/cache
  tasks:
    - name: The mount point for {{ path }}, found using the Ansible mount facts, [-1] is the same as the 'last' filter
      ansible.builtin.debug:
        msg: "{{ (ansible_facts.mounts | selectattr('mount', 'in', path) | list | sort(attribute='mount'))[-1]['mount'] }}"
```
### Omit elements from a list
The special `omit` variable ONLY works with module options, but we can still use it in other ways as an identifier to tailor a list of elements:
Inline list filtering when feeding a module option
```
- name: Enable a list of Windows features, by name
  ansible.builtin.set_fact:
    win_feature_list: "{{ namestuff | reject('equalto', omit) | list }}"
  vars:
    namestuff:
      - "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
      - "foo"
      - "bar"
```
Another way is to avoid adding elements to the list in the first place, so you can just use it directly:
Using set_fact in a loop to increment a list conditionally
```
- name: Build unique list with some items conditionally omitted
  ansible.builtin.set_fact:
    namestuff: "{{ (namestuff | default([])) | union([item]) }}"
  when: item != omit
  loop:
    - "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
    - "foo"
    - "bar"
```
### Combine values from same list of dicts
Combining positive and negative filters from examples above, you can get a ‘value when it exists’ and a ‘fallback’ when it doesn’t.
Use selectattr and rejectattr to get the ansible_host or inventory_hostname as needed
```
- hosts: localhost
  tasks:
    - name: Check hosts in inventory that respond to ssh port
      wait_for:
        host: "{{ item }}"
        port: 22
      loop: '{{ has_ah + no_ah }}'
      vars:
        has_ah: '{{ hostvars | dictsort | selectattr("1.ansible_host", "defined") | map(attribute="1.ansible_host") | list }}'
        no_ah: '{{ hostvars | dictsort | rejectattr("1.ansible_host", "defined") | map(attribute="0") | list }}'
```
### Custom Fileglob Based on a Variable
This example uses [Python argument list unpacking](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists) to create a custom list of fileglobs based on a variable.
Using fileglob with a list based on a variable.
```
- hosts: all
  vars:
    mygroups:
      - prod
      - web
  tasks:
    - name: Copy a glob of files based on a list of groups
      copy:
        src: "{{ item }}"
        dest: "/tmp/{{ item }}"
      loop: '{{ q("fileglob", *globlist) }}'
      vars:
        globlist: '{{ mygroups | map("regex_replace", "^(.*)$", "files/\1/*.conf") | list }}'
```
Complex Type transformations
----------------------------
Jinja provides filters for simple data type transformations (`int`, `bool`, and so on), but transforming data structures is not as easy. You can use loops and list comprehensions, as shown above, to help; other filters and lookups can also be chained and leveraged to achieve more complex transformations.
### Create dictionary from list
In most languages it is easy to create a dictionary (a.k.a. map/associative array/hash and so on) from a list of pairs; in Ansible there are a couple of ways to do it, and the best one for you might depend on the source of your data.
These examples produce `{"a": "b", "c": "d"}`
Simple list to dict by assuming the list is [key, value, key, value, …]
```
vars:
  single_list: [ 'a', 'b', 'c', 'd' ]
  mydict: "{{ dict(single_list | slice(2)) }}"
```
It is simpler when we have a list of pairs:
```
vars:
  list_of_pairs: [ ['a', 'b'], ['c', 'd'] ]
  mydict: "{{ dict(list_of_pairs) }}"
```
Both end up being the same thing, with `slice(2)` transforming `single_list` to a `list_of_pairs` generator.
A bit more complex, using `set_fact` and a `loop` to create/update a dictionary with key value pairs from 2 lists:
Using set_fact to create a dictionary from a set of lists
```
- name: Uses 'combine' to update the dictionary and 'zip' to make pairs of both lists
  ansible.builtin.set_fact:
    mydict: "{{ mydict | default({}) | combine({item[0]: item[1]}) }}"
  loop: "{{ (keys | zip(values)) | list }}"
  vars:
    keys:
      - foo
      - var
      - bar
    values:
      - a
      - b
      - c
```
This results in `{"foo": "a", "var": "b", "bar": "c"}`.
You can even combine these simple examples with other filters and lookups to create a dictionary dynamically by matching patterns to variable names:
Using ‘vars’ to define dictionary from a set of lists without needing a task
```
vars:
  myvarnames: "{{ q('varnames', '^my') }}"
  mydict: "{{ dict(myvarnames | zip(q('vars', *myvarnames))) }}"
```
A quick explanation, since there is a lot to unpack from these two lines:
* The `varnames` lookup returns a list of variables that match “begin with `my`”.
* The list from the previous step is then fed into the `vars` lookup to get the list of values. The `*` is used to ‘dereference the list’ (a pythonism that works in Jinja); otherwise it would take the list as a single argument.
* Both lists get passed to the `zip` filter to pair them off into a unified list of pairs ([key, value], [key2, value2], …).
* The `dict` function then takes this ‘list of pairs’ to create the dictionary (see the sketch below).
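A minimal sketch with made-up variables (`my_port` and `my_host` are hypothetical):
```
- name: Build a dict from all variables whose names start with my_
  ansible.builtin.debug:
    msg: "{{ dict(matching | zip(q('vars', *matching))) }}"
  vars:
    my_port: 8080
    my_host: web1
    matching: "{{ q('varnames', '^my_') }}"
# expected output: {"my_host": "web1", "my_port": 8080}
```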
An example of how to use facts to find a host’s data that meets condition X:
```
vars:
  uptime_of_host_most_recently_rebooted: "{{ ansible_play_hosts_all | map('extract', hostvars, 'ansible_uptime_seconds') | sort | first }}"
```
Using an example from @zoradache on Reddit, to show the ‘uptime in days/hours/minutes’ (assumes facts were gathered). <https://www.reddit.com/r/ansible/comments/gj5a93/trying_to_get_uptime_from_seconds/fqj2qr3/>
```
- name: Show the uptime in a certain format
  ansible.builtin.debug:
    msg: Timedelta {{ now() - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
```
See also
[Using filters to manipulate data](playbooks_filters)
Jinja2 filters included with Ansible
[Tests](playbooks_tests)
Jinja2 tests included with Ansible
[Jinja2 Docs](https://jinja.palletsprojects.com/)
Jinja2 documentation, includes lists for core filters and tests
Working with playbooks
======================
Playbooks record and execute Ansible’s configuration, deployment, and orchestration functions. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
If Ansible modules are the tools in your workshop, playbooks are your instruction manuals, and your inventory of hosts is your raw material.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.
Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to organize playbooks and the files they include, and we’ll offer some suggestions on organizing them and on making the most of Ansible.
You should look at [Example Playbooks](https://github.com/ansible/ansible-examples) while reading along with the playbook documentation. These illustrate best practices as well as how to put many of the various concepts together.
* [Templating (Jinja2)](playbooks_templating)
+ [Using filters to manipulate data](playbooks_filters)
+ [Tests](playbooks_tests)
+ [Lookups](playbooks_lookups)
+ [Python3 in templates](playbooks_python_version)
+ [Get the current time](playbooks_templating#get-the-current-time)
* [Advanced playbooks features](playbooks_special_topics)
* [Playbook Example: Continuous Delivery and Rolling Upgrades](guide_rolling_upgrade)
+ [What is continuous delivery?](guide_rolling_upgrade#what-is-continuous-delivery)
+ [Site deployment](guide_rolling_upgrade#site-deployment)
+ [Reusable content: roles](guide_rolling_upgrade#reusable-content-roles)
+ [Configuration: group variables](guide_rolling_upgrade#configuration-group-variables)
+ [The rolling upgrade](guide_rolling_upgrade#the-rolling-upgrade)
+ [Managing other load balancers](guide_rolling_upgrade#managing-other-load-balancers)
+ [Continuous delivery end-to-end](guide_rolling_upgrade#continuous-delivery-end-to-end)
Including and importing
=======================
The content on this page has been moved to [Re-using Ansible artifacts](playbooks_reuse#playbooks-reuse).
See also
[YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax)
Learn about YAML syntax
[Working with playbooks](playbooks#working-with-playbooks)
Review the basic Playbook language features
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables in playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditionals in playbooks
[Loops](playbooks_loops#playbooks-loops)
Loops in playbooks
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
Learn how to extend Ansible by writing your own modules
[GitHub Ansible examples](https://github.com/ansible/ansible-examples)
Complete playbook files from the GitHub project source
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
Windows Guides
==============
The following sections provide information on managing Windows hosts with Ansible.
Because Windows is a non-POSIX-compliant operating system, Ansible interacts with Windows hosts differently than with Linux/Unix hosts. These guides highlight some of the differences between Linux/Unix hosts and hosts running Windows.
* [Setting up a Windows Host](windows_setup)
+ [Host Requirements](windows_setup#host-requirements)
+ [WinRM Setup](windows_setup#winrm-setup)
+ [Windows SSH Setup](windows_setup#windows-ssh-setup)
* [Windows Remote Management](windows_winrm)
+ [What is WinRM?](windows_winrm#what-is-winrm)
+ [Authentication Options](windows_winrm#authentication-options)
+ [Non-Administrator Accounts](windows_winrm#non-administrator-accounts)
+ [WinRM Encryption](windows_winrm#winrm-encryption)
+ [Inventory Options](windows_winrm#inventory-options)
+ [IPv6 Addresses](windows_winrm#ipv6-addresses)
+ [HTTPS Certificate Validation](windows_winrm#https-certificate-validation)
+ [TLS 1.2 Support](windows_winrm#tls-1-2-support)
+ [Limitations](windows_winrm#limitations)
* [Using Ansible and Windows](windows_usage)
+ [Use Cases](windows_usage#use-cases)
+ [Path Formatting for Windows](windows_usage#path-formatting-for-windows)
+ [Limitations](windows_usage#limitations)
+ [Developing Windows Modules](windows_usage#developing-windows-modules)
* [Desired State Configuration](windows_dsc)
+ [What is Desired State Configuration?](windows_dsc#what-is-desired-state-configuration)
+ [Host Requirements](windows_dsc#host-requirements)
+ [Why Use DSC?](windows_dsc#why-use-dsc)
+ [How to Use DSC?](windows_dsc#how-to-use-dsc)
+ [Custom DSC Resources](windows_dsc#custom-dsc-resources)
+ [Examples](windows_dsc#examples)
* [Windows performance](windows_performance)
+ [Optimise PowerShell performance to reduce Ansible task overhead](windows_performance#optimise-powershell-performance-to-reduce-ansible-task-overhead)
+ [Fix high-CPU-on-boot for VMs/cloud instances](windows_performance#fix-high-cpu-on-boot-for-vms-cloud-instances)
* [Windows Frequently Asked Questions](windows_faq)
+ [Does Ansible work with Windows XP or Server 2003?](windows_faq#does-ansible-work-with-windows-xp-or-server-2003)
+ [Are Server 2008, 2008 R2 and Windows 7 supported?](windows_faq#are-server-2008-2008-r2-and-windows-7-supported)
+ [Can I manage Windows Nano Server with Ansible?](windows_faq#can-i-manage-windows-nano-server-with-ansible)
+ [Can Ansible run on Windows?](windows_faq#can-ansible-run-on-windows)
+ [Can I use SSH keys to authenticate to Windows hosts?](windows_faq#can-i-use-ssh-keys-to-authenticate-to-windows-hosts)
+ [Why can I run a command locally that does not work under Ansible?](windows_faq#why-can-i-run-a-command-locally-that-does-not-work-under-ansible)
+ [This program won’t install on Windows with Ansible](windows_faq#this-program-won-t-install-on-windows-with-ansible)
+ [What Windows modules are available?](windows_faq#what-windows-modules-are-available)
+ [Can I run Python modules on Windows hosts?](windows_faq#can-i-run-python-modules-on-windows-hosts)
+ [Can I connect to Windows hosts over SSH?](windows_faq#can-i-connect-to-windows-hosts-over-ssh)
+ [Why is connecting to a Windows host via SSH failing?](windows_faq#why-is-connecting-to-a-windows-host-via-ssh-failing)
+ [Why are my credentials being rejected?](windows_faq#why-are-my-credentials-being-rejected)
+ [Why am I getting an error SSL CERTIFICATE\_VERIFY\_FAILED?](windows_faq#why-am-i-getting-an-error-ssl-certificate-verify-failed)
Patterns: targeting hosts and groups
====================================
When you execute Ansible through an ad hoc command or by running a playbook, you must choose which managed nodes or groups you want to execute against. Patterns let you run commands and playbooks against specific hosts and/or groups in your inventory. An Ansible pattern can refer to a single host, an IP address, an inventory group, a set of groups, or all hosts in your inventory. Patterns are highly flexible - you can exclude or require subsets of hosts, use wildcards or regular expressions, and more. Ansible executes on all inventory hosts included in the pattern.
* [Using patterns](#using-patterns)
* [Common patterns](#common-patterns)
* [Limitations of patterns](#limitations-of-patterns)
* [Advanced pattern options](#advanced-pattern-options)
+ [Using variables in patterns](#using-variables-in-patterns)
+ [Using group position in patterns](#using-group-position-in-patterns)
+ [Using regexes in patterns](#using-regexes-in-patterns)
* [Patterns and ansible-playbook flags](#patterns-and-ansible-playbook-flags)
Using patterns
--------------
You use a pattern almost any time you execute an ad hoc command or a playbook. The pattern is the only element of an [ad hoc command](intro_adhoc#intro-adhoc) that has no flag. It is usually the second element:
```
ansible <pattern> -m <module_name> -a "<module options>"
```
For example:
```
ansible webservers -m service -a "name=httpd state=restarted"
```
In a playbook the pattern is the content of the `hosts:` line for each play:
```
- name: <play_name>
  hosts: <pattern>
```
For example:
```
- name: restart webservers
  hosts: webservers
```
Since you often want to run a command or playbook against multiple hosts at once, patterns often refer to inventory groups. Both the ad hoc command and the playbook above will execute against all machines in the `webservers` group.
Common patterns
---------------
This table lists common patterns for targeting inventory hosts and groups.
| Description | Pattern(s) | Targets |
| --- | --- | --- |
| All hosts | all (or \*) | every host in your inventory |
| One host | host1 | the single host host1 |
| Multiple hosts | host1:host2 (or host1,host2) | host1 and host2 |
| One group | webservers | all hosts in webservers |
| Multiple groups | webservers:dbservers | all hosts in webservers plus all hosts in dbservers |
| Excluding groups | webservers:!atlanta | all hosts in webservers except those in atlanta |
| Intersection of groups | webservers:&staging | any hosts in webservers that are also in staging |
Note
You can use either a comma (`,`) or a colon (`:`) to separate a list of hosts. The comma is preferred when dealing with ranges and IPv6 addresses.
Once you know the basic patterns, you can combine them. This example:
```
webservers:dbservers:&staging:!phoenix
```
targets all machines in the groups ‘webservers’ and ‘dbservers’ that are also in the group ‘staging’, except any machines in the group ‘phoenix’.
You can use wildcard patterns with FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address:
```
192.0.*
*.example.com
*.com
```
You can mix wildcard patterns and groups at the same time:
```
one*.com:dbservers
```
Limitations of patterns
-----------------------
Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it. If your pattern includes an IP address or hostname that does not appear in your inventory, you will see an error like this:
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: Could not match supplied host pattern, ignoring: *.not_in_inventory.com
```
Your pattern must match your inventory syntax. If you define a host as an [alias](intro_inventory#inventory-aliases):
```
atlanta:
  host1:
    http_port: 80
    maxRequestsPerChild: 808
    host: 127.0.0.2
```
you must use the alias in your pattern. In the example above, you must use `host1` in your pattern. If you use the IP address, you will once again get the error:
```
[WARNING]: Could not match supplied host pattern, ignoring: 127.0.0.2
```
Advanced pattern options
------------------------
The common patterns described above will meet most of your needs, but Ansible offers several other ways to define the hosts and groups you want to target.
### Using variables in patterns
You can use variables to enable passing group specifiers via the `-e` argument to ansible-playbook:
```
webservers:!{{ excluded }}:&{{ required }}
```
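You can then supply the group names at run time with `-e` (the playbook name and values here are placeholders):
```
ansible-playbook site.yml -e "excluded=atlanta required=staging"
```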
### Using group position in patterns
You can define a host or subset of hosts by its position in a group. For example, given the following group:
```
[webservers]
cobweb
webbing
weber
```
you can use subscripts to select individual hosts or ranges within the webservers group:
```
webservers[0]   # == cobweb
webservers[-1]  # == weber
webservers[0:2] # == webservers[0],webservers[1]
                # == cobweb,webbing
webservers[1:]  # == webbing,weber
webservers[:3]  # == cobweb,webbing,weber
```
### Using regexes in patterns
You can specify a pattern as a regular expression by starting the pattern with `~`:
```
~(web|db).*\.example\.com
```
Patterns and ansible-playbook flags
-----------------------------------
You can change the behavior of the patterns defined in playbooks using command-line options. For example, you can run a playbook that defines `hosts: all` on a single host by specifying `-i 127.0.0.2,` (note the trailing comma). This works even if the host you target is not defined in your inventory. You can also limit the hosts you target on a particular run with the `--limit` flag:
```
ansible-playbook site.yml --limit datacenter2
```
Finally, you can use `--limit` to read the list of hosts from a file by prefixing the file name with `@`:
```
ansible-playbook site.yml --limit @retry_hosts.txt
```
If [RETRY\_FILES\_ENABLED](../reference_appendices/config#retry-files-enabled) is set to `True`, a `.retry` file will be created after the `ansible-playbook` run containing a list of failed hosts from all plays. This file is overwritten each time `ansible-playbook` finishes running.
To run on those failed hosts:
```
ansible-playbook site.yml --limit @site.retry
```
To apply your knowledge of patterns with Ansible commands and playbooks, read [Introduction to ad hoc commands](intro_adhoc#intro-adhoc) and [Intro to playbooks](playbooks_intro#playbooks-intro).
See also
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of basic commands
[Working with playbooks](playbooks#working-with-playbooks)
Learning the Ansible configuration management language
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Lookups
=======
Lookup plugins retrieve data from outside sources such as files, databases, key/value stores, APIs, and other services. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. Before Ansible 2.5, lookups were mostly used indirectly in `with_<lookup>` constructs for looping. Starting with Ansible 2.5, lookups are used more explicitly as part of Jinja2 expressions fed into the `loop` keyword.
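For example, both of these tasks loop over the same lookup (a minimal sketch; the glob path is illustrative):
```
# older style: the lookup used indirectly through with_<lookup>
- name: Show each matching file
  ansible.builtin.debug:
    msg: "{{ item }}"
  with_fileglob: "/etc/ansible/*.cfg"

# 2.5+ style: the same lookup fed explicitly to the loop keyword
- name: Show each matching file
  ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ query('fileglob', '/etc/ansible/*.cfg') }}"
```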
Using lookups in variables
--------------------------
You can populate variables using lookups. Ansible evaluates the value each time it is executed in a task (or template):
```
vars:
  motd_value: "{{ lookup('file', '/etc/motd') }}"
tasks:
  - debug:
      msg: "motd value is {{ motd_value }}"
```
For more details and a list of lookup plugins in ansible-core, see [Working With Plugins](../plugins/plugins#plugins-lookup). You may also find lookup plugins in collections. You can review a list of lookup plugins installed on your control machine with the command `ansible-doc -l -t lookup`.
See also
[Working with playbooks](playbooks#working-with-playbooks)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Handlers: running operations on change
======================================
Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name.
* [Handler example](#handler-example)
* [Controlling when handlers run](#controlling-when-handlers-run)
* [Using variables with handlers](#using-variables-with-handlers)
Handler example
---------------
This playbook, `verify-apache.yml`, contains a single play with a handler:
```
---
- name: Verify apache installation
  hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: Ensure apache is at the latest version
      ansible.builtin.yum:
        name: httpd
        state: latest

    - name: Write the apache config file
      ansible.builtin.template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
      notify:
        - Restart apache

    - name: Ensure apache is running
      ansible.builtin.service:
        name: httpd
        state: started

  handlers:
    - name: Restart apache
      ansible.builtin.service:
        name: httpd
        state: restarted
```
In this example playbook, the second task notifies the handler. A single task can notify more than one handler:
```
- name: Template configuration file
  ansible.builtin.template:
    src: template.j2
    dest: /etc/foo.conf
  notify:
    - Restart memcached
    - Restart apache

handlers:
  - name: Restart memcached
    ansible.builtin.service:
      name: memcached
      state: restarted

  - name: Restart apache
    ansible.builtin.service:
      name: apache
      state: restarted
```
Controlling when handlers run
-----------------------------
By default, handlers run after all the tasks in a particular play have been completed. This approach is efficient, because the handler only runs once, regardless of how many tasks notify it. For example, if multiple tasks update a configuration file and notify a handler to restart Apache, Ansible only bounces Apache once to avoid unnecessary restarts.
If you need handlers to run before the end of the play, add a task to flush them using the [meta module](../collections/ansible/builtin/meta_module#meta-module), which executes Ansible actions:
```
tasks:
  - name: Some tasks go here
    ansible.builtin.shell: ...

  - name: Flush handlers
    meta: flush_handlers

  - name: Some other tasks
    ansible.builtin.shell: ...
```
The `meta: flush_handlers` task triggers any handlers that have been notified at that point in the play.
Using variables with handlers
-----------------------------
You may want your Ansible handlers to use variables. For example, if the name of a service varies slightly by distribution, you want your output to show the exact name of the restarted service for each target machine. Avoid placing variables in the name of the handler. Since handler names are templated early on, Ansible may not have a value available for a handler name like this:
```
handlers:
  # This handler name may cause your play to fail!
  - name: Restart "{{ web_service_name }}"
```
If the variable used in the handler name is not available, the entire play fails. Changing that variable mid-play **will not** result in a newly created handler.
Instead, place variables in the task parameters of your handler. You can load the values using `include_vars` like this:
```
tasks:
  - name: Set host variables based on distribution
    include_vars: "{{ ansible_facts.distribution }}.yml"

handlers:
  - name: Restart web service
    ansible.builtin.service:
      name: "{{ web_service_name | default('httpd') }}"
      state: restarted
```
Handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
```
handlers:
  - name: Restart memcached
    ansible.builtin.service:
      name: memcached
      state: restarted
    listen: "restart web services"

  - name: Restart apache
    ansible.builtin.service:
      name: apache
      state: restarted
    listen: "restart web services"

tasks:
  - name: Restart everything
    ansible.builtin.command: echo "this task will restart the web services"
    notify: "restart web services"
```
This use makes it much easier to trigger multiple handlers. It also decouples handlers from their names, making it easier to share handlers among playbooks and roles (especially when using 3rd party roles from a shared source like Galaxy).
Note
* Handlers always run in the order they are defined, not in the order listed in the notify-statement. This is also the case for handlers using `listen`.
* Handler names and `listen` topics live in a global namespace.
* Handler names are templatable and `listen` topics are not.
* Use unique handler names. If you trigger more than one handler with the same name, the first one(s) get overwritten. Only the last one defined will run.
* You can notify a handler defined inside a static include.
* You cannot notify a handler defined inside a dynamic include.
* A handler cannot run `import_role` or `include_role`.
When using handlers within roles, note that:
* handlers notified within `pre_tasks`, `tasks`, and `post_tasks` sections are automatically flushed at the end of section where they were notified.
* handlers notified within `roles` section are automatically flushed at the end of `tasks` section, but before any `tasks` handlers.
* handlers are play scoped and as such can be used outside of the role they are defined in.
Setting the remote environment
==============================
New in version 1.1.
You can use the `environment` keyword at the play, block, or task level to set an environment variable for an action on a remote host. With this keyword, you can enable using a proxy for a task that does http requests, set the required environment variables for language-specific version managers, and more.
When you set a value with `environment:` at the play or block level, it is available only to tasks within the play or block that are executed by the same user. The `environment:` keyword does not affect Ansible itself, Ansible configuration settings, the environment for other users, or the execution of other plugins like lookups and filters. Variables set with `environment:` do not automatically become Ansible facts, even when you set them at the play level. You must include an explicit `gather_facts` task in your playbook and set the `environment` keyword on that task to turn these values into Ansible facts.
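A minimal sketch of that fact-gathering pattern (the variable name is invented; exact fact behavior may vary with your setup):
```
- hosts: all
  gather_facts: false
  tasks:
    - name: Gather facts so the value below is reflected in ansible_env
      ansible.builtin.gather_facts:
      environment:
        MY_SETTING: some_value
```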
Setting the remote environment in a task
----------------------------------------
You can set the environment directly at the task level:
```
- hosts: all
  remote_user: root
  tasks:
    - name: Install cobbler
      ansible.builtin.package:
        name: cobbler
        state: present
      environment:
        http_proxy: http://proxy.example.com:8080
```
You can re-use environment settings by defining them as variables in your play and accessing them in a task as you would access any stored Ansible variable:
```
- hosts: all
  remote_user: root
  # create a variable named "proxy_env" that is a dictionary
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080
  tasks:
    - name: Install cobbler
      ansible.builtin.package:
        name: cobbler
        state: present
      environment: "{{ proxy_env }}"
```
You can store environment settings for re-use in multiple playbooks by defining them in a `group_vars` file:
```
---
# file: group_vars/boston
ntp_server: ntp.bos.example.com
backup: bak.bos.example.com
proxy_env:
  http_proxy: http://proxy.bos.example.com:8080
  https_proxy: http://proxy.bos.example.com:8080
```
You can set the remote environment at the play level:
```
- hosts: testing
  roles:
    - php
    - nginx
  environment:
    http_proxy: http://proxy.example.com:8080
```
These examples show proxy settings, but you can provide any number of settings this way.
Working with language-specific version managers
===============================================
Some language-specific version managers (such as rbenv and nvm) require you to set environment variables while these tools are in use. When using these tools manually, you usually source some environment variables from a script or from lines added to your shell configuration file. In Ansible, you can do this with the environment keyword at the play level:
```
---
### A playbook demonstrating a common npm workflow:
# - Check for package.json in the application directory
# - If package.json exists:
#   * Run npm prune
#   * Run npm install
- hosts: application
  become: false
  vars:
    node_app_dir: /var/local/my_node_app
  environment:
    NVM_DIR: /var/local/nvm
    PATH: /var/local/nvm/versions/node/v4.2.1/bin:{{ ansible_env.PATH }}
  tasks:
    - name: Check for package.json
      ansible.builtin.stat:
        path: '{{ node_app_dir }}/package.json'
      register: packagejson

    - name: Run npm prune
      ansible.builtin.command: npm prune
      args:
        chdir: '{{ node_app_dir }}'
      when: packagejson.stat.exists

    - name: Run npm install
      community.general.npm:
        path: '{{ node_app_dir }}'
      when: packagejson.stat.exists
```
Note
The example above uses `ansible_env` as part of the PATH. Basing variables on `ansible_env` is risky. Ansible populates `ansible_env` values by gathering facts, so the value of the variables depends on the `remote_user` or `become_user` Ansible used when gathering those facts. If you change `remote_user`/`become_user`, the values in `ansible_env` may not be the ones you expect.
Warning
Environment variables are normally passed in clear text (shell plugin dependent) so they are not a recommended way of passing secrets to the module being executed.
You can also specify the environment at the task level:
```
---
- name: Install ruby 2.3.1
ansible.builtin.command: rbenv install {{ rbenv_ruby_version }}
args:
creates: '{{ rbenv_root }}/versions/{{ rbenv_ruby_version }}/bin/ruby'
vars:
rbenv_root: /usr/local/rbenv
rbenv_ruby_version: 2.3.1
environment:
CONFIGURE_OPTS: '--disable-install-doc'
RBENV_ROOT: '{{ rbenv_root }}'
PATH: '{{ rbenv_root }}/bin:{{ rbenv_root }}/shims:{{ rbenv_plugins }}/ruby-build/bin:{{ ansible_env.PATH }}'
```
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Connection methods and details
==============================
This section shows you how to expand and refine the connection methods Ansible uses for your inventory.
ControlPersist and paramiko
---------------------------
By default, Ansible uses native OpenSSH, because it supports ControlPersist (a performance feature), Kerberos, and options in `~/.ssh/config` such as Jump Host setup. If your control machine uses an older version of OpenSSH that does not support ControlPersist, Ansible will fall back to a Python implementation of OpenSSH called ‘paramiko’.
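You can also force a specific connection plugin for a single run with the `-c`/`--connection` option (a quick sketch):
```
# force the paramiko connection plugin for this run
ansible webservers -m ping -c paramiko
```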
Setting a remote user
---------------------
By default, Ansible connects to all remote devices with the user name you are using on the control node. If that user name does not exist on a remote device, you can set a different user name for the connection. If you just need to do some tasks as a different user, look at [Understanding privilege escalation: become](become#become). You can set the connection user in a playbook:
```
---
- name: update webservers
  hosts: webservers
  remote_user: admin
  tasks:
    - name: thing to do first in this playbook
      . . .
```
as a host variable in inventory:
```
other1.example.com ansible_connection=ssh ansible_user=myuser
other2.example.com ansible_connection=ssh ansible_user=myotheruser
```
or as a group variable in inventory:
```
cloud:
  hosts:
    cloud1:
      ansible_host: my_backup.cloud.com
    cloud2:
      ansible_host: my_backup2.cloud.com
  vars:
    ansible_user: admin
```
Setting up SSH keys
-------------------
By default, Ansible assumes you are using SSH keys to connect to remote machines. SSH keys are encouraged, but you can use password authentication if needed with the `--ask-pass` option. If you need to provide a password for [privilege escalation](become#become) (sudo, pbrun, and so on), use `--ask-become-pass`.
Note
Ansible does not expose a channel to allow communication between the user and the ssh process to accept a password manually to decrypt an ssh key when using the ssh connection plugin (which is the default). The use of `ssh-agent` is highly recommended.
To set up SSH agent to avoid retyping passwords, you can do:
```
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
```
Depending on your setup, you may wish to use Ansible’s `--private-key` command line option to specify a pem file instead. You can also add the private key file:
```
$ ssh-agent bash
$ ssh-add ~/.ssh/keypair.pem
```
Another way to add private key files without using ssh-agent is using `ansible_ssh_private_key_file` in an inventory file as explained here: [How to build your inventory](intro_inventory#intro-inventory).
Running against localhost
-------------------------
You can run commands against the control node by using “localhost” or “127.0.0.1” for the server name:
```
$ ansible localhost -m ping -e 'ansible_python_interpreter="/usr/bin/env python"'
```
You can specify localhost explicitly by adding this to your inventory file:
```
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"
```
Managing host key checking
--------------------------
Ansible enables host key checking by default. Checking host keys guards against server spoofing and man-in-the-middle attacks, but it does require some maintenance.
If a host is reinstalled and has a different key in ‘known\_hosts’, this will result in an error message until corrected. If a new host is not in ‘known\_hosts’ your control node may prompt for confirmation of the key, which results in an interactive experience if you run Ansible from, say, cron. You might not want this.
If you understand the implications and wish to disable this behavior, you can do so by editing `/etc/ansible/ansible.cfg` or `~/.ansible.cfg`:
```
[defaults]
host_key_checking = False
```
Alternatively this can be set by the [`ANSIBLE_HOST_KEY_CHECKING`](../reference_appendices/config#envvar-ANSIBLE_HOST_KEY_CHECKING) environment variable:
```
$ export ANSIBLE_HOST_KEY_CHECKING=False
```
Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.
Other connection methods
------------------------
Ansible can use a variety of connection methods beyond SSH. You can select any connection plugin, including managing things locally and managing chroot, lxc, and jail containers. A mode called ‘ansible-pull’ can also invert the system and have systems ‘phone home’ via scheduled git checkouts to pull configuration directives from a central repository.
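For example, a minimal `ansible-pull` invocation might look like this (the repository URL and playbook name are placeholders):
```
# check out the repo and run local.yml from it against the local machine
ansible-pull -U https://github.com/example/ansible-config.git local.yml
```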
Module Maintenance & Support
============================
If you are using a module and you discover a bug, you may want to know where to report that bug, who is responsible for fixing it, and how you can track changes to the module. If you are a Red Hat subscriber, you may want to know whether you can get support for the issue you are facing.
Starting in Ansible 2.10, most modules live in collections. The distribution method for each collection reflects the maintenance and support for the modules in that collection.
* [Maintenance](#maintenance)
* [Issue Reporting](#issue-reporting)
* [Support](#support)
Maintenance
-----------
| Collection | Code location | Maintained by |
| --- | --- | --- |
| ansible.builtin | [ansible/ansible repo](https://github.com/ansible/ansible/tree/devel/lib/ansible/modules) on GitHub | core team |
| distributed on Galaxy | various; follow `repo` link | community or partners |
| distributed on Automation Hub | various; follow `repo` link | content team or partners |
Issue Reporting
---------------
If you find a bug that affects a plugin in the main Ansible repo, also known as `ansible-core`:
1. Confirm that you are running the latest stable version of Ansible or the devel branch.
2. Look at the [issue tracker in the Ansible repo](https://github.com/ansible/ansible/issues) to see if an issue has already been filed.
3. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
If you find a bug that affects a plugin in a Galaxy collection:
1. Find the collection on Galaxy.
2. Find the issue tracker for the collection.
3. Look there to see if an issue has already been filed.
4. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
Some partner collections may be hosted in private repositories.
If you are not sure whether the behavior you see is a bug, if you have questions, if you want to discuss development-oriented topics, or if you just want to get in touch, use one of our Google groups or IRC channels to [communicate with Ansiblers](https://docs.ansible.com/ansible/latest/community/communication.html#communication).
If you find a bug that affects a module in an Automation Hub collection:
1. If the collection offers an Issue Tracker link on Automation Hub, click there and open an issue on the collection repository. If it does not, follow the standard process for reporting issues on the [Red Hat Customer Portal](https://access.redhat.com/). You must have a subscription to the Red Hat Ansible Automation Platform to create an issue on the portal.
Support
-------
All plugins that remain in `ansible-core` and all collections hosted in Automation Hub are supported by Red Hat. No other plugins or collections are supported by Red Hat. If you have a subscription to the Red Hat Ansible Automation Platform, you can find more information and resources on the [Red Hat Customer Portal.](https://access.redhat.com/)
See also
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of using modules in /usr/bin/ansible
[Working with playbooks](playbooks#working-with-playbooks)
Examples of using modules with /usr/bin/ansible-playbook
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Playbook Example: Continuous Delivery and Rolling Upgrades
==========================================================
* [What is continuous delivery?](#what-is-continuous-delivery)
* [Site deployment](#site-deployment)
* [Reusable content: roles](#reusable-content-roles)
* [Configuration: group variables](#configuration-group-variables)
* [The rolling upgrade](#the-rolling-upgrade)
* [Managing other load balancers](#managing-other-load-balancers)
* [Continuous delivery end-to-end](#continuous-delivery-end-to-end)
What is continuous delivery?
----------------------------
Continuous delivery (CD) means frequently delivering updates to your software application.
The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization gets better at the process of responding to change.
Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis – sometimes every time there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates in a zero-downtime way.
This document describes in detail how to achieve this goal, using one of Ansible’s most complete example playbooks as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates, and group variables, and it also comes with an orchestration playbook that can do zero-downtime rolling upgrades of the web application stack.
Note
[Click here for the latest playbooks for this example](https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy).
The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.
We’re not going to cover how to run these playbooks here. Read the included README in the GitHub project along with the example for that information. Instead, we’re going to take a close look at every part of the playbook and describe what it does.
Site deployment
---------------
Let’s start with `site.yml`. This is our site-wide deployment playbook. It can be used to initially deploy the site, as well as push updates to all of the servers:
```
---
# This playbook deploys the whole application stack in this site.

# Apply common configuration to all hosts
- hosts: all
  roles:
    - common

# Configure and deploy database servers.
- hosts: dbservers
  roles:
    - db

# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.
- hosts: webservers
  roles:
    - base-apache
    - web

# Configure and deploy the load balancer(s).
- hosts: lbservers
  roles:
    - haproxy

# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring
  roles:
    - base-apache
    - nagios
```
Note
If you’re not familiar with terms like playbooks and plays, you should review [Working with playbooks](playbooks#working-with-playbooks).
In this playbook we have 5 plays. The first one targets `all` hosts and applies the `common` role to all of the hosts. This is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply to all of the servers.
The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for Nagios monitoring, the database, and the web application, we’ve implemented a `base-apache` role that installs and configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts.
Reusable content: roles
-----------------------
By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize content: tasks, handlers, templates, and files, into reusable components.
This example has six roles: `common`, `base-apache`, `db`, `haproxy`, `nagios`, and `web`. How you organize your roles is up to you and your application, but most sites will have one or more common roles that are applied to all systems, and then a series of application-specific roles that install and configure particular parts of the site.
Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior. You can read more about roles in the [Roles](playbooks_reuse_roles#playbooks-reuse-roles) section.
Configuration: group variables
------------------------------
Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks to customize behavior and to provide easily-changed settings and parameters. They are stored in a directory called `group_vars` in the same location as your inventory. Here is lamp_haproxy’s `group_vars/all` file. As you might expect, these variables are applied to all of the machines in your inventory:
```
---
httpd_port: 80
ntpserver: 192.0.2.23
```
This is a YAML file, and you can create lists and dictionaries for more complex variable structures. In this case, we are just setting two variables, one for the port for the web server, and one for the NTP server that our machines should use for time synchronization.
Here’s another group variables file. This is `group_vars/dbservers` which applies to the hosts in the `dbservers` group:
```
---
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: usersecret
```
If you look in the example, there are group variables for the `webservers` group and the `lbservers` group, similarly.
These variables are used in a variety of places. You can use them in playbooks, like this, in `roles/db/tasks/main.yml`:
```
- name: Create Application Database
  mysql_db:
    name: "{{ dbname }}"
    state: present

- name: Create Application DB User
  mysql_user:
    name: "{{ dbuser }}"
    password: "{{ upassword }}"
    priv: "*.*:ALL"
    host: '%'
    state: present
```
You can also use these variables in templates, like this, in `roles/common/templates/ntp.conf.j2`:
```
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
```
You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The syntax inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the data inside. In templates, you can also use for loops and if statements to handle more complex situations, like this, in `roles/common/templates/iptables.j2`:
```
{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
```
This is testing to see if the inventory name of the machine we’re currently operating on (`inventory_hostname`) exists in the inventory group `dbservers`. If so, that machine will get an iptables ACCEPT line for port 3306.
Here’s another example, from the same template:
```
{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}
```
This loops over all of the hosts in the group called `monitoring`, and adds an ACCEPT line for each monitoring host’s default IPv4 address to the current machine’s iptables configuration, so that Nagios can monitor those hosts.
You can learn a lot more about Jinja2 and its capabilities [here](https://jinja.palletsprojects.com/), and you can read more about Ansible variables in general in the [Using Variables](playbooks_variables#playbooks-variables) section.
The rolling upgrade
-------------------
Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is where Ansible’s orchestration features come into play. While some applications use the term ‘orchestration’ to mean basic ordering or command-blasting, Ansible refers to orchestration as ‘conducting machines like an orchestra’, and has a pretty sophisticated engine for it.
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook, called `rolling_update.yml`.
Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:
```
- hosts: monitoring
  tasks: []
```
What’s going on here, and why are there no tasks? You might know that Ansible gathers “facts” from the servers before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution versions, and so on. In our case, we need to know something about all of the monitoring servers in our environment before we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this pattern sometimes, and it’s a useful trick to know.
The next part is the update play. The first part looks like this:
```
- hosts: webservers
  user: root
  serial: 1
```
This is just a normal play definition, operating on the `webservers` group. The `serial` keyword tells Ansible how many servers to operate on at once. If it’s not specified, Ansible will parallelize these operations up to the default “forks” limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate on that many hosts at once. If you had just a handful of webservers, you may want to set `serial` to 1, for one host at a time. If you have 100, maybe you could set `serial` to 10, for ten at a time.
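`serial` also accepts percentages and lists of batch sizes, so you can sketch a canary-style rollout like this (the values are illustrative):
```
- hosts: webservers
  # one canary host first, then 10% of the group, then the rest in halves
  serial:
    - 1
    - "10%"
    - "50%"
```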
Here is the next part of the update play:
```
pre_tasks:
- name: disable nagios alerts for this host webserver service
nagios:
action: disable_alerts
host: "{{ inventory_hostname }}"
services: webserver
delegate_to: "{{ item }}"
loop: "{{ groups.monitoring }}"
- name: disable the server in haproxy
shell: echo "disable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
delegate_to: "{{ item }}"
loop: "{{ groups.lbservers }}"
```
Note
* The `serial` keyword forces the play to be executed in ‘batches’. Each batch counts as a full play with a subselection of hosts. This has some consequences for play behavior. For example, if all hosts in a batch fail, the play fails, which in turn fails the entire run. You should consider this when combining it with `max_fail_percentage`.
The `pre_tasks` keyword just lets you list tasks to run before the roles are called. This will make more sense in a minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the webserver that we are currently updating from the HAProxy load balancing pool.
The `delegate_to` and `loop` arguments, used together, cause Ansible to loop over each monitoring server and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, “on behalf” of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list of monitoring servers.
Note that the HAProxy step looks a little complicated. We’re using HAProxy in this example because it’s freely available, though if you have, for instance, an F5 or NetScaler in your infrastructure (or maybe an AWS Elastic IP setup), you can use Ansible modules to communicate with them instead. You might also wish to use other monitoring modules instead of nagios, but this just shows the main goal of the ‘pre tasks’ section: take the server out of monitoring, and take it out of rotation.
The next step simply re-applies the proper roles to the web servers. This will cause any configuration management declarations in the `web` and `base-apache` roles to be applied to the web servers, including an update of the web application code itself. We don’t have to do it this way; we could instead update only the web application itself, but this is a good example of how roles can be used to reuse tasks:
```
roles:
- common
- base-apache
- web
```
Finally, in the `post_tasks` section, we reverse the changes to the Nagios configuration and put the web server back in the load balancing pool:
```
post_tasks:
- name: Enable the server in haproxy
shell: echo "enable server myapplb/{{ inventory_hostname }}" | socat stdio /var/lib/haproxy/stats
delegate_to: "{{ item }}"
loop: "{{ groups.lbservers }}"
- name: re-enable nagios alerts
nagios:
action: enable_alerts
host: "{{ inventory_hostname }}"
services: webserver
delegate_to: "{{ item }}"
loop: "{{ groups.monitoring }}"
```
Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate modules instead.
Managing other load balancers
-----------------------------
In this example, we use the simple HAProxy load balancer to front-end the web servers. It’s easy to configure and easy to manage. As we have mentioned, Ansible has support for a variety of other load balancers like Citrix NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the [Working With Modules](modules#working-with-modules) documentation for more information.
For other load balancers, you may need to send shell commands to them (as we do for HAProxy above), or call an API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run them as a `local_action` if they contact an API. You can read more about local actions in the [Controlling where tasks run: delegation and local actions](playbooks_delegation#playbooks-delegation) section. If you develop anything interesting for hardware that lacks a module, it might make for a good contribution!
Continuous delivery end-to-end
------------------------------
Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of organizations use a continuous integration tool like [Jenkins](https://jenkins.io/) or [Atlassian Bamboo](https://www.atlassian.com/software/bamboo) to tie the development, test, release, and deploy steps together. You may also want to use a tool like [Gerrit](https://www.gerritcodereview.com/) to add a code review step to commits to either the application code itself, or to your Ansible playbooks, or both.
Depending on your environment, you might be deploying continuously to a test environment, running an integration test battery against that environment, and then deploying automatically into production. Or you could keep it simple and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.
For integration with Continuous Integration systems, you can easily trigger playbook runs using the `ansible-playbook` command line tool, or, if you’re using AWX, the `tower-cli` command or the built-in REST API. (The tower-cli `job launch` command will spawn a remote job over the REST API and is pretty slick.)
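For example, a CI job could trigger the rolling update from the previous section with either of these commands (the job template ID here is illustrative):
```
ansible-playbook rolling_update.yml
tower-cli job launch --job-template=5 --monitor
```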
This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers, for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to easily manage complicated environments and automate common operations.
See also
[lamp\_haproxy example](https://github.com/ansible/ansible-examples/tree/master/lamp_haproxy)
The lamp\_haproxy example discussed here.
[Working with playbooks](playbooks#working-with-playbooks)
An introduction to playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
An introduction to playbook roles
[Using Variables](playbooks_variables#playbooks-variables)
An introduction to Ansible variables
[Ansible.com: Continuous Delivery](https://www.ansible.com/use-cases/continuous-delivery)
An introduction to Continuous Delivery with Ansible
ansible How to build your inventory How to build your inventory
===========================
Ansible works against multiple managed nodes or “hosts” in your infrastructure at the same time, using a list or group of lists known as inventory. Once your inventory is defined, you use [patterns](intro_patterns#intro-patterns) to select the hosts or groups you want Ansible to run against.
The default location for inventory is a file called `/etc/ansible/hosts`. You can specify a different inventory file at the command line using the `-i <path>` option. You can also use multiple inventory files at the same time as described in [Using multiple inventory sources](#using-multiple-inventory-sources), and/or pull inventory from dynamic or cloud sources or different formats (YAML, ini, and so on), as described in [Working with dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory). Introduced in version 2.4, Ansible has [Inventory Plugins](../plugins/inventory#inventory-plugins) to make this flexible and customizable.
* [Inventory basics: formats, hosts, and groups](#inventory-basics-formats-hosts-and-groups)
+ [Default groups](#default-groups)
+ [Hosts in multiple groups](#hosts-in-multiple-groups)
+ [Adding ranges of hosts](#adding-ranges-of-hosts)
* [Adding variables to inventory](#adding-variables-to-inventory)
* [Assigning a variable to one machine: host variables](#assigning-a-variable-to-one-machine-host-variables)
+ [Inventory aliases](#inventory-aliases)
* [Assigning a variable to many machines: group variables](#assigning-a-variable-to-many-machines-group-variables)
+ [Inheriting variable values: group variables for groups of groups](#inheriting-variable-values-group-variables-for-groups-of-groups)
* [Organizing host and group variables](#organizing-host-and-group-variables)
* [How variables are merged](#how-variables-are-merged)
* [Using multiple inventory sources](#using-multiple-inventory-sources)
* [Connecting to hosts: behavioral inventory parameters](#connecting-to-hosts-behavioral-inventory-parameters)
+ [Non-SSH connection types](#non-ssh-connection-types)
* [Inventory setup examples](#inventory-setup-examples)
+ [Example: One inventory per environment](#example-one-inventory-per-environment)
+ [Example: Group by function](#example-group-by-function)
+ [Example: Group by location](#example-group-by-location)
Inventory basics: formats, hosts, and groups
--------------------------------------------
The inventory file can be in one of many formats, depending on the inventory plugins you have. The most common formats are INI and YAML. A basic INI `/etc/ansible/hosts` might look like this:
```
mail.example.com
[webservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
```
The headings in brackets are group names, which are used in classifying hosts and deciding what hosts you are controlling at what times and for what purpose. Group names should follow the same guidelines as [Creating valid variable names](playbooks_variables#valid-variable-names).
Here’s that same basic inventory file in YAML format:
```
all:
hosts:
mail.example.com:
children:
webservers:
hosts:
foo.example.com:
bar.example.com:
dbservers:
hosts:
one.example.com:
two.example.com:
three.example.com:
```
### Default groups
There are two default groups: `all` and `ungrouped`. The `all` group contains every host. The `ungrouped` group contains all hosts that don’t belong to any group other than `all`. Every host will always belong to at least two groups (`all` and `ungrouped`, or `all` and some other group). Though `all` and `ungrouped` are always present, they can be implicit and not appear in group listings like `group_names`.
### Hosts in multiple groups
You can (and probably will) put each host in more than one group. For example, a production webserver in a datacenter in Atlanta might be included in groups called [prod], [atlanta], and [webservers]. You can create groups that track:
* What - An application, stack or microservice (for example, database servers, web servers, and so on).
* Where - A datacenter or region, to talk to local DNS, storage, and so on (for example, east, west).
* When - The development stage, to avoid testing on production resources (for example, prod, test).
Extending the previous YAML inventory to include what, when, and where would look like:
```
all:
hosts:
mail.example.com:
children:
webservers:
hosts:
foo.example.com:
bar.example.com:
dbservers:
hosts:
one.example.com:
two.example.com:
three.example.com:
east:
hosts:
foo.example.com:
one.example.com:
two.example.com:
west:
hosts:
bar.example.com:
three.example.com:
prod:
hosts:
foo.example.com:
one.example.com:
two.example.com:
test:
hosts:
bar.example.com:
three.example.com:
```
You can see that `one.example.com` exists in the `dbservers`, `east`, and `prod` groups.
You can also use nested groups to simplify `prod` and `test` in this inventory, for the same result:
```
all:
hosts:
mail.example.com:
children:
webservers:
hosts:
foo.example.com:
bar.example.com:
dbservers:
hosts:
one.example.com:
two.example.com:
three.example.com:
east:
hosts:
foo.example.com:
one.example.com:
two.example.com:
west:
hosts:
bar.example.com:
three.example.com:
prod:
children:
east:
test:
children:
west:
```
You can find more examples on how to organize your inventories and group your hosts in [Inventory setup examples](#inventory-setup-examples).
### Adding ranges of hosts
If you have a lot of hosts with a similar pattern, you can add them as a range rather than listing each hostname separately:
In INI:
```
[webservers]
www[01:50].example.com
```
In YAML:
```
...
webservers:
hosts:
www[01:50].example.com:
```
You can specify a stride (increments between sequence numbers) when defining a numeric range of hosts:
In INI:
```
[webservers]
www[01:50:2].example.com
```
In YAML:
```
...
webservers:
hosts:
www[01:50:2].example.com:
```
For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define alphabetic ranges:
```
[databases]
db-[a:f].example.com
```
Adding variables to inventory
-----------------------------
You can store variable values that relate to a specific host or group in inventory. To start with, you may add variables directly to the hosts and groups in your main inventory file. As you add more and more managed nodes to your Ansible inventory, however, you will likely want to store variables in separate host and group variable files. See [Defining variables in inventory](playbooks_variables#define-variables-in-inventory) for details.
Assigning a variable to one machine: host variables
---------------------------------------------------
You can easily assign a variable to a single host, then use it later in playbooks. In INI:
```
[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909
```
In YAML:
```
atlanta:
hosts:
host1:
http_port: 80
maxRequestsPerChild: 808
host2:
http_port: 303
maxRequestsPerChild: 909
```
Unique values like non-standard SSH ports work well as host variables. You can add them to your Ansible inventory by adding the port number after the hostname with a colon:
```
badwolf.example.com:5309
```
Connection variables also work well as host variables:
```
[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_user=myuser
other2.example.com ansible_connection=ssh ansible_user=myotheruser
```
Note
If you list non-standard SSH ports in your SSH config file, the `openssh` connection will find and use them, but the `paramiko` connection will not.
### Inventory aliases
You can also define aliases in your inventory:
In INI:
```
jumper ansible_port=5555 ansible_host=192.0.2.50
```
In YAML:
```
...
hosts:
jumper:
ansible_port: 5555
ansible_host: 192.0.2.50
```
In the above example, running Ansible against the host alias “jumper” will connect to 192.0.2.50 on port 5555. See [behavioral inventory parameters](#behavioral-parameters) to further customize the connection to hosts.
Note
Values passed in the INI format using the `key=value` syntax are interpreted differently depending on where they are declared:
* When declared inline with the host, INI values are interpreted as Python literal structures (strings, numbers, tuples, lists, dicts, booleans, None). Host lines accept multiple `key=value` parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator; quoting does this (see the example after this list).
* When declared in a `:vars` section, INI values are interpreted as strings. For example `var=FALSE` would create a string equal to ‘FALSE’. Unlike host lines, `:vars` sections accept only a single entry per line, so everything after the `=` must be the value for the entry.
* If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables.
* Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly.
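For example, quoting marks a space as part of a single inline value rather than a separator (a minimal sketch; the variable name is illustrative):
```
[targets]
host1 ansible_port=2222 example_motd="Welcome to host1"
```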
Generally speaking, this is not the best way to define variables that describe your system policy. Setting variables in the main inventory file is only a shorthand. See [Organizing host and group variables](#splitting-out-vars) for guidelines on storing variable values in individual files in the ‘host\_vars’ directory.
Assigning a variable to many machines: group variables
------------------------------------------------------
If all hosts in a group share a variable value, you can apply that variable to an entire group at once. In INI:
```
[atlanta]
host1
host2
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
```
In YAML:
```
atlanta:
hosts:
host1:
host2:
vars:
ntp_server: ntp.atlanta.example.com
proxy: proxy.atlanta.example.com
```
Group variables are a convenient way to apply variables to multiple hosts at once. Before executing, however, Ansible always flattens variables, including inventory variables, to the host level. If a host is a member of multiple groups, Ansible reads variable values from all of those groups. If you assign different values to the same variable in different groups, Ansible chooses which value to use based on internal [rules for merging](#how-we-merge).
### Inheriting variable values: group variables for groups of groups
You can make groups of groups using the `:children` suffix in INI or the `children:` entry in YAML. You can apply variables to these groups of groups using `:vars` or `vars:`:
In INI:
```
[atlanta]
host1
host2
[raleigh]
host2
host3
[southeast:children]
atlanta
raleigh
[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2
[usa:children]
southeast
northeast
southwest
northwest
```
In YAML:
```
all:
children:
usa:
children:
southeast:
children:
atlanta:
hosts:
host1:
host2:
raleigh:
hosts:
host2:
host3:
vars:
some_server: foo.southeast.example.com
halon_system_timeout: 30
self_destruct_countdown: 60
escape_pods: 2
northeast:
northwest:
southwest:
```
If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory file, see [Organizing host and group variables](#splitting-out-vars).
Child groups have a couple of properties to note:
* Any host that is member of a child group is automatically a member of the parent group.
* A child group’s variables will have higher precedence than (override) a parent group’s variables (see the sketch after this list).
* Groups can have multiple parents and children, but not circular relationships.
* Hosts can also be in multiple groups, but there will only be **one** instance of a host, merging the data from the multiple groups.
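For example, in the sketch below, hosts in `atlanta` end up with `ntp_server=ntp.atlanta.example.com`, because the child group's value overrides the one set on its parent `southeast`:
```
[atlanta]
host1
host2
[southeast:children]
atlanta
[southeast:vars]
ntp_server=ntp.southeast.example.com
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
```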
Organizing host and group variables
-----------------------------------
Although you can store variables in the main inventory file, storing separate host and group variables files may help you organize your variable values more easily. Host and group variable files must use YAML syntax. Valid file extensions include ‘.yml’, ‘.yaml’, ‘.json’, or no file extension. See [YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax) if you are new to YAML.
Ansible loads host and group variable files by searching paths relative to the inventory file or the playbook file. If your inventory file at `/etc/ansible/hosts` contains a host named ‘foosball’ that belongs to two groups, ‘raleigh’ and ‘webservers’, that host will use variables in YAML files at the following locations:
```
/etc/ansible/group_vars/raleigh # can optionally end in '.yml', '.yaml', or '.json'
/etc/ansible/group_vars/webservers
/etc/ansible/host_vars/foosball
```
For example, if you group hosts in your inventory by datacenter, and each datacenter uses its own NTP server and database server, you can create a file called `/etc/ansible/group_vars/raleigh` to store the variables for the `raleigh` group:
```
---
ntp_server: acme.example.org
database_server: storage.example.org
```
You can also create *directories* named after your groups or hosts. Ansible will read all the files in these directories in lexicographical order. An example with the ‘raleigh’ group:
```
/etc/ansible/group_vars/raleigh/db_settings
/etc/ansible/group_vars/raleigh/cluster_settings
```
All hosts in the ‘raleigh’ group will have the variables defined in these files available to them. This can be very useful to keep your variables organized when a single file gets too big, or when you want to use [Ansible Vault](vault#playbooks-vault) on some group variables.
You can also add `group_vars/` and `host_vars/` directories to your playbook directory. The `ansible-playbook` command looks for these directories in the current working directory by default. Other Ansible commands (for example, `ansible`, `ansible-console`, and so on) will only look for `group_vars/` and `host_vars/` in the inventory directory. If you want other commands to load group and host variables from a playbook directory, you must provide the `--playbook-dir` option on the command line. If you load inventory files from both the playbook directory and the inventory directory, variables in the playbook directory will override variables set in the inventory directory.
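For example, to have `ansible-inventory` load `group_vars/` from a playbook directory while listing your hosts, you might run something like this (a sketch, assuming your playbooks live in the current directory):
```
ansible-inventory -i inventory --playbook-dir . --list
```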
Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables.
How variables are merged
------------------------
By default variables are merged/flattened to the specific host before a play is run. This keeps Ansible focused on the Host and Task, so groups don’t really survive outside of inventory and host matching. By default, Ansible overwrites variables including the ones defined for a group and/or host (see [DEFAULT\_HASH\_BEHAVIOUR](../reference_appendices/config#default-hash-behaviour)). The order/precedence is (from lowest to highest):
* all group (because it is the ‘parent’ of all other groups)
* parent group
* child group
* host
By default Ansible merges groups at the same parent/child level in ASCII order, and the last group loaded overwrites the previous groups. For example, `a_group` will be merged with `b_group`, and matching `b_group` vars will overwrite the ones in `a_group`.
You can change this behavior by setting the group variable `ansible_group_priority` to change the merge order for groups of the same level (after the parent/child order is resolved). The larger the number, the later it will be merged, giving it higher priority. This variable defaults to `1` if not set. For example:
```
a_group:
vars:
testvar: a
ansible_group_priority: 10
b_group:
vars:
testvar: b
```
In this example, if both groups had the same priority, the result would normally be `testvar == b`, but since we are giving `a_group` a higher priority, the result will be `testvar == a`.
Note
`ansible_group_priority` can only be set in the inventory source and not in group\_vars/, as the variable is used in the loading of group\_vars.
Using multiple inventory sources
--------------------------------
You can target multiple inventory sources (directories, dynamic inventory scripts or files supported by inventory plugins) at the same time by giving multiple inventory parameters from the command line or by configuring [`ANSIBLE_INVENTORY`](../reference_appendices/config#envvar-ANSIBLE_INVENTORY). This can be useful when you want to target normally separate environments, like staging and production, at the same time for a specific action.
Target two sources from the command line like this:
```
ansible-playbook get_logs.yml -i staging -i production
```
Keep in mind that if there are variable conflicts in the inventories, they are resolved according to the rules described in [How variables are merged](#how-we-merge) and [Variable precedence: Where should I put a variable?](playbooks_variables#ansible-variable-precedence). The merging order is controlled by the order of the inventory source parameters. If `[all:vars]` in staging inventory defines `myvar = 1`, but production inventory defines `myvar = 2`, the playbook will be run with `myvar = 2`. The result would be reversed if the playbook was run with `-i production -i staging`.
**Aggregating inventory sources with a directory**
You can also create an inventory by combining multiple inventory sources and source types under a directory. This can be useful for combining static and dynamic hosts and managing them as one inventory. The following inventory combines an inventory plugin source, a dynamic inventory script, and a file with static hosts:
```
inventory/
openstack.yml # configure inventory plugin to get hosts from Openstack cloud
dynamic-inventory.py # add additional hosts with dynamic inventory script
static-inventory # add static hosts and groups
group_vars/
all.yml # assign variables to all hosts
```
You can target this inventory directory simply like this:
```
ansible-playbook example.yml -i inventory
```
It can be useful to control the merging order of the inventory sources if there are variable conflicts or ‘group of groups’ dependencies between the inventory sources. The inventories are merged in ASCII order according to the filenames, so you can control the result by adding prefixes to the files:
```
inventory/
01-openstack.yml # configure inventory plugin to get hosts from Openstack cloud
02-dynamic-inventory.py # add additional hosts with dynamic inventory script
03-static-inventory # add static hosts
group_vars/
all.yml # assign variables to all hosts
```
If `01-openstack.yml` defines `myvar = 1` for the group `all`, `02-dynamic-inventory.py` defines `myvar = 2`, and `03-static-inventory` defines `myvar = 3`, the playbook will be run with `myvar = 3`.
For more details on inventory plugins and dynamic inventory scripts see [Inventory Plugins](../plugins/inventory#inventory-plugins) and [Working with dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory).
Connecting to hosts: behavioral inventory parameters
----------------------------------------------------
As described above, setting the following variables controls how Ansible interacts with remote hosts.
Host connection:
Note
When using the ssh connection plugin (which is the default), Ansible does not expose a channel that allows the user to manually enter a password to decrypt an ssh key. The use of `ssh-agent` is highly recommended.
ansible\_connection
Connection type to the host. This can be the name of any of ansible’s connection plugins. SSH protocol types are `smart`, `ssh` or `paramiko`. The default is smart. Non-SSH based types are described in the next section.
General for all connections:
ansible\_host
The name of the host to connect to, if different from the alias you wish to give to it.
ansible\_port
The connection port number, if not the default (22 for ssh)
ansible\_user
The user name to use when connecting to the host
ansible\_password
The password to use to authenticate to the host (never store this variable in plain text; always use a vault. See [Keep vaulted variables safely visible](playbooks_best_practices#tip-for-variables-and-vaults))
Specific to the SSH connection:
ansible\_ssh\_private\_key\_file
Private key file used by ssh. Useful if using multiple keys and you don’t want to use SSH agent.
ansible\_ssh\_common\_args
This setting is always appended to the default command line for **sftp**, **scp**, and **ssh**. Useful to configure a `ProxyCommand` for a certain host (or group).
ansible\_sftp\_extra\_args
This setting is always appended to the default **sftp** command line.
ansible\_scp\_extra\_args
This setting is always appended to the default **scp** command line.
ansible\_ssh\_extra\_args
This setting is always appended to the default **ssh** command line.
ansible\_ssh\_pipelining
Determines whether or not to use SSH pipelining. This can override the `pipelining` setting in `ansible.cfg`.
ansible\_ssh\_executable (added in version 2.2)
This setting overrides the default behavior to use the system **ssh**. This can override the `ssh_executable` setting in `ansible.cfg`.
Privilege escalation (see [Ansible Privilege Escalation](become#become) for further details):
ansible\_become
Equivalent to `ansible_sudo` or `ansible_su`, allows you to force privilege escalation
ansible\_become\_method
Allows you to set the privilege escalation method
ansible\_become\_user
Equivalent to `ansible_sudo_user` or `ansible_su_user`, allows you to set the user you become through privilege escalation
ansible\_become\_password
Equivalent to `ansible_sudo_password` or `ansible_su_password`, allows you to set the privilege escalation password (never store this variable in plain text; always use a vault. See [Keep vaulted variables safely visible](playbooks_best_practices#tip-for-variables-and-vaults))
ansible\_become\_exe
Equivalent to `ansible_sudo_exe` or `ansible_su_exe`, allows you to set the executable for the escalation method selected
ansible\_become\_flags
Equivalent to `ansible_sudo_flags` or `ansible_su_flags`, allows you to set the flags passed to the selected escalation method. This can be also set globally in `ansible.cfg` in the `sudo_flags` option
Remote host environment parameters:
ansible\_shell\_type
The shell type of the target system. You should not use this setting unless you have set the [ansible\_shell\_executable](#ansible-shell-executable) to a non-Bourne (sh) compatible shell. By default commands are formatted using `sh`-style syntax. Setting this to `csh` or `fish` will cause commands executed on target systems to follow that shell’s syntax instead.
ansible\_python\_interpreter
The target host python path. This is useful for systems with more than one Python, for systems where Python is not located at **/usr/bin/python** (such as \*BSD), or where **/usr/bin/python** is not a 2.X series Python. We do not use the **/usr/bin/env** mechanism because that requires the remote user’s path to be set correctly and also assumes the **python** executable is named python, when the executable might be named something like **python2.6**.
ansible\_\*\_interpreter
Works for anything such as ruby or perl and works just like [ansible\_python\_interpreter](#ansible-python-interpreter). This replaces the shebang of modules which will run on that host.
New in version 2.1.
ansible\_shell\_executable
This sets the shell the ansible controller will use on the target machine, and overrides `executable` in `ansible.cfg`, which defaults to **/bin/sh**. You should really only change it if it is not possible to use **/bin/sh** (in other words, if **/bin/sh** is not installed on the target machine or cannot be run from sudo).
Examples from an Ansible-INI host file:
```
some_host ansible_port=2222 ansible_user=manager
aws_host ansible_ssh_private_key_file=/home/example/.ssh/aws.pem
freebsd_host ansible_python_interpreter=/usr/local/bin/python
ruby_module_host ansible_ruby_interpreter=/usr/bin/ruby.1.9.3
```
### Non-SSH connection types
As stated in the previous section, Ansible executes playbooks over SSH but it is not limited to this connection type. With the host specific parameter `ansible_connection=<connector>`, the connection type can be changed. The following non-SSH based connectors are available:
**local**
This connector can be used to deploy the playbook to the control machine itself.
**docker**
This connector deploys the playbook directly into Docker containers using the local Docker client. The following parameters are processed by this connector:
ansible\_host
The name of the Docker container to connect to.
ansible\_user
The user name to operate within the container. The user must exist inside the container.
ansible\_become
If set to `true` the `become_user` will be used to operate within the container.
ansible\_docker\_extra\_args
A string of any additional arguments understood by Docker that are not command specific. This parameter is mainly used to configure a remote Docker daemon to use.
Here is an example of how to instantly deploy to created containers:
```
- name: Create a jenkins container
community.general.docker_container:
docker_host: myserver.net:4243
name: my_jenkins
image: jenkins
- name: Add the container to inventory
ansible.builtin.add_host:
name: my_jenkins
ansible_connection: docker
ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243"
ansible_user: jenkins
changed_when: false
- name: Create a directory for ssh keys
delegate_to: my_jenkins
ansible.builtin.file:
path: "/var/jenkins_home/.ssh/jupiter"
state: directory
```
For a full list with available plugins and examples, see [Plugin List](../plugins/connection#connection-plugin-list).
Note
If you’re reading the docs from the beginning, this may be the first example you’ve seen of an Ansible playbook. This is not an inventory file. Playbooks will be covered in great detail later in the docs.
Inventory setup examples
------------------------
See also [Sample Ansible setup](sample_setup#sample-setup), which shows inventory along with playbooks and other Ansible artifacts.
### Example: One inventory per environment
If you need to manage multiple environments, it’s sometimes prudent to have only the hosts of a single environment defined per inventory. This way, it is harder to, for instance, accidentally change the state of nodes inside the “test” environment when you actually wanted to update some “staging” servers.
For the example mentioned above you could have an `inventory_test` file:
```
[dbservers]
db01.test.example.com
db02.test.example.com
[appservers]
app01.test.example.com
app02.test.example.com
app03.test.example.com
```
That file only includes hosts that are part of the “test” environment. Define the “staging” machines in another file called `inventory_staging`:
```
[dbservers]
db01.staging.example.com
db02.staging.example.com
[appservers]
app01.staging.example.com
app02.staging.example.com
app03.staging.example.com
```
To apply a playbook called `site.yml` to all the app servers in the test environment, use the following command:
```
ansible-playbook -i inventory_test -l appservers site.yml
```
### Example: Group by function
In the previous section you already saw an example of using groups to cluster hosts that have the same function. This allows you, for instance, to define firewall rules inside a playbook or role that affect only database servers:
```
- hosts: dbservers
tasks:
- name: Allow access from 10.0.0.1
ansible.builtin.iptables:
chain: INPUT
jump: ACCEPT
source: 10.0.0.1
```
### Example: Group by location
Other tasks might be focused on where a certain host is located. Let’s say that `db01.test.example.com` and `app01.test.example.com` are located in DC1 while `db02.test.example.com` is in DC2:
```
[dc1]
db01.test.example.com
app01.test.example.com
[dc2]
db02.test.example.com
```
In practice, you might even end up mixing all these setups as you might need to, on one day, update all nodes in a specific data center while, on another day, update all the application servers no matter their location.
See also
[Inventory Plugins](../plugins/inventory#inventory-plugins)
Pulling inventory from dynamic or static sources
[Working with dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory)
Pulling inventory from dynamic sources, such as cloud providers
[Introduction to ad hoc commands](intro_adhoc#intro-adhoc)
Examples of basic commands
[Working with playbooks](playbooks#working-with-playbooks)
Learning Ansible’s configuration, deployment, and orchestration language.
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
| programming_docs |
ansible Intro to playbooks Intro to playbooks
==================
Ansible Playbooks offer a repeatable, re-usable, simple configuration management and multi-machine deployment system, one that is well suited to deploying complex applications. If you need to execute a task with Ansible more than once, write a playbook and put it under source control. Then you can use the playbook to push out new configuration or confirm the configuration of remote systems. The playbooks in the [ansible-examples repository](https://github.com/ansible/ansible-examples) illustrate many useful techniques. You may want to look at these in another tab as you read the documentation.
Playbooks can:
* declare configurations
* orchestrate steps of any manual ordered process, on multiple sets of machines, in a defined order
* launch tasks synchronously or [asynchronously](playbooks_async#playbooks-async)
* [Playbook syntax](#playbook-syntax)
* [Playbook execution](#playbook-execution)
+ [Task execution](#task-execution)
+ [Desired state and ‘idempotency’](#desired-state-and-idempotency)
+ [Running playbooks](#running-playbooks)
* [Ansible-Pull](#ansible-pull)
* [Verifying playbooks](#verifying-playbooks)
+ [ansible-lint](#ansible-lint)
Playbook syntax
---------------
Playbooks are expressed in YAML format with a minimum of syntax. If you are not familiar with YAML, look at our overview of [YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax) and consider installing an add-on for your text editor (see [Other Tools and Programs](https://docs.ansible.com/ansible/latest/community/other_tools_and_programs.html#other-tools-and-programs)) to help you write clean YAML syntax in your playbooks.
A playbook is composed of one or more ‘plays’ in an ordered list. The terms ‘playbook’ and ‘play’ are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module.
Playbook execution
------------------
A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple ‘plays’ can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things:
* the managed nodes to target, using a [pattern](intro_patterns#intro-patterns)
* at least one task to execute
Note
In Ansible 2.10 and later, we recommend you use the fully-qualified collection name in your playbooks to ensure the correct module is selected, because multiple collections can contain modules with the same name (for example, `user`). See [Using collections in a Playbook](collections_using#collections-using-playbook).
In this example, the first play targets the web servers; the second play targets the database servers:
```
---
- name: Update web servers
hosts: webservers
remote_user: root
tasks:
- name: Ensure apache is at the latest version
ansible.builtin.yum:
name: httpd
state: latest
- name: Write the apache config file
ansible.builtin.template:
src: /srv/httpd.j2
dest: /etc/httpd.conf
- name: Update db servers
hosts: databases
remote_user: root
tasks:
- name: Ensure postgresql is at the latest version
ansible.builtin.yum:
name: postgresql
state: latest
- name: Ensure that postgresql is started
ansible.builtin.service:
name: postgresql
state: started
```
Your playbook can include more than just a hosts line and tasks. For example, the playbook above sets a `remote_user` for each play. This is the user account for the SSH connection. You can add other [Playbook Keywords](../reference_appendices/playbooks_keywords#playbook-keywords) at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the [connection plugin](../plugins/connection#connection-plugins), whether to use [privilege escalation](become#become), how to handle errors, and more. To support a variety of environments, Ansible lets you set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the [precedence rules](../reference_appendices/general_precedence#general-precedence-rules) for these sources of data will help you as you expand your Ansible ecosystem.
### Task execution
By default, Ansible executes each task in order, one at a time, against all machines matched by the host pattern. Each task executes a module with specific arguments. When a task has executed on all target machines, Ansible moves on to the next task. You can use [strategies](playbooks_strategies#playbooks-strategies) to change this default behavior. Within each play, Ansible applies the same task directives to all hosts. If a task fails on a host, Ansible takes that host out of the rotation for the rest of the playbook.
When you run a playbook, Ansible returns information about connections, the `name` lines of all your plays and tasks, whether each task has succeeded or failed on each machine, and whether each task has made a change on each machine. At the bottom of the playbook execution, Ansible provides a summary of the nodes that were targeted and how they performed. General failures and fatal “unreachable” communication attempts are kept separate in the counts.
### Desired state and ‘idempotency’
Most Ansible modules check whether the desired final state has already been achieved, and exit without performing any actions if that state has been achieved, so that repeating the task does not change the final state. Modules that behave this way are often called ‘idempotent.’ Whether you run a playbook once, or multiple times, the outcome should be the same. However, not all playbooks and not all modules behave this way. If you are unsure, test your playbooks in a sandbox environment before running them multiple times in production.
### Running playbooks
To run your playbook, use the [ansible-playbook](../cli/ansible-playbook#ansible-playbook) command:
```
ansible-playbook playbook.yml -f 10
```
Use the `--verbose` flag when running your playbook to see detailed output from successful modules as well as unsuccessful ones.
Ansible-Pull
------------
Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing configuration out to them, you can.
`ansible-pull` is a small script that checks out a repo of configuration instructions from git and then runs `ansible-playbook` against that content.
Assuming you load balance your checkout location, `ansible-pull` scales essentially infinitely.
Run `ansible-pull --help` for details.
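A typical invocation names the git repository to check out and the playbook to run from it; the URL and playbook name here are illustrative:
```
ansible-pull -U https://github.com/example/ansible-config.git local.yml
```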
There’s also a [clever playbook](https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml) available to configure `ansible-pull` via a crontab from push mode.
Verifying playbooks
-------------------
You may want to verify your playbooks to catch syntax errors and other problems before you run them. The [ansible-playbook](../cli/ansible-playbook#ansible-playbook) command offers several options for verification, including `--check`, `--diff`, `--list-hosts`, `--list-tasks`, and `--syntax-check`. The [Tools for validating playbooks](https://docs.ansible.com/ansible/latest/community/other_tools_and_programs.html#validate-playbook-tools) describes other tools for validating and testing playbooks.
### ansible-lint
You can use [ansible-lint](https://docs.ansible.com/ansible-lint/index.html) for detailed, Ansible-specific feedback on your playbooks before you execute them. For example, if you run `ansible-lint` on the playbook called `verify-apache.yml` near the top of this page, you should get the following results:
```
$ ansible-lint verify-apache.yml
[403] Package installs should not use latest
verify-apache.yml:8
Task/Handler: ensure apache is at the latest version
```
The [ansible-lint default rules](https://docs.ansible.com/ansible-lint/rules/default_rules.html) page describes each error. For `[403]`, the recommended fix is to change `state: latest` to `state: present` in the playbook.
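After applying that fix, the flagged task would look like this:
```
- name: Ensure apache is installed
  ansible.builtin.yum:
    name: httpd
    state: present
```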
See also
[ansible-lint](https://docs.ansible.com/ansible-lint/index.html)
Learn how to test Ansible Playbooks syntax
[YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax)
Learn about YAML syntax
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips for managing playbooks in the real world
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
Learn to extend Ansible by writing your own modules
[Patterns: targeting hosts and groups](intro_patterns#intro-patterns)
Learn about how to select hosts
[GitHub examples directory](https://github.com/ansible/ansible-examples)
Complete end-to-end playbook examples
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
ansible Executing playbooks for troubleshooting Executing playbooks for troubleshooting
=======================================
When you are testing new plays or debugging playbooks, you may need to run the same play multiple times. To make this more efficient, Ansible offers two alternative ways to execute a playbook: start-at-task and step mode.
start-at-task
-------------
To start executing your playbook at a particular task (usually the task that failed on the previous run), use the `--start-at-task` option:
```
ansible-playbook playbook.yml --start-at-task="install packages"
```
In this example, Ansible starts executing your playbook at a task named “install packages”. This feature does not work with tasks inside dynamically re-used roles or tasks (`include_*`); see [Comparing includes and imports: dynamic and static re-use](playbooks_reuse#dynamic-vs-static).
Step mode
---------
To execute a playbook interactively, use `--step`:
```
ansible-playbook playbook.yml --step
```
With this option, Ansible stops on each task, and asks if it should execute that task. For example, if you have a task called “configure ssh”, the playbook run will stop and ask:
```
Perform task: configure ssh (y/n/c):
```
Answer “y” to execute the task, answer “n” to skip the task, and answer “c” to exit step mode, executing all remaining tasks without asking.
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Debugging tasks](playbooks_debugger#playbook-debugger)
Using the Ansible debugger
ansible Templating (Jinja2) Templating (Jinja2)
===================
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible includes a lot of specialized filters and tests for templating. You can use all the [standard filters and tests](https://jinja.palletsprojects.com/en/3.0.x/templates/#builtin-filters "(in Jinja v3.0.x)") included in Jinja2 as well. Ansible also offers a new plugin type: [Lookup Plugins](../plugins/lookup#lookup-plugins).
All templating happens on the Ansible controller **before** the task is sent and executed on the target machine. This approach minimizes the package requirements on the target (jinja2 is only required on the controller). It also limits the amount of data Ansible passes to the target machine. Ansible parses templates on the controller and passes only the information needed for each task to the target machine, instead of passing all the data on the controller and parsing it on the target.
* [Using filters to manipulate data](playbooks_filters)
+ [Handling undefined variables](playbooks_filters#handling-undefined-variables)
+ [Defining different values for true/false/null (ternary)](playbooks_filters#defining-different-values-for-true-false-null-ternary)
+ [Managing data types](playbooks_filters#managing-data-types)
+ [Formatting data: YAML and JSON](playbooks_filters#formatting-data-yaml-and-json)
+ [Combining and selecting data](playbooks_filters#combining-and-selecting-data)
+ [Randomizing data](playbooks_filters#randomizing-data)
+ [Managing list variables](playbooks_filters#managing-list-variables)
+ [Selecting from sets or lists (set theory)](playbooks_filters#selecting-from-sets-or-lists-set-theory)
+ [Calculating numbers (math)](playbooks_filters#calculating-numbers-math)
+ [Managing network interactions](playbooks_filters#managing-network-interactions)
+ [Encrypting and checksumming strings and passwords](playbooks_filters#encrypting-and-checksumming-strings-and-passwords)
+ [Manipulating text](playbooks_filters#manipulating-text)
+ [Manipulating strings](playbooks_filters#manipulating-strings)
+ [Managing UUIDs](playbooks_filters#managing-uuids)
+ [Handling dates and times](playbooks_filters#handling-dates-and-times)
+ [Getting Kubernetes resource names](playbooks_filters#getting-kubernetes-resource-names)
* [Tests](playbooks_tests)
+ [Test syntax](playbooks_tests#test-syntax)
+ [Testing strings](playbooks_tests#testing-strings)
+ [Vault](playbooks_tests#vault)
+ [Testing truthiness](playbooks_tests#testing-truthiness)
+ [Comparing versions](playbooks_tests#comparing-versions)
+ [Set theory tests](playbooks_tests#set-theory-tests)
+ [Testing if a list contains a value](playbooks_tests#testing-if-a-list-contains-a-value)
+ [Testing if a list value is True](playbooks_tests#testing-if-a-list-value-is-true)
+ [Testing paths](playbooks_tests#testing-paths)
+ [Testing size formats](playbooks_tests#testing-size-formats)
+ [Testing task results](playbooks_tests#testing-task-results)
* [Lookups](playbooks_lookups)
+ [Using lookups in variables](playbooks_lookups#using-lookups-in-variables)
* [Python3 in templates](playbooks_python_version)
+ [Dictionary views](playbooks_python_version#dictionary-views)
+ [dict.iteritems()](playbooks_python_version#dict-iteritems)
Get the current time
--------------------
New in version 2.8.
The `now()` Jinja2 function retrieves a Python datetime object or a string representation for the current time.
The `now()` function supports two arguments:
utc
Specify `True` to get the current time in UTC. Defaults to `False`.
fmt
Accepts a [strftime](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) string that returns a formatted date time string.
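For example, a task could render the current UTC time as a formatted string (a minimal sketch):
```
- name: Show the current UTC time
  ansible.builtin.debug:
    msg: "{{ now(utc=true, fmt='%Y-%m-%d %H:%M:%S') }}"
```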
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Interactive input: prompts Interactive input: prompts
==========================
If you want your playbook to prompt the user for certain input, add a ‘vars\_prompt’ section. Prompting the user for variables lets you avoid recording sensitive data like passwords. In addition to security, prompts support flexibility. For example, if you use one playbook across multiple software releases, you could prompt for the particular release version.
* [Encrypting values supplied by `vars_prompt`](#encrypting-values-supplied-by-vars-prompt)
* [Allowing special characters in `vars_prompt` values](#allowing-special-characters-in-vars-prompt-values)
Here is a most basic example:
```
---
- hosts: all
vars_prompt:
- name: username
prompt: What is your username?
private: no
- name: password
prompt: What is your password?
tasks:
- name: Print a message
ansible.builtin.debug:
msg: 'Logging in as {{ username }}'
```
The user input is hidden by default but it can be made visible by setting `private: no`.
Note
Prompts for individual `vars_prompt` variables will be skipped for any variable that is already defined through the command line `--extra-vars` option, or when running from a non-interactive session (such as cron or Ansible AWX). See [Defining variables at runtime](playbooks_variables#passing-variables-on-the-command-line).
If you have a variable that changes infrequently, you can provide a default value that can be overridden:
```
vars_prompt:
- name: release_version
prompt: Product release version
default: "1.0"
```
Encrypting values supplied by `vars_prompt`
-------------------------------------------
You can encrypt the entered value so you can use it, for instance, with the user module to define a password:
```
vars_prompt:
- name: my_password2
prompt: Enter password2
private: yes
encrypt: sha512_crypt
confirm: yes
salt_size: 7
```
If you have [Passlib](https://passlib.readthedocs.io/en/stable/) installed, you can use any crypt scheme the library supports:
* *des\_crypt* - DES Crypt
* *bsdi\_crypt* - BSDi Crypt
* *bigcrypt* - BigCrypt
* *crypt16* - Crypt16
* *md5\_crypt* - MD5 Crypt
* *bcrypt* - BCrypt
* *sha1\_crypt* - SHA-1 Crypt
* *sun\_md5\_crypt* - Sun MD5 Crypt
* *sha256\_crypt* - SHA-256 Crypt
* *sha512\_crypt* - SHA-512 Crypt
* *apr\_md5\_crypt* - Apache’s MD5-Crypt variant
* *phpass* - PHPass’ Portable Hash
* *pbkdf2\_digest* - Generic PBKDF2 Hashes
* *cta\_pbkdf2\_sha1* - Cryptacular’s PBKDF2 hash
* *dlitz\_pbkdf2\_sha1* - Dwayne Litzenberger’s PBKDF2 hash
* *scram* - SCRAM Hash
* *bsd\_nthash* - FreeBSD’s MCF-compatible nthash encoding
The only parameters accepted are ‘salt’ or ‘salt\_size’. You can use your own salt by defining ‘salt’, or have one generated automatically using ‘salt\_size’. By default Ansible generates a salt of size 8.
New in version 2.7.
If you do not have Passlib installed, Ansible uses the [crypt](https://docs.python.org/2/library/crypt.html) library as a fallback. Depending on your platform, at most the following four crypt schemes are supported:
* *bcrypt* - BCrypt
* *md5\_crypt* - MD5 Crypt
* *sha256\_crypt* - SHA-256 Crypt
* *sha512\_crypt* - SHA-512 Crypt
New in version 2.8.
Allowing special characters in `vars_prompt` values
---------------------------------------------------
Some special characters, such as `{` and `%`, can create templating errors. If you need to accept special characters, use the `unsafe` option:
```
vars_prompt:
- name: my_password_with_weird_chars
prompt: Enter password
unsafe: yes
private: yes
```
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Tips and tricks Tips and tricks
===============
These tips and tricks have helped us optimize our Ansible usage, and we offer them here as suggestions. We hope they will help you organize content, write playbooks, maintain inventory, and execute Ansible. Ultimately, though, you should use Ansible in the way that makes most sense for your organization and your goals.
* [General tips](#general-tips)
+ [Keep it simple](#keep-it-simple)
+ [Use version control](#use-version-control)
* [Playbook tips](#playbook-tips)
+ [Use whitespace](#use-whitespace)
+ [Always name tasks](#always-name-tasks)
+ [Always mention the state](#always-mention-the-state)
+ [Use comments](#use-comments)
* [Inventory tips](#inventory-tips)
+ [Use dynamic inventory with clouds](#use-dynamic-inventory-with-clouds)
+ [Group inventory by function](#group-inventory-by-function)
+ [Separate production and staging inventory](#separate-production-and-staging-inventory)
+ [Keep vaulted variables safely visible](#keep-vaulted-variables-safely-visible)
* [Execution tricks](#execution-tricks)
+ [Try it in staging first](#try-it-in-staging-first)
+ [Update in batches](#update-in-batches)
+ [Handling OS and distro differences](#handling-os-and-distro-differences)
General tips
------------
These concepts apply to all Ansible activities and artifacts.
### Keep it simple
Whenever you can, do things simply. Use advanced features only when necessary, and select the feature that best matches your use case. For example, you will probably not need `vars`, `vars_files`, `vars_prompt` and `--extra-vars` all at once, while also using an external inventory file. If something feels complicated, it probably is. Take the time to look for a simpler solution.
### Use version control
Keep your playbooks, roles, inventory, and variables files in git or another version control system and make commits to the repository when you make changes. Version control gives you an audit trail describing when and why you changed the rules that automate your infrastructure.
Playbook tips
-------------
These tips help make playbooks and roles easier to read, maintain, and debug.
### Use whitespace
Generous use of whitespace, for example, a blank line before each block or task, makes a playbook easy to scan.
### Always name tasks
Task names are optional, but extremely useful. In its output, Ansible shows you the name of each task it runs. Choose names that describe what each task does and why.
### Always mention the state
For many modules, the ‘state’ parameter is optional. Different modules have different default settings for ‘state’, and some modules support several ‘state’ settings. Explicitly setting ‘state=present’ or ‘state=absent’ makes playbooks and roles clearer.
### Use comments
Even with task names and explicit state, sometimes a part of a playbook or role (or inventory/variable file) needs more explanation. Adding a comment (any line starting with ‘#’) helps others (and possibly yourself in future) understand what a play or task (or variable setting) does, how it does it, and why.
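For example, a sketch of a comment recording the reasoning behind a setting in a variables file (the values are illustrative):
```
# Port 8081 is reserved for the legacy monitoring agent;
# do not change it without also updating the firewall role.
app_port: 8081
```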
Inventory tips
--------------
These tips help keep your inventory well organized.
### Use dynamic inventory with clouds
With cloud providers and other systems that maintain canonical lists of your infrastructure, use [dynamic inventory](intro_dynamic_inventory#intro-dynamic-inventory) to retrieve those lists instead of manually updating static inventory files. With cloud resources, you can use tags to differentiate production and staging environments.
### Group inventory by function
A system can be in multiple groups. See [How to build your inventory](intro_inventory#intro-inventory) and [Patterns: targeting hosts and groups](intro_patterns#intro-patterns). If you create groups named for the function of the nodes in the group, for example *webservers* or *dbservers*, your playbooks can target machines based on function. You can assign function-specific variables using the group variable system, and design Ansible roles to handle function-specific use cases. See [Roles](playbooks_reuse_roles#playbooks-reuse-roles).
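For example, a sketch of an INI-style inventory grouped by function (hostnames are illustrative):
```
[webservers]
web01.example.com
web02.example.com

[dbservers]
db01.example.com

# a host can belong to more than one group
[monitoring]
web01.example.com
```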
### Separate production and staging inventory
You can keep your production environment separate from development, test, and staging environments by using separate inventory files or directories for each environment. This way you pick with -i what you are targeting. Keeping all your environments in one file can lead to surprises!
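For example, a sketch of one possible layout, with the target environment selected at run time (paths are illustrative):
```
inventories/
├── production
│   ├── hosts
│   └── group_vars/
└── staging
    ├── hosts
    └── group_vars/
```
With this layout, `ansible-playbook -i inventories/staging/hosts site.yml` targets staging, and swapping the `-i` path targets production.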
### Keep vaulted variables safely visible
You should encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names as well as the variable values makes it hard to find the source of the values. You can keep the names of your variables accessible (by `grep`, for example) without exposing any secrets by adding a layer of indirection:
1. Create a `group_vars/` subdirectory named after the group.
2. Inside this subdirectory, create two files named `vars` and `vault`.
3. In the `vars` file, define all of the variables needed, including any sensitive ones.
4. Copy all of the sensitive variables over to the `vault` file and prefix these variables with `vault_`.
5. Adjust the variables in the `vars` file to point to the matching `vault_` variables using jinja2 syntax: `db_password: "{{ vault_db_password }}"`.
6. Encrypt the `vault` file to protect its contents.
7. Use the variable name from the `vars` file in your playbooks.
When running a playbook, Ansible finds the variables in the unencrypted file, which pulls the sensitive variable values from the encrypted file. There is no limit to the number of variable and vault files or their names.
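For example, a sketch of the two files for a hypothetical `webservers` group (names and values are illustrative):
```
---
# file: group_vars/webservers/vars
db_user: myapp
db_password: "{{ vault_db_password }}"
---
# file: group_vars/webservers/vault (encrypt this file with ansible-vault)
vault_db_password: s3cr3t
```
A `grep` for `db_password` now finds the readable `vars` file, while the secret value itself lives only in the encrypted `vault` file.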
Execution tricks
----------------
These tips apply to using Ansible, rather than to Ansible artifacts.
### Try it in staging first
Testing changes in a staging environment before rolling them out in production is always a great idea. Your environments need not be the same size and you can use group variables to control the differences between those environments.
### Update in batches
Use the ‘serial’ keyword to control how many machines you update at once in the batch. See [Controlling where tasks run: delegation and local actions](playbooks_delegation#playbooks-delegation).
### Handling OS and distro differences
Group variables files and the `group_by` module work together to help Ansible execute across a range of operating systems and distributions that require different settings, packages, and tools. The `group_by` module creates a dynamic group of hosts matching certain criteria. This group does not need to be defined in the inventory file. This approach lets you execute different tasks on different operating systems or distributions. For example:
```
---
- name: talk to all hosts just so we can learn about them
hosts: all
tasks:
- name: Classify hosts depending on their OS distribution
group_by:
key: os_{{ ansible_facts['distribution'] }}
# now just on the CentOS hosts...
- hosts: os_CentOS
gather_facts: False
tasks:
    - name: Run CentOS-only tasks
      ansible.builtin.debug:
        msg: tasks that only happen on CentOS go in this play
```
The first play categorizes all systems into dynamic groups based on the operating system name. Later plays can use these groups as patterns on the `hosts` line. You can also add group-specific settings in group vars files. All three names must match: the name created by the `group_by` task, the name of the pattern in subsequent plays, and the name of the group vars file. For example:
```
---
# file: group_vars/all
asdf: 10
---
# file: group_vars/os_CentOS.yml
asdf: 42
```
In this example, CentOS machines get the value of ‘42’ for asdf, but other machines get ‘10’. This can be used not only to set variables, but also to apply certain roles only to certain systems.
You can use the same setup with `include_vars` when you only need OS-specific variables, not tasks:
```
- hosts: all
tasks:
- name: Set OS distribution dependent variables
include_vars: "os_{{ ansible_facts['distribution'] }}.yml"
- debug:
var: asdf
```
This pulls in variables from the `group_vars/os_CentOS.yml` file.
See also
[YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax)
Learn about YAML syntax
[Working with playbooks](playbooks#working-with-playbooks)
Review the basic playbook features
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
Learn how to extend Ansible by writing your own modules
[Patterns: targeting hosts and groups](intro_patterns#intro-patterns)
Learn about how to select hosts
[GitHub examples directory](https://github.com/ansible/ansible-examples)
Complete playbook files from the github project source
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
Controlling playbook execution: strategies and more
===================================================
By default, Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks. If you want to change this default behavior, you can use a different strategy plugin, change the number of forks, or apply one of several keywords like `serial`.
* [Selecting a strategy](#selecting-a-strategy)
* [Setting the number of forks](#setting-the-number-of-forks)
* [Using keywords to control execution](#using-keywords-to-control-execution)
+ [Setting the batch size with `serial`](#setting-the-batch-size-with-serial)
+ [Restricting execution with `throttle`](#restricting-execution-with-throttle)
+ [Ordering execution based on inventory](#ordering-execution-based-on-inventory)
+ [Running on a single machine with `run_once`](#running-on-a-single-machine-with-run-once)
Selecting a strategy
--------------------
The default behavior described above is the [linear strategy](../collections/ansible/builtin/linear_strategy#linear-strategy). Ansible offers other strategies, including the [debug strategy](../collections/ansible/builtin/debug_strategy#debug-strategy) (see also [Debugging tasks](playbooks_debugger#playbook-debugger)) and the [free strategy](../collections/ansible/builtin/free_strategy#free-strategy), which allows each host to run until the end of the play as fast as it can:
```
- hosts: all
strategy: free
tasks:
...
```
You can select a different strategy for each play as shown above, or set your preferred strategy globally in `ansible.cfg`, under the `defaults` stanza:
```
[defaults]
strategy = free
```
All strategies are implemented as [strategy plugins](../plugins/strategy#strategy-plugins). Please review the documentation for each strategy plugin for details on how it works.
Setting the number of forks
---------------------------
If you have the processing power available and want to use more forks, you can set the number in `ansible.cfg`:
```
[defaults]
forks = 30
```
or pass it on the command line: `ansible-playbook -f 30 my_playbook.yml`.
Using keywords to control execution
-----------------------------------
In addition to strategies, several [keywords](../reference_appendices/playbooks_keywords#playbook-keywords) also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with `serial`. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with `throttle`. You can control how Ansible selects the next host in a group to execute against with `order`. You can run a task on a single host with `run_once`. These keywords are not strategies. They are directives or options applied to a play, block, or task.
### Setting the batch size with `serial`
By default, Ansible runs in parallel against all the hosts in the [pattern](intro_patterns#intro-patterns) you set in the `hosts:` field of each play. If you want to manage only a few machines at a time, for example during a rolling update, you can define how many hosts Ansible should manage at a single time using the `serial` keyword:
```
---
- name: test play
hosts: webservers
serial: 3
gather_facts: False
tasks:
- name: first task
command: hostname
- name: second task
command: hostname
```
In the above example, if we had 6 hosts in the group ‘webservers’, Ansible would execute the play completely (both tasks) on 3 of the hosts before moving on to the next 3 hosts:
```
PLAY [webservers] ****************************************
TASK [first task] ****************************************
changed: [web3]
changed: [web2]
changed: [web1]
TASK [second task] ***************************************
changed: [web1]
changed: [web2]
changed: [web3]
PLAY [webservers] ****************************************
TASK [first task] ****************************************
changed: [web4]
changed: [web5]
changed: [web6]
TASK [second task] ***************************************
changed: [web4]
changed: [web5]
changed: [web6]
PLAY RECAP ***********************************************
web1 : ok=2 changed=2 unreachable=0 failed=0
web2 : ok=2 changed=2 unreachable=0 failed=0
web3 : ok=2 changed=2 unreachable=0 failed=0
web4 : ok=2 changed=2 unreachable=0 failed=0
web5 : ok=2 changed=2 unreachable=0 failed=0
web6 : ok=2 changed=2 unreachable=0 failed=0
```
You can also specify a percentage with the `serial` keyword. Ansible applies the percentage to the total number of hosts in a play to determine the number of hosts per pass:
```
---
- name: test play
hosts: webservers
serial: "30%"
```
If the number of hosts does not divide equally into the number of passes, the final pass contains the remainder. In this example, if you had 20 hosts in the webservers group, the first batch would contain 6 hosts, the second batch would contain 6 hosts, the third batch would contain 6 hosts, and the last batch would contain 2 hosts.
You can also specify batch sizes as a list. For example:
```
---
- name: test play
hosts: webservers
serial:
- 1
- 5
- 10
```
In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left), every following batch would contain either 10 hosts or all the remaining hosts, if fewer than 10 hosts remained.
You can list multiple batch sizes as percentages:
```
---
- name: test play
hosts: webservers
serial:
- "10%"
- "20%"
- "100%"
```
You can also mix and match the values:
```
---
- name: test play
hosts: webservers
serial:
- 1
- 5
- "20%"
```
Note
No matter how small the percentage, the number of hosts per pass will always be 1 or greater.
### Restricting execution with `throttle`
The `throttle` keyword limits the number of workers for a particular task. It can be set at the block and task level. Use `throttle` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API:
```
tasks:
- command: /path/to/cpu_intensive_command
throttle: 1
```
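Because `throttle` also works at the block level, you can limit a whole group of tasks at once. A sketch, with illustrative URLs:
```
tasks:
  - name: Call a rate-limited API, at most two hosts at a time
    block:
      - name: Fetch the first endpoint
        ansible.builtin.uri:
          url: https://api.example.com/endpoint1
      - name: Fetch the second endpoint
        ansible.builtin.uri:
          url: https://api.example.com/endpoint2
    throttle: 2
```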
If you have already restricted the number of forks or the number of machines to execute against in parallel, you can reduce the number of workers with `throttle`, but you cannot increase it. In other words, to have an effect, your `throttle` setting must be lower than your `forks` or `serial` setting if you are using them together.
### Ordering execution based on inventory
The `order` keyword controls the order in which hosts are run. Possible values for order are:
inventory:
(default) The order provided in the inventory
reverse\_inventory:
The reverse of the order provided by the inventory
sorted:
Sorted alphabetically by name
reverse\_sorted:
Sorted by name in reverse alphabetical order
shuffle:
Randomly ordered on each run
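For example, a minimal sketch that processes hosts in reverse alphabetical order:
```
- hosts: webservers
  order: reverse_sorted
  gather_facts: false
  tasks:
    - name: Show which host is being processed
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }}"
```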
Other keywords that affect play execution include `ignore_errors`, `ignore_unreachable`, and `any_errors_fatal`. These options are documented in [Error handling in playbooks](playbooks_error_handling#playbooks-error-handling).
### Running on a single machine with `run_once`
If you want a task to run only on the first host in your batch of hosts, set `run_once` to true on that task:
```
---
# ...
tasks:
# ...
- command: /opt/application/upgrade_db.py
run_once: true
# ...
```
Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as:
```
- command: /opt/application/upgrade_db.py
when: inventory_hostname == groups['webservers'][0]
```
However, with `run_once`, the results are applied to all the hosts. To run the task on a specific host, instead of the first host in the batch, delegate the task:
```
- command: /opt/application/upgrade_db.py
run_once: true
delegate_to: web01.example.org
```
As always with [delegation](playbooks_delegation#playbooks-delegation), the action will be executed on the delegated host, but the information is still that of the original host in the task.
Note
When used together with `serial`, tasks marked as `run_once` will be run on one host in *each* serial batch. If the task must run only once regardless of `serial` mode, use the `when: inventory_hostname == ansible_play_hosts_all[0]` construct.
Note
Any conditional (in other words, `when:`) will use the variables of the ‘first host’ to decide if the task runs or not, no other hosts will be tested.
Note
If you want to avoid the default behavior of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block.
See also
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Controlling where tasks run: delegation and local actions](playbooks_delegation#playbooks-delegation)
Running tasks on or assigning facts to specific machines
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Using filters to manipulate data
================================
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of [built-in filters](https://jinja.palletsprojects.com/en/3.0.x/templates/#builtin-filters "(in Jinja v3.0.x)") in the official Jinja2 template documentation. You can also use [Python methods](https://jinja.palletsprojects.com/en/3.0.x/templates/#python-methods "(in Jinja v3.0.x)") to transform data. You can [create custom Ansible filters as plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-filter-plugins), though we generally welcome new filters into the ansible-core repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
* [Handling undefined variables](#handling-undefined-variables)
+ [Providing default values](#providing-default-values)
+ [Making variables optional](#making-variables-optional)
+ [Defining mandatory values](#defining-mandatory-values)
* [Defining different values for true/false/null (ternary)](#defining-different-values-for-true-false-null-ternary)
* [Managing data types](#managing-data-types)
+ [Discovering the data type](#discovering-the-data-type)
+ [Transforming dictionaries into lists](#transforming-dictionaries-into-lists)
+ [Transforming lists into dictionaries](#transforming-lists-into-dictionaries)
+ [Forcing the data type](#forcing-the-data-type)
* [Formatting data: YAML and JSON](#formatting-data-yaml-and-json)
+ [Filter `to_json` and Unicode support](#filter-to-json-and-unicode-support)
* [Combining and selecting data](#combining-and-selecting-data)
+ [Combining items from multiple lists: zip and zip\_longest](#combining-items-from-multiple-lists-zip-and-zip-longest)
+ [Combining objects and subelements](#combining-objects-and-subelements)
+ [Combining hashes/dictionaries](#combining-hashes-dictionaries)
+ [Selecting values from arrays or hashtables](#selecting-values-from-arrays-or-hashtables)
+ [Combining lists](#combining-lists)
- [permutations](#permutations)
- [combinations](#combinations)
- [products](#products)
+ [Selecting JSON data: JSON queries](#selecting-json-data-json-queries)
* [Randomizing data](#randomizing-data)
+ [Random MAC addresses](#random-mac-addresses)
+ [Random items or numbers](#random-items-or-numbers)
+ [Shuffling a list](#shuffling-a-list)
* [Managing list variables](#managing-list-variables)
* [Selecting from sets or lists (set theory)](#selecting-from-sets-or-lists-set-theory)
* [Calculating numbers (math)](#calculating-numbers-math)
* [Managing network interactions](#managing-network-interactions)
+ [IP address filters](#ip-address-filters)
+ [Network CLI filters](#network-cli-filters)
+ [Network XML filters](#network-xml-filters)
+ [Network VLAN filters](#network-vlan-filters)
* [Encrypting and checksumming strings and passwords](#encrypting-and-checksumming-strings-and-passwords)
* [Manipulating text](#manipulating-text)
+ [Adding comments to files](#adding-comments-to-files)
+ [URLEncode Variables](#urlencode-variables)
+ [Splitting URLs](#splitting-urls)
+ [Searching strings with regular expressions](#searching-strings-with-regular-expressions)
+ [Managing file names and path names](#managing-file-names-and-path-names)
* [Manipulating strings](#manipulating-strings)
* [Managing UUIDs](#managing-uuids)
* [Handling dates and times](#handling-dates-and-times)
* [Getting Kubernetes resource names](#getting-kubernetes-resource-names)
Handling undefined variables
----------------------------
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the `mandatory` filter.
### Providing default values
You can provide default values for variables directly in your templates using the Jinja2 ‘default’ filter. This is often a better approach than failing if a variable is not defined:
```
{{ some_variable | default(5) }}
```
In the above example, if the variable ‘some\_variable’ is not defined, Ansible uses the default value 5, rather than raising an “undefined variable” error and failing. If you are working within a role, you can also add a `defaults/main.yml` to define the default values for variables in your role.
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use a default with a value in a nested data structure (in other words, `{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to `true`:
```
{{ lookup('env', 'MY_USER') | default('admin', true) }}
```
### Making variables optional
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable `omit`:
```
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
```
In this example, the default mode for the files `/tmp/foo` and `/tmp/bar` is determined by the umask of the system. Ansible does not send a value for `mode`. Only the third file, `/tmp/baz`, receives the `mode=0444` option.
Note
If you are “chaining” additional filters after the `default(omit)` filter, you should instead do something like this: `"{{ foo | default(None) | some_filter or omit }}"`. In this example, the default `None` (Python null) value will cause the later filters to fail, which will trigger the `or omit` portion of the logic. Using `omit` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
### Defining mandatory values
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting [DEFAULT\_UNDEFINED\_VAR\_BEHAVIOR](../reference_appendices/config#default-undefined-var-behavior) to `false`. In that case, you may want to require some variables to be defined. You can do this with:
```
{{ variable | mandatory }}
```
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
Defining different values for true/false/null (ternary)
-------------------------------------------------------
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):
```
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
```
In addition, you can define one value to use when the test returns true, another when it returns false, and a third when it returns null (new in version 2.8):
```
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
```
Managing data types
-------------------
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user [prompt](playbooks_prompts#playbooks-prompts) might return a string when your playbook needs a boolean value. Use the `type_debug`, `dict2items`, and `items2dict` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
### Discovering the data type
New in version 2.3.
If you are unsure of the underlying Python type of a variable, you can use the `type_debug` filter to display it. This is useful in debugging when you need a particular type of variable:
```
{{ myvar | type_debug }}
```
### Transforming dictionaries into lists
New in version 2.6.
Use the `dict2items` filter to transform a dictionary into a list of items suitable for [looping](playbooks_loops#playbooks-loops):
```
{{ dict | dict2items }}
```
Dictionary data (before applying the `dict2items` filter):
```
tags:
Application: payment
Environment: dev
```
List data (after applying the `dict2items` filter):
```
- key: Application
value: payment
- key: Environment
value: dev
```
New in version 2.8.
The `dict2items` filter is the reverse of the `items2dict` filter.
If you want to configure the names of the keys, the `dict2items` filter accepts 2 keyword arguments. Pass the `key_name` and `value_name` arguments to configure the names of the keys in the list output:
```
{{ files | dict2items(key_name='file', value_name='path') }}
```
Dictionary data (before applying the `dict2items` filter):
```
files:
users: /etc/passwd
groups: /etc/group
```
List data (after applying the `dict2items` filter):
```
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
```
### Transforming lists into dictionaries
New in version 2.7.
Use the `items2dict` filter to transform a list into a dictionary, mapping the content into `key: value` pairs:
```
{{ tags | items2dict }}
```
List data (before applying the `items2dict` filter):
```
tags:
- key: Application
value: payment
- key: Environment
value: dev
```
Dictionary data (after applying the `items2dict` filter):
```
Application: payment
Environment: dev
```
The `items2dict` filter is the reverse of the `dict2items` filter.
Not all lists use `key` to designate keys and `value` to designate values. For example:
```
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
```
In this example, you must pass the `key_name` and `value_name` arguments to configure the transformation. For example:
```
{{ fruits | items2dict(key_name='fruit', value_name='color') }}
```
If you do not pass these arguments, or do not pass the correct values for your list, you will see `KeyError: key` or `KeyError: my_typo`.
### Forcing the data type
You can cast values as certain types. For example, if you expect the input “True” from a [vars\_prompt](playbooks_prompts#playbooks-prompts) and you want Ansible to recognize it as a boolean value instead of a string:
```
- debug:
msg: test
when: some_string_value | bool
```
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:
```
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
```
New in version 1.6.
Formatting data: YAML and JSON
------------------------------
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:
```
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
```
For human readable output, you can use:
```
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
```
You can change the indentation of either format:
```
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
```
The `to_yaml` and `to_nice_yaml` filters use the [PyYAML library](https://pyyaml.org/), which has a default 80-symbol string length limit. This causes an unexpected line break after the 80th symbol (if there is a space after the 80th symbol). To avoid this behavior and generate long lines, use the `width` option. You must use a hardcoded number to define the width, instead of a construction like `float("inf")`, because the filter does not support proxying Python functions. For example:
```
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
```
The filter does support passing through other YAML parameters. For a full list, see the [PyYAML documentation](https://pyyaml.org/wiki/PyYAMLDocumentation).
If you are reading in some already formatted data:
```
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
```
For example:
```
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
```
### Filter `to_json` and Unicode support
By default `to_json` and `to_nice_json` will convert data received to ASCII, so:
```
{{ 'München'| to_json }}
```
will return:
```
'M\u00fcnchen'
```
To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:
```
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
```
New in version 2.7.
To parse multi-document YAML strings, the `from_yaml_all` filter is provided. The `from_yaml_all` filter will return a generator of parsed YAML documents.
For example:
```
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
```
Combining and selecting data
----------------------------
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
### Combining items from multiple lists: zip and zip\_longest
New in version 2.3.
To get a list combining the elements of other lists use `zip`:
```
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]]
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"]]
```
To always exhaust all lists use `zip_longest`:
```
- name: Give me longest combo of three lists , fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
# => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]]
```
Similarly to the output of the `items2dict` filter mentioned above, these filters can be used to construct a `dict`:
```
{{ dict(keys_list | zip(values_list)) }}
```
List data (before applying the `zip` filter):
```
keys_list:
- one
- two
values_list:
- apple
- orange
```
Dictionary data (after applying the `zip` filter):
```
one: apple
two: orange
```
### Combining objects and subelements
New in version 2.7.
The `subelements` filter produces a product of an object and the subelement values of that object, similar to the `subelements` lookup. This lets you specify individual subelements to use in a template. For example, this expression:
```
{{ users | subelements('groups', skip_missing=True) }}
```
Data before applying the `subelements` filter:
```
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
```
Data after applying the `subelements` filter:
```
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
```
You can use the transformed data with `loop` to iterate over the same subelement for multiple objects:
```
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
```
### Combining hashes/dictionaries
New in version 2.0.
The `combine` filter allows hashes to be merged. For example, the following would override keys in one hash:
```
{{ {'a':1, 'b':2} | combine({'b':3}) }}
```
The resulting hash would be:
```
{'a':1, 'b':3}
```
The filter can also take multiple arguments to merge:
```
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
```
In this case, keys in `d` would override those in `c`, which would override those in `b`, and so on.
The filter also accepts two optional parameters: `recursive` and `list_merge`.
recursive
Is a boolean, defaults to `False`. It controls whether `combine` should recursively merge nested hashes. Note: it does **not** depend on the value of the `hash_behaviour` setting in `ansible.cfg`.
list\_merge
Is a string; its possible values are `replace` (default), `keep`, `append`, `prepend`, `append_rp` or `prepend_rp`. It modifies the behaviour of `combine` when the hashes to merge contain arrays/lists.
```
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
```
If `recursive=False` (the default), nested hashes aren’t merged:
```
{{ default | combine(patch) }}
```
This would result in:
```
a:
y: patch
z: patch
b: patch
c: default
```
If `recursive=True`, `combine` recurses into nested hashes and merges their keys:
```
{{ default | combine(patch, recursive=True) }}
```
This would result in:
```
a:
x: default
y: patch
z: patch
b: patch
c: default
```
If `list_merge='replace'` (the default), arrays from the right hash will “replace” the ones in the left hash:
```
default:
a:
- default
patch:
a:
- patch
```
```
{{ default | combine(patch) }}
```
This would result in:
```
a:
- patch
```
If `list_merge='keep'`, arrays from the left hash will be kept:
```
{{ default | combine(patch, list_merge='keep') }}
```
This would result in:
```
a:
- default
```
If `list_merge='append'`, arrays from the right hash will be appended to the ones in the left hash:
```
{{ default | combine(patch, list_merge='append') }}
```
This would result in:
```
a:
- default
- patch
```
If `list_merge='prepend'`, arrays from the right hash will be prepended to the ones in the left hash:
```
{{ default | combine(patch, list_merge='prepend') }}
```
This would result in:
```
a:
- patch
- default
```
If `list_merge='append_rp'`, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed (“rp” stands for “remove present”). Duplicate elements that aren’t in both hashes are kept:
```
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
```
```
{{ default | combine(patch, list_merge='append_rp') }}
```
This would result in:
```
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
```
If `list_merge='prepend_rp'`, the behavior is similar to the one for `append_rp`, but elements of arrays in the right hash are prepended:
```
{{ default | combine(patch, list_merge='prepend_rp') }}
```
This would result in:
```
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
```
`recursive` and `list_merge` can be used together:
```
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
```
```
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
```
This would result in:
```
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
```
### Selecting values from arrays or hashtables
New in version 2.1.
The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array):
```
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
```
The results of the above expressions would be:
```
['x', 'z']
[42, 31]
```
The filter can take another argument:
```
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
```
This takes the list of hosts in group ‘x’, looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group ‘x’.
The third argument to the filter can also be a list, for a recursive lookup inside the container:
```
{{ ['a'] | map('extract', b, ['x','y']) | list }}
```
This would return a list containing the value of `b['a']['x']['y']`.
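For example, given this hypothetical container `b`, the expression above would return `[42]`:
```
b:
  a:
    x:
      y: 42
```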
### Combining lists
This set of filters returns a list of combined lists.
#### permutations
To get permutations of a list:
```
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}"
```
#### combinations
Combinations always require a set size:
```
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}"
```
Also see [Combining items from multiple lists: zip and zip\_longest](#zip-filter).
#### products
The `product` filter returns the [Cartesian product](https://docs.python.org/3/library/itertools.html#itertools.product) of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example:
```
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
```
This would result in:
```
{ "msg": "foo.com,bar.com" }
```
### Selecting JSON data: JSON queries
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the `json_query` filter. The `json_query` filter lets you query a complex JSON structure and iterate over it using a loop structure.
Note
This filter has migrated to the [community.general](https://galaxy.ansible.com/community/general) collection. Follow the installation instructions to install that collection.
Note
You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see [jmespath examples](http://jmespath.org/examples.html).
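Assuming a pip-based controller environment, one common way to install the dependency is:
```
pip install jmespath
```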
Consider this data structure:
```
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
```
To extract all clusters from this structure, you can use the following query:
```
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
```
To extract all server names:
```
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
```
To extract ports from cluster1:
```
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
```
Note
You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma separated string:
```
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
```
Note
In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML [single quote escaping](https://yaml.org/spec/current.html#id2534365):
```
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
```
Note
Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
```
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
  server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
```
To extract ports from all clusters with name starting with ‘server1’:
```
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
```
To extract ports from all clusters with name containing ‘server1’:
```
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
```
Note
When using `starts_with` and `contains`, you must use the `to_json | from_json` filter for correct parsing of the data structure.
Randomizing data
----------------
When you need a randomly generated value, use one of these filters.
### Random MAC addresses
New in version 2.6.
This filter can be used to generate a random MAC address from a string prefix.
Note
This filter has migrated to the [community.general](https://galaxy.ansible.com/community/general) collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with ‘52:54:00’:
```
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
```
Note that if anything is wrong with the prefix string, the filter will issue an error.
New in version 2.9.
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
```
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
```
### Random items or numbers
The `random` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list:
```
"{{ ['a','b','c'] | random }}"
# => 'c'
```
To get a random number between 0 (inclusive) and a specified integer (exclusive):
```
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
```
To get a random number from 0 to 100 but in steps of 10:
```
{{ 101 | random(step=10) }}
# => 70
```
To get a random number from 1 to 100 but in steps of 10:
```
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
```
You can initialize the random number generator from a seed to create random-but-idempotent numbers:
```
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
```
Note
If you use the `seed` parameter, you will get a different result with Python 3 and Python 2. This may break procedures such as password generation when you upgrade the version of Python used on your Ansible controller.
### Shuffling a list
The `shuffle` filter randomizes an existing list, giving a different order every invocation.
To get a random list from an existing list:
```
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
```
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:
```
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
```
The shuffle filter returns a list whenever possible. If you use it with a non ‘listable’ item, the filter does nothing.
Note
If you use the `seed` parameter, you will get a different result with Python 3 and Python 2. This may break procedures such as password generation when you upgrade the version of Python used on your Ansible controller.
Managing list variables
-----------------------
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from a list of numbers:
```
{{ list1 | min }}
```
New in version 2.11.
To get the minimum value in a list of objects:
```
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
```
To get the maximum value from a list of numbers:
```
{{ [3, 4, 2] | max }}
```
New in version 2.11.
To get the maximum value in a list of objects:
```
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
```
New in version 2.5.
Flatten a list (same thing the `flatten` lookup does):
```
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
```
Flatten only the first level of a list (akin to the `items` lookup):
```
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
```
New in version 2.11.
To preserve nulls in a list (by default, `flatten` removes them):
```
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
# => [3, None, 4, [2]]
```
Selecting from sets or lists (set theory)
-----------------------------------------
You can select or combine items from sets or lists.
New in version 1.4.
To get a unique set from a list:
```
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
```
To get a union of two lists:
```
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
```
To get the intersection of 2 lists (unique list of all items in both):
```
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
```
To get the difference of 2 lists (items in 1 that don’t exist in 2):
```
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
```
To get the symmetric difference of 2 lists (items exclusive to each list):
```
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
```
Calculating numbers (math)
--------------------------
New in version 1.9.
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
Get the logarithm (default is e):
```
{{ 8 | log }}
# => 2.0794415416798357
```
Get the base 10 logarithm:
```
{{ 8 | log(10) }}
# => 0.9030899869919435
```
Give me the power of 2! (or 5):
```
{{ 8 | pow(5) }}
# => 32768.0
```
Square root, or the 5th:
```
{{ 8 | root }}
# => 2.8284271247461903
{{ 8 | root(5) }}
# => 1.5157165665103982
```
Managing network interactions
-----------------------------
These filters help you with common network tasks.
Note
These filters have migrated to the [ansible.netcommon](https://galaxy.ansible.com/ansible/netcommon) collection. Follow the installation instructions to install that collection.
### IP address filters
New in version 1.9.
To test if a string is a valid IP address:
```
{{ myvar | ansible.netcommon.ipaddr }}
```
You can also require a specific IP protocol version:
```
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
```
The `ipaddr` filter can also be used to extract specific information from an IP address. For example, to get the IP address itself from a CIDR, you can use:
```
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
# => 192.0.2.1
```
More information about the `ipaddr` filter and a complete usage guide can be found in [ipaddr filter](playbooks_filters_ipaddr#playbooks-filters-ipaddr).
### Network CLI filters
New in version 2.4.
To convert the output of a network device CLI command into structured JSON output, use the `parse_cli` filter:
```
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
```
The `parse_cli` filter will load the spec file and pass the command output through it, returning JSON output. The spec file should be valid YAML that defines how to parse the CLI output and return JSON data. Below is an example of a valid spec file that will parse the output from the `show vlan` command.
```
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
```
The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values directives. Here is an example of how to parse the output into a hash value using the same `show vlan` command.
```
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
```
Another common use case for parsing CLI commands is to break a large command output into blocks that can be parsed individually. This is done using the `start_block` and `end_block` directives:
```
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
```
The example above will parse the output of `show interface` into a list of hashes.
The network filters also support parsing the output of a CLI command using the TextFSM library. To parse the CLI output with TextFSM use the following filter:
```
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
```
Use of the TextFSM filter requires the TextFSM library to be installed.
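Assuming a pip-based controller environment, one common way to install it is:
```
pip install textfsm
```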
### Network XML filters
New in version 2.5.
To convert the XML output of a network device command into structured JSON output, use the `parse_xml` filter:
```
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
```
The `parse_xml` filter will load the spec file and pass the command output through it, returning JSON output. The spec file should be valid YAML that defines how to parse the XML output and return JSON data.
Below is an example of a valid spec file that will parse the output from the `show vlan | display xml` command.
```
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
```
The spec file above will return a JSON data structure that is a list of hashes with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values directives. Here is an example of how to parse the output into a hash value using the same `show vlan | display xml` command.
```
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
```
The value of `top` is the XPath relative to the XML root node. In the example XML output given below, the value of `top` is `configuration/vlans/vlan`, which is an XPath expression relative to the root node (<rpc-reply>). `configuration` in the value of `top` is the outermost container node, and `vlan` is the innermost container node.
`items` is a dictionary of key-value pairs that map user-defined names to XPath expressions that select elements. Each XPath expression is relative to the XPath value contained in `top`. For example, `vlan_id` in the spec file is a user-defined name, and its value `vlan-id` is an XPath expression relative to the value of `top`.
Attributes of XML tags can be extracted using XPath expressions. The value of `state` in the spec is an XPath expression used to get the attributes of the `vlan` tag in the output XML:
```
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
```
Note
For more information on supported XPath expressions, see [XPath Support](https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support).
### Network VLAN filters
New in version 2.8.
Use the `vlan_parser` filter to transform an unsorted list of VLAN integers into a sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* VLANs are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be `first_line_len` characters long.
* Subsequent list lines can be `other_line_len` characters long.
To sort a VLAN list:
```
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
```
This example renders the following sorted list:
```
['100,1688,3002-3005,3999']
```
Another example Jinja template:
```
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
```
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
Encrypting and checksumming strings and passwords
-------------------------------------------------
New in version 1.9.
To get the sha1 hash of a string:
```
{{ 'test1' | hash('sha1') }}
# => "b444ac06613fc8d63795be9ad0beaf55011936ac"
```
To get the md5 hash of a string:
```
{{ 'test1' | hash('md5') }}
# => "5a105e8b9d40e1329780d62ea2265d8a"
```
Get a string checksum:
```
{{ 'test2' | checksum }}
# => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f"
```
Other hashes (platform dependent):
```
{{ 'test2' | hash('blowfish') }}
```
To get a sha512 password hash (random salt):
```
{{ 'passwordsaresecret' | password_hash('sha512') }}
# => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/"
```
To get a sha256 password hash with a specific salt:
```
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
# => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4"
```
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:
```
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
# => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0"
```
Note
If you use the `seed` parameter, you will get a different result with Python 3 and Python 2. This may break procedures such as password generation when you upgrade the version of Python used on your Ansible controller.
The hash types available depend on the control system running Ansible: `hash` depends on [hashlib](https://docs.python.org/3.8/library/hashlib.html), and `password_hash` depends on [passlib](https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html). The [crypt](https://docs.python.org/3.8/library/crypt.html) module is used as a fallback if `passlib` is not installed.
New in version 2.7.
Some hash types allow providing a rounds parameter:
```
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
# => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7"
```
Manipulating text
-----------------
Several filters work with text, including URLs, file names, and path names.
### Adding comments to files
The `comment` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses `#` to start a comment line and adds a blank comment line above and below your comment text. For example the following:
```
{{ "Plain style (default)" | comment }}
```
produces this output:
```
#
# Plain style (default)
#
```
Ansible offers styles for comments in C (`//...`), C block (`/*...*/`), Erlang (`%...`) and XML (`<!--...-->`):
```
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
```
You can define a custom comment character. This filter:
```
{{ "My Special Case" | comment(decoration="! ") }}
```
produces:
```
!
! My Special Case
!
```
You can fully customize the comment style:
```
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
```
That creates the following output:
```
#######
#
# Custom style
#
#######
###
#
```
The filter can also be applied to any Ansible variable. For example to make the output of the `ansible_managed` variable more readable, we can change the definition in the `ansible.cfg` file to this:
```
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
```
and then use the variable with the `comment` filter:
```
{{ ansible_managed | comment }}
```
which produces this output:
```
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
```
### URLEncode Variables
The `urlencode` filter quotes data for use in a URL path or query using UTF-8:
```
{{ 'Trollhättan' | urlencode }}
# => 'Trollh%C3%A4ttan'
```
### Splitting URLs
New in version 2.4.
The `urlsplit` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields:
```
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
```
### Searching strings with regular expressions
To search in a string or extract parts of a string with a regular expression, use the `regex_search` filter:
```
# Extracts the database name from a string
{{ 'server1/database42' | regex_search('database[0-9]+') }}
# => 'database42'
# Returns an empty string if it cannot find a match
{{ 'ansible' | regex_search('foobar') }}
# => ''
# Example for a case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }}
# => 'BAR'
# Extracts server and database id from a string
{{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }}
# => ['1', '42']
# Extracts dividend and divisor from a division
{{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }}
# => ['21', '42']
```
To extract all occurrences of regex matches in a string, use the `regex_findall` filter:
```
# Returns a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
# => ['8.8.8.8', '8.8.4.4']
# Returns all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }}
# => ['CAR', 'tar', 'bar']
```
To replace text in a string with regex, use the `regex_replace` filter:
```
# Convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# => 'able'
# Convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# => 'bar'
# Convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# => 'localhost, 80'
# Convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# => 'localhost'
# Comment all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }}
# => '#CAR\n#tar\nfoo\n#bar\n'
```
Note
If you want to match the whole string and you are using `*`, make sure to wrap your regular expression in the start/end anchors. For example `^(.*)$` will always match only one result, while `(.*)` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:
```
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
```
Note
Prior to Ansible 2.0, if the `regex_replace` filter was used with variables inside YAML arguments (as opposed to simpler ‘key=value’ arguments), then you needed to escape backreferences (for example, `\\1`) with 4 backslashes (`\\\\`) instead of 2 (`\\`).
New in version 2.0.
To escape special characters within a standard Python regex, use the `regex_escape` filter (using the default `re_type='python'` option):
```
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
```
New in version 2.8.
To escape special characters within a POSIX basic regex, use the `regex_escape` filter with the `re_type='posix_basic'` option:
```
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
```
### Managing file names and path names
To get the last name of a file path, like ‘foo.txt’ out of ‘/etc/asdf/foo.txt’:
```
{{ path | basename }}
```
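With a literal value this should render as described above:
```
{{ '/etc/asdf/foo.txt' | basename }}
# => 'foo.txt'
```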
To get the last name of a windows style file path (new in version 2.0):
```
{{ path | win_basename }}
```
To separate the windows drive letter from the rest of a file path (new in version 2.0):
```
{{ path | win_splitdrive }}
```
To get only the windows drive letter:
```
{{ path | win_splitdrive | first }}
```
To get the rest of the path without the drive letter:
```
{{ path | win_splitdrive | last }}
```
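As an illustrative sketch (the literal path is an assumption):
```
{{ 'C:\\Windows\\Temp' | win_splitdrive | first }}
# => 'C:'
{{ 'C:\\Windows\\Temp' | win_splitdrive | last }}
# => '\Windows\Temp'
```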
To get the directory from a path:
```
{{ path | dirname }}
```
To get the directory from a windows path (new in version 2.0):
```
{{ path | win_dirname }}
```
To expand a path containing a tilde (`~`) character (new in version 1.5):
```
{{ path | expanduser }}
```
To expand a path containing environment variables:
```
{{ path | expandvars }}
```
Note
`expandvars` expands local variables; using it on remote paths can lead to errors.
New in version 2.6.
To get the real path of a link (new in version 1.8):
```
{{ path | realpath }}
```
To get the relative path of a link, from a start point (new in version 1.7):
```
{{ path | relpath('/etc') }}
```
To get the root and extension of a path or file name (new in version 2.0):
```
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
```
The `splitext` filter always returns a pair of strings. The individual components can be accessed by using the `first` and `last` filters:
```
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
```
To join one or more path components:
```
{{ ('/etc', path, 'subdir', file) | path_join }}
```
New in version 2.10.
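As a concrete sketch (the literal components are illustrative), `path_join` behaves like Python's `os.path.join`:
```
{{ ('/etc', 'ansible', 'hosts') | path_join }}
# => '/etc/ansible/hosts'
```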
Manipulating strings
--------------------
To add quotes for shell usage:
```
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
```
To concatenate a list into a string:
```
{{ list | join(" ") }}
```
To split a string into a list:
```
{{ csv_string | split(",") }}
```
New in version 2.11.
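For example, with literal values (illustrative only):
```
{{ ['one', 'two', 'three'] | join(" ") }}
# => 'one two three'
{{ 'one,two,three' | split(",") }}
# => ['one', 'two', 'three']
```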
To work with Base64 encoded strings:
```
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
```
As of version 2.6, you can define the type of encoding to use, the default is `utf-8`:
```
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
```
Note
The `string` filter is only required for Python 2 and ensures that the text to encode is a unicode string. Without that filter before `b64encode`, the wrong value will be encoded.
New in version 2.6.
Managing UUIDs
--------------
To create a namespaced UUIDv5:
```
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
```
New in version 2.10.
To create a namespaced UUIDv5 using the default Ansible namespace ‘361E6D51-FAEC-444A-9079-341386DA8E2E’:
```
{{ string | to_uuid }}
```
New in version 1.9.
To make use of one attribute from each item in a list of complex variables, use the `Jinja2 map filter`:
```
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
```
Handling dates and times
------------------------
To get a date object from a string use the `to_datetime` filter:
```
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
```
Note
For a full list of format codes for working with python date format strings, see <https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior>.
New in version 2.4.
To format a date using a string (like with the shell date command), use the “strftime” filter:
```
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
```
Note
To get all string possibilities, check <https://docs.python.org/3/library/time.html#time.strftime>
Getting Kubernetes resource names
---------------------------------
Note
These filters have migrated to the [kubernetes.core](https://galaxy.ansible.com/kubernetes/core) collection. Follow the installation instructions to install that collection.
Use the “k8s\_config\_resource\_name” filter to obtain the name of a Kubernetes ConfigMap or Secret, including its hash:
```
{{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }}
```
This can then be used to reference hashes in Pod specifications:
```
my_secret:
kind: Secret
metadata:
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | kubernetes.core.k8s_config_resource_name }}
```
New in version 2.8.
See also
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Working with command line tools
===============================
Most users are familiar with `ansible` and `ansible-playbook`, but those are not the only utilities Ansible provides. Below is a complete list of Ansible utilities. Each page contains a description of the utility and a listing of supported parameters.
* [ansible](../cli/ansible)
* [ansible-config](../cli/ansible-config)
* [ansible-console](../cli/ansible-console)
* [ansible-doc](../cli/ansible-doc)
* [ansible-galaxy](../cli/ansible-galaxy)
* [ansible-inventory](../cli/ansible-inventory)
* [ansible-playbook](../cli/ansible-playbook)
* [ansible-pull](../cli/ansible-pull)
* [ansible-vault](../cli/ansible-vault)
Re-using Ansible artifacts
==========================
You can write a simple playbook in one very large file, and most users learn the one-file approach first. However, breaking tasks up into different files is an excellent way to organize complex sets of tasks and reuse them. Smaller, more distributed artifacts let you re-use the same variables, tasks, and plays in multiple playbooks to address different use cases. You can use distributed artifacts across multiple parent playbooks or even multiple times within one playbook. For example, you might want to update your customer database as part of several different playbooks. If you put all the tasks related to updating your database in a tasks file, you can re-use them in many playbooks while only maintaining them in one place.
* [Creating re-usable files and roles](#creating-re-usable-files-and-roles)
* [Re-using playbooks](#re-using-playbooks)
* [Re-using files and roles](#re-using-files-and-roles)
+ [Includes: dynamic re-use](#includes-dynamic-re-use)
+ [Imports: static re-use](#imports-static-re-use)
+ [Comparing includes and imports: dynamic and static re-use](#comparing-includes-and-imports-dynamic-and-static-re-use)
* [Re-using tasks as handlers](#re-using-tasks-as-handlers)
+ [Triggering included (dynamic) handlers](#triggering-included-dynamic-handlers)
+ [Triggering imported (static) handlers](#triggering-imported-static-handlers)
Creating re-usable files and roles
----------------------------------
Ansible offers four distributed, re-usable artifacts: variables files, task files, playbooks, and roles.
* A variables file contains only variables.
* A task file contains only tasks.
* A playbook contains at least one play, and may contain variables, tasks, and other content. You can re-use tightly focused playbooks, but you can only re-use them statically, not dynamically.
* A role contains a set of related tasks, variables, defaults, handlers, and even modules or other plugins in a defined file-tree. Unlike variables files, task files, or playbooks, roles can be easily uploaded and shared via Ansible Galaxy. See [Roles](playbooks_reuse_roles#playbooks-reuse-roles) for details about creating and using roles.
New in version 2.4.
Re-using playbooks
------------------
You can incorporate multiple playbooks into a main playbook. However, you can only use imports to re-use playbooks. For example:
```
- import_playbook: webservers.yml
- import_playbook: databases.yml
```
Importing incorporates playbooks in other playbooks statically. Ansible runs the plays and tasks in each imported playbook in the order they are listed, just as if they had been defined directly in the main playbook.
Re-using files and roles
------------------------
Ansible offers two ways to re-use files and roles in a playbook: dynamic and static.
* For dynamic re-use, add an `include_*` task in the tasks section of a play:
+ [include\_role](../collections/ansible/builtin/include_role_module#include-role-module)
+ [include\_tasks](../collections/ansible/builtin/include_tasks_module#include-tasks-module)
+ [include\_vars](../collections/ansible/builtin/include_vars_module#include-vars-module)
* For static re-use, add an `import_*` task in the tasks section of a play:
+ [import\_role](../collections/ansible/builtin/import_role_module#import-role-module)
+ [import\_tasks](../collections/ansible/builtin/import_tasks_module#import-tasks-module)
Task include and import statements can be used at arbitrary depth.
You can still use the bare [roles](playbooks_reuse_roles#roles-keyword) keyword at the play level to incorporate a role in a playbook statically. However, the bare [include](../collections/ansible/builtin/include_module#include-module) keyword, once used for both task files and playbook-level includes, is now deprecated.
### Includes: dynamic re-use
Including roles, tasks, or variables adds them to a playbook dynamically. Ansible processes included files and roles as they come up in a playbook, so included tasks can be affected by the results of earlier tasks within the top-level playbook. Included roles and tasks are similar to handlers - they may or may not run, depending on the results of other tasks in the top-level playbook.
The primary advantage of using `include_*` statements is looping. When a loop is used with an include, the included tasks or role will be executed once for each item in the loop.
You can pass variables into includes. See [Variable precedence: Where should I put a variable?](playbooks_variables#ansible-variable-precedence) for more details on variable inheritance and precedence.
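A minimal sketch of loop-driven dynamic re-use; the task file `install_package.yml` and the `package_name` variable are assumptions for illustration:
```
- name: Run the same task file once per package
  include_tasks: install_package.yml  # hypothetical task file that uses package_name
  loop:
    - nginx
    - postgresql
  loop_control:
    loop_var: package_name  # rename the loop variable so the task file can reference it
```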
### Imports: static re-use
Importing roles, tasks, or playbooks adds them to a playbook statically. Ansible pre-processes imported files and roles before it runs any tasks in a playbook, so imported content is never affected by other tasks within the top-level playbook.
You can pass variables to imports. You must pass variables if you want to run an imported file more than once in a playbook. For example:
```
tasks:
- import_tasks: wordpress.yml
vars:
wp_user: timmy
- import_tasks: wordpress.yml
vars:
wp_user: alice
- import_tasks: wordpress.yml
vars:
wp_user: bob
```
See [Variable precedence: Where should I put a variable?](playbooks_variables#ansible-variable-precedence) for more details on variable inheritance and precedence.
### Comparing includes and imports: dynamic and static re-use
Each approach to re-using distributed Ansible artifacts has advantages and limitations. You may choose dynamic re-use for some playbooks and static re-use for others. Although you can use both dynamic and static re-use in a single playbook, it is best to select one approach per playbook. Mixing static and dynamic re-use can introduce difficult-to-diagnose bugs into your playbooks. This table summarizes the main differences so you can choose the best approach for each playbook you create.
| | Include\_\* | Import\_\* |
| --- | --- | --- |
| Type of re-use | Dynamic | Static |
| When processed | At runtime, when encountered | Pre-processed during playbook parsing |
| Task or play | All includes are tasks | `import_playbook` cannot be a task |
| Task options | Apply only to include task itself | Apply to all child tasks in import |
| Calling from loops | Executed once for each loop item | Cannot be used in a loop |
| Using `--list-tags` | Tags within includes not listed | All tags appear with `--list-tags` |
| Using `--list-tasks` | Tasks within includes not listed | All tasks appear with `--list-tasks` |
| Notifying handlers | Cannot trigger handlers within includes | Can trigger individual imported handlers |
| Using `--start-at-task` | Cannot start at tasks within includes | Can start at imported tasks |
| Using inventory variables | Can `include_*: {{ inventory_var }}` | Cannot `import_*: {{ inventory_var }}` |
| With playbooks | No `include_playbook` | Can import full playbooks |
| With variables files | Can include variables files | Use `vars_files:` to import variables |
Note
* There are also big differences in resource consumption and performance: imports are quite lean and fast, while includes require a lot of management and accounting.
Re-using tasks as handlers
--------------------------
You can also use includes and imports in the [Handlers: running operations on change](playbooks_handlers#handlers) section of a playbook. For instance, if you want to define how to restart Apache, you only have to do that once for all of your playbooks. You might make a `restarts.yml` file that looks like:
```
# restarts.yml
- name: Restart apache
ansible.builtin.service:
name: apache
state: restarted
- name: Restart mysql
ansible.builtin.service:
name: mysql
state: restarted
```
You can trigger handlers from either an import or an include, but the procedure is different for each method of re-use. If you include the file, you must notify the include itself, which triggers all the tasks in `restarts.yml`. If you import the file, you must notify the individual task(s) within `restarts.yml`. You can mix direct tasks and handlers with included or imported tasks and handlers.
### Triggering included (dynamic) handlers
Includes are executed at run-time, so the name of the include exists during play execution, but the included tasks do not exist until the include itself is triggered. To use the `Restart apache` task with dynamic re-use, refer to the name of the include itself. This approach triggers all tasks in the included file as handlers. For example, with the task file shown above:
```
- name: Trigger an included (dynamic) handler
hosts: localhost
handlers:
- name: Restart services
include_tasks: restarts.yml
tasks:
- command: "true"
notify: Restart services
```
### Triggering imported (static) handlers
Imports are processed before the play begins, so the name of the import no longer exists during play execution, but the names of the individual imported tasks do exist. To use the `Restart apache` task with static re-use, refer to the name of each task or tasks within the imported file. For example, with the task file shown above:
```
- name: Trigger an imported (static) handler
hosts: localhost
handlers:
- name: Restart services
import_tasks: restarts.yml
tasks:
- command: "true"
notify: Restart apache
- command: "true"
notify: Restart mysql
```
See also
[Utilities modules](https://docs.ansible.com/ansible/2.9/modules/list_of_utilities_modules.html#utilities-modules "(in Ansible v2.9)")
Documentation of the `include*` and `import*` modules discussed here.
[Working with playbooks](playbooks#working-with-playbooks)
Review the basic Playbook language features
[Using Variables](playbooks_variables#playbooks-variables)
All about variables in playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditionals in playbooks
[Loops](playbooks_loops#playbooks-loops)
Loops in playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Galaxy User Guide](../galaxy/user_guide#ansible-galaxy)
How to share roles on galaxy, role management
[GitHub Ansible examples](https://github.com/ansible/ansible-examples)
Complete playbook files from the GitHub project source
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
Rejecting modules
=================
If you want to avoid using certain modules, you can add them to a reject list to prevent Ansible from loading them. To reject plugins, create a yaml configuration file. The default location for this file is `/etc/ansible/plugin_filters.yml`. You can select a different path for the reject list using the [PLUGIN\_FILTERS\_CFG](../reference_appendices/config#plugin-filters-cfg) setting in the `defaults` section of your ansible.cfg. Here is an example reject list:
```
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
The file contains two fields:
* A file version so that you can update the format while keeping backwards compatibility in the future. The present version should be the string `"1.0"`.
* A list of modules to reject. Ansible will not load any module in this list when it searches for a module to invoke for a task.
Note
The `stat` module is required for Ansible to run. Do not add this module to your reject list.
Using Ansible and Windows
=========================
When using Ansible to manage Windows, many of the syntax and rules that apply for Unix/Linux hosts also apply to Windows, but there are still some differences when it comes to components like path separators and OS-specific tasks. This document covers details specific to using Ansible for Windows.
* [Use Cases](#use-cases)
+ [Installing Software](#installing-software)
+ [Installing Updates](#installing-updates)
+ [Set Up Users and Groups](#set-up-users-and-groups)
- [Local](#local)
- [Domain](#domain)
+ [Running Commands](#running-commands)
- [Choosing Command or Shell](#choosing-command-or-shell)
- [Argument Rules](#argument-rules)
+ [Creating and Running a Scheduled Task](#creating-and-running-a-scheduled-task)
* [Path Formatting for Windows](#path-formatting-for-windows)
+ [YAML Style](#yaml-style)
+ [Legacy key=value Style](#legacy-key-value-style)
* [Limitations](#limitations)
* [Developing Windows Modules](#developing-windows-modules)
Use Cases
---------
Ansible can be used to orchestrate a multitude of tasks on Windows servers. Below are some examples and info about common tasks.
### Installing Software
There are three main ways that Ansible can be used to install software:
* Using the `win_chocolatey` module. This sources the program data from the default public [Chocolatey](https://chocolatey.org/) repository. Internal repositories can be used instead by setting the `source` option.
* Using the `win_package` module. This installs software using an MSI or .exe installer from a local/network path or URL.
* Using the `win_command` or `win_shell` module to run an installer manually.
The `win_chocolatey` module is recommended since it has the most complete logic for checking to see if a package has already been installed and is up-to-date.
Below are some examples of using all three options to install 7-Zip:
```
# Install/uninstall with chocolatey
- name: Ensure 7-Zip is installed via Chocolatey
win_chocolatey:
name: 7zip
state: present
- name: Ensure 7-Zip is not installed via Chocolatey
win_chocolatey:
name: 7zip
state: absent
# Install/uninstall with win_package
- name: Download the 7-Zip package
win_get_url:
url: https://www.7-zip.org/a/7z1701-x64.msi
dest: C:\temp\7z.msi
- name: Ensure 7-Zip is installed via win_package
win_package:
path: C:\temp\7z.msi
state: present
- name: Ensure 7-Zip is not installed via win_package
win_package:
path: C:\temp\7z.msi
state: absent
# Install/uninstall with win_command
- name: Download the 7-Zip package
win_get_url:
url: https://www.7-zip.org/a/7z1701-x64.msi
dest: C:\temp\7z.msi
- name: Check if 7-Zip is already installed
win_reg_stat:
name: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{23170F69-40C1-2702-1701-000001000000}
register: 7zip_installed
- name: Ensure 7-Zip is installed via win_command
win_command: C:\Windows\System32\msiexec.exe /i C:\temp\7z.msi /qn /norestart
when: 7zip_installed.exists == false
- name: Ensure 7-Zip is uninstalled via win_command
win_command: C:\Windows\System32\msiexec.exe /x {23170F69-40C1-2702-1701-000001000000} /qn /norestart
when: 7zip_installed.exists == true
```
Some installers like Microsoft Office or SQL Server require credential delegation or access to components restricted by WinRM. The best method to bypass these issues is to use `become` with the task. With `become`, Ansible will run the installer as if it were run interactively on the host.
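A sketch of that pattern; the installer path and the credential variables are assumptions, not a fixed recipe:
```
- name: Install software that requires credential delegation
  win_package:
    path: \\fileshare\installers\setup.msi  # hypothetical network path
    state: present
  become: yes
  become_method: runas
  vars:
    ansible_become_user: '{{ ansible_user }}'
    ansible_become_pass: '{{ ansible_password }}'
```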
Note
Many installers do not properly pass back error information over WinRM. In these cases, if the install has been verified to work locally the recommended method is to use become.
Note
Some installers restart the WinRM or HTTP services, or cause them to become temporarily unavailable, making Ansible assume the system is unreachable.
### Installing Updates
The `win_updates` and `win_hotfix` modules can be used to install updates or hotfixes on a host. The module `win_updates` is used to install multiple updates by category, while `win_hotfix` can be used to install a single update or hotfix file that has been downloaded locally.
Note
The `win_hotfix` module has a requirement that the DISM PowerShell cmdlets are present. These cmdlets were only added by default on Windows Server 2012 and newer and must be installed on older Windows hosts.
The following example shows how `win_updates` can be used:
```
- name: Install all critical and security updates
win_updates:
category_names:
- CriticalUpdates
- SecurityUpdates
state: installed
register: update_result
- name: Reboot host if required
win_reboot:
when: update_result.reboot_required
```
The following example shows how `win_hotfix` can be used to install a single update or hotfix:
```
- name: Download KB3172729 for Server 2012 R2
win_get_url:
url: http://download.windowsupdate.com/d/msdownload/update/software/secu/2016/07/windows8.1-kb3172729-x64_e8003822a7ef4705cbb65623b72fd3cec73fe222.msu
dest: C:\temp\KB3172729.msu
- name: Install hotfix
win_hotfix:
hotfix_kb: KB3172729
source: C:\temp\KB3172729.msu
state: present
register: hotfix_result
- name: Reboot host if required
win_reboot:
when: hotfix_result.reboot_required
```
### Set Up Users and Groups
Ansible can be used to create Windows users and groups both locally and on a domain.
#### Local
The modules `win_user`, `win_group` and `win_group_membership` manage Windows users, groups and group memberships locally.
The following is an example of creating local accounts and groups that can access a folder on the same host:
```
- name: Create local group to contain new users
win_group:
name: LocalGroup
description: Allow access to C:\Development folder
- name: Create local user
win_user:
name: '{{ item.name }}'
password: '{{ item.password }}'
groups: LocalGroup
update_password: no
password_never_expires: yes
loop:
- name: User1
password: Password1
- name: User2
password: Password2
- name: Create Development folder
win_file:
path: C:\Development
state: directory
- name: Set ACL of Development folder
win_acl:
path: C:\Development
rights: FullControl
state: present
type: allow
user: LocalGroup
- name: Remove parent inheritance of Development folder
win_acl_inheritance:
path: C:\Development
reorganize: yes
state: absent
```
#### Domain
The modules `win_domain_user` and `win_domain_group` manage users and groups in a domain. Below is an example of ensuring a batch of domain users is created:
```
- name: Ensure each account is created
win_domain_user:
name: '{{ item.name }}'
upn: '{{ item.name }}@MY.DOMAIN.COM'
password: '{{ item.password }}'
password_never_expires: no
groups:
- Test User
- Application
company: Ansible
update_password: on_create
loop:
- name: Test User
password: Password
- name: Admin User
password: SuperSecretPass01
- name: Dev User
password: '@fvr3IbFBujSRh!3hBg%wgFucD8^x8W5'
```
### Running Commands
In cases where there is no appropriate module available for a task, a command or script can be run using the `win_shell`, `win_command`, `raw`, and `script` modules.
The `raw` module simply executes a PowerShell command remotely. Since `raw` has none of the wrappers that Ansible typically uses, `become`, `async`, and environment variables do not work.
The `script` module executes a script from the Ansible controller on one or more Windows hosts. Like `raw`, `script` currently does not support `become`, `async`, or environment variables.
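For example, a minimal sketch (the script path on the controller is hypothetical):
```
- name: Run a PowerShell script from the controller on the Windows host
  script: scripts/configure_app.ps1
```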
The `win_command` module is used to execute a command which is either an executable or batch file, while the `win_shell` module is used to execute commands within a shell.
#### Choosing Command or Shell
The `win_shell` and `win_command` modules can both be used to execute a command or commands. The `win_shell` module is run within a shell-like process like `PowerShell` or `cmd`, so it has access to shell operators like `<`, `>`, `|`, `;`, `&&`, and `||`. Multi-lined commands can also be run in `win_shell`.
The `win_command` module simply runs a process outside of a shell. It can still run a shell command like `mkdir` or `New-Item` by passing the shell commands to a shell executable like `cmd.exe` or `PowerShell.exe`.
Here are some examples of using `win_command` and `win_shell`:
```
- name: Run a command under PowerShell
win_shell: Get-Service -Name service | Stop-Service
- name: Run a command under cmd
win_shell: mkdir C:\temp
args:
executable: cmd.exe
- name: Run multiple shell commands
win_shell: |
New-Item -Path C:\temp -ItemType Directory
Remove-Item -Path C:\temp -Force -Recurse
$path_info = Get-Item -Path C:\temp
$path_info.FullName
- name: Run an executable using win_command
win_command: whoami.exe
- name: Run a cmd command
win_command: cmd.exe /c mkdir C:\temp
- name: Run a vbs script
win_command: cscript.exe script.vbs
```
Note
Some commands like `mkdir`, `del`, and `copy` only exist in the CMD shell. To run them with `win_command` they must be prefixed with `cmd.exe /c`.
#### Argument Rules
When running a command through `win_command`, the standard Windows argument rules apply:
* Each argument is delimited by a white space, which can either be a space or a tab.
* An argument can be surrounded by double quotes `"`. Anything inside these quotes is interpreted as a single argument even if it contains whitespace.
* A double quote preceded by a backslash `\` is interpreted as just a double quote `"` and not as an argument delimiter.
* Backslashes are interpreted literally unless they immediately precede a double quote; for example `\` == `\` and `\"` == `"`
* If an even number of backslashes is followed by a double quote, one backslash is used in the argument for every pair, and the double quote is used as a string delimiter for the argument.
* If an odd number of backslashes is followed by a double quote, one backslash is used in the argument for every pair, and the double quote is escaped and made a literal double quote in the argument.
With those rules in mind, here are some examples of quoting:
```
- win_command: C:\temp\executable.exe argument1 "argument 2" "C:\path\with space" "double \"quoted\""
argv[0] = C:\temp\executable.exe
argv[1] = argument1
argv[2] = argument 2
argv[3] = C:\path\with space
argv[4] = double "quoted"
- win_command: '"C:\Program Files\Program\program.exe" "escaped \\\" backslash" unquoted-end-backslash\'
argv[0] = C:\Program Files\Program\program.exe
argv[1] = escaped \" backslash
argv[2] = unquoted-end-backslash\
# Due to YAML and Ansible parsing '\"' must be written as '{% raw %}\\{% endraw %}"'
- win_command: C:\temp\executable.exe C:\no\space\path "arg with end \ before end quote{% raw %}\\{% endraw %}"
argv[0] = C:\temp\executable.exe
argv[1] = C:\no\space\path
argv[2] = arg with end \ before end quote\"
```
For more information, see [escaping arguments](https://msdn.microsoft.com/en-us/library/17w5ykft(v=vs.85).aspx).
### Creating and Running a Scheduled Task
WinRM has some restrictions in place that cause errors when running certain commands. One way to bypass these restrictions is to run a command through a scheduled task. A scheduled task is a Windows component that provides the ability to run an executable on a schedule and under a different account.
Ansible version 2.5 added modules that make it easier to work with scheduled tasks in Windows. The following is an example of running a script as a scheduled task that deletes itself after running:
```
- name: Create scheduled task to run a process
win_scheduled_task:
name: adhoc-task
username: SYSTEM
actions:
- path: PowerShell.exe
arguments: |
Start-Sleep -Seconds 30 # This isn't required, just here as a demonstration
New-Item -Path C:\temp\test -ItemType Directory
# Remove this action if the task shouldn't be deleted on completion
- path: cmd.exe
arguments: /c schtasks.exe /Delete /TN "adhoc-task" /F
triggers:
- type: registration
- name: Wait for the scheduled task to complete
win_scheduled_task_stat:
name: adhoc-task
register: task_stat
until: (task_stat.state is defined and task_stat.state.status != "TASK_STATE_RUNNING") or (task_stat.task_exists == False)
retries: 12
delay: 10
```
Note
The modules used in the above example were updated/added in Ansible version 2.5.
Path Formatting for Windows
---------------------------
Windows differs from a traditional POSIX operating system in many ways. One of the major changes is the shift from `/` as the path separator to `\`. This can cause major issues with how playbooks are written, since `\` is often used as an escape character on POSIX systems.
Ansible allows two different styles of syntax; each deals with path separators for Windows differently:
### YAML Style
When using the YAML syntax for tasks, the rules are well-defined by the YAML standard:
* When using a normal string (without quotes), YAML will not consider the backslash an escape character.
* When using single quotes `'`, YAML will not consider the backslash an escape character.
* When using double quotes `"`, the backslash is considered an escape character and needs to be escaped with another backslash.
Note
You should only quote strings when it is absolutely necessary or required by YAML, and then use single quotes.
The YAML specification considers the following [escape sequences](https://yaml.org/spec/current.html#id2517668):
* `\0`, `\\`, `\"`, `\_`, `\a`, `\b`, `\e`, `\f`, `\n`, `\r`, `\t`, `\v`, `\L`, `\N` and `\P` – Single character escape
* `<TAB>`, `<SPACE>`, `<NBSP>`, `<LNSP>`, `<PSP>` – Special characters
* `\x..` – 2-digit hex escape
* `\u....` – 4-digit hex escape
* `\U........` – 8-digit hex escape
Here are some examples on how to write Windows paths:
```
# GOOD
tempdir: C:\Windows\Temp
# WORKS
tempdir: 'C:\Windows\Temp'
tempdir: "C:\\Windows\\Temp"
# BAD, BUT SOMETIMES WORKS
tempdir: C:\\Windows\\Temp
tempdir: 'C:\\Windows\\Temp'
tempdir: C:/Windows/Temp
```
This is an example which will fail:
```
# FAILS
tempdir: "C:\Windows\Temp"
```
This example shows the use of single quotes when they are required:
```
---
- name: Copy tomcat config
win_copy:
src: log4j.xml
dest: '{{tc_home}}\lib\log4j.xml'
```
### Legacy key=value Style
The legacy `key=value` syntax is used on the command line for ad hoc commands, or inside playbooks. The use of this style is discouraged within playbooks because backslash characters need to be escaped, making playbooks harder to read. The legacy syntax depends on the specific implementation in Ansible, and quoting (both single and double) does not have any effect on how it is parsed by Ansible.
The Ansible key=value parser parse\_kv() considers the following escape sequences:
* `\`, `'`, `"`, `\a`, `\b`, `\f`, `\n`, `\r`, `\t` and `\v` – Single character escape
* `\x..` – 2-digit hex escape
* `\u....` – 4-digit hex escape
* `\U........` – 8-digit hex escape
* `\N{...}` – Unicode character by name
This means that the backslash is an escape character for some sequences, and it is usually safer to escape a backslash when in this form.
Here are some examples of using Windows paths with the key=value style:
```
# GOOD
tempdir=C:\\Windows\\Temp
# WORKS
tempdir='C:\\Windows\\Temp'
tempdir="C:\\Windows\\Temp"
# BAD, BUT SOMETIMES WORKS
tempdir=C:\Windows\Temp
tempdir='C:\Windows\Temp'
tempdir="C:\Windows\Temp"
tempdir=C:/Windows/Temp
# FAILS
tempdir=C:\Windows\temp
tempdir='C:\Windows\temp'
tempdir="C:\Windows\temp"
```
The failing examples don’t fail outright but will substitute `\t` with the `<TAB>` character resulting in `tempdir` being `C:\Windows<TAB>emp`.
Limitations
-----------
Some things you cannot do with Ansible and Windows are:
* Upgrade PowerShell
* Interact with the WinRM listeners
Because WinRM is reliant on the services being online and running during normal operations, you cannot upgrade PowerShell or interact with WinRM listeners with Ansible. Both of these actions will cause the connection to fail. This can technically be avoided by using `async` or a scheduled task, but those methods are fragile if the process they run breaks the underlying connection Ansible uses, and are best left to the bootstrapping process or before an image is created.
Developing Windows Modules
--------------------------
Because Ansible modules for Windows are written in PowerShell, the development guides for Windows modules differ substantially from those for standard modules. Please see [Windows module development walkthrough](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_windows.html#developing-modules-general-windows) for more information.
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[List of Windows Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_windows_modules.html#windows-modules "(in Ansible v2.9)")
Windows specific module list, all implemented in PowerShell
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Asynchronous actions and polling
================================
By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. Within a playbook, this means each task blocks the next by default: subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
* [Asynchronous ad hoc tasks](#asynchronous-ad-hoc-tasks)
* [Asynchronous playbook tasks](#asynchronous-playbook-tasks)
+ [Avoid connection timeouts: poll > 0](#avoid-connection-timeouts-poll-0)
+ [Run tasks concurrently: poll = 0](#run-tasks-concurrently-poll-0)
Asynchronous ad hoc tasks
-------------------------
You can execute long-running operations in the background with [ad hoc tasks](intro_adhoc#intro-adhoc). For example, to execute `long_running_operation` asynchronously in the background, with a timeout (`-B`) of 3600 seconds, and without polling (`-P`):
```
$ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
```
To check on the job status later, use the `async_status` module, passing it the job ID that was returned when you ran the original job in the background:
```
$ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
```
Ansible can also check on the status of your long-running job automatically with polling. In most cases, Ansible will keep the connection to your remote node open between polls. To run for 30 minutes and poll for status every 60 seconds:
```
$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
```
Poll mode is smart, so all jobs will be started before polling begins on any machine. Be sure to use a high enough `--forks` value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (`-B`), the process on the remote nodes will be terminated.
Asynchronous mode is best suited to long-running shell commands or software upgrades. Running the copy module asynchronously, for example, does not do a background file transfer.
Asynchronous playbook tasks
---------------------------
[Playbooks](playbooks#working-with-playbooks) also support asynchronous mode and polling, with a simplified syntax. You can use asynchronous mode in playbooks to avoid connection timeouts or to avoid blocking subsequent tasks. The behavior of asynchronous mode in a playbook depends on the value of `poll`.
### Avoid connection timeouts: poll > 0
If you want to set a longer timeout limit for a certain task in your playbook, use `async` with `poll` set to a positive value. Ansible will still block the next task in your playbook, waiting until the async task either completes, fails or times out. However, the task will only time out if it exceeds the timeout limit you set with the `async` parameter.
To avoid timeouts on a task, specify its maximum runtime and how frequently you would like to poll for status:
```
---
- hosts: all
remote_user: root
tasks:
- name: Simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
ansible.builtin.command: /bin/sleep 15
async: 45
poll: 5
```
Note
The default poll value is set by the [DEFAULT\_POLL\_INTERVAL](../reference_appendices/config#default-poll-interval) setting. There is no default for the async time limit. If you leave off the ‘async’ keyword, the task runs synchronously, which is Ansible’s default.
Note
As of Ansible 2.3, async does not support check mode and will fail the task when run in check mode. See [Validating tasks: check mode and diff mode](playbooks_checkmode#check-mode-dry) on how to skip a task in check mode.
Note
When an async task completes with polling enabled, the temporary async job cache file (by default in ~/.ansible\_async/) is automatically removed.
### Run tasks concurrently: poll = 0
If you want to run multiple tasks in a playbook concurrently, use `async` with `poll` set to 0. When you set `poll: 0`, Ansible starts the task and immediately moves on to the next task without waiting for a result. Each async task runs until it either completes, fails or times out (runs longer than its `async` value). The playbook run ends without checking back on async tasks.
To run a playbook task asynchronously:
```
---
- hosts: all
remote_user: root
tasks:
- name: Simulate long running op, allow to run for 45 sec, fire and forget
ansible.builtin.command: /bin/sleep 15
async: 45
poll: 0
```
Note
Do not specify a poll value of 0 with operations that require exclusive locks (such as yum transactions) if you expect to run other commands later in the playbook against those same resources.
Note
Using a higher value for `--forks` will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling.
Note
When running with `poll: 0`, Ansible will not automatically clean up the async job cache file. You will need to manually clean this up with the [async\_status](../collections/ansible/builtin/async_status_module#async-status-module) module with `mode: cleanup`.
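A sketch of that manual cleanup, assuming the fire-and-forget task was registered as `async_task`:
```
- name: Clean up the async job cache file
  async_status:
    jid: "{{ async_task.ansible_job_id }}"  # assumes the async task was registered as async_task
    mode: cleanup
```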
If you need a synchronization point with an async task, you can register it to obtain its job ID and use the [async\_status](../collections/ansible/builtin/async_status_module#async-status-module) module to observe it in a later task. For example:
```
- name: Run an async task
ansible.builtin.yum:
name: docker-io
state: present
async: 1000
poll: 0
register: yum_sleeper
- name: Check on an async task
async_status:
jid: "{{ yum_sleeper.ansible_job_id }}"
register: job_result
until: job_result.finished
retries: 100
delay: 10
```
Note
If the value of `async:` is not high enough, this will cause the “check on it later” task to fail because the temporary status file that `async_status:` is looking for will not have been written or will no longer exist.
To run multiple asynchronous tasks while limiting the number of tasks running concurrently:
```
#####################
# main.yml
#####################
- name: Run items asynchronously in batch of two items
vars:
sleep_durations:
- 1
- 2
- 3
- 4
- 5
durations: "{{ item }}"
include_tasks: execute_batch.yml
loop: "{{ sleep_durations | batch(2) | list }}"
#####################
# execute_batch.yml
#####################
- name: Async sleeping for batched_items
ansible.builtin.command: sleep {{ async_item }}
async: 45
poll: 0
loop: "{{ durations }}"
loop_control:
loop_var: "async_item"
register: async_results
- name: Check sync status
async_status:
jid: "{{ async_result_item.ansible_job_id }}"
loop: "{{ async_results.results }}"
loop_control:
loop_var: "async_result_item"
register: async_poll_results
until: async_poll_results.finished
retries: 30
```
See also
[Controlling playbook execution: strategies and more](playbooks_strategies#playbooks-strategies)
Options for controlling playbook execution
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ipaddr filter
=============
New in version 1.9.
`ipaddr()` is a Jinja2 filter designed to provide an interface to the [netaddr](https://pypi.org/project/netaddr/) Python package from within Ansible. It can operate on strings or lists of items, test various data to check if they are valid IP addresses, and manipulate the input data to extract requested information. `ipaddr()` works with both IPv4 and IPv6 addresses in various forms. There are also additional functions available to manipulate IP subnets and MAC addresses.
Note
The `ipaddr()` filter migrated to the [ansible.netcommon](https://galaxy.ansible.com/ansible/netcommon) collection. Follow the installation instructions to install that collection.
To use this filter in Ansible, you need to install the [netaddr](https://pypi.org/project/netaddr/) Python library on a computer on which you use Ansible (it is not required on remote hosts). It can usually be installed with either your system package manager or using `pip`:
```
pip install netaddr
```
* [Basic tests](#basic-tests)
* [Filtering lists](#filtering-lists)
* [Wrapping IPv6 addresses in [ ] brackets](#wrapping-ipv6-addresses-in-brackets)
* [Basic queries](#basic-queries)
* [Getting information about hosts and networks](#getting-information-about-hosts-and-networks)
* [Getting information from host/prefix values](#getting-information-from-host-prefix-values)
* [Converting subnet masks to CIDR notation](#converting-subnet-masks-to-cidr-notation)
* [Getting information about the network in CIDR notation](#getting-information-about-the-network-in-cidr-notation)
* [IP address conversion](#ip-address-conversion)
* [Converting IPv4 address to a 6to4 address](#converting-ipv4-address-to-a-6to4-address)
* [Finding IP addresses within a range](#finding-ip-addresses-within-a-range)
* [Testing if an address belongs to a network range](#testing-if-a-address-belong-to-a-network-range)
* [IP Math](#ip-math)
* [Subnet manipulation](#subnet-manipulation)
* [Subnet Merging](#subnet-merging)
* [MAC address filter](#mac-address-filter)
* [Generate an IPv6 address in Stateless Configuration (SLAAC)](#generate-an-ipv6-address-in-stateless-configuration-slaac)
Basic tests
-----------
`ipaddr()` is designed to return the input value if a query is True, and `False` if a query is False. This way it can be easily used in chained filters. To use the filter, pass a string to it:
```
{{ '192.0.2.0' | ansible.netcommon.ipaddr }}
```
You can also pass the values as variables:
```
{{ myvar | ansible.netcommon.ipaddr }}
```
Here are some example test results of various input strings:
```
# These values are valid IP addresses or network ranges
'192.168.0.1' -> 192.168.0.1
'192.168.32.0/24' -> 192.168.32.0/24
'fe80::100/10' -> fe80::100/10
45443646733 -> ::a:94a7:50d
'523454/24' -> 0.7.252.190/24
# Values that are not valid IP addresses or network ranges
'localhost' -> False
True -> False
'space bar' -> False
False -> False
'' -> False
':' -> False
'fe80:/10' -> False
```
Sometimes you need either IPv4 or IPv6 addresses. To filter only for a particular type, the `ipaddr()` filter has two “aliases”, `ipv4()` and `ipv6()`.
Example use of an IPv4 filter:
```
{{ myvar | ansible.netcommon.ipv4 }}
```
A similar example of an IPv6 filter:
```
{{ myvar | ansible.netcommon.ipv6 }}
```
Here’s some example test results to look for IPv4 addresses:
```
'192.168.0.1' -> 192.168.0.1
'192.168.32.0/24' -> 192.168.32.0/24
'fe80::100/10' -> False
45443646733 -> False
'523454/24' -> 0.7.252.190/24
```
And the same data filtered for IPv6 addresses:
```
'192.168.0.1' -> False
'192.168.32.0/24' -> False
'fe80::100/10' -> fe80::100/10
45443646733 -> ::a:94a7:50d
'523454/24' -> False
```
Filtering lists
---------------
You can filter entire lists - `ipaddr()` will return a list with values valid for a particular query:
```
# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
# {{ test_list | ansible.netcommon.ipaddr }}
['192.24.2.1', '::1', '192.168.32.0/24', 'fe80::100/10', '2001:db8:32c:faad::/64']
# {{ test_list | ansible.netcommon.ipv4 }}
['192.24.2.1', '192.168.32.0/24']
# {{ test_list | ansible.netcommon.ipv6 }}
['::1', 'fe80::100/10', '2001:db8:32c:faad::/64']
```
Wrapping IPv6 addresses in [ ] brackets
---------------------------------------
Some configuration files require IPv6 addresses to be “wrapped” in square brackets (`[ ]`). To accomplish that, you can use the `ipwrap()` filter. It will wrap all IPv6 addresses and leave any other strings intact:
```
# {{ test_list | ansible.netcommon.ipwrap }}
['192.24.2.1', 'host.fqdn', '[::1]', '192.168.32.0/24', '[fe80::100]/10', True, '', '[2001:db8:32c:faad::]/64']
```
As you can see, `ipwrap()` did not filter out non-IP address values, which is usually what you want when for example you are mixing IP addresses with hostnames. If you still want to filter out all non-IP address values, you can chain both filters together:
```
# {{ test_list | ansible.netcommon.ipaddr | ansible.netcommon.ipwrap }}
['192.24.2.1', '[::1]', '192.168.32.0/24', '[fe80::100]/10', '[2001:db8:32c:faad::]/64']
```
Basic queries
-------------
You can provide a single argument to each `ipaddr()` filter. The filter will then treat it as a query and return values modified by that query. Lists will contain only values that you are querying for.
Types of queries include:
* query by name: `ansible.netcommon.ipaddr('address')`, `ansible.netcommon.ipv4('network')`;
* query by CIDR range: `ansible.netcommon.ipaddr('192.168.0.0/24')`, `ansible.netcommon.ipv6('2001:db8::/32')`;
* query by index number: `ansible.netcommon.ipaddr('1')`, `ansible.netcommon.ipaddr('-1')`;
If a query type is not recognized, Ansible will raise an error.
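For instance, combining a value with a CIDR query returns the value only when it falls inside that range (an illustrative sketch, consistent with the range examples below):
```
{{ '192.168.32.5' | ansible.netcommon.ipaddr('192.168.32.0/24') }}
# => '192.168.32.5'
{{ '10.0.0.5' | ansible.netcommon.ipaddr('192.168.32.0/24') }}
# => False
```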
Getting information about hosts and networks
--------------------------------------------
Here’s our test list again:
```
# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
```
Let’s take the list above and get only those elements that are host IP addresses and not network ranges:
```
# {{ test_list | ansible.netcommon.ipaddr('address') }}
['192.24.2.1', '::1', 'fe80::100']
```
As you can see, even though some values had a host address with a CIDR prefix, they were dropped by the filter. If you want host IP addresses with their correct CIDR prefixes (as is common with IPv6 addressing), you can use the `ipaddr('host')` filter:
```
# {{ test_list | ansible.netcommon.ipaddr('host') }}
['192.24.2.1/32', '::1/128', 'fe80::100/10']
```
Filtering by IP address type also works:
```
# {{ test_list | ansible.netcommon.ipv4('address') }}
['192.24.2.1']
# {{ test_list | ansible.netcommon.ipv6('address') }}
['::1', 'fe80::100']
```
You can check if IP addresses or network ranges are accessible on the public Internet, or if they are in private networks:
```
# {{ test_list | ansible.netcommon.ipaddr('public') }}
['192.24.2.1', '2001:db8:32c:faad::/64']
# {{ test_list | ansible.netcommon.ipaddr('private') }}
['192.168.32.0/24', 'fe80::100/10']
```
You can check which values are specifically network ranges:
```
# {{ test_list | ansible.netcommon.ipaddr('net') }}
['192.168.32.0/24', '2001:db8:32c:faad::/64']
```
You can also check how many IP addresses can be in a certain range:
```
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('size') }}
[256, 18446744073709551616L]
```
By specifying a network range as a query, you can check if a given value is in that range:
```
# {{ test_list | ansible.netcommon.ipaddr('192.0.0.0/8') }}
['192.24.2.1', '192.168.32.0/24']
```
If you specify a positive or negative integer as a query, `ipaddr()` will treat this as an index and will return the specific IP address from a network range, in the ‘host/prefix’ format:
```
# First IP address (network address)
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('0') }}
['192.168.32.0/24', '2001:db8:32c:faad::/64']
# Second IP address (usually the gateway host)
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('1') }}
['192.168.32.1/24', '2001:db8:32c:faad::1/64']
# Last IP address (the broadcast address in IPv4 networks)
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-1') }}
['192.168.32.255/24', '2001:db8:32c:faad:ffff:ffff:ffff:ffff/64']
```
You can also select IP addresses from a range by their index, from the start or end of the range:
```
# Returns from the start of the range
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('200') }}
['192.168.32.200/24', '2001:db8:32c:faad::c8/64']
# Returns from the end of the range
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('-200') }}
['192.168.32.56/24', '2001:db8:32c:faad:ffff:ffff:ffff:ff38/64']
# {{ test_list | ansible.netcommon.ipaddr('net') | ansible.netcommon.ipaddr('400') }}
['2001:db8:32c:faad::190/64']
```
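Index queries also make it easy to hand each host in a play its own address from a shared subnet. This is a sketch under stated assumptions: the play targets several hosts, and offsets starting at 10 are free in the subnet:
```
# Hypothetical task: one address per host, starting at .10, based on the
# host's position in the ansible_play_hosts list
- name: Derive a per-host address from a shared subnet
  ansible.builtin.set_fact:
    my_ip: "{{ '192.168.32.0/24' | ansible.netcommon.ipaddr(10 + ansible_play_hosts.index(inventory_hostname)) }}"
```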
Getting information from host/prefix values
-------------------------------------------
Network configuration frequently uses a combination of IP addresses and subnet prefixes (“CIDR” notation); this is even more common with IPv6. The `ansible.netcommon.ipaddr()` filter can extract useful data from these prefixes.
Here’s an example set of two host prefixes (with some “control” values):
```
host_prefix = ['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24', '127.0.0.1', '192.168.0.0/16']
```
First, let’s make sure that we only work with correct host/prefix values, not just subnets or single IP addresses:
```
# {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') }}
['2001:db8:deaf:be11::ef3/64', '192.0.2.48/24']
```
In Debian-based systems, the network configuration stored in the `/etc/network/interfaces` file uses a combination of IP address, network address, netmask and broadcast address to configure an IPv4 network interface. We can get these values from a single ‘host/prefix’ combination:
```
# Jinja2 template
{% set ipv4_host = host_prefix | unique | ansible.netcommon.ipv4('host/prefix') | first %}
iface eth0 inet static
address {{ ipv4_host | ansible.netcommon.ipaddr('address') }}
network {{ ipv4_host | ansible.netcommon.ipaddr('network') }}
netmask {{ ipv4_host | ansible.netcommon.ipaddr('netmask') }}
broadcast {{ ipv4_host | ansible.netcommon.ipaddr('broadcast') }}
# Generated configuration file
iface eth0 inet static
address 192.0.2.48
network 192.0.2.0
netmask 255.255.255.0
broadcast 192.0.2.255
```
In the above example, we needed to handle the fact that values were stored in a list, which is unusual in IPv4 networks, where only a single IP address can be set on an interface. However, IPv6 networks can have multiple IP addresses set on an interface:
```
# Jinja2 template
iface eth0 inet6 static
{% set ipv6_list = host_prefix | unique | ansible.netcommon.ipv6('host/prefix') %}
address {{ ipv6_list[0] }}
{% if ipv6_list | length > 1 %}
{% for subnet in ipv6_list[1:] %}
up /sbin/ip address add {{ subnet }} dev eth0
down /sbin/ip address del {{ subnet }} dev eth0
{% endfor %}
{% endif %}
# Generated configuration file
iface eth0 inet6 static
address 2001:db8:deaf:be11::ef3/64
```
If needed, you can extract subnet and prefix information from the ‘host/prefix’ value:
```
# {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('subnet') }}
['2001:db8:deaf:be11::/64', '192.0.2.0/24']
# {{ host_prefix | ansible.netcommon.ipaddr('host/prefix') | ansible.netcommon.ipaddr('prefix') }}
[64, 24]
```
Converting subnet masks to CIDR notation
----------------------------------------
Given a subnet in the form of network address and subnet mask, the `ipaddr()` filter can convert it into CIDR notation. This can be useful for converting Ansible facts gathered about network configuration from subnet masks into CIDR format:
```
ansible_default_ipv4: {
address: "192.168.0.11",
alias: "eth0",
broadcast: "192.168.0.255",
gateway: "192.168.0.1",
interface: "eth0",
macaddress: "fa:16:3e:c4:bd:89",
mtu: 1500,
netmask: "255.255.255.0",
network: "192.168.0.0",
type: "ether"
}
```
First concatenate the network and netmask:
```
net_mask = "{{ ansible_default_ipv4.network }}/{{ ansible_default_ipv4.netmask }}"
'192.168.0.0/255.255.255.0'
```
This result can be converted to canonical form with `ipaddr()` to produce a subnet in CIDR format:
```
# {{ net_mask | ansible.netcommon.ipaddr('prefix') }}
'24'
# {{ net_mask | ansible.netcommon.ipaddr('net') }}
'192.168.0.0/24'
```
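The two steps can also be combined in a single task that stores the CIDR form as a new fact. A sketch; `default_net_cidr` is an illustrative name:
```
- name: Save the default IPv4 network in CIDR notation
  ansible.builtin.set_fact:
    default_net_cidr: "{{ (ansible_default_ipv4.network + '/' + ansible_default_ipv4.netmask) | ansible.netcommon.ipaddr('net') }}"
```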
Getting information about the network in CIDR notation
------------------------------------------------------
Given an IP address and its netmask, the `ipaddr()` filter can produce the corresponding network address in CIDR notation.
Here’s an example IP address:
```
ip_address = "{{ ansible_default_ipv4.address }}/{{ ansible_default_ipv4.netmask }}"
'192.168.0.11/255.255.255.0'
```
This can be used to obtain the network address in CIDR notation format:
```
# {{ ip_address | ansible.netcommon.ipaddr('network/prefix') }}
'192.168.0.0/24'
```
IP address conversion
---------------------
Here’s our test list again:
```
# Example list of values
test_list = ['192.24.2.1', 'host.fqdn', '::1', '192.168.32.0/24', 'fe80::100/10', True, '', '42540766412265424405338506004571095040/64']
```
You can convert IPv4 addresses into IPv6 addresses:
```
# {{ test_list | ansible.netcommon.ipv4('ipv6') }}
['::ffff:192.24.2.1/128', '::ffff:192.168.32.0/120']
```
Converting from IPv6 to IPv4 succeeds only rarely, because few IPv6 addresses have an IPv4 equivalent:
```
# {{ test_list | ansible.netcommon.ipv6('ipv4') }}
['0.0.0.1/32']
```
But we can make a double conversion if needed:
```
# {{ test_list | ansible.netcommon.ipaddr('ipv6') | ansible.netcommon.ipaddr('ipv4') }}
['192.24.2.1/32', '0.0.0.1/32', '192.168.32.0/24']
```
You can convert IP addresses to integers, just as you can convert integers into IP addresses:
```
# {{ test_list | ansible.netcommon.ipaddr('address') | ansible.netcommon.ipaddr('int') }}
[3222798849, 1, '3232243712/24', '338288524927261089654018896841347694848/10', '42540766412265424405338506004571095040/64']
```
You can convert an IPv4 address to [hexadecimal notation](https://en.wikipedia.org/wiki/Hexadecimal) with an optional delimiter:
```
# {{ '192.168.1.5' | ansible.netcommon.ip4_hex }}
c0a80105
# {{ '192.168.1.5' | ansible.netcommon.ip4_hex(':') }}
c0:a8:01:05
```
You can convert IP addresses to PTR records:
```
# {% for address in test_list | ansible.netcommon.ipaddr %}
# {{ address | ansible.netcommon.ipaddr('revdns') }}
# {% endfor %}
1.2.24.192.in-addr.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.
0.32.168.192.in-addr.arpa.
0.0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa.
0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.d.a.a.f.c.2.3.0.8.b.d.0.1.0.0.2.ip6.arpa.
```
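This makes it straightforward to render reverse DNS zone data from a list of addresses. A sketch of a template fragment; `host_addresses` is a hypothetical variable:
```
# Jinja2 template fragment generating PTR records ('host_addresses' is illustrative)
{% for address in host_addresses %}
{{ address | ansible.netcommon.ipaddr('revdns') }} IN PTR {{ inventory_hostname }}.
{% endfor %}
```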
Converting IPv4 address to a 6to4 address
-----------------------------------------
A [6to4](https://en.wikipedia.org/wiki/6to4) tunnel is a way to access the IPv6 Internet from an IPv4-only network. If you have a public IPv4 address, you can automatically configure its IPv6 equivalent in the `2002::/16` network range. After conversion you will gain access to a `2002:xxxx:xxxx::/48` subnet, which could be split into 65536 `/64` subnets if needed.
To convert your IPv4 address, just send it through the `'6to4'` filter. It will be automatically converted to a router address (with a `::1/48` host address):
```
# {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') }}
2002:c100:0200::1/48
```
Finding IP addresses within a range
-----------------------------------
To find usable IP addresses within an IP range, try these `ipaddr` filters:
To find the next usable IP address in a range, use `next_usable`:
```
# {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('next_usable') }}
192.168.122.2
```
To find the last usable IP address from a range, use `last_usable`:
```
# {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('last_usable') }}
192.168.122.254
```
To find the available range of IP addresses from the given network address, use `range_usable`:
```
# {{ '192.168.122.1/24' | ansible.netcommon.ipaddr('range_usable') }}
192.168.122.1-192.168.122.254
```
To find the peer IP address for a point to point link, use `peer`:
```
# {{ '192.168.122.1/31' | ansible.netcommon.ipaddr('peer') }}
192.168.122.0
# {{ '192.168.122.1/30' | ansible.netcommon.ipaddr('peer') }}
192.168.122.2
```
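This is convenient when templating both ends of a point-to-point link from a single variable. A minimal sketch; `p2p_address` is a hypothetical variable holding a value such as `192.168.122.1/31`:
```
# Jinja2 template fragment for a point-to-point interface
local address: {{ p2p_address | ansible.netcommon.ipaddr('address') }}
peer address: {{ p2p_address | ansible.netcommon.ipaddr('peer') }}
```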
To return the nth IP address from a network, use the `nthhost` filter:
```
# {{ '10.0.0.0/8' | ansible.netcommon.nthhost(305) }}
10.0.1.49
```
`nthhost` also supports a negative value:
```
# {{ '10.0.0.0/8' | ansible.netcommon.nthhost(-1) }}
10.255.255.255
```
To find the next nth usable IP address relative to another address within a range, use `next_nth_usable`. In the following example, `next_nth_usable` returns the second usable IP address after the given address:
```
# {{ '192.168.122.1/24' | ansible.netcommon.next_nth_usable(2) }}
192.168.122.3
```
If there is no usable address, it returns an empty string:
```
# {{ '192.168.122.254/24' | ansible.netcommon.next_nth_usable(2) }}
""
```
Just like `next_nth_usable`, you can use `previous_nth_usable` to find the previous usable address:
```
# {{ '192.168.122.10/24' | ansible.netcommon.previous_nth_usable(2) }}
192.168.122.8
```
Testing if an address belongs to a network range
----------------------------------------------
The `network_in_usable` filter returns whether an address passed as an argument is usable in a network. Usable addresses are addresses that can be assigned to a host. The network ID and the broadcast address are not usable addresses:
```
# {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.1' ) }}
True
# {{ '192.168.0.0/24' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }}
False
# {{ '192.168.0.0/16' | ansible.netcommon.network_in_usable( '192.168.0.255' ) }}
True
```
The `network_in_network` filter returns whether an address or a network passed as an argument is contained in a network:
```
# {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.1' ) }}
True
# {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }}
True
# {{ '192.168.0.0/24' | ansible.netcommon.network_in_network( '192.168.0.255' ) }}
True
# Check if a network is part of another network
# {{ '192.168.0.0/16' | ansible.netcommon.network_in_network( '192.168.0.0/24' ) }}
True
```
To check whether multiple addresses belong to a network, use the `reduce_on_network` filter:
```
# {{ ['192.168.0.34', '10.3.0.3', '192.168.2.34'] | ansible.netcommon.reduce_on_network( '192.168.0.0/24' ) }}
['192.168.0.34']
```
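You could combine this with gathered facts to find which of a host's addresses sits on a given management network. A sketch; the network and the resulting address are illustrative:
```
# {{ ansible_all_ipv4_addresses | ansible.netcommon.reduce_on_network('10.0.0.0/24') }}
['10.0.0.17']
```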
IP Math
-------
New in version 2.7.
The `ipmath()` filter can be used to do simple IP math/arithmetic.
Here are a few simple examples:
```
# Get the fifth next address based on an IP address
# {{ '192.168.1.5' | ansible.netcommon.ipmath(5) }}
192.168.1.10
# Get the tenth previous address based on an IP address
# {{ '192.168.0.5' | ansible.netcommon.ipmath(-10) }}
192.167.255.251
# Get the fifth next address using CIDR notation
# {{ '192.168.1.1/24' | ansible.netcommon.ipmath(5) }}
192.168.1.6
# Get the fifth previous address using CIDR notation
# {{ '192.168.1.6/24' | ansible.netcommon.ipmath(-5) }}
192.168.1.1
# Get the tenth previous address using CIDR notation
# It returns an address in the previous network range
# {{ '192.168.2.6/24' | ansible.netcommon.ipmath(-10) }}
192.168.1.252
# Get the tenth next address in IPv6
# {{ '2001::1' | ansible.netcommon.ipmath(10) }}
2001::b
# Get the tenth previous address in IPv6
# {{ '2001::5' | ansible.netcommon.ipmath(-10) }}
2000:ffff:ffff:ffff:ffff:ffff:ffff:fffb
```
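A common use is deriving a conventional gateway address (the network address plus one) from a host's own address. A sketch, assuming the gateway really is the first address in the network:
```
# {{ '192.168.1.5/24' | ansible.netcommon.ipaddr('network') | ansible.netcommon.ipmath(1) }}
192.168.1.1
```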
Subnet manipulation
-------------------
The `ipsubnet()` filter can be used to manipulate network subnets in several ways.
Here is an example IP address and subnet:
```
address = '192.168.144.5'
subnet = '192.168.0.0/16'
```
To check if a given string is a subnet, pass it through the filter without any arguments. If the given string is an IP address, it will be converted into a subnet:
```
# {{ address | ansible.netcommon.ipsubnet }}
192.168.144.5/32
# {{ subnet | ansible.netcommon.ipsubnet }}
192.168.0.0/16
```
If you specify a subnet size as the first parameter of the `ipsubnet()` filter, and the subnet size is **smaller than the current one**, you will get the number of subnets a given subnet can be split into:
```
# {{ subnet | ansible.netcommon.ipsubnet(20) }}
16
```
The second argument of the `ipsubnet()` filter is an index number; by specifying it you can get a new subnet with the specified size:
```
# First subnet
# {{ subnet | ansible.netcommon.ipsubnet(20, 0) }}
192.168.0.0/20
# Last subnet
# {{ subnet | ansible.netcommon.ipsubnet(20, -1) }}
192.168.240.0/20
# Fifth subnet
# {{ subnet | ansible.netcommon.ipsubnet(20, 5) }}
192.168.80.0/20
# Fifth to last subnet
# {{ subnet | ansible.netcommon.ipsubnet(20, -5) }}
192.168.176.0/20
```
If you specify an IP address instead of a subnet, and give a subnet size as the first argument, the `ipsubnet()` filter will instead return the subnet of that size that contains the given IP address:
```
# {{ address | ansible.netcommon.ipsubnet(20) }}
192.168.144.0/20
```
By specifying an index number as a second argument, you can select smaller and smaller subnets:
```
# First subnet
# {{ address | ansible.netcommon.ipsubnet(18, 0) }}
192.168.128.0/18
# Last subnet
# {{ address | ansible.netcommon.ipsubnet(18, -1) }}
192.168.144.4/31
# Fifth subnet
# {{ address | ansible.netcommon.ipsubnet(18, 5) }}
192.168.144.0/23
# Fifth to last subnet
# {{ address | ansible.netcommon.ipsubnet(18, -5) }}
192.168.144.0/27
```
By specifying another subnet as a second argument, if the second subnet includes the first, you can determine the rank of the first subnet in the second:
```
# The rank of the IP in the subnet (the IP is the 36870th /32 of the subnet)
# {{ address | ansible.netcommon.ipsubnet(subnet) }}
36870
# The rank in the /24 that contains the address
# {{ address | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
6
# An IP with a /30 prefix, in the first /30 of a /24
# {{ '192.168.144.1/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
1
# The fifth /30 subnet in a /24
# {{ '192.168.144.16/30' | ansible.netcommon.ipsubnet('192.168.144.0/24') }}
5
```
If the second subnet doesn’t include the first subnet, the `ipsubnet()` filter raises an error.
You can use the `ipsubnet()` filter with the `ipaddr()` filter to, for example, split a given `/48` prefix into smaller `/64` subnets:
```
# {{ '193.0.2.0' | ansible.netcommon.ipaddr('6to4') | ansible.netcommon.ipsubnet(64, 58820) | ansible.netcommon.ipaddr('1') }}
2002:c100:200:e5c4::1/64
```
Because of the size of IPv6 subnets, iteration over all of them to find the correct one may take some time on slower computers, depending on the size difference between the subnets.
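You can also combine the size/index form with a loop to hand out consecutive subnets, for example one `/64` per item. A sketch with illustrative values:
```
# Jinja2 template fragment: the first four /64 subnets of an illustrative /56
{% for i in range(4) %}
subnet {{ '2001:db8:0:ff00::/56' | ansible.netcommon.ipsubnet(64, i) }}
{% endfor %}
```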
Subnet Merging
--------------
New in version 2.6.
The `cidr_merge()` filter can be used to merge subnets or individual addresses into their minimal representation, collapsing overlapping subnets and merging adjacent ones wherever possible:
```
{{ ['192.168.0.0/17', '192.168.128.0/17', '192.168.128.1' ] | ansible.netcommon.cidr_merge }}
# => ['192.168.0.0/16']
{{ ['192.168.0.0/24', '192.168.1.0/24', '192.168.3.0/24'] | ansible.netcommon.cidr_merge }}
# => ['192.168.0.0/23', '192.168.3.0/24']
```
Changing the action from ‘merge’ to ‘span’ will instead return the smallest subnet which contains all of the inputs:
```
{{ ['192.168.0.0/24', '192.168.3.0/24'] | ansible.netcommon.cidr_merge('span') }}
# => '192.168.0.0/22'
{{ ['192.168.1.42', '192.168.42.1'] | ansible.netcommon.cidr_merge('span') }}
# => '192.168.0.0/18'
```
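Merging is handy for collapsing a long, possibly overlapping list of prefixes before rendering it into an ACL or a route table. A small illustrative example:
```
{{ ['10.0.0.0/25', '10.0.0.128/25', '10.0.1.5'] | ansible.netcommon.cidr_merge }}
# => ['10.0.0.0/24', '10.0.1.5/32']
```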
MAC address filter
------------------
You can use the `hwaddr()` filter to check if a given string is a MAC address or convert it between various formats. Examples:
```
# Example MAC address
macaddress = '1a:2b:3c:4d:5e:6f'
# Check if given string is a MAC address
# {{ macaddress | ansible.netcommon.hwaddr }}
1a:2b:3c:4d:5e:6f
# Convert MAC address to PostgreSQL format
# {{ macaddress | ansible.netcommon.hwaddr('pgsql') }}
1a2b3c:4d5e6f
# Convert MAC address to Cisco format
# {{ macaddress | ansible.netcommon.hwaddr('cisco') }}
1a2b.3c4d.5e6f
```
The supported formats result in the following conversions for the `1a:2b:3c:4d:5e:6f` MAC address:
```
bare: 1A2B3C4D5E6F
bool: True
int: 28772997619311
cisco: 1a2b.3c4d.5e6f
eui48 or win: 1A-2B-3C-4D-5E-6F
linux or unix: 1a:2b:3c:4d:5e:6f
pgsql, postgresql, or psql: 1a2b3c:4d5e6f
```
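Since gathered facts report MAC addresses in the linux/unix format, you can convert a fact directly. A sketch using the `ansible_default_ipv4.macaddress` value from the facts example earlier; the output assumes that fact's example value:
```
# {{ ansible_default_ipv4.macaddress | ansible.netcommon.hwaddr('cisco') }}
fa16.3ec4.bd89
```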
Generate an IPv6 address in Stateless Configuration (SLAAC)
-----------------------------------------------------------
The `slaac()` filter generates an IPv6 address for a given network and MAC address using stateless address autoconfiguration (SLAAC):
```
# {{ 'fdcf:1894:23b5:d38c:0000:0000:0000:0000' | ansible.netcommon.slaac('c2:31:b3:83:bf:2b') }}
fdcf:1894:23b5:d38c:c031:b3ff:fe83:bf2b
```
See also
[ansible.netcommon](https://galaxy.ansible.com/ansible/netcommon)
Ansible network collection for common code
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Using filters to manipulate data](playbooks_filters#playbooks-filters)
Introduction to Jinja2 filters and their uses
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[Loops](playbooks_loops#playbooks-loops)
Looping in playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
ansible Module defaults
===============
If you frequently call the same module with the same arguments, it can be useful to define default arguments for that particular module using the `module_defaults` keyword.
Here is a basic example:
```
- hosts: localhost
module_defaults:
ansible.builtin.file:
owner: root
group: root
mode: 0755
tasks:
- name: Create file1
ansible.builtin.file:
state: touch
path: /tmp/file1
- name: Create file2
ansible.builtin.file:
state: touch
path: /tmp/file2
- name: Create file3
ansible.builtin.file:
state: touch
path: /tmp/file3
```
The `module_defaults` keyword can be used at the play, block, and task level. Any module arguments explicitly specified in a task will override any established default for that module argument:
```
- block:
- name: Print a message
ansible.builtin.debug:
msg: "Different message"
module_defaults:
ansible.builtin.debug:
msg: "Default message"
```
You can remove any previously established defaults for a module by specifying an empty dict:
```
- name: Create file1
ansible.builtin.file:
state: touch
path: /tmp/file1
module_defaults:
file: {}
```
Note
Any module defaults set at the play level (and block/task level when using `include_role` or `import_role`) will apply to any roles used, which may cause unexpected behavior in the role.
Here are some more realistic use cases for this feature.
Interacting with an API that requires auth:
```
- hosts: localhost
module_defaults:
ansible.builtin.uri:
force_basic_auth: true
user: some_user
password: some_password
tasks:
- name: Interact with a web service
ansible.builtin.uri:
url: http://some.api.host/v1/whatever1
- name: Interact with a web service
ansible.builtin.uri:
url: http://some.api.host/v1/whatever2
- name: Interact with a web service
ansible.builtin.uri:
url: http://some.api.host/v1/whatever3
```
Setting a default AWS region for specific EC2-related modules:
```
- hosts: localhost
vars:
my_region: us-west-2
module_defaults:
amazon.aws.ec2:
region: '{{ my_region }}'
community.aws.ec2_instance_info:
region: '{{ my_region }}'
amazon.aws.ec2_vpc_net_info:
region: '{{ my_region }}'
```
Module defaults groups
----------------------
New in version 2.7.
Ansible 2.7 adds a preview-status feature to group together modules that share common sets of parameters. This makes it easier to author playbooks making heavy use of API-based modules such as cloud modules.
| Group | Purpose | Ansible Version |
| --- | --- | --- |
| aws | Amazon Web Services | 2.7 |
| azure | Azure | 2.7 |
| gcp | Google Cloud Platform | 2.7 |
| k8s | Kubernetes | 2.8 |
| os | OpenStack | 2.8 |
| acme | ACME | 2.10 |
| docker\* | Docker | 2.10 |
| ovirt | oVirt | 2.10 |
| vmware | VMware | 2.10 |
* The [docker\_stack](docker_stack_module) module is not included in the `docker` defaults group.
Use the groups with `module_defaults` by prefixing the group name with `group/` - for example `group/aws`.
In a playbook, you can set module defaults for whole groups of modules, such as setting a common AWS region.
```
# example_play.yml
- hosts: localhost
module_defaults:
group/aws:
region: us-west-2
tasks:
- name: Get info
aws_s3_bucket_info:
# now the region is shared between both info modules
- name: Get info
ec2_ami_info:
filters:
name: 'RHEL*7.5*'
```
ansible Encrypting content with Ansible Vault
=====================================
Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt content. If you store your vault passwords in a third-party tool such as a secret manager, you need a script to access them. Use the passwords with the [ansible-vault](../cli/ansible-vault#ansible-vault) command-line tool to create and view encrypted variables, create encrypted files, encrypt existing files, or edit, re-key, or decrypt files. You can then place encrypted content under source control and share it more safely.
Warning
* Encryption with Ansible Vault ONLY protects ‘data at rest’. Once the content is decrypted (‘data in use’), play and plugin authors are responsible for avoiding any secret disclosure, see [no\_log](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#keep-secret-data) for details on hiding output and [Steps to secure your editor](#vault-securing-editor) for security considerations on editors you use with Ansible Vault.
You can use encrypted variables and files in ad hoc commands and playbooks by supplying the passwords you used to encrypt them. You can modify your `ansible.cfg` file to specify the location of a password file or to always prompt for the password.
* [Managing vault passwords](#managing-vault-passwords)
+ [Choosing between a single password and multiple passwords](#choosing-between-a-single-password-and-multiple-passwords)
+ [Managing multiple passwords with vault IDs](#managing-multiple-passwords-with-vault-ids)
- [Limitations of vault IDs](#limitations-of-vault-ids)
- [Enforcing vault ID matching](#enforcing-vault-id-matching)
+ [Storing and accessing vault passwords](#storing-and-accessing-vault-passwords)
- [Storing passwords in files](#storing-passwords-in-files)
- [Storing passwords in third-party tools with vault password client scripts](#storing-passwords-in-third-party-tools-with-vault-password-client-scripts)
* [Encrypting content with Ansible Vault](#id1)
+ [Encrypting individual variables with Ansible Vault](#encrypting-individual-variables-with-ansible-vault)
- [Advantages and disadvantages of encrypting variables](#advantages-and-disadvantages-of-encrypting-variables)
- [Creating encrypted variables](#creating-encrypted-variables)
- [Viewing encrypted variables](#viewing-encrypted-variables)
+ [Encrypting files with Ansible Vault](#encrypting-files-with-ansible-vault)
- [Advantages and disadvantages of encrypting files](#advantages-and-disadvantages-of-encrypting-files)
- [Creating encrypted files](#creating-encrypted-files)
- [Encrypting existing files](#encrypting-existing-files)
- [Viewing encrypted files](#viewing-encrypted-files)
- [Editing encrypted files](#editing-encrypted-files)
- [Changing the password and/or vault ID on encrypted files](#changing-the-password-and-or-vault-id-on-encrypted-files)
- [Decrypting encrypted files](#decrypting-encrypted-files)
- [Steps to secure your editor](#steps-to-secure-your-editor)
* [vim](#vim)
* [Emacs](#emacs)
* [Using encrypted variables and files](#using-encrypted-variables-and-files)
+ [Passing a single password](#passing-a-single-password)
+ [Passing vault IDs](#passing-vault-ids)
+ [Passing multiple vault passwords](#passing-multiple-vault-passwords)
+ [Using `--vault-id` without a vault ID](#using-vault-id-without-a-vault-id)
* [Configuring defaults for using encrypted content](#configuring-defaults-for-using-encrypted-content)
+ [Setting a default vault ID](#setting-a-default-vault-id)
+ [Setting a default password source](#setting-a-default-password-source)
* [When are encrypted files made visible?](#when-are-encrypted-files-made-visible)
* [Speeding up Ansible Vault](#speeding-up-ansible-vault)
* [Format of files encrypted with Ansible Vault](#format-of-files-encrypted-with-ansible-vault)
+ [Ansible Vault payload format 1.1 - 1.2](#ansible-vault-payload-format-1-1-1-2)
Managing vault passwords
------------------------
Managing your encrypted content is easier if you develop a strategy for managing your vault passwords. A vault password can be any string you choose. There is no special command to create a vault password. However, you need to keep track of your vault passwords. Each time you encrypt a variable or file with Ansible Vault, you must provide a password. When you use an encrypted variable or file in a command or playbook, you must provide the same password that was used to encrypt it. To develop a strategy for managing vault passwords, start with two questions:
* Do you want to encrypt all your content with the same password, or use different passwords for different needs?
* Where do you want to store your password or passwords?
### Choosing between a single password and multiple passwords
If you have a small team or few sensitive values, you can use a single password for everything you encrypt with Ansible Vault. Store your vault password securely in a file or a secret manager as described below.
If you have a larger team or many sensitive values, you can use multiple passwords. For example, you can use different passwords for different users or different levels of access. Depending on your needs, you might want a different password for each encrypted file, for each directory, or for each environment. For example, you might have a playbook that includes two vars files, one for the dev environment and one for the production environment, encrypted with two different passwords. When you run the playbook, select the correct vault password for the environment you are targeting, using a vault ID.
### Managing multiple passwords with vault IDs
If you use multiple vault passwords, you can differentiate one password from another with vault IDs. You use the vault ID in three ways:
* Pass it with [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) to the [ansible-vault](../cli/ansible-vault#ansible-vault) command when you create encrypted content
* Include it wherever you store the password for that vault ID (see [Storing and accessing vault passwords](#storing-vault-passwords))
* Pass it with [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) to the [ansible-playbook](../cli/ansible-playbook#ansible-playbook) command when you run a playbook that uses content you encrypted with that vault ID
When you pass a vault ID as an option to the [ansible-vault](../cli/ansible-vault#ansible-vault) command, you add a label (a hint or nickname) to the encrypted content. This label documents which password you used to encrypt it. The encrypted variable or file includes the vault ID label in plain text in the header. The vault ID is the last element before the encrypted content. For example:
```
my_encrypted_var: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
```
In addition to the label, you must provide a source for the related password. The source can be a prompt, a file, or a script, depending on how you are storing your vault passwords. The pattern looks like this:
```
--vault-id label@source
```
If your playbook uses multiple encrypted variables or files that you encrypted with different passwords, you must pass the vault IDs when you run that playbook. You can use [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) by itself, with [`--vault-password-file`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-password-file), or with [`--ask-vault-pass`](../cli/ansible-playbook#cmdoption-ansible-playbook-ask-vault-password). The pattern is the same as when you create encrypted content: include the label and the source for the matching password.
See below for examples of encrypting content with vault IDs and using content encrypted with vault IDs. The [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) option works with any Ansible command that interacts with vaults, including [ansible-vault](../cli/ansible-vault#ansible-vault), [ansible-playbook](../cli/ansible-playbook#ansible-playbook), and so on.
#### Limitations of vault IDs
Ansible does not enforce using the same password every time you use a particular vault ID label. You can encrypt different variables or files with the same vault ID label but different passwords. This usually happens when you type the password at a prompt and make a mistake. It is possible to use different passwords with the same vault ID label on purpose. For example, you could use each label as a reference to a class of passwords, rather than a single password. In this scenario, you must always know which specific password or file to use in context. However, you are more likely to encrypt two files with the same vault ID label and different passwords by mistake. If you encrypt two files with the same label but different passwords by accident, you can [rekey](#rekeying-files) one file to fix the issue.
#### Enforcing vault ID matching
By default the vault ID label is only a hint to remind you which password you used to encrypt a variable or file. Ansible does not check that the vault ID in the header of the encrypted content matches the vault ID you provide when you use the content. Ansible decrypts all files and variables called by your command or playbook that are encrypted with the password you provide. To check the encrypted content and decrypt it only when the vault ID it contains matches the one you provide with `--vault-id`, set the config option [DEFAULT\_VAULT\_ID\_MATCH](../reference_appendices/config#default-vault-id-match). When you set [DEFAULT\_VAULT\_ID\_MATCH](../reference_appendices/config#default-vault-id-match), each password is only used to decrypt data that was encrypted with the same label. This is efficient, predictable, and can reduce errors when different values are encrypted with different passwords.
Note
Even with the [DEFAULT\_VAULT\_ID\_MATCH](../reference_appendices/config#default-vault-id-match) setting enabled, Ansible does not enforce using the same password every time you use a particular vault ID label.
### Storing and accessing vault passwords
You can memorize your vault password, or manually copy vault passwords from any source and paste them at a command-line prompt, but most users store them securely and access them as needed from within Ansible. You have two options for storing vault passwords that work from within Ansible: in files, or in a third-party tool such as the system keyring or a secret manager. If you store your passwords in a third-party tool, you need a vault password client script to retrieve them from within Ansible.
#### Storing passwords in files
To store a vault password in a file, enter the password as a string on a single line in the file. Make sure the permissions on the file are appropriate. Do not add password files to source control.
#### Storing passwords in third-party tools with vault password client scripts
You can store your vault passwords on the system keyring, in a database, or in a secret manager and retrieve them from within Ansible using a vault password client script. Enter the password as a string on a single line. If your password has a vault ID, store it in a way that works with your password storage tool.
To create a vault password client script:
* Create a file with a name ending in either `-client` or `-client.EXTENSION`
* Make the file executable
* Within the script itself:
+ Print the passwords to standard output
+ Accept a `--vault-id` option
+ If the script prompts for data (for example, a database password), send the prompts to standard error
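A minimal sketch of such a script, written as a shell wrapper rather than the bundled Python example; the `my-secret-tool` command is hypothetical and stands in for your real secret manager CLI:
```
#!/bin/sh
# my-vault-client.sh: print the vault password for the requested label.
# Ansible invokes this as: my-vault-client.sh --vault-id <label>
case "$2" in
  dev) my-secret-tool lookup ansible-vault-dev ;;
  prod) my-secret-tool lookup ansible-vault-prod ;;
  *) echo "unknown vault-id label: $2" >&2; exit 1 ;;
esac
```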
When you run a playbook that uses vault passwords stored in a third-party tool, specify the script as the source within the `--vault-id` flag. For example:
```
ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
```
Ansible executes the client script with a `--vault-id` option so the script knows which vault ID label you specified. For example a script loading passwords from a secret manager can use the vault ID label to pick either the ‘dev’ or ‘prod’ password. The example command above results in the following execution of the client script:
```
contrib/vault/vault-keyring-client.py --vault-id dev
```
For an example of a client script that loads passwords from the system keyring, see `contrib/vault/vault-keyring-client.py`.
Encrypting content with Ansible Vault
-------------------------------------
Once you have a strategy for managing and storing vault passwords, you can start encrypting content. You can encrypt two types of content with Ansible Vault: variables and files. Encrypted content always includes the `!vault` tag, which tells Ansible and YAML that the content needs to be decrypted, and a `|` character, which allows multi-line strings. Encrypted content created with `--vault-id` also contains the vault ID label. For more details about the encryption process and the format of content encrypted with Ansible Vault, see [Format of files encrypted with Ansible Vault](#vault-format). This table shows the main differences between encrypted variables and encrypted files:
| | Encrypted variables | Encrypted files |
| --- | --- | --- |
| How much is encrypted? | Variables within a plaintext file | The entire file |
| When is it decrypted? | On demand, only when needed | Whenever loaded or referenced [1](#f1) |
| What can be encrypted? | Only variables | Any structured data file |
`1`
Ansible cannot know if it needs content from an encrypted file unless it decrypts the file, so it decrypts all encrypted files referenced in your playbooks and roles.
### Encrypting individual variables with Ansible Vault
You can encrypt single values inside a YAML file using the [ansible-vault encrypt\_string](../cli/ansible-vault#ansible-vault-encrypt-string) command. For one way to keep your vaulted variables safely visible, see [Keep vaulted variables safely visible](playbooks_best_practices#tip-for-variables-and-vaults).
#### Advantages and disadvantages of encrypting variables
With variable-level encryption, your files are still easily legible. You can mix plaintext and encrypted variables, even inline in a play or role. However, password rotation is not as simple as with file-level encryption. You cannot [rekey](#rekeying-files) encrypted variables. Also, variable-level encryption only works on variables. If you want to encrypt tasks or other content, you must encrypt the entire file.
#### Creating encrypted variables
The [ansible-vault encrypt\_string](../cli/ansible-vault#ansible-vault-encrypt-string) command encrypts and formats any string you type (or copy or generate) into a format that can be included in a playbook, role, or variables file. To create a basic encrypted variable, pass three options to the [ansible-vault encrypt\_string](../cli/ansible-vault#ansible-vault-encrypt-string) command:
* a source for the vault password (prompt, file, or script, with or without a vault ID)
* the string to encrypt
* the string name (the name of the variable)
The pattern looks like this:
```
ansible-vault encrypt_string <password_source> '<string_to_encrypt>' --name '<string_name_of_variable>'
```
For example, to encrypt the string ‘foobar’ using the only password stored in ‘a\_password\_file’ and name the variable ‘the\_secret’:
```
ansible-vault encrypt_string --vault-password-file a_password_file 'foobar' --name 'the_secret'
```
The command above creates this content:
```
the_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a663537646436643839616531643561
63396265333966386166373632626539326166353965363262633030333630313338646335303630
3438626666666137650a353638643435666633633964366338633066623234616432373231333331
6564
```
To encrypt the string ‘foooodev’, add the vault ID label ‘dev’ with the ‘dev’ vault password stored in ‘a\_password\_file’, and call the encrypted variable ‘the\_dev\_secret’:
```
ansible-vault encrypt_string --vault-id dev@a_password_file 'foooodev' --name 'the_dev_secret'
```
The command above creates this content:
```
the_dev_secret: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
```
To encrypt the string ‘letmein’ read from stdin, add the vault ID ‘dev’ using the ‘dev’ vault password stored in `a_password_file`, and name the variable ‘db\_password’:
```
echo -n 'letmein' | ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'db_password'
```
Warning
Typing secret content directly at the command line (without a prompt) leaves the secret string in your shell history. Do not do this outside of testing.
The command above creates this output:
```
Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
db_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
61323931353866666336306139373937316366366138656131323863373866376666353364373761
3539633234313836346435323766306164626134376564330a373530313635343535343133316133
36643666306434616266376434363239346433643238336464643566386135356334303736353136
6565633133366366360a326566323363363936613664616364623437336130623133343530333739
3039
```
To be prompted for a string to encrypt, encrypt it with the ‘dev’ vault password from ‘a\_password\_file’, name the variable ‘new\_user\_password’ and give it the vault ID label ‘dev’:
```
ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'new_user_password'
```
The command above triggers this prompt:
```
Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a new line)
```
Type the string to encrypt (for example, ‘hunter2’), hit ctrl-d, and wait.
Warning
Do not press `Enter` after supplying the string to encrypt. That will add a newline to the encrypted value.
The sequence above creates this output:
```
new_user_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
37636561366636643464376336303466613062633537323632306566653533383833366462366662
6565353063303065303831323539656138653863353230620a653638643639333133306331336365
62373737623337616130386137373461306535383538373162316263386165376131623631323434
3866363862363335620a376466656164383032633338306162326639643635663936623939666238
3161
```
You can add the output from any of the examples above to any playbook, variables file, or role for future use. Encrypted variables are larger than plain-text variables, but they protect your sensitive content while leaving the rest of the playbook, variables file, or role in plain text so you can easily read it.
#### Viewing encrypted variables
You can view the original value of an encrypted variable using the debug module. You must pass the password that was used to encrypt the variable. For example, if you stored the variable created by the last example above in a file called ‘vars.yml’, you could view the unencrypted value of that variable like this:
```
ansible localhost -m ansible.builtin.debug -a var="new_user_password" -e "@vars.yml" --vault-id dev@a_password_file
localhost | SUCCESS => {
"new_user_password": "hunter2"
}
```
### Encrypting files with Ansible Vault
Ansible Vault can encrypt any structured data file used by Ansible, including:
* group variables files from inventory
* host variables files from inventory
* variables files passed to ansible-playbook with `-e @file.yml` or `-e @file.json`
* variables files loaded by `include_vars` or `vars_files`
* variables files in roles
* defaults files in roles
* tasks files
* handlers files
* binary files or other arbitrary files
The full file is encrypted in the vault.
Note
Ansible Vault uses an editor to create or modify encrypted files. See [Steps to secure your editor](#vault-securing-editor) for some guidance on securing the editor.
#### Advantages and disadvantages of encrypting files
File-level encryption is easy to use. Password rotation for encrypted files is straightforward with the [rekey](#rekeying-files) command. Encrypting files can hide not only sensitive values, but the names of the variables you use. However, with file-level encryption the contents of files are no longer easy to access and read. This may be a problem with encrypted tasks files. When encrypting a variables file, see [Keep vaulted variables safely visible](playbooks_best_practices#tip-for-variables-and-vaults) for one way to keep references to these variables in a non-encrypted file. Ansible always decrypts the entire encrypted file when it is loaded or referenced, because Ansible cannot know if it needs the content unless it decrypts it.
#### Creating encrypted files
To create a new encrypted data file called ‘foo.yml’ with the ‘test’ vault password from ‘multi\_password\_file’:
```
ansible-vault create --vault-id test@multi_password_file foo.yml
```
The tool launches an editor (whatever editor you have defined with $EDITOR; the default is vi). Add the content. When you close the editor session, the file is saved as encrypted data. The file header reflects the vault ID used to create it:
```
$ANSIBLE_VAULT;1.2;AES256;test
```
To create a new encrypted data file with the vault ID ‘my\_new\_password’ assigned to it and be prompted for the password:
```
ansible-vault create --vault-id my_new_password@prompt foo.yml
```
Again, add content to the file in the editor and save. Be sure to store the new password you created at the prompt, so you can find it when you want to decrypt that file.
#### Encrypting existing files
To encrypt an existing file, use the [ansible-vault encrypt](../cli/ansible-vault#ansible-vault-encrypt) command. This command can operate on multiple files at once. For example:
```
ansible-vault encrypt foo.yml bar.yml baz.yml
```
To encrypt existing files with the ‘project’ ID and be prompted for the password:
```
ansible-vault encrypt --vault-id project@prompt foo.yml bar.yml baz.yml
```
#### Viewing encrypted files
To view the contents of an encrypted file without editing it, you can use the [ansible-vault view](../cli/ansible-vault#ansible-vault-view) command:
```
ansible-vault view foo.yml bar.yml baz.yml
```
#### Editing encrypted files
To edit an encrypted file in place, use the [ansible-vault edit](../cli/ansible-vault#ansible-vault-edit) command. This command decrypts the file to a temporary file, allows you to edit the content, then saves and re-encrypts the content and removes the temporary file when you close the editor. For example:
```
ansible-vault edit foo.yml
```
To edit a file encrypted with the `vault2` password file and assigned the vault ID `pass2`:
```
ansible-vault edit --vault-id pass2@vault2 foo.yml
```
#### Changing the password and/or vault ID on encrypted files
To change the password on an encrypted file or files, use the [rekey](../cli/ansible-vault#ansible-vault-rekey) command:
```
ansible-vault rekey foo.yml bar.yml baz.yml
```
This command can rekey multiple data files at once and will ask for the original password and also the new password. To set a different ID for the rekeyed files, pass the new ID to `--new-vault-id`. For example, to rekey a list of files encrypted with the ‘preprod1’ vault ID from the ‘ppold’ file to the ‘preprod2’ vault ID and be prompted for the new password:
```
ansible-vault rekey --vault-id preprod1@ppold --new-vault-id preprod2@prompt foo.yml bar.yml baz.yml
```
#### Decrypting encrypted files
If you have an encrypted file that you no longer want to keep encrypted, you can permanently decrypt it by running the [ansible-vault decrypt](../cli/ansible-vault#ansible-vault-decrypt) command. This command will save the file unencrypted to the disk, so be sure you do not want to [edit](../cli/ansible-vault#ansible-vault-edit) it instead.
```
ansible-vault decrypt foo.yml bar.yml baz.yml
```
#### Steps to secure your editor
Ansible Vault relies on your configured editor, which can be a source of disclosures. Most editors have ways to prevent loss of data, but these normally rely on extra plain text files that can have a clear text copy of your secrets. Consult your editor documentation to configure the editor to avoid disclosing secure data. The following sections provide some guidance on common editors but should not be taken as a complete guide to securing your editor.
##### vim
You can set the following `vim` options in command mode to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the `vim` documentation.
1. Disable swapfiles that act like an autosave in case of crash or interruption.
```
set noswapfile
```
2. Disable creation of backup files.
```
set nobackup
set nowritebackup
```
3. Disable the viminfo file from copying data from your current session.
```
set viminfo=
```
4. Disable copying to the system clipboard.
```
set clipboard=
```
You can optionally add these settings in `.vimrc` for all files, or just specific paths or extensions. See the `vim` manual for details.
##### Emacs
You can set the following Emacs options to avoid cases of disclosure. There may be more settings you need to modify to ensure security, especially when using plugins, so consult the Emacs documentation.
1. Do not copy data to the system clipboard.
```
(setq x-select-enable-clipboard nil)
```
2. Disable creation of backup files.
```
(setq make-backup-files nil)
```
3. Disable autosave files.
```
(setq auto-save-default nil)
```
Using encrypted variables and files
-----------------------------------
When you run a task or playbook that uses encrypted variables or files, you must provide the passwords to decrypt the variables or files. You can do this at the command line or in the playbook itself.
### Passing a single password
If all the encrypted variables and files your task or playbook needs use a single password, you can use the [`--ask-vault-pass`](../cli/ansible-playbook#cmdoption-ansible-playbook-ask-vault-password) or [`--vault-password-file`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-password-file) cli options.
To prompt for the password:
```
ansible-playbook --ask-vault-pass site.yml
```
To retrieve the password from the `/path/to/my/vault-password-file` file:
```
ansible-playbook --vault-password-file /path/to/my/vault-password-file site.yml
```
To get the password from the vault password client script `my-vault-password-client.py`:
```
ansible-playbook --vault-password-file my-vault-password-client.py
```
### Passing vault IDs
You can also use the [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) option to pass a single password with its vault label. This approach is clearer when multiple vaults are used within a single inventory.
To prompt for the password for the ‘dev’ vault ID:
```
ansible-playbook --vault-id dev@prompt site.yml
```
To retrieve the password for the ‘dev’ vault ID from the `dev-password` file:
```
ansible-playbook --vault-id dev@dev-password site.yml
```
To get the password for the ‘dev’ vault ID from the vault password client script `my-vault-password-client.py`:
```
ansible-playbook --vault-id dev@my-vault-password-client.py
```
### Passing multiple vault passwords
If your task or playbook requires multiple encrypted variables or files that you encrypted with different vault IDs, you must pass multiple [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) options to specify the vault IDs (‘dev’, ‘prod’, ‘cloud’, ‘db’) and sources for the passwords (prompt, file, script). For example, to use a ‘dev’ password read from a file and to be prompted for the ‘prod’ password:
```
ansible-playbook --vault-id dev@dev-password --vault-id prod@prompt site.yml
```
By default the vault ID labels (dev, prod and so on) are only hints. Ansible attempts to decrypt vault content with each password. The password with the same label as the encrypted data will be tried first, after that each vault secret will be tried in the order they were provided on the command line.
Where the encrypted data has no label, or the label does not match any of the provided labels, the passwords will be tried in the order they are specified. In the example above, the ‘dev’ password will be tried first, then the ‘prod’ password for cases where Ansible doesn’t know which vault ID is used to encrypt something.
### Using `--vault-id` without a vault ID
The [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) option can also be used without specifying a vault-id. This behavior is equivalent to [`--ask-vault-pass`](../cli/ansible-playbook#cmdoption-ansible-playbook-ask-vault-password) or [`--vault-password-file`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-password-file), so it is rarely used.
For example, to use a password file `dev-password`:
```
ansible-playbook --vault-id dev-password site.yml
```
To prompt for the password:
```
ansible-playbook --vault-id @prompt site.yml
```
To get the password from an executable script `my-vault-password-client.py`:
```
ansible-playbook --vault-id my-vault-password-client.py
```
Configuring defaults for using encrypted content
------------------------------------------------
### Setting a default vault ID
If you use one vault ID more frequently than any other, you can set the config option [DEFAULT\_VAULT\_IDENTITY\_LIST](../reference_appendices/config#default-vault-identity-list) to specify a default vault ID and password source. Ansible will use the default vault ID and source any time you do not specify [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id). You can set multiple values for this option. Setting multiple values is equivalent to passing multiple [`--vault-id`](../cli/ansible-playbook#cmdoption-ansible-playbook-vault-id) cli options.
### Setting a default password source
If you use one vault password file more frequently than any other, you can set the [DEFAULT\_VAULT\_PASSWORD\_FILE](../reference_appendices/config#default-vault-password-file) config option or the [`ANSIBLE_VAULT_PASSWORD_FILE`](../reference_appendices/config#envvar-ANSIBLE_VAULT_PASSWORD_FILE) environment variable to specify that file. For example, if you set `ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt`, Ansible will automatically search for the password in that file. This is useful if, for example, you use Ansible from a continuous integration system such as Jenkins.
When are encrypted files made visible?
--------------------------------------
In general, content you encrypt with Ansible Vault remains encrypted after execution. However, there is one exception. If you pass an encrypted file as the `src` argument to the [copy](../collections/ansible/builtin/copy_module#copy-module), [template](../collections/ansible/builtin/template_module#template-module), [unarchive](../collections/ansible/builtin/unarchive_module#unarchive-module), [script](../collections/ansible/builtin/script_module#script-module) or [assemble](../collections/ansible/builtin/assemble_module#assemble-module) module, the file will not be encrypted on the target host (assuming you supply the correct vault password when you run the play). This behavior is intended and useful. You can encrypt a configuration file or template to avoid sharing the details of your configuration, but when you copy that configuration to servers in your environment, you want it to be decrypted so local users and processes can access it.
Speeding up Ansible Vault
-------------------------
If you have many encrypted files, decrypting them at startup may cause a perceptible delay. To speed this up, install the cryptography package:
```
pip install cryptography
```
Format of files encrypted with Ansible Vault
--------------------------------------------
Ansible Vault creates UTF-8 encoded text files. The file format includes a newline-terminated header. For example:
```
$ANSIBLE_VAULT;1.1;AES256
```
or:
```
$ANSIBLE_VAULT;1.2;AES256;vault-id-label
```
The header contains up to four elements, separated by semi-colons (`;`).
1. The format ID (`$ANSIBLE_VAULT`). Currently `$ANSIBLE_VAULT` is the only valid format ID. The format ID identifies content that is encrypted with Ansible Vault (via vault.is\_encrypted\_file()).
2. The vault format version (`1.X`). All supported versions of Ansible will currently default to ‘1.1’ or ‘1.2’ if a labeled vault ID is supplied. The ‘1.0’ format is supported for reading only (and will be converted automatically to the ‘1.1’ format on write). The format version is currently used as an exact string compare only (version numbers are not currently ‘compared’).
3. The cipher algorithm used to encrypt the data (`AES256`). Currently `AES256` is the only supported cipher algorithm. Vault format 1.0 used ‘AES’, but current code always uses ‘AES256’.
4. The vault ID label used to encrypt the data (optional, `vault-id-label`) For example, if you encrypt a file with `--vault-id dev@prompt`, the vault-id-label is `dev`.
Note: In the future, the header could change. Fields after the format ID and format version depend on the format version, and future vault format versions may add more cipher algorithm options and/or additional fields.
The rest of the content of the file is the ‘vaulttext’. The vaulttext is a text armored version of the encrypted ciphertext. Each line is 80 characters wide, except for the last line which may be shorter.
### Ansible Vault payload format 1.1 - 1.2
The vaulttext is a concatenation of the ciphertext and a SHA256 digest, with the result ‘hexlified’.
‘hexlify’ refers to the `hexlify()` method of the Python Standard Library’s [binascii](https://docs.python.org/3/library/binascii.html) module.
The vaulttext is the hexlify()’ed result of:
* hexlify()’ed string of the salt, followed by a newline (`0x0a`)
* hexlify()’ed string of the crypted HMAC, followed by a newline. The HMAC is:
+ a [RFC2104](https://www.ietf.org/rfc/rfc2104.txt) style HMAC
- inputs are:
* The AES256 encrypted ciphertext
* A PBKDF2 key. This key, the cipher key, and the cipher IV are generated from:
+ the salt, in bytes
+ 10000 iterations
+ SHA256() algorithm
+ the first 32 bytes are the cipher key
+ the second 32 bytes are the HMAC key
+ remaining 16 bytes are the cipher IV
* hexlify()’ed string of the ciphertext. The ciphertext is:
+ AES256 encrypted data. The data is encrypted using:
- AES-CTR stream cipher
- cipher key
- IV
- a 128 bit counter block seeded from an integer IV
- the plaintext
* the original plaintext
* padding up to the AES256 blocksize. (The data used for padding is based on [RFC5652](https://tools.ietf.org/html/rfc5652#section-6.3))
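As a minimal sketch of the key derivation described above — an illustration using the `cryptography` package, not Ansible's own implementation — the three values can be derived from a password and salt like this:
```
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_vault_keys(password: bytes, salt: bytes):
    # 80 bytes of key material: 32 (cipher key) + 32 (HMAC key) + 16 (cipher IV)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=80,
                     salt=salt, iterations=10000)
    material = kdf.derive(password)
    return material[:32], material[32:64], material[64:80]
```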
Windows Frequently Asked Questions
==================================
Here are some commonly asked questions in regards to Ansible and Windows and their answers.
Note
This document covers questions about managing Microsoft Windows servers with Ansible. For questions about Ansible Core, please see the [general FAQ page](https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#ansible-faq).
Does Ansible work with Windows XP or Server 2003?
-------------------------------------------------
Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
* Windows Server 2008 1
* Windows Server 2008 R2 1
* Windows Server 2012
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
* Windows 7 1
* Windows 8.1
* Windows 10
1 - See the [Server 2008 FAQ](#windows-faq-server2008) entry for more details.
Ansible also has minimum PowerShell version requirements - please see [Setting up a Windows Host](windows_setup#windows-setup) for the latest information.
Are Server 2008, 2008 R2 and Windows 7 supported?
-------------------------------------------------
Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
Can I manage Windows Nano Server with Ansible?
----------------------------------------------
Ansible does not currently work with Windows Nano Server, since it does not have access to the full .NET Framework that is used by the majority of the modules and internal components.
Can Ansible run on Windows?
---------------------------
No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host natively, though it can run under the Windows Subsystem for Linux (WSL).
Note
The Windows Subsystem for Linux is not supported by Ansible and should not be used for production systems.
To install Ansible on WSL, the following commands can be run in the bash terminal:
```
sudo apt-get update
sudo apt-get install python-pip git libffi-dev libssl-dev -y
pip install --user ansible pywinrm
```
To run Ansible from source instead of a release on the WSL, simply uninstall the pip installed version and then clone the git repo.
```
pip uninstall ansible -y
git clone https://github.com/ansible/ansible.git
source ansible/hacking/env-setup
# To enable Ansible on login, run the following
echo ". ~/ansible/hacking/env-setup -q" >> ~/.bashrc
```
If you encounter timeout errors when running Ansible on the WSL, this may be due to an issue with `sleep` not returning correctly. The following workaround may resolve the issue:
```
mv /usr/bin/sleep /usr/bin/sleep.orig
ln -s /bin/true /usr/bin/sleep
```
Another option is to use WSL 2 if you are running a Windows 10 build later than 2004.
```
wsl --set-default-version 2
```
Can I use SSH keys to authenticate to Windows hosts?
----------------------------------------------------
You cannot use SSH keys with the WinRM or PSRP connection plugins. These connection plugins use X509 certificates for authentication instead of the SSH key pairs that SSH uses.
The way X509 certificates are generated and mapped to a user is different from the SSH implementation; consult the [Windows Remote Management](windows_winrm#windows-winrm) documentation for more information.
Ansible 2.8 has added an experimental option to use the SSH connection plugin, which uses SSH keys for authentication, for Windows servers. See [this question](#windows-faq-ssh) for more information.
Why can I run a command locally that does not work under Ansible?
-----------------------------------------------------------------
Ansible executes commands through WinRM. These processes are different from running a command locally in these ways:
* Unless using an authentication option like CredSSP or Kerberos with credential delegation, the WinRM process does not have the ability to delegate the user’s credentials to a network resource, causing `Access is Denied` errors.
* All processes run under WinRM are in a non-interactive session. Applications that require an interactive session will not work.
* When running through WinRM, Windows restricts access to internal Windows APIs like the Windows Update API and DPAPI, which some installers and programs rely on.
Some ways to bypass these restrictions are to:
* Use `become`, which runs a command as it would when run locally. This will bypass most WinRM restrictions, as Windows is unaware the process is running under WinRM when `become` is used. See the [Understanding privilege escalation: become](become#become) documentation for more information.
* Use a scheduled task, which can be created with `win_scheduled_task`. Like `become`, it will bypass all WinRM restrictions, but it can only be used to run commands, not modules.
* Use `win_psexec` to run a command on the host. PSExec does not use WinRM and so will bypass any of the restrictions.
* To access network resources without any of these workarounds, you can use CredSSP or Kerberos with credential delegation enabled.
See [Understanding privilege escalation: become](become#become) for more info on how to use become. The limitations section at [Windows Remote Management](windows_winrm#windows-winrm) has more details around WinRM limitations.
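For instance, a minimal sketch of using `become` to run an installer that fails under plain WinRM (the path and arguments are illustrative):
```
- name: Run an installer that needs access to DPAPI and other local APIs
  win_command: C:\temp\installer.exe /quiet
  become: yes
  become_method: runas
  become_user: SYSTEM
```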
This program won’t install on Windows with Ansible
--------------------------------------------------
See [this question](#windows-faq-winrm) for more information about WinRM limitations.
What Windows modules are available?
-----------------------------------
Most of the Ansible modules in Ansible Core are written for a combination of Linux/Unix machines and arbitrary web services. These modules are written in Python and most of them do not work on Windows.
Because of this, there are dedicated Windows modules that are written in PowerShell and are meant to be run on Windows hosts. A list of these modules can be found [here](https://docs.ansible.com/ansible/2.9/modules/list_of_windows_modules.html#windows-modules "(in Ansible v2.9)").
In addition, the following Ansible Core modules/action-plugins work with Windows:
* add\_host
* assert
* async\_status
* debug
* fail
* fetch
* group\_by
* include
* include\_role
* include\_vars
* meta
* pause
* raw
* script
* set\_fact
* set\_stats
* setup
* slurp
* template (also: win\_template)
* wait\_for\_connection
Can I run Python modules on Windows hosts?
------------------------------------------
No, the WinRM connection protocol is set to use PowerShell modules, so Python modules will not work. A way to bypass this issue is to use `delegate_to: localhost` to run a Python module on the Ansible controller. This is useful if, during a playbook run, an external service needs to be contacted and there is no equivalent Windows module available.
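For example, the following sketch (the URL is illustrative) contacts a REST endpoint from the controller in the middle of a Windows play:
```
- name: Contact an external service from the Ansible controller
  ansible.builtin.uri:
    url: https://api.example.com/v1/status
  delegate_to: localhost
```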
Can I connect to Windows hosts over SSH?
----------------------------------------
Ansible 2.8 has added an experimental option to use the SSH connection plugin to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the [Win32-OpenSSH](https://github.com/PowerShell/Win32-OpenSSH) fork that is in development with Microsoft on the Windows host(s). While most of the basics should work with SSH, `Win32-OpenSSH` is rapidly changing, with new features added and bugs fixed in every release. It is highly recommended that you [install](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH) the latest release of `Win32-OpenSSH` from the GitHub Releases page when using it with Ansible on Windows hosts.
To use SSH as the connection to a Windows host, set the following variables in the inventory:
```
ansible_connection=ssh
# Set either cmd or powershell, not both
ansible_shell_type=cmd
# ansible_shell_type=powershell
```
The value for `ansible_shell_type` should either be `cmd` or `powershell`. Use `cmd` if the `DefaultShell` has not been configured on the SSH service and `powershell` if that has been set as the `DefaultShell`.
Why is connecting to a Windows host via SSH failing?
----------------------------------------------------
Unless you are using `Win32-OpenSSH` as described above, you must connect to Windows hosts using [Windows Remote Management](windows_winrm#windows-winrm). If your Ansible output indicates that SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
Make sure `ansible_connection: winrm` is set in the inventory for the Windows host(s).
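A minimal INI inventory that sets this (the host name is illustrative):
```
[win]
windows-host.example.com

[win:vars]
ansible_connection=winrm
```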
Why are my credentials being rejected?
--------------------------------------
This can be due to a myriad of reasons unrelated to incorrect credentials.
See HTTP 401/Credentials Rejected at [Setting up a Windows Host](windows_setup#windows-setup) for a more detailed guide to what this could mean.
Why am I getting an error SSL CERTIFICATE\_VERIFY\_FAILED?
----------------------------------------------------------
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to validate the certificate WinRM is using for an HTTPS connection. If the certificate cannot be validated (such as in the case of a self signed cert), it will fail the verification process.
To ignore certificate validation, add `ansible_winrm_server_cert_validation: ignore` to inventory for the Windows host.
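For example, in an INI inventory:
```
[win:vars]
ansible_winrm_server_cert_validation=ignore
```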
See also
[Windows Guides](windows#windows)
The Windows documentation index
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Roles
=====
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.
* [Role directory structure](#role-directory-structure)
* [Storing and finding roles](#storing-and-finding-roles)
* [Using roles](#using-roles)
+ [Using roles at the play level](#using-roles-at-the-play-level)
+ [Including roles: dynamic reuse](#including-roles-dynamic-reuse)
+ [Importing roles: static reuse](#importing-roles-static-reuse)
* [Role argument validation](#role-argument-validation)
+ [Specification format](#specification-format)
+ [Sample specification](#sample-specification)
* [Running a role multiple times in one playbook](#running-a-role-multiple-times-in-one-playbook)
+ [Passing different parameters](#passing-different-parameters)
+ [Using `allow_duplicates: true`](#using-allow-duplicates-true)
* [Using role dependencies](#using-role-dependencies)
+ [Running role dependencies multiple times in one playbook](#running-role-dependencies-multiple-times-in-one-playbook)
* [Embedding modules and plugins in roles](#embedding-modules-and-plugins-in-roles)
* [Sharing roles: Ansible Galaxy](#sharing-roles-ansible-galaxy)
Role directory structure
------------------------
An Ansible role has a defined directory structure with eight main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
```
# playbooks
site.yml
webservers.yml
fooservers.yml
roles/
common/
tasks/
handlers/
library/
files/
templates/
vars/
defaults/
meta/
webservers/
tasks/
defaults/
meta/
```
By default Ansible will look in each directory within a role for a `main.yml` file for relevant content (also `main.yaml` and `main`):
* `tasks/main.yml` - the main list of tasks that the role executes.
* `handlers/main.yml` - handlers, which may be used within or outside this role.
* `library/my_module.py` - modules, which may be used within this role (see [Embedding modules and plugins in roles](#embedding-modules-and-plugins-in-roles) for more information).
* `defaults/main.yml` - default variables for the role (see [Using Variables](playbooks_variables#playbooks-variables) for more information). These variables have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
* `vars/main.yml` - other variables for the role (see [Using Variables](playbooks_variables#playbooks-variables) for more information).
* `files/main.yml` - files that the role deploys.
* `templates/main.yml` - templates that the role deploys.
* `meta/main.yml` - metadata for the role, including role dependencies.
You can add other YAML files in some directories. For example, you can place platform-specific tasks in separate files and refer to them in the `tasks/main.yml` file:
```
# roles/example/tasks/main.yml
- name: Install the correct web server for RHEL
import_tasks: redhat.yml
when: ansible_facts['os_family']|lower == 'redhat'
- name: Install the correct web server for Debian
import_tasks: debian.yml
when: ansible_facts['os_family']|lower == 'debian'
# roles/example/tasks/redhat.yml
- name: Install web server
ansible.builtin.yum:
name: "httpd"
state: present
# roles/example/tasks/debian.yml
- name: Install web server
ansible.builtin.apt:
name: "apache2"
state: present
```
Roles may also include modules and other plugin types in a directory called `library`. For more information, please refer to [Embedding modules and plugins in roles](#embedding-modules-and-plugins-in-roles) below.
Storing and finding roles
-------------------------
By default, Ansible looks for roles in two locations:
* in a directory called `roles/`, relative to the playbook file
* in `/etc/ansible/roles`
If you store your roles in a different location, set the [roles\_path](../reference_appendices/config#default-roles-path) configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See [Configuring Ansible](../installation_guide/intro_configuration#intro-configuration) for details about managing settings in ansible.cfg.
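For example, an `ansible.cfg` entry pointing at a shared roles location (the paths are illustrative):
```
[defaults]
roles_path = ~/ansible/roles:/etc/ansible/roles
```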
Alternatively, you can call a role with a fully qualified path:
```
---
- hosts: webservers
roles:
- role: '/path/to/my/roles/common'
```
Using roles
-----------
You can use roles in three ways:
* at the play level with the `roles` option: This is the classic way of using roles in a play.
* at the tasks level with `include_role`: You can reuse roles dynamically anywhere in the `tasks` section of a play using `include_role`.
* at the tasks level with `import_role`: You can reuse roles statically anywhere in the `tasks` section of a play using `import_role`.
### Using roles at the play level
The classic (original) way to use roles is with the `roles` option for a given play:
```
---
- hosts: webservers
roles:
- common
- webservers
```
When you use the `roles` option at the play level, for each role ‘x’:
* If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
* If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
* If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
* If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
* If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
* Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely.
When you use the `roles` option at the play level, Ansible treats the roles as static imports and processes them during playbook parsing. Ansible executes your playbook in this order:
* Any `pre_tasks` defined in the play.
* Any handlers triggered by pre\_tasks.
* Each role listed in `roles:`, in the order listed. Any role dependencies defined in the role’s `meta/main.yml` run first, subject to tag filtering and conditionals. See [Using role dependencies](#role-dependencies) for more details.
* Any `tasks` defined in the play.
* Any handlers triggered by the roles or tasks.
* Any `post_tasks` defined in the play.
* Any handlers triggered by post\_tasks.
Note
If using tags with tasks in a role, be sure to also tag your pre\_tasks, post\_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See [Tags](playbooks_tags#tags) for details on adding and using tags.
You can pass other keywords to the `roles` option:
```
---
- hosts: webservers
roles:
- common
- role: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
- role: foo_app_instance
vars:
dir: '/opt/b'
app_port: 5001
tags: typeB
```
When you add a tag to the `role` option, Ansible applies the tag to ALL tasks within the role.
When using `vars:` within the `roles:` section of a playbook, the variables are added to the play variables, making them available to all tasks within the play before and after the role. This behavior can be changed by [DEFAULT\_PRIVATE\_ROLE\_VARS](../reference_appendices/config#default-private-role-vars).
### Including roles: dynamic reuse
You can reuse roles dynamically anywhere in the `tasks` section of a play using `include_role`. While roles added in a `roles` section run before any other tasks in a playbook, included roles run in the order they are defined. If there are other tasks before an `include_role` task, the other tasks will run first.
To include a role:
```
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "this task runs before the example role"
- name: Include the example role
include_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "this task runs after the example role"
```
You can pass other keywords, including variables and tags, when including roles:
```
---
- hosts: webservers
tasks:
- name: Include the foo_app_instance role
include_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
...
```
When you add a [tag](playbooks_tags#tags) to an `include_role` task, Ansible applies the tag `only` to the include itself. This means you can pass `--tags` to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See [Selectively running tagged tasks in re-usable files](playbooks_tags#selective-reuse) for details.
You can conditionally include a role:
```
---
- hosts: webservers
tasks:
- name: Include the some_role role
include_role:
name: some_role
when: "ansible_facts['os_family'] == 'RedHat'"
```
### Importing roles: static reuse
You can reuse roles statically anywhere in the `tasks` section of a play using `import_role`. The behavior is the same as using the `roles` keyword. For example:
```
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "before we run our role"
- name: Import the example role
import_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "after we ran our role"
```
You can pass other keywords, including variables and tags, when importing roles:
```
---
- hosts: webservers
tasks:
- name: Import the foo_app_instance role
import_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
...
```
When you add a tag to an `import_role` statement, Ansible applies the tag to `all` tasks within the role. See [Tag inheritance: adding tags to multiple tasks](playbooks_tags#tag-inheritance) for details.
Role argument validation
------------------------
Beginning with version 2.11, you may choose to enable role argument validation based on an argument specification. This specification is defined in the `meta/argument_specs.yml` file (or with the `.yaml` file extension). When this argument specification is defined, a new task is inserted at the beginning of role execution that will validate the parameters supplied for the role against the specification. If the parameters fail validation, the role will fail execution.
Note
Ansible also supports role specifications defined in the role `meta/main.yml` file, as well. However, any role that defines the specs within this file will not work on versions below 2.11. For this reason, we recommend using the `meta/argument_specs.yml` file to maintain backward compatibility.
Note
When role argument validation is used on a role that has defined [dependencies](#role-dependencies), then validation on those dependencies will run before the dependent role, even if argument validation fails for the dependent role.
### Specification format
The role argument specification must be defined in a top-level `argument_specs` block within the role `meta/argument_specs.yml` file. All fields are lower-case.
entry-point-name
* The name of the role entry point.
* This should be `main` in the case of an unspecified entry point.
* This will be the base name of the tasks file to execute, with no `.yml` or `.yaml` file extension.
short\_description
* A short, one-line description of the entry point.
* The `short_description` is displayed by `ansible-doc -t role -l`.
description
* A longer description that may contain multiple lines.
author
* Name of the entry point authors.
* Use a multi-line list if there is more than one author.
options
* Options are often called “parameters” or “arguments”. This section defines those options.
* For each role option (argument), you may include:
option-name
* The name of the option/argument.
description
* Detailed explanation of what this option does. It should be written in full sentences.
type
* The data type of the option. Default is `str`.
* If an option is of type `list`, `elements` should be specified.
required
* Only needed if `true`.
* If missing, the option is not required.
default
* If `required` is false/missing, `default` may be specified (assumed ‘null’ if missing).
* Ensure that the default value in the docs matches the default value in the code. The actual default for the role variable will always come from `defaults/main.yml`.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible (such as true/false or yes/no). Choose the one that reads better in the context of the option.
choices
* List of option values.
* Should be absent if empty.
elements
* Specifies the data type for list elements when type is `list`.
options
* If this option takes a dict or list of dicts, you can define the structure here.
### Sample specification
```
# roles/myapp/meta/argument_specs.yml
---
argument_specs:
# roles/myapp/tasks/main.yml entry point
main:
short_description: The main entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 42
description: "The integer value, defaulting to 42."
myapp_str:
type: "str"
required: true
description: "The string value"
# roles/myapp/tasks/alternate.yml entry point
alternate:
short_description: The alternate entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 1024
description: "The integer value, defaulting to 1024."
```
Running a role multiple times in one playbook
---------------------------------------------
Ansible only executes each role once, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role `foo` once in a play like this:
```
---
- hosts: webservers
roles:
- foo
- bar
- foo
```
You have two options to force Ansible to run a role more than once.
### Passing different parameters
If you pass different parameters in each role definition, Ansible runs the role more than once. Providing different variable values is not the same as passing different role parameters. You must use the `roles` keyword for this behavior, since `import_role` and `include_role` do not accept role parameters.
This playbook runs the `foo` role twice:
```
---
- hosts: webservers
roles:
- { role: foo, message: "first" }
- { role: foo, message: "second" }
```
This syntax also runs the `foo` role twice:
```
---
- hosts: webservers
roles:
- role: foo
message: "first"
- role: foo
message: "second"
```
In these examples, Ansible runs `foo` twice because each role definition has different parameters.
### Using `allow_duplicates: true`
Add `allow_duplicates: true` to the `meta/main.yml` file for the role:
```
# playbook.yml
---
- hosts: webservers
roles:
- foo
- foo
# roles/foo/meta/main.yml
---
allow_duplicates: true
```
In this example, Ansible runs `foo` twice because we have explicitly enabled it to do so.
Using role dependencies
-----------------------
Role dependencies let you automatically pull in other roles when using a role. Ansible does not execute role dependencies when you include or import a role. You must use the `roles` keyword if you want Ansible to execute role dependencies.
Role dependencies are prerequisites, not true dependencies. The roles do not have a parent/child relationship. Ansible loads all listed roles, runs the roles listed under `dependencies` first, then runs the role that lists them. The play object is the parent of all roles, including roles called by a `dependencies` list.
Role dependencies are stored in the `meta/main.yml` file within the role directory. This file should contain a list of roles and parameters to insert before the specified role. For example:
```
# roles/myapp/meta/main.yml
---
dependencies:
- role: common
vars:
some_parameter: 3
- role: apache
vars:
apache_port: 80
- role: postgres
vars:
dbname: blarg
other_parameter: 12
```
Ansible always executes roles listed in `dependencies` before the role that lists them. Ansible executes this pattern recursively when you use the `roles` keyword. For example, if you list role `foo` under `roles:`, role `foo` lists role `bar` under `dependencies` in its meta/main.yml file, and role `bar` lists role `baz` under `dependencies` in its meta/main.yml, Ansible executes `baz`, then `bar`, then `foo`.
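A sketch of the meta files that produce that `baz`, then `bar`, then `foo` ordering:
```
# roles/foo/meta/main.yml
---
dependencies:
  - role: bar

# roles/bar/meta/main.yml
---
dependencies:
  - role: baz
```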
### Running role dependencies multiple times in one playbook
Ansible treats duplicate role dependencies like duplicate roles listed under `roles:`: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters, tags, or when clause defined on the role are different for each definition. If two roles in a playbook both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters, tags, when clause, or use `allow_duplicates: true` in the role you want to run multiple times. See [Galaxy role dependencies](../galaxy/user_guide#galaxy-dependencies) for more details.
Note
Role deduplication does not consult the invocation signature of parent roles. Additionally, when using `vars:` instead of role params, there is a side effect of changing variable scoping. Using `vars:` results in those variables being scoped at the play level. In the below example, using `vars:` would cause `n` to be defined as `4` through the entire play, including roles called before it.
In addition to the above, users should be aware that role de-duplication occurs before variable evaluation. This means that [Lazy Evaluation](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Lazy-Evaluation) may make seemingly different role invocations equivalently the same, preventing the role from running more than once.
For example, a role named `car` depends on a role named `wheel` as follows:
```
---
dependencies:
- role: wheel
n: 1
- role: wheel
n: 2
- role: wheel
n: 3
- role: wheel
n: 4
```
And the `wheel` role depends on two roles: `tire` and `brake`. The `meta/main.yml` for wheel would then contain the following:
```
---
dependencies:
- role: tire
- role: brake
```
And the `meta/main.yml` for `tire` and `brake` would contain the following:
```
---
allow_duplicates: true
```
The resulting order of execution would be as follows:
```
tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car
```
To use `allow_duplicates: true` with role dependencies, you must specify it for the role listed under `dependencies`, not for the role that lists it. In the example above, `allow_duplicates: true` appears in the `meta/main.yml` of the `tire` and `brake` roles. The `wheel` role does not require `allow_duplicates: true`, because each instance defined by `car` uses different parameter values.
Note
See [Using Variables](playbooks_variables#playbooks-variables) for details on how Ansible chooses among variable values defined in different places (variable inheritance and scope).
Embedding modules and plugins in roles
--------------------------------------
If you write a custom module (see [Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)) or a plugin (see [Developing plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-plugins)), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company’s internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal\_config role.
To add a module or a plugin to a role, create a directory named ‘library’ alongside the ‘tasks’ and ‘handlers’ directories of the role, then place the module directly inside the ‘library’ directory.
Assuming you had this:
```
roles/
my_custom_modules/
library/
module1
module2
```
The module will be usable in the role itself, as well as any roles that are called *after* this role, as follows:
```
---
- hosts: webservers
roles:
- my_custom_modules
- some_other_role_using_my_custom_modules
- yet_another_role_using_my_custom_modules
```
If necessary, you can also embed a module in a role to modify a module in Ansible’s core distribution. For example, you can use the development version of a particular module before it is released in production releases by copying the module and embedding the copy in a role. Use this approach with caution, as API signatures may change in core components, and this workaround is not guaranteed to work.
The same mechanism can be used to embed and distribute plugins in a role, using the same schema. For example, for a filter plugin:
```
roles/
my_custom_filter/
filter_plugins
filter1
filter2
```
These filters can then be used in a Jinja template in any role called after ‘my\_custom\_filter’.
Sharing roles: Ansible Galaxy
-----------------------------
[Ansible Galaxy](https://galaxy.ansible.com) is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
The client `ansible-galaxy` is included in Ansible. The Galaxy client allows you to download roles from Ansible Galaxy, and also provides an excellent default framework for creating your own roles.
Read the [Ansible Galaxy documentation](https://galaxy.ansible.com/docs/) page for more information.
See also
[Galaxy User Guide](../galaxy/user_guide#ansible-galaxy)
How to create new roles, share roles on Galaxy, role management
[YAML Syntax](../reference_appendices/yamlsyntax#yaml-syntax)
Learn about YAML syntax
[Working with playbooks](playbooks#working-with-playbooks)
Review the basic Playbook language features
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Using Variables](playbooks_variables#playbooks-variables)
Variables in playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditionals in playbooks
[Loops](playbooks_loops#playbooks-loops)
Loops in playbooks
[Tags](playbooks_tags#tags)
Using tags to select or skip roles/tasks in long playbooks
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules)
Extending Ansible by writing your own modules
[GitHub Ansible examples](https://github.com/ansible/ansible-examples)
Complete playbook files from the GitHub project source
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
Debugging tasks
===============
Ansible offers a task debugger so you can fix errors during execution instead of editing your playbook and running it again to see if your change worked. You have access to all of the features of the debugger in the context of the task. You can check or set the value of variables, update module arguments, and re-run the task with the new variables and arguments. The debugger lets you resolve the cause of the failure and continue with playbook execution.
* [Enabling the debugger](#enabling-the-debugger)
+ [Enabling the debugger with the `debugger` keyword](#enabling-the-debugger-with-the-debugger-keyword)
- [Examples of using the `debugger` keyword](#examples-of-using-the-debugger-keyword)
+ [Enabling the debugger in configuration or an environment variable](#enabling-the-debugger-in-configuration-or-an-environment-variable)
+ [Enabling the debugger as a strategy](#enabling-the-debugger-as-a-strategy)
* [Resolving errors in the debugger](#resolving-errors-in-the-debugger)
* [Available debug commands](#available-debug-commands)
+ [Print command](#print-command)
+ [Update args command](#update-args-command)
+ [Update vars command](#update-vars-command)
+ [Update task command](#update-task-command)
+ [Redo command](#redo-command)
+ [Continue command](#continue-command)
+ [Quit command](#quit-command)
* [How the debugger interacts with the free strategy](#how-the-debugger-interacts-with-the-free-strategy)
Enabling the debugger
---------------------
The debugger is not enabled by default. If you want to invoke the debugger during playbook execution, you must enable it first.
Use one of these three methods to enable the debugger:
* with the debugger keyword
* in configuration or an environment variable, or
* as a strategy
### Enabling the debugger with the `debugger` keyword
New in version 2.5.
You can use the `debugger` keyword to enable (or disable) the debugger for a specific play, role, block, or task. This option is especially useful when developing or extending playbooks, plays, and roles. You can enable the debugger on new or updated tasks. If they fail, you can fix the errors efficiently. The `debugger` keyword accepts five values:
| Value | Result |
| --- | --- |
| always | Always invoke the debugger, regardless of the outcome |
| never | Never invoke the debugger, regardless of the outcome |
| on\_failed | Only invoke the debugger if a task fails |
| on\_unreachable | Only invoke the debugger if a host is unreachable |
| on\_skipped | Only invoke the debugger if the task is skipped |
When you use the `debugger` keyword, the value you specify overrides any global configuration to enable or disable the debugger. If you define `debugger` at multiple levels, such as in a role and in a task, Ansible honors the most granular definition. The definition at the play or role level applies to all blocks and tasks within that play or role, unless they specify a different value. The definition at the block level overrides the definition at the play or role level, and applies to all tasks within that block, unless they specify a different value. The definition at the task level always applies to the task; it overrides the definitions at the block, play, or role level.
#### Examples of using the `debugger` keyword
Example of setting the `debugger` keyword on a task:
```
- name: Execute a command
ansible.builtin.command: "false"
debugger: on_failed
```
Example of setting the `debugger` keyword on a play:
```
- name: My play
hosts: all
debugger: on_skipped
tasks:
- name: Execute a command
ansible.builtin.command: "true"
when: False
```
Example of setting the `debugger` keyword at multiple levels:
```
- name: Play
hosts: all
debugger: never
tasks:
- name: Execute a command
ansible.builtin.command: "false"
debugger: on_failed
```
In this example, the debugger is set to `never` at the play level and to `on_failed` at the task level. If the task fails, Ansible invokes the debugger, because the definition on the task overrides the definition on its parent play.
### Enabling the debugger in configuration or an environment variable
New in version 2.5.
You can enable the task debugger globally with a setting in ansible.cfg or with an environment variable. The only options are `True` or `False`. If you set the configuration option or environment variable to `True`, Ansible runs the debugger on failed tasks by default.
To enable the task debugger from ansible.cfg, add this setting to the defaults section:
```
[defaults]
enable_task_debugger = True
```
To enable the task debugger with an environment variable, pass the variable when you run your playbook:
```
ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook -i hosts site.yml
```
When you enable the debugger globally, every failed task invokes the debugger, unless the role, play, block, or task explicitly disables the debugger. If you need more granular control over what conditions trigger the debugger, use the `debugger` keyword.
### Enabling the debugger as a strategy
If you are running legacy playbooks or roles, you may see the debugger enabled as a [strategy](../plugins/strategy#strategy-plugins). You can do this at the play level, in ansible.cfg, or with the environment variable `ANSIBLE_STRATEGY=debug`. For example:
```
- hosts: test
strategy: debug
tasks:
...
```
Or in ansible.cfg:
```
[defaults]
strategy = debug
```
Note
This backwards-compatible method, which matches Ansible versions before 2.5, may be removed in a future release.
Resolving errors in the debugger
--------------------------------
After Ansible invokes the debugger, you can use the seven [debugger commands](#available-commands) to resolve the error that Ansible encountered. Consider this example playbook, which defines the `var1` variable but uses the undefined `wrong_var` variable in a task by mistake.
```
- hosts: test
debugger: on_failed
gather_facts: no
vars:
var1: value1
tasks:
- name: Use a wrong variable
ansible.builtin.ping: data={{ wrong_var }}
```
If you run this playbook, Ansible invokes the debugger when the task fails. From the debug prompt, you can change the module arguments or the variables and run the task again.
```
PLAY ***************************************************************************
TASK [wrong variable] **********************************************************
fatal: [192.0.2.10]: FAILED! => {"failed": true, "msg": "ERROR! 'wrong_var' is undefined"}
Debugger invoked
[192.0.2.10] TASK: wrong variable (debug)> p result._result
{'failed': True,
'msg': 'The task includes an option with an undefined variable. The error '
"was: 'wrong_var' is undefined\n"
'\n'
'The error appears to have been in '
"'playbooks/debugger.yml': line 7, "
'column 7, but may\n'
'be elsewhere in the file depending on the exact syntax problem.\n'
'\n'
'The offending line appears to be:\n'
'\n'
' tasks:\n'
' - name: wrong variable\n'
' ^ here\n'}
[192.0.2.10] TASK: wrong variable (debug)> p task.args
{u'data': u'{{ wrong_var }}'}
[192.0.2.10] TASK: wrong variable (debug)> task.args['data'] = '{{ var1 }}'
[192.0.2.10] TASK: wrong variable (debug)> p task.args
{u'data': '{{ var1 }}'}
[192.0.2.10] TASK: wrong variable (debug)> redo
ok: [192.0.2.10]
PLAY RECAP *********************************************************************
192.0.2.10 : ok=1 changed=0 unreachable=0 failed=0
```
Changing the task arguments in the debugger to use `var1` instead of `wrong_var` makes the task run successfully.
Available debug commands
------------------------
You can use these seven commands at the debug prompt:
| Command | Shortcut | Action |
| --- | --- | --- |
| print | p | Print information about the task |
| task.args[*key*] = *value* | no shortcut | Update module arguments |
| task\_vars[*key*] = *value* | no shortcut | Update task variables (you must `update_task` next) |
| update\_task | u | Recreate a task with updated task variables |
| redo | r | Run the task again |
| continue | c | Continue executing, starting with the next task |
| quit | q | Quit the debugger |
For more details, see the individual descriptions and examples below.
### Print command
`print *task/task.args/task_vars/host/result*` prints information about the task:
```
[192.0.2.10] TASK: install package (debug)> p task
TASK: install package
[192.0.2.10] TASK: install package (debug)> p task.args
{u'name': u'{{ pkg_name }}'}
[192.0.2.10] TASK: install package (debug)> p task_vars
{u'ansible_all_ipv4_addresses': [u'192.0.2.10'],
u'ansible_architecture': u'x86_64',
...
}
[192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
u'bash'
[192.0.2.10] TASK: install package (debug)> p host
192.0.2.10
[192.0.2.10] TASK: install package (debug)> p result._result
{'_ansible_no_log': False,
'changed': False,
u'failed': True,
...
u'msg': u"No package matching 'not_exist' is available"}
```
### Update args command
`task.args[*key*] = *value*` updates a module argument. This sample playbook has an invalid package name:
```
- hosts: test
strategy: debug
gather_facts: yes
vars:
pkg_name: not_exist
tasks:
- name: Install a package
ansible.builtin.apt: name={{ pkg_name }}
```
When you run the playbook, the invalid package name triggers an error, and Ansible invokes the debugger. You can fix the package name by viewing, then updating the module argument:
```
[192.0.2.10] TASK: install package (debug)> p task.args
{u'name': u'{{ pkg_name }}'}
[192.0.2.10] TASK: install package (debug)> task.args['name'] = 'bash'
[192.0.2.10] TASK: install package (debug)> p task.args
{u'name': 'bash'}
[192.0.2.10] TASK: install package (debug)> redo
```
After you update the module argument, use `redo` to run the task again with the new args.
### Update vars command
`task_vars[*key*] = *value*` updates the `task_vars`. You could fix the playbook above by viewing, then updating the task variables instead of the module args:
```
[192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
u'not_exist'
[192.0.2.10] TASK: install package (debug)> task_vars['pkg_name'] = 'bash'
[192.0.2.10] TASK: install package (debug)> p task_vars['pkg_name']
'bash'
[192.0.2.10] TASK: install package (debug)> update_task
[192.0.2.10] TASK: install package (debug)> redo
```
After you update the task variables, you must use `update_task` to load the new variables before using `redo` to run the task again.
Note
In 2.5 this was updated from `vars` to `task_vars` to avoid conflicts with the `vars()` python function.
### Update task command
New in version 2.8.
`u` or `update_task` recreates the task from the original task data structure and templates with updated task variables. See the entry [Update vars command](#update-vars-command) for an example of use.
### Redo command
`r` or `redo` runs the task again.
### Continue command
`c` or `continue` continues executing, starting with the next task.
### Quit command
`q` or `quit` quits the debugger. The playbook execution is aborted.
How the debugger interacts with the free strategy
-------------------------------------------------
With the default `linear` strategy enabled, Ansible halts execution while the debugger is active, and runs the debugged task immediately after you enter the `redo` command. With the `free` strategy enabled, however, Ansible does not wait for all hosts, and may queue later tasks on one host before a task fails on another host. With the `free` strategy, Ansible does not queue or execute any tasks while the debugger is active. However, all queued tasks remain in the queue and run as soon as you exit the debugger. If you use `redo` to reschedule a task from the debugger, other queued tasks may execute before your rescheduled task. For more information about strategies, see [Controlling playbook execution: strategies and more](playbooks_strategies#playbooks-strategies).
See also
[Executing playbooks for troubleshooting](playbooks_startnstep#playbooks-start-and-step)
Running playbooks while debugging or testing
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Error handling in playbooks
===========================
When Ansible receives a non-zero return code from a command or a failure from a module, by default it stops executing on that host and continues on other hosts. However, in some circumstances you may want different behavior. Sometimes a non-zero return code indicates success. Sometimes you want a failure on one host to stop execution on all hosts. Ansible provides tools and settings to handle these situations and help you get the behavior, output, and reporting you want.
* [Ignoring failed commands](#ignoring-failed-commands)
* [Ignoring unreachable host errors](#ignoring-unreachable-host-errors)
* [Resetting unreachable hosts](#resetting-unreachable-hosts)
* [Handlers and failure](#handlers-and-failure)
* [Defining failure](#defining-failure)
* [Defining “changed”](#defining-changed)
* [Ensuring success for command and shell](#ensuring-success-for-command-and-shell)
* [Aborting a play on all hosts](#aborting-a-play-on-all-hosts)
+ [Aborting on the first error: any\_errors\_fatal](#aborting-on-the-first-error-any-errors-fatal)
+ [Setting a maximum failure percentage](#setting-a-maximum-failure-percentage)
* [Controlling errors in blocks](#controlling-errors-in-blocks)
Ignoring failed commands
------------------------
By default Ansible stops executing tasks on a host when a task fails on that host. You can use `ignore_errors` to continue on in spite of the failure:
```
- name: Do not count this as a failure
ansible.builtin.command: /bin/false
ignore_errors: yes
```
The `ignore_errors` directive only works when the task is able to run and returns a value of ‘failed’. It does not make Ansible ignore undefined variable errors, connection failures, execution issues (for example, missing packages), or syntax errors.
Ignoring unreachable host errors
--------------------------------
New in version 2.7.
You can ignore a task failure due to the host instance being ‘UNREACHABLE’ with the `ignore_unreachable` keyword. Ansible ignores the task errors, but continues to execute future tasks against the unreachable host. For example, at the task level:
```
- name: This executes, fails, and the failure is ignored
ansible.builtin.command: /bin/true
ignore_unreachable: yes
- name: This executes, fails, and ends the play for this host
ansible.builtin.command: /bin/true
```
And at the playbook level:
```
- hosts: all
ignore_unreachable: yes
tasks:
- name: This executes, fails, and the failure is ignored
ansible.builtin.command: /bin/true
- name: This executes, fails, and ends the play for this host
ansible.builtin.command: /bin/true
ignore_unreachable: no
```
Resetting unreachable hosts
---------------------------
If Ansible cannot connect to a host, it marks that host as ‘UNREACHABLE’ and removes it from the list of active hosts for the run. You can use `meta: clear_host_errors` to reactivate all hosts, so subsequent tasks can try to reach them again.
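For example:
```
# Reactivate any hosts marked UNREACHABLE earlier in the play
- meta: clear_host_errors
```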
Handlers and failure
--------------------
Ansible runs [handlers](playbooks_handlers#handlers) at the end of each play. If a task notifies a handler but another task fails later in the play, by default the handler does *not* run on that host, which may leave the host in an unexpected state. For example, a task could update a configuration file and notify a handler to restart some service. If a task later in the same play fails, the configuration file might be changed but the service will not be restarted.
You can change this behavior with the `--force-handlers` command-line option, by including `force_handlers: True` in a play, or by adding `force_handlers = True` to ansible.cfg. When handlers are forced, Ansible will run all notified handlers on all hosts, even hosts with failed tasks. (Note that certain errors could still prevent the handler from running, such as a host becoming unreachable.)
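For example, forcing handlers at the play level (the module arguments are illustrative):
```
- hosts: webservers
  force_handlers: True
  tasks:
    - name: Update the configuration file
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: Restart app
  handlers:
    - name: Restart app
      ansible.builtin.service:
        name: app
        state: restarted
```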
Defining failure
----------------
Ansible lets you define what “failure” means in each task using the `failed_when` conditional. As with all conditionals in Ansible, lists of multiple `failed_when` conditions are joined with an implicit `and`, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit `or` operator.
You may check for failure by searching for a word or phrase in the output of a command:
```
- name: Fail task when the command error output prints FAILED
ansible.builtin.command: /usr/bin/example-command -x -y -z
register: command_result
failed_when: "'FAILED' in command_result.stderr"
```
or based on the return code:
```
- name: Fail task when both files are identical
ansible.builtin.raw: diff foo/file1 bar/file2
register: diff_cmd
failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2
```
You can also combine multiple conditions for failure. This task will fail if both conditions are true:
```
- name: Check if a file exists in temp and fail task if it does
ansible.builtin.command: ls /tmp/this_should_not_be_here
register: result
failed_when:
- result.rc == 0
- '"No such" not in result.stdout'
```
If you want the task to fail when only one condition is satisfied, change the `failed_when` definition to:
```
failed_when: result.rc == 0 or "No such" not in result.stdout
```
If you have too many conditions to fit neatly into one line, you can split it into a multi-line yaml value with `>`:
```
- name: example of many failed_when conditions with OR
ansible.builtin.shell: "./myBinary"
register: ret
failed_when: >
("No such file or directory" in ret.stdout) or
(ret.stderr != '') or
(ret.rc == 10)
```
Defining “changed”
------------------
Ansible lets you define when a particular task has “changed” a remote node using the `changed_when` conditional. This lets you determine, based on return codes or output, whether a change should be reported in Ansible statistics and whether a handler should be triggered or not. As with all conditionals in Ansible, lists of multiple `changed_when` conditions are joined with an implicit `and`, meaning the task only reports a change when *all* conditions are met. If you want to report a change when any of the conditions is met, you must define the conditions in a string with an explicit `or` operator. For example:
```
tasks:
- name: Report 'changed' when the return code is not equal to 2
ansible.builtin.shell: /usr/bin/billybass --mode="take me to the river"
register: bass_result
changed_when: "bass_result.rc != 2"
- name: This will never report 'changed' status
ansible.builtin.shell: wall 'beep'
changed_when: False
```
You can also combine multiple conditions to override “changed” result:
```
- name: Combine multiple conditions to override 'changed' result
ansible.builtin.command: /bin/fake_command
register: result
ignore_errors: True
changed_when:
- '"ERROR" in result.stderr'
- result.rc == 2
```
See [Defining failure](#controlling-what-defines-failure) for more conditional syntax examples.
Ensuring success for command and shell
--------------------------------------
The [command](../collections/ansible/builtin/command_module#command-module) and [shell](../collections/ansible/builtin/shell_module#shell-module) modules care about return codes, so if you have a command whose successful exit code is not zero, you can do this:
```
tasks:
- name: Run this command and ignore the result
ansible.builtin.shell: /usr/bin/somecommand || /bin/true
```
Aborting a play on all hosts
----------------------------
Sometimes you want a failure on a single host, or failures on a certain percentage of hosts, to abort the entire play on all hosts. You can stop play execution after the first failure happens with `any_errors_fatal`. For finer-grained control, you can use `max_fail_percentage` to abort the run after a given percentage of hosts has failed.
### Aborting on the first error: any\_errors\_fatal
If you set `any_errors_fatal` and a task returns an error, Ansible finishes the fatal task on all hosts in the current batch, then stops executing the play on all hosts. Subsequent tasks and plays are not executed. You can recover from fatal errors by adding a [rescue section](playbooks_blocks#block-error-handling) to the block. You can set `any_errors_fatal` at the play or block level:
```
- hosts: somehosts
any_errors_fatal: true
roles:
- myrole
- hosts: somehosts
tasks:
- block:
- include_tasks: mytasks.yml
any_errors_fatal: true
```
You can use this feature when all tasks must be 100% successful to continue playbook execution. For example, if you run a service on machines in multiple data centers with load balancers to pass traffic from users to the service, you want all load balancers to be disabled before you stop the service for maintenance. To ensure that any failure in the task that disables the load balancers will stop all other tasks:
```
---
- hosts: load_balancers_dc_a
any_errors_fatal: true
tasks:
- name: Shut down datacenter 'A'
ansible.builtin.command: /usr/bin/disable-dc
- hosts: frontends_dc_a
tasks:
- name: Stop service
ansible.builtin.command: /usr/bin/stop-software
- name: Update software
ansible.builtin.command: /usr/bin/upgrade-software
- hosts: load_balancers_dc_a
tasks:
- name: Start datacenter 'A'
ansible.builtin.command: /usr/bin/enable-dc
```
In this example Ansible starts the software upgrade on the front ends only if all of the load balancers are successfully disabled.
### Setting a maximum failure percentage
By default, Ansible continues to execute tasks as long as there are hosts that have not yet failed. In some situations, such as when executing a rolling update, you may want to abort the play when a certain threshold of failures has been reached. To achieve this, you can set a maximum failure percentage on a play:
```
---
- hosts: webservers
max_fail_percentage: 30
serial: 10
```
The `max_fail_percentage` setting applies to each batch when you use it with [serial](playbooks_strategies#rolling-update-batch-size). In the example above, if more than 3 of the 10 servers in the first (or any) batch of servers failed, the rest of the play would be aborted.
Note
The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort the play when 2 of the systems failed, set the max\_fail\_percentage at 49 rather than 50.
Controlling errors in blocks
----------------------------
You can also use blocks to define responses to task errors. This approach is similar to exception handling in many programming languages. See [Handling errors with blocks](playbooks_blocks#block-error-handling) for details and examples.
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Conditionals](playbooks_conditionals#playbooks-conditionals)
Conditional statements in playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
| programming_docs |
Setting up a Windows Host
=========================
This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host.
* [Host Requirements](#host-requirements)
+ [Upgrading PowerShell and .NET Framework](#upgrading-powershell-and-net-framework)
+ [WinRM Memory Hotfix](#winrm-memory-hotfix)
* [WinRM Setup](#winrm-setup)
+ [WinRM Listener](#winrm-listener)
- [Setup WinRM Listener](#setup-winrm-listener)
- [Delete WinRM Listener](#delete-winrm-listener)
+ [WinRM Service Options](#winrm-service-options)
+ [Common WinRM Issues](#common-winrm-issues)
- [HTTP 401/Credentials Rejected](#http-401-credentials-rejected)
- [HTTP 500 Error](#http-500-error)
- [Timeout Errors](#timeout-errors)
- [Connection Refused Errors](#connection-refused-errors)
- [Failure to Load Builtin Modules](#failure-to-load-builtin-modules)
* [Windows SSH Setup](#windows-ssh-setup)
+ [Installing Win32-OpenSSH](#installing-win32-openssh)
+ [Configuring the Win32-OpenSSH shell](#configuring-the-win32-openssh-shell)
+ [Win32-OpenSSH Authentication](#win32-openssh-authentication)
+ [Configuring Ansible for SSH on Windows](#configuring-ansible-for-ssh-on-windows)
+ [Known issues with SSH on Windows](#known-issues-with-ssh-on-windows)
Host Requirements
-----------------
For Ansible to communicate to a Windows host and use Windows modules, the Windows host must meet these requirements:
* Ansible can generally manage Windows versions under current and extended support from Microsoft. Ansible can manage desktop OSs including Windows 7, 8.1, and 10, and server OSs including Windows Server 2008, 2008 R2, 2012, 2012 R2, 2016, and 2019.
* Ansible requires PowerShell 3.0 or newer and at least .NET 4.0 to be installed on the Windows host.
* A WinRM listener should be created and activated. More details for this can be found below.
Note
While these are the base requirements for Ansible connectivity, some Ansible modules have additional requirements, such as a newer OS or PowerShell version. Please consult the module’s documentation page to determine whether a host meets those requirements.
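Once a host meets these requirements, you describe how to reach it with connection variables in your inventory. A minimal sketch (the host name, credentials, and transport are illustrative; choose the transport and certificate validation settings appropriate for your environment):
```
# inventory.yml
windows:
  hosts:
    win-host-01.example.com:
  vars:
    ansible_user: Administrator
    ansible_password: SuperSecretPassword
    ansible_connection: winrm
    ansible_winrm_transport: ntlm
    ansible_winrm_server_cert_validation: ignore
```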
### Upgrading PowerShell and .NET Framework
Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7. The base image does not meet this requirement. You can use the [Upgrade-PowerShell.ps1](https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1) script to update these.
This is an example of how to run this script from PowerShell:
```
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1"
$file = "$env:temp\Upgrade-PowerShell.ps1"
$username = "Administrator"
$password = "Password"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
# Version can be 3.0, 4.0 or 5.1
&$file -Version 5.1 -Username $username -Password $password -Verbose
```
Once completed, you will need to remove auto logon and set the execution policy back to the default (`Restricted` for Windows clients, or `RemoteSigned` for Windows servers). You can do this with the following PowerShell commands:
```
# This isn't needed but is a good security practice to complete
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
$reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue
```
The script works by checking to see what programs need to be installed (such as .NET Framework 4.5.2) and what PowerShell version is required. If a reboot is required and the `username` and `password` parameters are set, the script will automatically reboot and logon when it comes back up from the reboot. The script will continue until no more actions are required and the PowerShell version matches the target version. If the `username` and `password` parameters are not set, the script will prompt the user to manually reboot and logon when required. When the user is next logged in, the script will continue where it left off and the process continues until no more actions are required.
Note
If running on Server 2008, then SP2 must be installed. If running on Server 2008 R2 or Windows 7, then SP1 must be installed.
Note
Windows Server 2008 can only install PowerShell 3.0; specifying a newer version will result in the script failing.
Note
The `username` and `password` parameters are stored in plain text in the registry. Make sure the cleanup commands are run after the script finishes to ensure no credentials are still stored on the host.
### WinRM Memory Hotfix
When running on PowerShell v3.0, there is a bug with the WinRM service that limits the amount of memory available to WinRM. Without this hotfix installed, Ansible will fail to execute certain commands on the Windows host. These hotfixes should be installed as part of the system bootstrapping or imaging process. The script [Install-WMF3Hotfix.ps1](https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1) can be used to install the hotfix on affected hosts.
The following PowerShell command will install the hotfix:
```
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1"
$file = "$env:temp\Install-WMF3Hotfix.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file -Verbose
```
For more details, please refer to the [Hotfix document](https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh) from Microsoft.
WinRM Setup
-----------
Once PowerShell has been upgraded to at least version 3.0, the final step is to configure the WinRM service so that Ansible can connect to it. There are two main components of the WinRM service that govern how Ansible interfaces with the Windows host: the `listener` and the `service` configuration settings.
Details about each component can be read below, but the script [ConfigureRemotingForAnsible.ps1](https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1) can be used to set up the basics. This script sets up both HTTP and HTTPS listeners with a self-signed certificate and enables the `Basic` authentication option on the service.
To use this script, run the following in PowerShell:
```
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
$file = "$env:temp\ConfigureRemotingForAnsible.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file
```
There are different switches and parameters (like `-EnableCredSSP` and `-ForceNewSSLCert`) that can be set alongside this script. The documentation for these options is located at the top of the script itself.
Note
The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like `Basic` authentication) that can be inherently insecure.
### WinRM Listener
The WinRM service listens for requests on one or more ports. Each of these ports must have a listener created and configured.
To view the current listeners that are running on the WinRM service, run the following command:
```
winrm enumerate winrm/config/Listener
```
This will output something like:
```
Listener
Address = *
Transport = HTTP
Port = 5985
Hostname
Enabled = true
URLPrefix = wsman
CertificateThumbprint
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
Listener
Address = *
Transport = HTTPS
Port = 5986
Hostname = SERVER2016
Enabled = true
URLPrefix = wsman
CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
```
In the example above there are two listeners activated; one is listening on port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of the key options that are useful to understand are:
* `Transport`: Whether the listener runs over HTTP or HTTPS. It is recommended to use a listener over HTTPS, as the data is encrypted without any further changes required.
* `Port`: The port the listener runs on; by default it is `5985` for HTTP and `5986` for HTTPS. This port can be changed to whatever is required and corresponds to the host var `ansible_port` (see the inventory sketch after this list).
* `URLPrefix`: The URL prefix to listen on; by default it is `wsman`. If this is changed, the host var `ansible_winrm_path` must be set to the same value.
* `CertificateThumbprint`: If running over an HTTPS listener, this is the thumbprint of the certificate in the Windows Certificate Store that is used in the connection. To get the details of the certificate itself, run this command with the relevant certificate thumbprint in PowerShell:
```
$thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object *
```
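As referenced in the list above, a listener with a non-default port or URL prefix must be mirrored by the matching host variables. A minimal inventory sketch (the file name and values are illustrative and must match your actual listener configuration):
```
# host_vars/windows-host.yml
# must match the listener's Port and URLPrefix values
ansible_port: 8888
ansible_winrm_path: /customprefix
```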
#### Setup WinRM Listener
There are three ways to set up a WinRM listener:
* Using `winrm quickconfig` for HTTP or `winrm quickconfig -transport:https` for HTTPS. This is the easiest option to use when running outside of a domain environment and a simple listener is required. Unlike the other options, this process also has the added benefit of opening up the firewall for the required ports and starting the WinRM service.
* Using Group Policy Objects. This is the best way to create a listener when the host is a member of a domain because the configuration is done automatically without any user input. For more information on group policy objects, see the [Group Policy Objects documentation](https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx).
* Using PowerShell to create the listener with a specific configuration. This can be done by running the following PowerShell commands:
```
$selector_set = @{
Address = "*"
Transport = "HTTPS"
}
$value_set = @{
CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
}
New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set
```
To see the other options with this PowerShell cmdlet, see [New-WSManInstance](https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1).
Note
When creating an HTTPS listener, a certificate needs to be created and stored in the `LocalMachine\My` certificate store. Without a certificate present in this store, most commands will fail.
#### Delete WinRM Listener
To remove a WinRM listener:
```
# Remove all listeners
Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
# Only remove listeners that are run over HTTPS
Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force
```
Note
The `Keys` object is an array of strings, so it can contain different values. By default it contains a key for `Transport=` and `Address=`, which correspond to the values from `winrm enumerate winrm/config/Listener`.
### WinRM Service Options
There are a number of options that can be set to control the behavior of the WinRM service component, including authentication options and memory settings.
To get an output of the current service configuration options, run the following command:
```
winrm get winrm/config/Service
winrm get winrm/config/Winrs
```
This will output something like:
```
Service
RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
MaxConcurrentOperations = 4294967295
MaxConcurrentOperationsPerUser = 1500
EnumerationTimeoutms = 240000
MaxConnections = 300
MaxPacketRetrievalTimeSeconds = 120
AllowUnencrypted = false
Auth
Basic = true
Kerberos = true
Negotiate = true
Certificate = true
CredSSP = true
CbtHardeningLevel = Relaxed
DefaultPorts
HTTP = 5985
HTTPS = 5986
IPv4Filter = *
IPv6Filter = *
EnableCompatibilityHttpListener = false
EnableCompatibilityHttpsListener = false
CertificateThumbprint
AllowRemoteAccess = true
Winrs
AllowRemoteShellAccess = true
IdleTimeout = 7200000
MaxConcurrentUsers = 2147483647
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 2147483647
MaxMemoryPerShellMB = 2147483647
MaxShellsPerUser = 2147483647
```
While many of these options should rarely be changed, a few can easily impact the operations over WinRM and are useful to understand. Some of the important options are:
* `Service\AllowUnencrypted`: This option defines whether WinRM will allow traffic that is run over HTTP without message encryption. Message level encryption is only possible when `ansible_winrm_transport` is `ntlm`, `kerberos` or `credssp`. By default this is `false` and should only be set to `true` when debugging WinRM messages.
* `Service\Auth\*`: These flags define what authentication options are allowed with the WinRM service. By default, `Negotiate (NTLM)` and `Kerberos` are enabled.
* `Service\Auth\CbtHardeningLevel`: Specifies whether channel binding tokens are not verified (None), verified but not required (Relaxed), or verified and required (Strict). CBT is only used when connecting with NTLM or Kerberos over HTTPS.
* `Service\CertificateThumbprint`: This is the thumbprint of the certificate used to encrypt the TLS channel used with CredSSP authentication. By default this is empty; a self-signed certificate is generated when the WinRM service starts and is used in the TLS process.
* `Winrs\MaxShellRunTime`: This is the maximum time, in milliseconds, that a remote command is allowed to execute.
* `Winrs\MaxMemoryPerShellMB`: This is the maximum amount of memory allocated per shell, including the shell’s child processes.
To modify a setting under the `Service` key in PowerShell:
```
# substitute {path} with the path to the option after winrm/config/Service
Set-Item -Path WSMan:\localhost\Service\{path} -Value "value here"
# for example, to change Service\Auth\CbtHardeningLevel run
Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict
```
To modify a setting under the `Winrs` key in PowerShell:
```
# Substitute {path} with the path to the option after winrm/config/Winrs
Set-Item -Path WSMan:\localhost\Shell\{path} -Value "value here"
# For example, to change Winrs\MaxShellRunTime run
Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647
```
Note
If running in a domain environment, some of these options are set by GPO and cannot be changed on the host itself. When a key has been configured with GPO, it contains the text `[Source="GPO"]` next to the value.
### Common WinRM Issues
Because WinRM has a wide range of configuration options, it can be difficult to set up and configure. Because of this complexity, problems that Ansible reports could in fact be problems with the host setup instead.
One easy way to determine whether a problem is a host issue is to run the following command from another Windows host to connect to the target Windows host:
```
# Test out HTTP
winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig
# Test out HTTPS (will fail if the cert is not verifiable)
winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig
# Test out HTTPS, ignoring certificate verification
$username = "Username"
$password = ConvertTo-SecureString -String "Password" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
$session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option
```
If this fails, the issue is probably related to the WinRM setup. If it works, the issue may not be related to the WinRM setup; please continue reading for more troubleshooting suggestions.
#### HTTP 401/Credentials Rejected
An HTTP 401 error indicates that the authentication process failed during the initial connection. Some things to check for this are:
* Verify that the credentials are correct and set properly in your inventory with `ansible_user` and `ansible_password`
* Ensure that the user is a member of the local Administrators group or has been explicitly granted access (a connection test with the `winrs` command can be used to rule this out).
* Make sure that the authentication option set by `ansible_winrm_transport` is enabled under `Service\Auth\*`
* If running over HTTP and not HTTPS, use `ntlm`, `kerberos` or `credssp` with `ansible_winrm_message_encryption: auto` to enable message encryption (see the inventory sketch after this list). If using another authentication option, or if the installed pywinrm version cannot be upgraded, the `Service\AllowUnencrypted` option can be set to `true`, but this is only recommended for troubleshooting
* Ensure the downstream packages `pywinrm`, `requests-ntlm`, `requests-kerberos`, and/or `requests-credssp` are up to date using `pip`.
* If using Kerberos authentication, ensure that `Service\Auth\CbtHardeningLevel` is not set to `Strict`.
* When using Basic or Certificate authentication, make sure that the user is a local account and not a domain account. Domain accounts do not work with Basic and Certificate authentication.
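For the message encryption suggestion above, a minimal sketch of the host variables involved (the file name is illustrative):
```
# host_vars/windows-host.yml
ansible_winrm_transport: ntlm
ansible_winrm_message_encryption: auto
```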
#### HTTP 500 Error
These indicate an error has occurred with the WinRM service. Some things to check for include:
* Verify that the number of open shells has not exceeded `WinRsMaxShellsPerUser` and that none of the other Winrs quotas have been exceeded.
#### Timeout Errors
These usually indicate an error with the network connection where Ansible is unable to reach the host. Some things to check for include:
* Make sure the firewall is not set to block the configured WinRM listener ports
* Ensure that a WinRM listener is enabled on the port and path set by the host vars
* Ensure that the `winrm` service is running on the Windows host and configured for automatic start
#### Connection Refused Errors
These usually indicate an error when trying to communicate with the WinRM service on the host. Some things to check for:
* Ensure that the WinRM service is up and running on the host. Use `(Get-Service -Name winrm).Status` to get the status of the service.
* Check that the host firewall is allowing traffic over the WinRM port. By default this is `5985` for HTTP and `5986` for HTTPS.
Sometimes an installer may restart the WinRM or HTTP service and cause this error. The best way to deal with this is to use `win_psexec` from another Windows host.
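A minimal sketch of that workaround, assuming a second reachable Windows host to delegate to (host names, credentials, and variable names are illustrative):
```
- name: Restart the WinRM service on the target host with PsExec
  win_psexec:
    command: powershell.exe -Command "Restart-Service -Name winrm"
    hostnames:
      - target-windows-host
    username: Administrator
    password: "{{ admin_password }}"
  delegate_to: helper-windows-host
```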
#### Failure to Load Builtin Modules
If PowerShell fails with an error message similar to `The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.`, there could be a problem accessing the paths specified by the `PSModulePath` environment variable. A common cause of this issue is that `PSModulePath` contains a UNC path to a file share; because of the double hop/credential delegation issue, the Ansible process cannot access these folders. The way around this problem is to either:
* Remove the UNC path from the `PSModulePath` environment variable, or
* Use an authentication option that supports credential delegation like `credssp` or `kerberos` with credential delegation enabled
See [KB4076842](https://support.microsoft.com/en-us/help/4076842) for more information on this problem.
Windows SSH Setup
-----------------
Ansible 2.8 has added an experimental SSH connection for Windows managed nodes.
Warning
Use this feature at your own risk! Using SSH with Windows is experimental, and the implementation may make backwards incompatible changes in feature releases. The server-side components can be unreliable depending on the version that is installed.
### Installing Win32-OpenSSH
The first step to using SSH with Windows is to install the [Win32-OpenSSH](https://github.com/PowerShell/Win32-OpenSSH) service on the Windows host. Microsoft offers a way to install `Win32-OpenSSH` through a Windows capability, but currently the version installed through this process is too old to work with Ansible. To install `Win32-OpenSSH` for use with Ansible, select one of these four installation options:
* Manually install the service, following the [install instructions](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH) from Microsoft.
* Install the [openssh](https://chocolatey.org/packages/openssh) package using Chocolatey:
```
choco install --package-parameters=/SSHServerFeature openssh
```
* Use `win_chocolatey` to install the service:
```
- name: install the Win32-OpenSSH service
win_chocolatey:
name: openssh
package_params: /SSHServerFeature
state: present
```
* Use an existing Ansible Galaxy role like [jborean93.win\_openssh](https://galaxy.ansible.com/jborean93/win_openssh):
```
# Make sure the role has been downloaded first
ansible-galaxy install jborean93.win_openssh
# main.yml
- name: install Win32-OpenSSH service
hosts: windows
gather_facts: no
roles:
- role: jborean93.win_openssh
opt_openssh_setup_service: True
```
Note
`Win32-OpenSSH` is still a beta product and is constantly being updated to include new features and bugfixes. If you are using SSH as a connection option for Windows, it is highly recommended that you install the latest release using one of the four methods above.
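Since the latest release is recommended, a minimal sketch using the same Chocolatey-based approach shown above:
```
- name: Upgrade the Win32-OpenSSH service to the latest release
  win_chocolatey:
    name: openssh
    package_params: /SSHServerFeature
    state: latest
```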
### Configuring the Win32-OpenSSH shell
By default `Win32-OpenSSH` will use `cmd.exe` as a shell. To configure a different shell, use an Ansible task to define the registry setting:
```
- name: set the default shell to PowerShell
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
type: string
state: present
# Or revert the settings back to the default, cmd
- name: set the default shell to cmd
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
state: absent
```
### Win32-OpenSSH Authentication
Win32-OpenSSH authentication with Windows is similar to SSH authentication on Unix/Linux hosts. You can use a plaintext password or SSH public key authentication: add public keys to an `authorized_keys` file in the `.ssh` folder of the user’s profile directory, and configure the service using the `sshd_config` file used by the SSH service, as you would on a Unix/Linux host.
When using SSH key authentication with Ansible, the remote session won’t have access to the user’s credentials and will fail when attempting to access a network resource. This is also known as the double-hop or credential delegation issue. There are two ways to work around this issue:
* Use plaintext password auth by setting `ansible_password`
* Use `become` on the task with the credentials of the user that needs access to the remote resource (see the sketch below)
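A minimal sketch of the `become` workaround above (the share path, destination, and credential variables are illustrative):
```
- name: Access a network resource by re-authenticating with become
  win_copy:
    src: \\fileserver\share\installer.msi
    dest: C:\Temp\installer.msi
    remote_src: yes
  become: yes
  become_method: runas
  become_user: "{{ domain_username }}"
  vars:
    ansible_become_pass: "{{ domain_password }}"
```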
### Configuring Ansible for SSH on Windows
To configure Ansible to use SSH for Windows hosts, you must set two connection variables:
* set `ansible_connection` to `ssh`
* set `ansible_shell_type` to `cmd` or `powershell`
The `ansible_shell_type` variable should reflect the `DefaultShell` configured on the Windows host. Set to `cmd` for the default shell or set to `powershell` if the `DefaultShell` has been changed to PowerShell.
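A minimal sketch of these two variables in a host\_vars file (the file name is illustrative):
```
# host_vars/windows-host.yml
ansible_connection: ssh
ansible_shell_type: powershell
```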
### Known issues with SSH on Windows
Using SSH with Windows is experimental, and we expect to uncover more issues. Here are the known ones:
* Win32-OpenSSH versions older than `v7.9.0.0p1-Beta` do not work when `powershell` is the shell type
* While SCP should work, SFTP is the recommended SSH file transfer mechanism to use when copying or fetching a file
See also
[Intro to playbooks](playbooks_intro#about-playbooks)
An introduction to playbooks
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[List of Windows Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_windows_modules.html#windows-modules "(in Ansible v2.9)")
Windows specific module list, all implemented in PowerShell
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Controlling where tasks run: delegation and local actions
=========================================================
By default Ansible gathers facts and executes all tasks on the machines that match the `hosts` line of your playbook. This page shows you how to delegate tasks to a different machine or group, delegate facts to specific machines or groups, or run an entire playbook locally. Using these approaches, you can manage inter-related environments precisely and efficiently. For example, when updating your webservers, you might need to remove them from a load-balanced pool temporarily. You cannot perform this task on the webservers themselves. By delegating the task to localhost, you keep all the tasks within the same play.
* [Tasks that cannot be delegated](#tasks-that-cannot-be-delegated)
* [Delegating tasks](#delegating-tasks)
* [Delegating facts](#delegating-facts)
* [Local playbooks](#local-playbooks)
Tasks that cannot be delegated
------------------------------
Some tasks always execute on the controller. These tasks, including `include`, `add_host`, and `debug`, cannot be delegated.
Delegating tasks
----------------
If you want to perform a task on one host with reference to other hosts, use the `delegate_to` keyword on a task. This is ideal for managing nodes in a load balanced pool or for controlling outage windows. You can use delegation with the [serial](playbooks_strategies#rolling-update-batch-size) keyword to control the number of hosts executing at one time:
```
---
- hosts: webservers
serial: 5
tasks:
- name: Take out of load balancer pool
ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
- name: Actual steps would go here
ansible.builtin.yum:
name: acme-web-stack
state: latest
- name: Add back to load balancer pool
ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
```
The first and third tasks in this play run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: `local_action`. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1:
```
---
# ...
tasks:
- name: Take out of load balancer pool
local_action: ansible.builtin.command /usr/bin/take_out_of_pool {{ inventory_hostname }}
# ...
- name: Add back to load balancer pool
local_action: ansible.builtin.command /usr/bin/add_back_to_pool {{ inventory_hostname }}
```
You can use a local action to call ‘rsync’ to recursively copy files to the managed servers:
```
---
# ...
tasks:
- name: Recursively copy files from management server to target
local_action: ansible.builtin.command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/
```
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync asks for a passphrase.
To specify more arguments, use the following syntax:
```
---
# ...
tasks:
- name: Send summary mail
local_action:
module: community.general.mail
subject: "Summary Mail"
to: "{{ mail_recipient }}"
body: "{{ mail_body }}"
run_once: True
```
The `ansible_host` variable reflects the host a task is delegated to.
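For example, a quick way to see this behavior (a minimal sketch):
```
- name: Show the host this task was delegated to
  ansible.builtin.debug:
    var: ansible_host
  delegate_to: 127.0.0.1
```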
Delegating facts
----------------
Delegating Ansible tasks is like delegating tasks in the real world - your groceries belong to you, even if someone else delivers them to your home. Similarly, any facts gathered by a delegated task are assigned by default to the `inventory_hostname` (the current host), not to the host which produced the facts (the delegated to host). To assign gathered facts to the delegated host instead of the current host, set `delegate_facts` to `true`:
```
---
- hosts: app_servers
tasks:
- name: Gather facts from db servers
ansible.builtin.setup:
delegate_to: "{{ item }}"
delegate_facts: true
loop: "{{ groups['dbservers'] }}"
```
This task gathers facts for the machines in the dbservers group and assigns the facts to those machines, even though the play targets the app\_servers group. This way you can look up `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or were left out by using `--limit`.
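For example, a later task can read back a fact delegated in this way (a minimal sketch, assuming a host named `dbhost1` in the dbservers group):
```
- name: Use a fact gathered on and delegated to a db server
  ansible.builtin.debug:
    msg: "{{ hostvars['dbhost1']['ansible_default_ipv4']['address'] }}"
```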
Local playbooks
---------------
It may be useful to run a playbook locally on a remote host, rather than by connecting over SSH. This can be useful for assuring the configuration of a system by putting a playbook in a crontab. This may also be used to run a playbook inside an OS installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the `hosts:` line to `hosts: 127.0.0.1` and then run the playbook like so:
```
ansible-playbook playbook.yml --connection=local
```
Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the default remote connection type:
```
---
- hosts: 127.0.0.1
connection: local
```
Note
If you set the connection to local and there is no ansible\_python\_interpreter set, modules will run under /usr/bin/python and not under {{ ansible\_playbook\_python }}. Be sure to set ansible\_python\_interpreter: "{{ ansible\_playbook\_python }}" in host\_vars/localhost.yml, for example (see the sketch below). You can avoid this issue by using `local_action` or `delegate_to: localhost` instead.
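A minimal sketch of the host\_vars file suggested in this note:
```
# host_vars/localhost.yml
ansible_python_interpreter: "{{ ansible_playbook_python }}"
```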
See also
[Intro to playbooks](playbooks_intro#playbooks-intro)
An introduction to playbooks
[Controlling playbook execution: strategies and more](playbooks_strategies#playbooks-strategies)
More ways to control how and where Ansible executes
[Ansible Examples on GitHub](https://github.com/ansible/ansible-examples)
Many examples of full-stack deployments
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Conditionals
============
In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals.
Ansible uses Jinja2 [tests](playbooks_tests#playbooks-tests) and [filters](playbooks_filters#playbooks-filters) in conditionals. Ansible supports all the standard tests and filters, and adds some unique ones as well.
Note
There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at <https://jinja.palletsprojects.com/en/master/templates/#comparisons>.
* [Basic conditionals with `when`](#basic-conditionals-with-when)
+ [Conditionals based on ansible\_facts](#conditionals-based-on-ansible-facts)
+ [Conditions based on registered variables](#conditions-based-on-registered-variables)
+ [Conditionals based on variables](#conditionals-based-on-variables)
+ [Using conditionals in loops](#using-conditionals-in-loops)
+ [Loading custom facts](#loading-custom-facts)
+ [Conditionals with re-use](#conditionals-with-re-use)
- [Conditionals with imports](#conditionals-with-imports)
- [Conditionals with includes](#conditionals-with-includes)
- [Conditionals with roles](#conditionals-with-roles)
+ [Selecting variables, files, or templates based on facts](#selecting-variables-files-or-templates-based-on-facts)
- [Selecting variables files based on facts](#selecting-variables-files-based-on-facts)
- [Selecting files and templates based on facts](#selecting-files-and-templates-based-on-facts)
* [Commonly-used facts](#commonly-used-facts)
+ [ansible\_facts[‘distribution’]](#ansible-facts-distribution)
+ [ansible\_facts[‘distribution\_major\_version’]](#ansible-facts-distribution-major-version)
+ [ansible\_facts[‘os\_family’]](#ansible-facts-os-family)
Basic conditionals with `when`
------------------------------
The simplest conditional statement applies to a single task. Create the task, then add a `when` statement that applies a test. The `when` clause is a raw Jinja2 expression without double curly braces (see [group\_by\_module](https://docs.ansible.com/ansible/3/collections/ansible/builtin/group_by_module.html#group-by-module "(in Ansible v3)")). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled:
```
tasks:
- name: Configure SELinux to start mysql on any port
ansible.posix.seboolean:
name: mysql_connect_any
state: true
persistent: yes
when: ansible_selinux.status == "enabled"
# all variables can be used directly in conditionals without double curly braces
```
### Conditionals based on ansible\_facts
Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts:
* You can install a certain package only when the operating system is a particular version.
* You can skip configuring a firewall on hosts with internal IP addresses.
* You can perform cleanup tasks only when a filesystem is getting full.
See [Commonly-used facts](#commonly-used-facts) for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the ‘lsb\_major\_release’ fact used in an example below only exists when the lsb\_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook:
```
- name: Show facts available on the system
ansible.builtin.debug:
var: ansible_facts
```
Here is a sample conditional based on a fact:
```
tasks:
- name: Shut down Debian flavored systems
ansible.builtin.command: /sbin/shutdown -t now
when: ansible_facts['os_family'] == "Debian"
```
If you have multiple conditions, you can group them with parentheses:
```
tasks:
- name: Shut down CentOS 6 and Debian 7 systems
ansible.builtin.command: /sbin/shutdown -t now
when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
(ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
```
You can use [logical operators](https://jinja.palletsprojects.com/en/master/templates/#logic) to combine conditions. When you have multiple conditions that all need to be true (that is, a logical `and`), you can specify them as a list:
```
tasks:
- name: Shut down CentOS 6 systems
ansible.builtin.command: /sbin/shutdown -t now
when:
- ansible_facts['distribution'] == "CentOS"
- ansible_facts['distribution_major_version'] == "6"
```
If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer:
```
tasks:
- ansible.builtin.shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
```
### Conditions based on registered variables
Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable:
1. Register the outcome of the earlier task as a variable.
2. Create a conditional test based on the registered variable.
You create the name of the registered variable using the `register` keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional `when` statements. You can access the string contents of the registered variable using `variable.stdout`. For example:
```
- name: Test play
hosts: all
tasks:
- name: Register a variable
ansible.builtin.shell: cat /etc/motd
register: motd_contents
- name: Use the variable in conditional statement
ansible.builtin.shell: echo "motd contains the word hi"
when: motd_contents.stdout.find('hi') != -1
```
You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either `stdout_lines` or with `variable.stdout.split()`. You can also split the lines by other fields:
```
- name: Registered variable usage as a loop list
hosts: all
tasks:
- name: Retrieve the list of home directories
ansible.builtin.command: ls /home
register: home_dirs
- name: Add home dirs to the backup spooler
ansible.builtin.file:
path: /mnt/bkspool/{{ item }}
src: /home/{{ item }}
state: link
loop: "{{ home_dirs.stdout_lines }}"
# same as loop: "{{ home_dirs.stdout.split() }}"
```
The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable’s string contents for emptiness:
```
- name: check registered variable for emptiness
hosts: all
tasks:
- name: List contents of directory
ansible.builtin.command: ls mydir
register: contents
- name: Check contents for emptiness
ansible.builtin.debug:
msg: "Directory is empty"
when: contents.stdout == ""
```
Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for `is skipped` (not for “undefined” or “default”). See [Registering variables](playbooks_variables#registered-variables) for more information. Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs:
```
tasks:
- name: Register a variable, ignore errors and continue
ansible.builtin.command: /bin/false
register: result
ignore_errors: true
- name: Run only if the task that registered the "result" variable fails
ansible.builtin.command: /bin/something
when: result is failed
- name: Run only if the task that registered the "result" variable succeeds
ansible.builtin.command: /bin/something_else
when: result is succeeded
- name: Run only if the task that registered the "result" variable is skipped
ansible.builtin.command: /bin/still/something_else
when: result is skipped
```
Note
Older versions of Ansible used `success` and `fail`, but `succeeded` and `failed` use the correct tense. All of these options are now valid.
### Conditionals based on variables
You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the `| bool` filter to non-boolean variables, such as string variables with content like ‘yes’, ‘on’, ‘1’, or ‘true’. You can define variables like this:
```
vars:
epic: true
monumental: "yes"
```
With the variables above, Ansible would run one of these tasks and skip the other:
```
tasks:
- name: Run the command if "epic" or "monumental" is true
ansible.builtin.shell: echo "This certainly is epic!"
when: epic or monumental | bool
- name: Run the command if "epic" is false
ansible.builtin.shell: echo "This certainly isn't epic!"
when: not epic
```
If a required variable has not been set, you can skip or fail using Jinja2’s `defined` test. For example:
```
tasks:
- name: Run the command if "foo" is defined
ansible.builtin.shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
when: foo is defined
- name: Fail if "bar" is undefined
ansible.builtin.fail: msg="Bailing out. This play requires 'bar'"
when: bar is undefined
```
This is especially useful in combination with the conditional import of vars files (see below). As the examples show, you do not need to use `{{ }}` to use variables inside conditionals, as these are already implied.
### Using conditionals in loops
If you combine a `when` statement with a [loop](playbooks_loops#playbooks-loops), Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example:
```
tasks:
- name: Run with items greater than 5
ansible.builtin.command: echo {{ item }}
loop: [ 0, 2, 4, 6, 8, 10 ]
when: item > 5
```
If you need to skip the whole task when the loop variable is undefined, use the `|default` filter to provide an empty iterator. For example, when looping over a list:
```
- name: Skip the whole task when a loop variable is undefined
ansible.builtin.command: echo {{ item }}
loop: "{{ mylist|default([]) }}"
when: item > 5
```
You can do the same thing when looping over a dict:
```
- name: The same as above using a dict
ansible.builtin.command: echo {{ item.key }}
loop: "{{ query('dict', mydict|default({})) }}"
when: item.value > 5
```
### Loading custom facts
You can provide your own facts, as described in [Should you develop a module?](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html#developing-modules). To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:
```
tasks:
- name: Gather site specific fact data
action: site_facts
- name: Use a custom fact
ansible.builtin.command: /usr/bin/thingy
when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
```
### Conditionals with re-use
You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See [Re-using Ansible artifacts](playbooks_reuse#playbooks-reuse) for more information on re-use in Ansible.
#### Conditionals with imports
When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of [Tag inheritance: adding tags to multiple tasks](playbooks_tags#tag-inheritance). Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called `main.yml` and a tasks file called `other_tasks.yml`:
```
# all tasks within an imported file inherit the condition from the import statement
# main.yml
- import_tasks: other_tasks.yml # note "import"
when: x is not defined
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
- name: Print a variable
ansible.builtin.debug:
var: x
```
Ansible expands this at execution time to the equivalent of:
```
- name: Set a variable if not defined
ansible.builtin.set_fact:
x: foo
when: x is not defined
# this task sets a value for x
- name: Do the task if "x" is not defined
ansible.builtin.debug:
var: x
when: x is not defined
# Ansible skips this task, because x is now defined
```
Thus if `x` is initially undefined, the `debug` task will be skipped. If this is not the behavior you want, use an `include_*` statement to apply a condition only to that statement itself.
You can apply conditions to `import_playbook` as well as to the other `import_*` statements. When you use this approach, Ansible returns a ‘skipped’ message for every task on every host that does not match the criteria, creating repetitive output. In many cases the [group\_by module](../collections/ansible/builtin/group_by_module#group-by-module) can be a more streamlined way to accomplish the same objective; see [Handling OS and distro differences](playbooks_best_practices#os-variance).
#### Conditionals with includes
When you use a conditional on an `include_*` statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import:
```
# Includes let you re-use a file to define a variable when it is not already defined
# main.yml
- include_tasks: other_tasks.yml
when: x is not defined
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
- name: Print a variable
ansible.builtin.debug:
var: x
```
Ansible expands this at execution time to the equivalent of:
```
# main.yml
- include_tasks: other_tasks.yml
when: x is not defined
# if condition is met, Ansible includes other_tasks.yml
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
# no condition applied to this task, Ansible sets the value of x to foo
- name: Print a variable
ansible.builtin.debug:
var: x
# no condition applied to this task, Ansible prints the debug statement
```
By using `include_tasks` instead of `import_tasks`, both tasks from `other_tasks.yml` will be executed as expected. For more information on the differences between `include` v `import` see [Re-using Ansible artifacts](playbooks_reuse#playbooks-reuse).
#### Conditionals with roles
There are three ways to apply conditions to roles:
* Add the same condition or conditions to all tasks in the role by placing your `when` statement under the `roles` keyword. See the example in this section.
* Add the same condition or conditions to all tasks in the role by placing your `when` statement on a static `import_role` in your playbook.
* Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your `when` statement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic `include_role` in your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that `when` statement. For a minimal sketch, see the example after the `roles` keyword example below.
When you incorporate a role in your playbook statically with the `roles` keyword, Ansible adds the conditions you define to all the tasks in the role. For example:
```
- hosts: webservers
roles:
- role: debian_stock_config
when: ansible_facts['os_family'] == 'Debian'
```
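For the dynamic `include_role` approach described in the list above, a minimal sketch reusing the same role name:
```
- hosts: webservers
  tasks:
    - name: Apply the condition to the include and to matching tasks in the role
      ansible.builtin.include_role:
        name: debian_stock_config
      when: ansible_facts['os_family'] == 'Debian'
```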
### Selecting variables, files, or templates based on facts
Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. To load different variables file, templates, or other files based on a fact about the hosts:
1. name your vars files, templates, or files to match the Ansible fact that differentiates them
2. select the correct vars file, template, or file for each host with a variable based on that Ansible fact
Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track.
#### Selecting variables files based on facts
You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example:
```
---
# for vars/RedHat.yml
apache: httpd
somethingelse: 42
```
Then import those variables files based on the facts you gather on the hosts in your playbook:
```
---
- hosts: webservers
remote_user: root
vars_files:
- "vars/common.yml"
- [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ]
tasks:
- name: Make sure apache is started
ansible.builtin.service:
name: '{{ apache }}'
state: started
```
Ansible gathers facts on the hosts in the webservers group, then interpolates the variable “ansible\_facts[‘os\_family’]” into a list of filenames. If you have hosts with Red Hat operating systems (CentOS, for example), Ansible looks for ‘vars/RedHat.yml’. If that file does not exist, Ansible attempts to load ‘vars/os\_defaults.yml’. For Debian hosts, Ansible first looks for ‘vars/Debian.yml’, before falling back on ‘vars/os\_defaults.yml’. If no files in the list are found, Ansible raises an error.
#### Selecting files and templates based on facts
You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions.
For example, you can template out a configuration file that is very different between, say, CentOS and Debian:
```
- name: Template a file
ansible.builtin.template:
src: "{{ item }}"
dest: /etc/myapp/foo.conf
loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}"
vars:
myfiles:
- "{{ ansible_facts['distribution'] }}.conf"
- default.conf
mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']
```
Commonly-used facts
-------------------
The following Ansible facts are frequently used in conditionals.
### ansible\_facts[‘distribution’]
Possible values (sample, not complete list):
```
Alpine
Altlinux
Amazon
Archlinux
ClearLinux
Coreos
CentOS
Debian
Fedora
Gentoo
Mandriva
NA
OpenWrt
OracleLinux
RedHat
Slackware
SLES
SMGL
SUSE
Ubuntu
VMwareESX
```
### ansible\_facts[‘distribution\_major\_version’]
The major version of the operating system. For example, the value is `16` for Ubuntu 16.04.
### ansible\_facts[‘os\_family’]
Possible values (sample, not complete list):
```
AIX
Alpine
Altlinux
Archlinux
Darwin
Debian
FreeBSD
Gentoo
HP-UX
Mandrake
RedHat
SGML
Slackware
Solaris
Suse
Windows
```
See also
[Working with playbooks](playbooks#working-with-playbooks)
An introduction to playbooks
[Roles](playbooks_reuse_roles#playbooks-reuse-roles)
Playbook organization by roles
[Tips and tricks](playbooks_best_practices#playbooks-best-practices)
Tips and tricks for playbooks
[Using Variables](playbooks_variables#playbooks-variables)
All about variables
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Windows performance
===================
This document offers some performance optimizations you might like to apply to your Windows hosts to speed them up, both specifically in the context of using Ansible with them and in general.
Optimise PowerShell performance to reduce Ansible task overhead
---------------------------------------------------------------
To speed up the startup of PowerShell by around 10x, run the following PowerShell snippet in an Administrator session. Expect it to take tens of seconds.
Note
If native images have already been created by the ngen task or service, you will observe no difference in performance (but this snippet will at that point execute faster than otherwise).
```
function Optimize-PowershellAssemblies {
# NGEN powershell assembly, improves startup time of powershell by 10x
$old_path = $env:path
try {
$env:path = [Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
[AppDomain]::CurrentDomain.GetAssemblies() | % {
if (! $_.location) {continue}
$Name = Split-Path $_.location -leaf
if ($Name.startswith("Microsoft.PowerShell.")) {
Write-Progress -Activity "Native Image Installation" -Status "$name"
ngen install $_.location | % {"`t$_"}
}
}
} finally {
$env:path = $old_path
}
}
Optimize-PowershellAssemblies
```
PowerShell is used by every Windows Ansible module. This optimisation reduces the time PowerShell takes to start up, removing that overhead from every invocation.
This snippet uses [the native image generator, ngen](https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#WhenToUse) to pre-emptively create native images for the assemblies that PowerShell relies on.
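If you prefer to apply this optimization through Ansible itself, one approach is to save the snippet above to a file and run it with the `script` module (a minimal sketch; the file path is illustrative, and the connection user needs administrative rights):
```
- name: Optimize PowerShell assemblies with ngen
  ansible.builtin.script: files/Optimize-PowershellAssemblies.ps1
```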
Fix high-CPU-on-boot for VMs/cloud instances
--------------------------------------------
If you are creating golden images to spawn instances from, you can avoid a disruptive high-CPU task near startup by [processing the ngen queue](https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-image-service) during golden image creation, provided you know the CPU types won’t change between the golden image build process and runtime.
Place the following near the end of your playbook, bearing in mind the factors that can cause native images to be invalidated ([see MSDN](https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-images-and-jit-compilation)).
```
- name: generate native .NET images for CPU
win_dotnet_ngen:
```
Discovering variables: facts and magic variables
================================================
With Ansible you can retrieve or discover certain variables containing information about your remote systems or about Ansible itself. Variables related to remote systems are called facts. With facts, you can use the behavior or state of one system as configuration on other systems. For example, you can use the IP address of one system as a configuration value on another system. Variables related to Ansible are called magic variables.
* [Ansible facts](#ansible-facts)
+ [Package requirements for fact gathering](#package-requirements-for-fact-gathering)
+ [Caching facts](#caching-facts)
+ [Disabling facts](#disabling-facts)
+ [Adding custom facts](#adding-custom-facts)
- [facts.d or local facts](#facts-d-or-local-facts)
* [Information about Ansible: magic variables](#information-about-ansible-magic-variables)
+ [Ansible version](#ansible-version)
Ansible facts
-------------
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the `ansible_facts` variable. By default, you can also access some Ansible facts as top-level variables with the `ansible_` prefix. You can disable this behavior using the [INJECT\_FACTS\_AS\_VARS](../reference_appendices/config#inject-facts-as-vars) setting. To see all available facts, add this task to a play:
```
- name: Print all available facts
ansible.builtin.debug:
var: ansible_facts
```
To see the ‘raw’ information as gathered, run this command at the command line:
```
ansible <hostname> -m ansible.builtin.setup
```
Facts include a large amount of variable data, which may look like this:
```
{
"ansible_all_ipv4_addresses": [
"REDACTED IP ADDRESS"
],
"ansible_all_ipv6_addresses": [
"REDACTED IPV6 ADDRESS"
],
"ansible_apparmor": {
"status": "disabled"
},
"ansible_architecture": "x86_64",
"ansible_bios_date": "11/28/2013",
"ansible_bios_version": "4.1.5",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.14.4.el7.x86_64",
"console": "ttyS0,115200",
"no_timer_check": true,
"nofb": true,
"nomodeset": true,
"ro": true,
"root": "LABEL=cloudimg-rootfs",
"vga": "normal"
},
"ansible_date_time": {
"date": "2018-10-25",
"day": "25",
"epoch": "1540469324",
"hour": "12",
"iso8601": "2018-10-25T12:08:44Z",
"iso8601_basic": "20181025T120844109754",
"iso8601_basic_short": "20181025T120844",
"iso8601_micro": "2018-10-25T12:08:44.109968Z",
"minute": "08",
"month": "10",
"second": "44",
"time": "12:08:44",
"tz": "UTC",
"tz_offset": "+0000",
"weekday": "Thursday",
"weekday_number": "4",
"weeknumber": "43",
"year": "2018"
},
"ansible_default_ipv4": {
"address": "REDACTED",
"alias": "eth0",
"broadcast": "REDACTED",
"gateway": "REDACTED",
"interface": "eth0",
"macaddress": "REDACTED",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "REDACTED",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_device_links": {
"ids": {},
"labels": {
"xvda1": [
"cloudimg-rootfs"
],
"xvdd": [
"config-2"
]
},
"masters": {},
"uuids": {
"xvda1": [
"cac81d61-d0f8-4b47-84aa-b48798239164"
],
"xvdd": [
"2018-10-25-12-05-57-00"
]
}
},
"ansible_devices": {
"xvda": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"model": null,
"partitions": {
"xvda1": {
"holders": [],
"links": {
"ids": [],
"labels": [
"cloudimg-rootfs"
],
"masters": [],
"uuids": [
"cac81d61-d0f8-4b47-84aa-b48798239164"
]
},
"sectors": "83883999",
"sectorsize": 512,
"size": "40.00 GB",
"start": "2048",
"uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
}
},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "83886080",
"sectorsize": "512",
"size": "40.00 GB",
"support_discard": "0",
"vendor": null,
"virtual": 1
},
"xvdd": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [
"config-2"
],
"masters": [],
"uuids": [
"2018-10-25-12-05-57-00"
]
},
"model": null,
"partitions": {},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "131072",
"sectorsize": "512",
"size": "64.00 MB",
"support_discard": "0",
"vendor": null,
"virtual": 1
},
"xvde": {
"holders": [],
"host": "",
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"model": null,
"partitions": {
"xvde1": {
"holders": [],
"links": {
"ids": [],
"labels": [],
"masters": [],
"uuids": []
},
"sectors": "167770112",
"sectorsize": 512,
"size": "80.00 GB",
"start": "2048",
"uuid": null
}
},
"removable": "0",
"rotational": "0",
"sas_address": null,
"sas_device_handle": null,
"scheduler_mode": "deadline",
"sectors": "167772160",
"sectorsize": "512",
"size": "80.00 GB",
"support_discard": "0",
"vendor": null,
"virtual": 1
}
},
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "7",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "7.5.1804",
"ansible_dns": {
"nameservers": [
"127.0.0.1"
]
},
"ansible_domain": "",
"ansible_effective_group_id": 1000,
"ansible_effective_user_id": 1000,
"ansible_env": {
"HOME": "/home/zuul",
"LANG": "en_US.UTF-8",
"LESSOPEN": "||/usr/bin/lesspipe.sh %s",
"LOGNAME": "zuul",
"MAIL": "/var/mail/zuul",
"PATH": "/usr/local/bin:/usr/bin",
"PWD": "/home/zuul",
"SELINUX_LEVEL_REQUESTED": "",
"SELINUX_ROLE_REQUESTED": "",
"SELINUX_USE_CURRENT_RANGE": "",
"SHELL": "/bin/bash",
"SHLVL": "2",
"SSH_CLIENT": "REDACTED 55672 22",
"SSH_CONNECTION": "REDACTED 55672 REDACTED 22",
"USER": "zuul",
"XDG_RUNTIME_DIR": "/run/user/1000",
"XDG_SESSION_ID": "1",
"_": "/usr/bin/python2"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "REDACTED",
"broadcast": "REDACTED",
"netmask": "255.255.255.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "xen_netfront",
"mtu": 1500,
"pciid": "vif-0",
"promisc": false,
"type": "ether"
},
"ansible_eth1": {
"active": true,
"device": "eth1",
"ipv4": {
"address": "REDACTED",
"broadcast": "REDACTED",
"netmask": "255.255.224.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "xen_netfront",
"mtu": 1500,
"pciid": "vif-1",
"promisc": false,
"type": "ether"
},
"ansible_fips": false,
"ansible_form_factor": "Other",
"ansible_fqdn": "centos-7-rax-dfw-0003427354",
"ansible_hostname": "centos-7-rax-dfw-0003427354",
"ansible_interfaces": [
"lo",
"eth1",
"eth0"
],
"ansible_is_chroot": false,
"ansible_kernel": "3.10.0-862.14.4.el7.x86_64",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"broadcast": "host",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 65536,
"promisc": false,
"type": "loopback"
},
"ansible_local": {},
"ansible_lsb": {
"codename": "Core",
"description": "CentOS Linux release 7.5.1804 (Core)",
"id": "CentOS",
"major_release": "7",
"release": "7.5.1804"
},
"ansible_machine": "x86_64",
"ansible_machine_id": "2db133253c984c82aef2fafcce6f2bed",
"ansible_memfree_mb": 7709,
"ansible_memory_mb": {
"nocache": {
"free": 7804,
"used": 173
},
"real": {
"free": 7709,
"total": 7977,
"used": 268
},
"swap": {
"cached": 0,
"free": 0,
"total": 0,
"used": 0
}
},
"ansible_memtotal_mb": 7977,
"ansible_mounts": [
{
"block_available": 7220998,
"block_size": 4096,
"block_total": 9817227,
"block_used": 2596229,
"device": "/dev/xvda1",
"fstype": "ext4",
"inode_available": 10052341,
"inode_total": 10419200,
"inode_used": 366859,
"mount": "/",
"options": "rw,seclabel,relatime,data=ordered",
"size_available": 29577207808,
"size_total": 40211361792,
"uuid": "cac81d61-d0f8-4b47-84aa-b48798239164"
},
{
"block_available": 0,
"block_size": 2048,
"block_total": 252,
"block_used": 252,
"device": "/dev/xvdd",
"fstype": "iso9660",
"inode_available": 0,
"inode_total": 0,
"inode_used": 0,
"mount": "/mnt/config",
"options": "ro,relatime,mode=0700",
"size_available": 0,
"size_total": 516096,
"uuid": "2018-10-25-12-05-57-00"
}
],
"ansible_nodename": "centos-7-rax-dfw-0003427354",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "yum",
"ansible_processor": [
"0",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"1",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"2",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"3",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"4",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"5",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"6",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz",
"7",
"GenuineIntel",
"Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz"
],
"ansible_processor_cores": 8,
"ansible_processor_count": 8,
"ansible_processor_nproc": 8,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 8,
"ansible_product_name": "HVM domU",
"ansible_product_serial": "REDACTED",
"ansible_product_uuid": "REDACTED",
"ansible_product_version": "4.1.5",
"ansible_python": {
"executable": "/usr/bin/python2",
"has_sslcontext": true,
"type": "CPython",
"version": {
"major": 2,
"micro": 5,
"minor": 7,
"releaselevel": "final",
"serial": 0
},
"version_info": [
2,
7,
5,
"final",
0
]
},
"ansible_python_version": "2.7.5",
"ansible_real_group_id": 1000,
"ansible_real_user_id": 1000,
"ansible_selinux": {
"config_mode": "enforcing",
"mode": "enforcing",
"policyvers": 31,
"status": "enabled",
"type": "targeted"
},
"ansible_selinux_python_present": true,
"ansible_service_mgr": "systemd",
"ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_ed25519_public": "REDACTED KEY VALUE",
"ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE",
"ansible_swapfree_mb": 0,
"ansible_swaptotal_mb": 0,
"ansible_system": "Linux",
"ansible_system_capabilities": [
""
],
"ansible_system_capabilities_enforced": "True",
"ansible_system_vendor": "Xen",
"ansible_uptime_seconds": 151,
"ansible_user_dir": "/home/zuul",
"ansible_user_gecos": "",
"ansible_user_gid": 1000,
"ansible_user_id": "zuul",
"ansible_user_shell": "/bin/bash",
"ansible_user_uid": 1000,
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "xen",
"gather_subset": [
"all"
],
"module_setup": true
}
```
You can reference the model of the first disk in the facts shown above in a template or playbook as:
```
{{ ansible_facts['devices']['xvda']['model'] }}
```
To reference the system hostname:
```
{{ ansible_facts['nodename'] }}
```
You can use facts in conditionals (see [Conditionals](playbooks_conditionals#playbooks-conditionals)) and also in templates. You can also use facts to create dynamic groups of hosts that match particular criteria; see the [group\_by module](../collections/ansible/builtin/group_by_module#group-by-module) documentation for details.
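For example, a minimal sketch of a fact-based conditional (the package and play context are illustrative):
```
- name: Install Apache only on Debian-family hosts
  ansible.builtin.apt:
    name: apache2
    state: present
  when: ansible_facts['os_family'] == "Debian"
```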
### Package requirements for fact gathering
On some distros, you may see missing fact values or facts set to default values because the packages that support gathering those facts are not installed by default. You can install the necessary packages on your remote hosts using the OS package manager. Known dependencies include:
* Linux Network fact gathering - Depends on the `ip` binary, commonly included in the `iproute2` package (see the sketch below).
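As a sketch, assuming a host whose package manager knows the package by that name, you could ensure the dependency is present:
```
- name: Ensure iproute2 is present so network facts can be gathered
  ansible.builtin.package:
    name: iproute2
    state: present
```
You would still need to re-run fact gathering (for example with `ansible.builtin.setup`) for the newly gatherable facts to appear.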
### Caching facts
Like registered variables, facts are stored in memory by default. However, unlike registered variables, facts can be gathered independently and cached for repeated use. With cached facts, you can refer to facts from one system when configuring a second system, even if Ansible executes the current play on the second system first. For example:
```
{{ hostvars['asdf.example.com']['ansible_facts']['os_family'] }}
```
Caching is controlled by the cache plugins. By default, Ansible uses the memory cache plugin, which stores facts in memory for the duration of the current playbook run. To retain Ansible facts for repeated use, select a different cache plugin. See [Cache Plugins](../plugins/cache#cache-plugins) for details.
Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
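As a minimal sketch, you can select the `jsonfile` cache plugin in `ansible.cfg` (the cache path and timeout below are illustrative):
```
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400
```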
### Disabling facts
By default, Ansible gathers facts at the beginning of each play. If you do not need to gather facts (for example, if you know everything about your systems centrally), you can turn off fact gathering at the play level to improve scalability. Disabling facts may particularly improve performance in push mode with very large numbers of systems, or if you are using Ansible on experimental platforms. To disable fact gathering:
```
- hosts: whatever
gather_facts: no
```
### Adding custom facts
The setup module in Ansible automatically discovers a standard set of facts about each host. If you want to add custom values to your facts, you can write a custom facts module, set temporary facts with an `ansible.builtin.set_fact` task, or provide permanent custom facts using the facts.d directory.
#### facts.d or local facts
New in version 1.3.
You can add static custom facts by adding static files to facts.d, or add dynamic facts by adding executable scripts to facts.d. For example, you can add a list of all users on a host to your facts by creating and running a script in facts.d.
To use facts.d, create an `/etc/ansible/facts.d` directory on the remote host or hosts. If you prefer a different directory, create it and specify it using the `fact_path` play keyword. Add files to the directory to supply your custom facts. All file names must end with `.fact`. The files can be JSON, INI, or executable files returning JSON.
To add static facts, simply add a file with the `.fact` extension. For example, create `/etc/ansible/facts.d/preferences.fact` with this content:
```
[general]
asdf=1
bar=2
```
Note
Make sure the file is not executable as this will break the `ansible.builtin.setup` module.
The next time fact gathering runs, your facts will include a hash variable fact named `general` with `asdf` and `bar` as members. To validate this, run the following:
```
ansible <hostname> -m ansible.builtin.setup -a "filter=ansible_local"
```
And you will see your custom fact added:
```
"ansible_local": {
"preferences": {
"general": {
"asdf" : "1",
"bar" : "2"
}
}
}
```
The ansible\_local namespace separates custom facts created by facts.d from system facts or variables defined elsewhere in the playbook, so variables will not override each other. You can access this custom fact in a template or playbook as:
```
{{ ansible_local['preferences']['general']['asdf'] }}
```
Note
The key part in the key=value pairs will be converted into lowercase inside the ansible\_local variable. Using the example above, if the ini file contained `XYZ=3` in the `[general]` section, then you should expect to access it as: `{{ ansible_local['preferences']['general']['xyz'] }}` and not `{{ ansible_local['preferences']['general']['XYZ'] }}`. This is because Ansible uses Python’s [ConfigParser](https://docs.python.org/2/library/configparser.html) which passes all option names through the [optionxform](https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.optionxform) method and this method’s default implementation converts option names to lower case.
You can also use facts.d to execute a script on the remote host, generating dynamic custom facts in the ansible\_local namespace. For example, you can generate a list of all users that exist on a remote host as a fact about that host. To generate dynamic custom facts using facts.d (a minimal script sketch follows these steps):
1. Write and test a script to generate the JSON data you want.
2. Save the script in your facts.d directory.
3. Make sure your script has the `.fact` file extension.
4. Make sure your script is executable by the Ansible connection user.
5. Gather facts to execute the script and add the JSON output to ansible\_local.
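A minimal sketch of such a script, assuming a POSIX shell on the remote host (the file name and JSON key are illustrative):
```
#!/bin/sh
# Hypothetical /etc/ansible/facts.d/users.fact: must be executable and print JSON.
users=$(cut -d: -f1 /etc/passwd | sort | sed 's/.*/"&"/' | paste -sd, -)
echo "{\"users\": [${users}]}"
```
After the next fact-gathering run, the list would be available as `ansible_local['users']['users']` (the first key comes from the file name, the second from the JSON key).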
By default, fact gathering runs once at the beginning of each play. If you create a custom fact using facts.d in a playbook, it will be available in the next play that gathers facts. If you want to use it in the same play where you created it, you must explicitly re-run the setup module. For example:
```
- hosts: webservers
tasks:
- name: Create directory for ansible custom facts
ansible.builtin.file:
state: directory
recurse: yes
path: /etc/ansible/facts.d
- name: Install custom ipmi fact
ansible.builtin.copy:
src: ipmi.fact
dest: /etc/ansible/facts.d
- name: Re-read facts after adding custom fact
ansible.builtin.setup:
filter: ansible_local
```
If you use this pattern frequently, a custom facts module would be more efficient than facts.d.
Information about Ansible: magic variables
------------------------------------------
You can access information about Ansible operations, including the python version being used, the hosts and groups in inventory, and the directories for playbooks and roles, using “magic” variables. Like connection variables, magic variables are [Special Variables](../reference_appendices/special_variables#special-variables). Magic variable names are reserved - do not set variables with these names. The variable `environment` is also reserved.
The most commonly used magic variables are `hostvars`, `groups`, `group_names`, and `inventory_hostname`. With `hostvars`, you can access variables defined for any host in the play, at any point in a playbook. You can access Ansible facts using the `hostvars` variable too, but only after you have gathered (or cached) facts.
If you want to configure your database server using the value of a ‘fact’ from another node, or the value of an inventory variable assigned to another node, you can use `hostvars` in a template or on an action line:
```
{{ hostvars['test.example.com']['ansible_facts']['distribution'] }}
```
With `groups`, a list of all the groups (and hosts) in the inventory, you can enumerate all hosts within a group. For example:
```
{% for host in groups['app_servers'] %}
# something that applies to all app servers.
{% endfor %}
```
You can use `groups` and `hostvars` together to find all the IP addresses in a group.
```
{% for host in groups['app_servers'] %}
{{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}
{% endfor %}
```
You can use this approach to point a frontend proxy server to all the hosts in your app servers group, to set up the correct firewall rules between servers, and so on. You must either cache facts or gather facts for those hosts before the task that fills out the template.
With `group_names`, a list (array) of all the groups the current host is in, you can create templated files that vary based on the group membership (or role) of the host:
```
{% if 'webserver' in group_names %}
# some part of a configuration file that only applies to webservers
{% endif %}
```
You can use the magic variable `inventory_hostname`, the name of the host as configured in your inventory, as an alternative to `ansible_hostname` when fact-gathering is disabled. If you have a long FQDN, you can use `inventory_hostname_short`, which contains the part up to the first period, without the rest of the domain.
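For example, a small debug task contrasting the two names (illustrative):
```
- name: Show inventory names for this host
  ansible.builtin.debug:
    msg: "{{ inventory_hostname }} ({{ inventory_hostname_short }})"
```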
Other useful magic variables refer to the current play or playbook. These variables can be useful for filling out templates with multiple hostnames or for injecting the list into the rules for a load balancer (see the template sketch after this list).
`ansible_play_hosts` is the list of all hosts still active in the current play.
`ansible_play_batch` is a list of hostnames that are in scope for the current ‘batch’ of the play.
The batch size is defined by `serial`; when `serial` is not set, the batch is equivalent to the whole play (making `ansible_play_batch` the same as `ansible_play_hosts`).
`ansible_playbook_python` is the path to the python executable used to invoke the Ansible command line tool.
`inventory_dir` is the pathname of the directory holding Ansible’s inventory host file.
`inventory_file` is the pathname and filename of Ansible’s inventory host file.
`playbook_dir` contains the playbook base directory.
`role_path` contains the current role’s pathname and only works inside a role.
`ansible_check_mode` is a boolean, set to `True` if you run Ansible with `--check`.
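As a sketch of the load-balancer use case mentioned above, a template could enumerate the current batch (the file name, port, and backend syntax are illustrative):
```
# haproxy.cfg.j2 (hypothetical)
{% for host in ansible_play_batch %}
server {{ host }} {{ host }}:8080 check
{% endfor %}
```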
### Ansible version
New in version 1.8.
To adapt playbook behavior to different versions of Ansible, you can use the variable `ansible_version`, which has the following structure:
```
"ansible_version": {
"full": "2.10.1",
"major": 2,
"minor": 10,
"revision": 1,
"string": "2.10.1"
}
```
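For example, a sketch that gates a task on the Ansible version using the `version` test:
```
- name: Run only on Ansible 2.10 or later
  ansible.builtin.debug:
    msg: "Running on {{ ansible_version.full }}"
  when: ansible_version.full is version('2.10', '>=')
```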
| programming_docs |
ansible
=======
**Define and run a single task ‘playbook’ against a set of hosts**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible [-h] [--version] [-v] [-b] [--become-method BECOME_METHOD]
[--become-user BECOME_USER] [-K] [-i INVENTORY] [--list-hosts]
[-l SUBSET] [-P POLL_INTERVAL] [-B SECONDS] [-o] [-t TREE] [-k]
[--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
[-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS] [-C] [--syntax-check] [-D]
[-e EXTRA_VARS] [--vault-id VAULT_IDS]
[--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
[-f FORKS] [-M MODULE_PATH] [--playbook-dir BASEDIR]
[--task-timeout TASK_TIMEOUT] [-a MODULE_ARGS] [-m MODULE_NAME]
pattern
```
Description
-----------
ansible is an extra-simple tool/framework/API for doing ‘remote things’. This command allows you to define and run a single task ‘playbook’ against a set of hosts.
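For example, an ad-hoc ping of a group (the inventory path and group name are illustrative):
```
ansible webservers -i inventory.ini -m ansible.builtin.ping -f 10
```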
Common Options
--------------
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--become-method <BECOME_METHOD>`
privilege escalation method to use (default=sudo), use `ansible-doc -t become -l` to list valid choices.
`--become-user <BECOME_USER>`
run operations as this user (default=root)
`--list-hosts`
outputs a list of matching hosts; does not execute anything else
`--playbook-dir <BASEDIR>`
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/ group\_vars/ etc.
`--private-key <PRIVATE_KEY_FILE>, --key-file <PRIVATE_KEY_FILE>`
use this file to authenticate the connection
`--scp-extra-args <SCP_EXTRA_ARGS>`
specify extra arguments to pass to scp only (e.g. -l)
`--sftp-extra-args <SFTP_EXTRA_ARGS>`
specify extra arguments to pass to sftp only (e.g. -f, -l)
`--ssh-common-args <SSH_COMMON_ARGS>`
specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)
`--ssh-extra-args <SSH_EXTRA_ARGS>`
specify extra arguments to pass to ssh only (e.g. -R)
`--syntax-check`
perform a syntax check on the playbook, but do not execute it
`--task-timeout <TASK_TIMEOUT>`
set task timeout limit in seconds; must be a positive integer.
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-B <SECONDS>, --background <SECONDS>`
run asynchronously, failing after X seconds (default=N/A)
`-C, --check`
don’t make any changes; instead, try to predict some of the changes that may occur
`-D, --diff`
when changing (small) files and templates, show the differences in those files; works great with --check
`-K, --ask-become-pass`
ask for privilege escalation password
`-M, --module-path`
prepend colon-separated path(s) to module library (default=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules)
`-P <POLL_INTERVAL>, --poll <POLL_INTERVAL>`
set the poll interval if using -B (default=15)
`-T <TIMEOUT>, --timeout <TIMEOUT>`
override the connection timeout in seconds (default=10)
`-a <MODULE_ARGS>, --args <MODULE_ARGS>`
The action’s options in space separated k=v format: -a ‘opt1=val1 opt2=val2’
`-b, --become`
run operations with become (does not imply password prompting)
`-c <CONNECTION>, --connection <CONNECTION>`
connection type to use (default=smart)
`-e, --extra-vars`
set additional variables as key=value or YAML/JSON, if filename prepend with @
`-f <FORKS>, --forks <FORKS>`
specify number of parallel processes to use (default=5)
`-h, --help`
show this help message and exit
`-i, --inventory, --inventory-file`
specify inventory host path or comma separated host list. --inventory-file is deprecated
`-k, --ask-pass`
ask for connection password
`-l <SUBSET>, --limit <SUBSET>`
further limit selected hosts to an additional pattern
`-m <MODULE_NAME>, --module-name <MODULE_NAME>`
Name of the action to execute (default=command)
`-o, --one-line`
condense output
`-t <TREE>, --tree <TREE>`
log output to this directory
`-u <REMOTE_USER>, --user <REMOTE_USER>`
connect as this user (default=None)
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-vault
=============
**encryption/decryption utility for Ansible data files**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Actions](#actions)
+ [create](#create)
+ [decrypt](#decrypt)
+ [edit](#edit)
+ [view](#view)
+ [encrypt](#encrypt)
+ [encrypt\_string](#encrypt-string)
+ [rekey](#rekey)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-vault [-h] [--version] [-v]
{create,decrypt,edit,view,encrypt,encrypt_string,rekey}
...
```
Description
-----------
ansible-vault can encrypt any structured data file used by Ansible. This can include *group\_vars/* or *host\_vars/* inventory variables, variables loaded by *include\_vars* or *vars\_files*, or variable files passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*. Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you’d like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
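For example (the file paths are illustrative):
```
ansible-vault encrypt group_vars/all/secrets.yml
ansible-vault view --vault-password-file ~/.vault_pass group_vars/all/secrets.yml
```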
Common Options
--------------
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-h, --help`
show this help message and exit
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Actions
-------
### create
create and open a file in an editor that will be encrypted with the provided vault secret when closed
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--encrypt-vault-id <ENCRYPT_VAULT_ID>`
the vault id used to encrypt (required if more than one vault-id is provided)
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
### decrypt
decrypt the supplied file using the provided vault secret
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--output <OUTPUT_FILE>`
output file name for encrypt or decrypt; use - for stdout
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
### edit
open and decrypt an existing vaulted file in an editor; it will be encrypted again when closed
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--encrypt-vault-id <ENCRYPT_VAULT_ID>`
the vault id used to encrypt (required if more than one vault-id is provided)
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
### view
open, decrypt, and view an existing vaulted file in a pager using the supplied vault secret
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
### encrypt
encrypt the supplied file using the provided vault secret
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--encrypt-vault-id <ENCRYPT_VAULT_ID>`
the vault id used to encrypt (required if more than one vault-id is provided)
`--output <OUTPUT_FILE>`
output file name for encrypt or decrypt; use - for stdout
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
### encrypt\_string
encrypt the supplied string using the provided vault secret
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--encrypt-vault-id <ENCRYPT_VAULT_ID>`
the vault id used to encrypt (required if more than one vault-id is provided)
`--output <OUTPUT_FILE>`
output file name for encrypt or decrypt; use - for stdout
`--show-input`
Do not hide input when prompted for the string to encrypt
`--stdin-name <ENCRYPT_STRING_STDIN_NAME>`
Specify the variable name for stdin
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`-n, --name`
Specify the variable name
`-p, --prompt`
Prompt for the string to encrypt
### rekey
re-encrypt a vaulted file with a new secret; the previous secret is required
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--encrypt-vault-id <ENCRYPT_VAULT_ID>`
the vault id used to encrypt (required if more than one vault-id is provided)
`--new-vault-id <NEW_VAULT_ID>`
the new vault identity to use for rekey
`--new-vault-password-file <NEW_VAULT_PASSWORD_FILE>`
new vault password file for rekey
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-galaxy
==============
**Perform various Role and Collection related operations.**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Actions](#actions)
+ [collection](#collection)
- [collection download](#collection-download)
- [collection init](#collection-init)
- [collection build](#collection-build)
- [collection publish](#collection-publish)
- [collection install](#collection-install)
- [collection list](#collection-list)
- [collection verify](#collection-verify)
+ [role](#role)
- [role init](#role-init)
- [role remove](#role-remove)
- [role delete](#role-delete)
- [role list](#role-list)
- [role search](#role-search)
- [role import](#role-import)
- [role setup](#role-setup)
- [role info](#role-info)
- [role install](#role-install)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-galaxy [-h] [--version] [-v] TYPE ...
```
Description
-----------
ansible-galaxy is a command to manage Ansible roles and collections in shared repositories, the default of which is Ansible Galaxy, *https://galaxy.ansible.com*.
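For example (the collection and role names are illustrative):
```
ansible-galaxy collection install community.general
ansible-galaxy role install geerlingguy.nginx -p ./roles
```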
Common Options
--------------
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-h, --help`
show this help message and exit
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Actions
-------
### collection
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as listed below.
#### collection download
`--clear-response-cache`
Clear the existing server response cache.
`--no-cache`
Do not use the server response cache.
`--pre`
Include pre-release versions. Semantic versioning pre-releases are ignored by default
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-n, --no-deps`
Don’t download collection(s) listed as dependencies.
`-p <DOWNLOAD_PATH>, --download-path <DOWNLOAD_PATH>`
The directory to download the collections to.
`-r <REQUIREMENTS>, --requirements-file <REQUIREMENTS>`
A file containing a list of collections to be downloaded.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection init
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format. Requires a role or collection name. The collection name must be in the format `<namespace>.<collection>`.
`--collection-skeleton <COLLECTION_SKELETON>`
The path to a collection skeleton that the new collection should be based upon.
`--init-path <INIT_PATH>`
The path in which the skeleton collection will be created. The default is the current working directory.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-f, --force`
Force overwriting an existing role or collection
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection build
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy. By default, this command builds from the current working directory. You can optionally pass in the collection input path (where the `galaxy.yml` file is).
`--output-path <OUTPUT_PATH>`
The path in which the collection is built. The default is the current working directory.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-f, --force`
Force overwriting an existing role or collection
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection publish
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
`--import-timeout <IMPORT_TIMEOUT>`
The time to wait for the collection import process to finish.
`--no-wait`
Don’t wait for import validation results.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection install
`--clear-response-cache`
Clear the existing server response cache.
`--force-with-deps`
Force overwriting an existing collection and its dependencies.
`--no-cache`
Do not use the server response cache.
`--pre`
Include pre-release versions. Semantic versioning pre-releases are ignored by default
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-U, --upgrade`
Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-f, --force`
Force overwriting an existing role or collection
`-i, --ignore-errors`
Ignore errors during installation and continue with the next specified collection. This will not ignore dependency conflict errors.
`-n, --no-deps`
Don’t download collections listed as dependencies.
`-p <COLLECTIONS_PATH>, --collections-path <COLLECTIONS_PATH>`
The path to the directory containing your collections.
`-r <REQUIREMENTS>, --requirements-file <REQUIREMENTS>`
A file containing a list of collections to be installed.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection list
List installed collections or roles
`--format <OUTPUT_FORMAT>`
Format to display the list of collections in.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-p, --collections-path`
One or more directories to search for collections in addition to the default COLLECTIONS\_PATHS. Separate multiple paths with ‘:’.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### collection verify
`--offline`
Validate collection integrity locally without contacting server for canonical manifest hash.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-i, --ignore-errors`
Ignore errors during verification and continue with the next specified collection.
`-p, --collections-path`
One or more directories to search for collections in addition to the default COLLECTIONS\_PATHS. Separate multiple paths with ‘:’.
`-r <REQUIREMENTS>, --requirements-file <REQUIREMENTS>`
A file containing a list of collections to be verified.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
### role
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init as listed below.
#### role init
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format. Requires a role or collection name. The collection name must be in the format `<namespace>.<collection>`.
`--init-path <INIT_PATH>`
The path in which the skeleton role will be created. The default is the current working directory.
`--offline`
Don’t query the galaxy API when creating roles
`--role-skeleton <ROLE_SKELETON>`
The path to a role skeleton that the new role should be based upon.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`--type <ROLE_TYPE>`
Initialize using an alternate role type. Valid types include: ‘container’, ‘apb’ and ‘network’.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-f, --force`
Force overwriting an existing role or collection
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role remove
removes the list of roles passed as arguments from the local system.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-p, --roles-path`
The path to the directory containing your roles. The default is the first writable one configured via DEFAULT\_ROLES\_PATH: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role delete
Delete a role from Ansible Galaxy.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role list
List installed collections or roles
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-p, --roles-path`
The path to the directory containing your roles. The default is the first writable one configured via DEFAULT\_ROLES\_PATH: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role search
searches for roles on the Ansible Galaxy server
`--author <AUTHOR>`
GitHub username
`--galaxy-tags <GALAXY_TAGS>`
list of galaxy tags to filter by
`--platforms <PLATFORMS>`
list of OS platforms to filter by
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role import
used to import a role into Ansible Galaxy
`--branch <REFERENCE>`
The name of a branch to import. Defaults to the repository’s default branch (usually master)
`--no-wait`
Don’t wait for import results.
`--role-name <ROLE_NAME>`
The name the role should have, if different than the repo name
`--status`
Check the status of the most recent import request for given github\_user/github\_repo.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role setup
Set up an integration from Github or Travis for Ansible Galaxy roles
`--list`
List all of your integrations.
`--remove <REMOVE_ID>`
Remove the integration matching the provided ID value. Use --list to see ID values.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-p, --roles-path`
The path to the directory containing your roles. The default is the first writable one configured via DEFAULT\_ROLES\_PATH: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role info
prints out detailed information about an installed role as well as info available from the galaxy API.
`--offline`
Don’t query the galaxy API when creating roles
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-p, --roles-path`
The path to the directory containing your roles. The default is the first writable one configured via DEFAULT\_ROLES\_PATH: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
#### role install
`--force-with-deps`
Force overwriting an existing role and its dependencies.
`--token <API_KEY>, --api-key <API_KEY>`
The Ansible Galaxy API key which can be found at <https://galaxy.ansible.com/me/preferences>.
`-c, --ignore-certs`
Ignore SSL certificate validation errors.
`-f, --force`
Force overwriting an existing role or collection
`-g, --keep-scm-meta`
Use tar instead of the scm archive option when packaging the role.
`-i, --ignore-errors`
Ignore errors and continue with the next specified role.
`-n, --no-deps`
Don’t download roles listed as dependencies.
`-p, --roles-path`
The path to the directory containing your roles. The default is the first writable one configured via DEFAULT\_ROLES\_PATH: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
`-r <REQUIREMENTS>, --role-file <REQUIREMENTS>`
A file containing a list of roles to be installed.
`-s <API_SERVER>, --server <API_SERVER>`
The Galaxy API server URL
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-console
===============
**REPL console for executing Ansible tasks.**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-console [-h] [--version] [-v] [-b]
[--become-method BECOME_METHOD]
[--become-user BECOME_USER] [-K] [-i INVENTORY]
[--list-hosts] [-l SUBSET] [-k]
[--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
[-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS] [-C] [--syntax-check]
[-D] [--vault-id VAULT_IDS]
[--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
[-f FORKS] [-M MODULE_PATH] [--playbook-dir BASEDIR]
[-e EXTRA_VARS] [--task-timeout TASK_TIMEOUT] [--step]
[pattern]
```
Description
-----------
ansible-console is a REPL that allows for running ad-hoc tasks against a chosen inventory from a nice shell with built-in tab completion (based on dominis’ ansible-shell).
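For example, to open a console against a group (the inventory path and group name are illustrative):
```
ansible-console webservers -i inventory.ini
```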
It supports several commands, and you can modify its configuration at runtime:
* `cd [pattern]`: change host/group (you can use host patterns, e.g. app\*.dc\*:!app01\*)
* `list`: list available hosts in the current path
* `list groups`: list groups included in the current path
* `become`: toggle the become flag
* `!`: forces shell module instead of the ansible module (!yum update -y)
* `verbosity [num]`: set the verbosity level
* `forks [num]`: set the number of forks
* `become_user [user]`: set the become\_user
* `remote_user [user]`: set the remote\_user
* `become_method [method]`: set the privilege escalation method
* `check [bool]`: toggle check mode
* `diff [bool]`: toggle diff mode
* `timeout [integer]`: set the timeout of tasks in seconds (0 to disable)
* `help [command/module]`: display documentation for the command or module
* `exit`: exit ansible-console
Common Options
--------------
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--become-method <BECOME_METHOD>`
privilege escalation method to use (default=sudo), use `ansible-doc -t become -l` to list valid choices.
`--become-user <BECOME_USER>`
run operations as this user (default=root)
`--list-hosts`
outputs a list of matching hosts; does not execute anything else
`--playbook-dir <BASEDIR>`
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/ group\_vars/ etc.
`--private-key <PRIVATE_KEY_FILE>, --key-file <PRIVATE_KEY_FILE>`
use this file to authenticate the connection
`--scp-extra-args <SCP_EXTRA_ARGS>`
specify extra arguments to pass to scp only (e.g. -l)
`--sftp-extra-args <SFTP_EXTRA_ARGS>`
specify extra arguments to pass to sftp only (e.g. -f, -l)
`--ssh-common-args <SSH_COMMON_ARGS>`
specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)
`--ssh-extra-args <SSH_EXTRA_ARGS>`
specify extra arguments to pass to ssh only (e.g. -R)
`--step`
one-step-at-a-time: confirm each task before running
`--syntax-check`
perform a syntax check on the playbook, but do not execute it
`--task-timeout <TASK_TIMEOUT>`
set task timeout limit in seconds; must be a positive integer.
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-C, --check`
don’t make any changes; instead, try to predict some of the changes that may occur
`-D, --diff`
when changing (small) files and templates, show the differences in those files; works great with --check
`-K, --ask-become-pass`
ask for privilege escalation password
`-M, --module-path`
prepend colon-separated path(s) to module library (default=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules)
`-T <TIMEOUT>, --timeout <TIMEOUT>`
override the connection timeout in seconds (default=10)
`-b, --become`
run operations with become (does not imply password prompting)
`-c <CONNECTION>, --connection <CONNECTION>`
connection type to use (default=smart)
`-e, --extra-vars`
set additional variables as key=value or YAML/JSON, if filename prepend with @
`-f <FORKS>, --forks <FORKS>`
specify number of parallel processes to use (default=5)
`-h, --help`
show this help message and exit
`-i, --inventory, --inventory-file`
specify inventory host path or comma separated host list. --inventory-file is deprecated
`-k, --ask-pass`
ask for connection password
`-l <SUBSET>, --limit <SUBSET>`
further limit selected hosts to an additional pattern
`-u <REMOTE_USER>, --user <REMOTE_USER>`
connect as this user (default=None)
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-config
==============
**View ansible configuration.**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Actions](#actions)
+ [list](#list)
+ [dump](#dump)
+ [view](#view)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-config [-h] [--version] [-v] {list,dump,view} ...
```
Description
-----------
ansible-config is a command-line utility for listing, dumping, and viewing Ansible configuration settings.
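For example:
```
ansible-config list
ansible-config dump --only-changed
```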
Common Options
--------------
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-h, --help`
show this help message and exit
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Actions
-------
### list
list all current configs reading lib/constants.py and shows env and config file setting names
`-c <CONFIG_FILE>, --config <CONFIG_FILE>`
path to configuration file, defaults to first file found in precedence.
### dump
Shows the current settings, merges ansible.cfg if specified
`--only-changed`
Only show configurations that have changed from the default
`-c <CONFIG_FILE>, --config <CONFIG_FILE>`
path to configuration file, defaults to first file found in precedence.
### view
Displays the current config file
`-c <CONFIG_FILE>, --config <CONFIG_FILE>`
path to configuration file, defaults to first file found in precedence.
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-doc
===========
**plugin documentation tool**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-doc [-h] [--version] [-v] [-M MODULE_PATH]
[--playbook-dir BASEDIR]
[-t {become,cache,callback,cliconf,connection,httpapi,inventory,lookup,netconf,shell,vars,module,strategy,role,keyword}]
[-j] [-r ROLES_PATH]
[-F | -l | -s | --metadata-dump | -e ENTRY_POINT]
[plugin [plugin ...]]
```
Description
-----------
ansible-doc displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and can create a short “snippet” which can be pasted into a playbook.
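For example:
```
ansible-doc -l
ansible-doc -s ansible.builtin.copy
ansible-doc -t lookup file
```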
Common Options
--------------
`--metadata-dump`
**For internal testing only** Dump json metadata for all plugins.
`--playbook-dir <BASEDIR>`
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/ group\_vars/ etc.
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-F, --list_files`
Show plugin names and their source files without summaries (implies --list). A supplied argument will be used for filtering; it can be a namespace or full collection name.
`-M, --module-path`
prepend colon-separated path(s) to module library (default=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules)
`-e <ENTRY_POINT>, --entry-point <ENTRY_POINT>`
Select the entry point for role(s).
`-h, --help`
show this help message and exit
`-j, --json`
Change output into json format.
`-l, --list`
List available plugins. A supplied argument will be used for filtering; it can be a namespace or full collection name.
`-r, --roles-path`
The path to the directory containing your roles.
`-s, --snippet`
Show playbook snippet for specified plugin(s)
`-t <TYPE>, --type <TYPE>`
Choose which plugin type (defaults to “module”). Available plugin types are : (‘become’, ‘cache’, ‘callback’, ‘cliconf’, ‘connection’, ‘httpapi’, ‘inventory’, ‘lookup’, ‘netconf’, ‘shell’, ‘vars’, ‘module’, ‘strategy’, ‘role’, ‘keyword’)
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-inventory
=================
**Display or dump the configured inventory as Ansible sees it**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-inventory [-h] [--version] [-v] [-i INVENTORY]
[--vault-id VAULT_IDS]
[--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
[--playbook-dir BASEDIR] [-e EXTRA_VARS] [--list]
[--host HOST] [--graph] [-y] [--toml] [--vars]
[--export] [--output OUTPUT_FILE]
[host|group]
```
Description
-----------
ansible-inventory is used to display or dump the configured inventory as Ansible sees it.
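For example (the inventory path is illustrative):
```
ansible-inventory -i inventory.ini --list
ansible-inventory -i inventory.ini --graph --vars
```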
Common Options
--------------
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--export`
When doing --list, represent in a way that is optimized for export, not as an accurate representation of how Ansible has processed it
`--graph`
create inventory graph, if supplying pattern it must be a valid group name
`--host <HOST>`
Output specific host info, works as inventory script
`--list`
Output all hosts info, works as inventory script
`--output <OUTPUT_FILE>`
When doing --list, send the inventory to a file instead of to the screen
`--playbook-dir <BASEDIR>`
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/ group\_vars/ etc.
`--toml`
Use TOML format instead of default JSON, ignored for --graph
`--vars`
Add vars to graph display, ignored unless used with --graph
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-e, --extra-vars`
set additional variables as key=value or YAML/JSON, if filename prepend with @
`-h, --help`
show this help message and exit
`-i, --inventory, --inventory-file`
specify inventory host path or comma separated host list. –inventory-file is deprecated
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
`-y, --yaml`
Use YAML format instead of default JSON, ignored for --graph
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*,
ansible-pull
============
**pulls playbooks from a VCS repo and executes them for the local host**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-pull [-h] [--version] [-v] [-k]
[--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
[-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS] [--vault-id VAULT_IDS]
[--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
[-e EXTRA_VARS] [-t TAGS] [--skip-tags SKIP_TAGS]
[-i INVENTORY] [--list-hosts] [-l SUBSET] [-M MODULE_PATH]
[-K] [--purge] [-o] [-s SLEEP] [-f] [-d DEST] [-U URL]
[--full] [-C CHECKOUT] [--accept-host-key]
[-m MODULE_NAME] [--verify-commit] [--clean]
[--track-subs] [--check] [--diff]
[playbook.yml [playbook.yml ...]]
```
Description
-----------
Used to pull a remote copy of Ansible on each managed node, each set to run via cron and update the playbook source via a source repository. This inverts the default *push* architecture of Ansible into a *pull* architecture, which has near-limitless scaling potential.
The setup playbook can be tuned to change the cron frequency, logging locations, and parameters to ansible-pull. This is useful both for extreme scale-out and for periodic remediation. Using the ‘fetch’ module to retrieve logs from ansible-pull runs would be an excellent way to gather and analyze remote logs.
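As a sketch of typical usage (the repository URL and playbook name are placeholders):
```
# One-off run: check out the repo and execute local.yml against this host
ansible-pull -U https://github.com/example/ansible-config.git local.yml
# Cron-friendly run: fixed checkout directory, random sleep of up to 60
# seconds to disperse git requests, and only run when the repo has changed
ansible-pull -U https://github.com/example/ansible-config.git \
    -d /var/lib/ansible/local -s 60 -o local.yml
```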
Common Options
--------------
`--accept-host-key`
adds the hostkey for the repo url if not already added
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--check`
don’t make any changes; instead, try to predict some of the changes that may occur
`--clean`
modified files in the working repository will be discarded
`--diff`
when changing (small) files and templates, show the differences in those files; works great with --check
`--full`
Do a full clone, instead of a shallow one.
`--list-hosts`
outputs a list of matching hosts; does not execute anything else
`--private-key <PRIVATE_KEY_FILE>, --key-file <PRIVATE_KEY_FILE>`
use this file to authenticate the connection
`--purge`
purge checkout after playbook run
`--scp-extra-args <SCP_EXTRA_ARGS>`
specify extra arguments to pass to scp only (e.g. -l)
`--sftp-extra-args <SFTP_EXTRA_ARGS>`
specify extra arguments to pass to sftp only (e.g. -f, -l)
`--skip-tags`
only run plays and tasks whose tags do not match these values
`--ssh-common-args <SSH_COMMON_ARGS>`
specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)
`--ssh-extra-args <SSH_EXTRA_ARGS>`
specify extra arguments to pass to ssh only (e.g. -R)
`--track-subs`
submodules will track the latest changes. This is equivalent to specifying the --remote flag to git submodule update
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`--verify-commit`
verify the GPG signature of the checked-out commit; if verification fails, abort running the playbook. This needs the corresponding VCS module to support such an operation
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-C <CHECKOUT>, --checkout <CHECKOUT>`
branch/tag/commit to checkout. Defaults to behavior of repository module.
`-K, --ask-become-pass`
ask for privilege escalation password
`-M, --module-path`
prepend colon-separated path(s) to module library (default=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules)
`-T <TIMEOUT>, --timeout <TIMEOUT>`
override the connection timeout in seconds (default=10)
`-U <URL>, --url <URL>`
URL of the playbook repository
`-c <CONNECTION>, --connection <CONNECTION>`
connection type to use (default=smart)
`-d <DEST>, --directory <DEST>`
absolute path of repository checkout directory (relative paths are not supported)
`-e, --extra-vars`
set additional variables as key=value or YAML/JSON, if filename prepend with @
`-f, --force`
run the playbook even if the repository could not be updated
`-h, --help`
show this help message and exit
`-i, --inventory, --inventory-file`
specify inventory host path or comma separated host list. --inventory-file is deprecated
`-k, --ask-pass`
ask for connection password
`-l <SUBSET>, --limit <SUBSET>`
further limit selected hosts to an additional pattern
`-m <MODULE_NAME>, --module-name <MODULE_NAME>`
Repository module name, which ansible will use to check out the repo. Choices are (‘git’, ‘subversion’, ‘hg’, ‘bzr’). Default is git.
`-o, --only-if-changed`
only run the playbook if the repository has been updated
`-s <SLEEP>, --sleep <SLEEP>`
sleep for a random interval (between 0 and n seconds) before starting. This is a useful way to disperse git requests
`-t, --tags`
only run plays and tasks tagged with these values
`-u <REMOTE_USER>, --user <REMOTE_USER>`
connect as this user (default=None)
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*
ansible-playbook
================
**Runs Ansible playbooks, executing the defined tasks on the targeted hosts.**
* [Synopsis](#synopsis)
* [Description](#description)
* [Common Options](#common-options)
* [Environment](#environment)
* [Files](#files)
* [Author](#author)
* [License](#license)
* [See also](#see-also)
Synopsis
--------
```
usage: ansible-playbook [-h] [--version] [-v] [-k]
[--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
[-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS] [--force-handlers]
[--flush-cache] [-b] [--become-method BECOME_METHOD]
[--become-user BECOME_USER] [-K] [-t TAGS]
[--skip-tags SKIP_TAGS] [-C] [--syntax-check] [-D]
[-i INVENTORY] [--list-hosts] [-l SUBSET]
[-e EXTRA_VARS] [--vault-id VAULT_IDS]
[--ask-vault-password | --vault-password-file VAULT_PASSWORD_FILES]
[-f FORKS] [-M MODULE_PATH] [--list-tasks]
[--list-tags] [--step] [--start-at-task START_AT_TASK]
playbook [playbook ...]
```
Description
-----------
The tool to run *Ansible playbooks*, which are a configuration and multinode deployment system. See the project home page (<https://docs.ansible.com>) for more information.
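For example (inventory path and playbook name are placeholders):
```
# Run a playbook against an inventory
ansible-playbook -i inventory.ini site.yml
# Dry run limited to one group: predict changes and show file diffs
ansible-playbook -i inventory.ini -l webservers --check --diff site.yml
```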
Common Options
--------------
`--ask-vault-password, --ask-vault-pass`
ask for vault password
`--become-method <BECOME_METHOD>`
privilege escalation method to use (default=sudo), use `ansible-doc -t become -l` to list valid choices.
`--become-user <BECOME_USER>`
run operations as this user (default=root)
`--flush-cache`
clear the fact cache for every host in inventory
`--force-handlers`
run handlers even if a task fails
`--list-hosts`
outputs a list of matching hosts; does not execute anything else
`--list-tags`
list all available tags
`--list-tasks`
list all tasks that would be executed
`--private-key <PRIVATE_KEY_FILE>, --key-file <PRIVATE_KEY_FILE>`
use this file to authenticate the connection
`--scp-extra-args <SCP_EXTRA_ARGS>`
specify extra arguments to pass to scp only (e.g. -l)
`--sftp-extra-args <SFTP_EXTRA_ARGS>`
specify extra arguments to pass to sftp only (e.g. -f, -l)
`--skip-tags`
only run plays and tasks whose tags do not match these values
`--ssh-common-args <SSH_COMMON_ARGS>`
specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)
`--ssh-extra-args <SSH_EXTRA_ARGS>`
specify extra arguments to pass to ssh only (e.g. -R)
`--start-at-task <START_AT_TASK>`
start the playbook at the task matching this name
`--step`
one-step-at-a-time: confirm each task before running
`--syntax-check`
perform a syntax check on the playbook, but do not execute it
`--vault-id`
the vault identity to use
`--vault-password-file, --vault-pass-file`
vault password file
`--version`
show program’s version number, config file location, configured module search path, module location, executable location and exit
`-C, --check`
don’t make any changes; instead, try to predict some of the changes that may occur
`-D, --diff`
when changing (small) files and templates, show the differences in those files; works great with --check
`-K, --ask-become-pass`
ask for privilege escalation password
`-M, --module-path`
prepend colon-separated path(s) to module library (default=~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules)
`-T <TIMEOUT>, --timeout <TIMEOUT>`
override the connection timeout in seconds (default=10)
`-b, --become`
run operations with become (does not imply password prompting)
`-c <CONNECTION>, --connection <CONNECTION>`
connection type to use (default=smart)
`-e, --extra-vars`
set additional variables as key=value or YAML/JSON, if filename prepend with @
`-f <FORKS>, --forks <FORKS>`
specify number of parallel processes to use (default=5)
`-h, --help`
show this help message and exit
`-i, --inventory, --inventory-file`
specify inventory host path or comma separated host list. --inventory-file is deprecated
`-k, --ask-pass`
ask for connection password
`-l <SUBSET>, --limit <SUBSET>`
further limit selected hosts to an additional pattern
`-t, --tags`
only run plays and tasks tagged with these values
`-u <REMOTE_USER>, --user <REMOTE_USER>`
connect as this user (default=None)
`-v, --verbose`
verbose mode (-vvv for more, -vvvv to enable connection debugging)
Environment
-----------
The following environment variables may be specified.
[`ANSIBLE_CONFIG`](../reference_appendices/config#envvar-ANSIBLE_CONFIG) – Override the default ansible config file
Many more are available for most options in ansible.cfg
Files
-----
`/etc/ansible/ansible.cfg` – Config file, used if present
`~/.ansible.cfg` – User config file, overrides the default config if present
Author
------
Ansible was originally written by Michael DeHaan.
See the `AUTHORS` file for a complete list of contributors.
License
-------
Ansible is released under the terms of the GPLv3+ License.
See also
--------
*ansible(1)*, *ansible-config(1)*, *ansible-console(1)*, *ansible-doc(1)*, *ansible-galaxy(1)*, *ansible-inventory(1)*, *ansible-playbook(1)*, *ansible-pull(1)*, *ansible-vault(1)*
Galaxy Developer Guide
======================
You can host collections and roles on Galaxy to share with the Ansible community. Galaxy content is formatted in pre-packaged units of work such as [roles](../user_guide/playbooks_reuse_roles#playbooks-reuse-roles), and new in Galaxy 3.2, [collections](../user_guide/collections_using#collections). You can create roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day. Taking this a step further, you can create collections, which provide a comprehensive package of automation that may include multiple playbooks, roles, modules, and plugins.
* [Creating collections for Galaxy](#creating-collections-for-galaxy)
* [Creating roles for Galaxy](#creating-roles-for-galaxy)
+ [Force](#force)
+ [Container enabled](#container-enabled)
+ [Using a custom role skeleton](#using-a-custom-role-skeleton)
+ [Authenticate with Galaxy](#authenticate-with-galaxy)
+ [Import a role](#import-a-role)
+ [Delete a role](#delete-a-role)
+ [Travis integrations](#travis-integrations)
Creating collections for Galaxy
-------------------------------
Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins. You can publish and use collections through [Ansible Galaxy](https://galaxy.ansible.com).
See [Developing collections](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html#developing-collections) for details on how to create collections.
Creating roles for Galaxy
-------------------------
Use the `init` command to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires:
```
$ ansible-galaxy init role_name
```
The above will create the following directory structure in the current working directory:
```
role_name/
README.md
.travis.yml
defaults/
main.yml
files/
handlers/
main.yml
meta/
main.yml
templates/
tests/
inventory
test.yml
vars/
main.yml
```
If you want to create a repository for the role, the repository root should be `role_name`.
### Force
If a directory matching the name of the role already exists in the current working directory, the init command will result in an error. To ignore the error, use the `--force` option. Force will create the above subdirectories and files, replacing anything that matches.
### Container enabled
If you are creating a Container Enabled role, pass `--type container` to `ansible-galaxy init`. This will create the same directory structure as above, but populate it with default files appropriate for a Container Enabled role. For instance, the README.md has a slightly different structure, the *.travis.yml* file tests the role using [Ansible Container](https://github.com/ansible/ansible-container), and the meta directory includes a *container.yml* file.
### Using a custom role skeleton
A custom role skeleton directory can be supplied as follows:
```
$ ansible-galaxy init --role-skeleton=/path/to/skeleton role_name
```
When a skeleton is provided, init will:
* copy all files and directories from the skeleton to the new role
* render any .j2 files found outside of a templates folder as templates; the only useful variable at the moment is role\_name
* skip the .git folder and any .git\_keep files
Alternatively, the role\_skeleton path and the files to ignore can be configured via ansible.cfg:
```
[galaxy]
role_skeleton = /path/to/skeleton
role_skeleton_ignore = ^.git$,^.*/.git_keep$
```
### Authenticate with Galaxy
Using the `import`, `delete` and `setup` commands to manage your roles on the Galaxy website requires authentication, and the `login` command can be used to do just that. Before you can use the `login` command, you must create an account on the Galaxy website.
The `login` command requires using your GitHub credentials. You can use your username and password, or you can create a [personal access token](https://help.github.com/articles/creating-an-access-token-for-command-line-use/). If you choose to create a token, grant minimal access to the token, as it is used just to verify identity.
The following shows authenticating with the Galaxy website using a GitHub username and password:
```
$ ansible-galaxy login
We need your GitHub login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
GitHub Username: dsmith
Password for dsmith:
Successfully logged into Galaxy as dsmith
```
When you choose to use your username and password, your password is not sent to Galaxy. It is used to authenticate with GitHub and create a personal access token. It then sends the token to Galaxy, which in turn verifies your identity and returns a Galaxy access token. After authentication completes, the GitHub token is destroyed.
If you do not want to use your GitHub password, or if you have two-factor authentication enabled with GitHub, use the `--github-token` option to pass a personal access token that you create.
### Import a role
The `import` command requires that you first authenticate using the `login` command. Once authenticated you can import any GitHub repository that you own or have been granted access to.
Use the following to import a role:
```
$ ansible-galaxy import github_user github_repo
```
By default the command will wait for Galaxy to complete the import process, displaying the results as the import progresses:
```
Successfully submitted import request 41
Starting import 41: role_name=myrole repo=githubuser/ansible-role-repo ref=
Retrieving GitHub repo githubuser/ansible-role-repo
Accessing branch: devel
Parsing and validating meta/main.yml
Parsing galaxy_tags
Parsing platforms
Adding dependencies
Parsing and validating README.md
Adding repo tags as role versions
Import completed
Status SUCCESS : warnings=0 errors=0
```
#### Branch
Use the `--branch` option to import a specific branch. If not specified, the default branch for the repo will be used.
#### Role name
By default the name given to the role will be derived from the GitHub repository name. However, you can use the `--role-name` option to override this and set the name.
#### No wait
If the `--no-wait` option is present, the command will not wait for results. Results of the most recent import for any of your roles is available on the Galaxy web site by visiting *My Imports*.
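For example, combining these options to import the `stable` branch under a custom role name without waiting for results (the branch and role name are placeholders):
```
$ ansible-galaxy import --branch stable --role-name myrole --no-wait github_user github_repo
```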
### Delete a role
The `delete` command requires that you first authenticate using the `login` command. Once authenticated you can remove a role from the Galaxy web site. You are only allowed to remove roles where you have access to the repository in GitHub.
Use the following to delete a role:
```
$ ansible-galaxy delete github_user github_repo
```
This only removes the role from Galaxy. It does not remove or alter the actual GitHub repository.
### Travis integrations
You can create an integration or connection between a role in Galaxy and [Travis](https://travis-ci.org). Once the connection is established, a build in Travis will automatically trigger an import in Galaxy, updating the search index with the latest information about the role.
You create the integration using the `setup` command, but before an integration can be created, you must first authenticate using the `login` command; you will also need an account in Travis, and your Travis token. Once you’re ready, use the following command to create the integration:
```
$ ansible-galaxy setup travis github_user github_repo xxx-travis-token-xxx
```
The setup command requires your Travis token; however, the token is not stored in Galaxy. It is used along with the GitHub username and repo to create a hash as described in [the Travis documentation](https://docs.travis-ci.com/user/notifications/). The hash is stored in Galaxy and used to verify notifications received from Travis.
The setup command enables Galaxy to respond to notifications. To configure Travis to run a build on your repository and send a notification, follow the [Travis getting started guide](https://docs.travis-ci.com/user/getting-started/).
To instruct Travis to notify Galaxy when a build completes, add the following to your .travis.yml file:
```
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/
```
#### List Travis integrations
Use the `--list` option to display your Travis integrations:
```
$ ansible-galaxy setup --list
ID Source Repo
---------- ---------- ----------
2 travis github_user/github_repo
1 travis github_user/github_repo
```
#### Remove Travis integrations
Use the `--remove` option to disable and remove a Travis integration:
```
$ ansible-galaxy setup --remove ID
```
Provide the ID of the integration to be disabled. You can find the ID by using the `--list` option.
See also
[Using collections](../user_guide/collections_using#collections)
Shareable collections of modules, playbooks and roles
[Roles](../user_guide/playbooks_reuse_roles#playbooks-reuse-roles)
All about ansible roles
[Mailing List](https://groups.google.com/group/ansible-project)
Questions? Help? Ideas? Stop by the list on Google Groups
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Galaxy User Guide
=================
*Ansible Galaxy* refers to the [Galaxy](https://galaxy.ansible.com) website, a free site for finding, downloading, and sharing community developed roles.
Use Galaxy to jump-start your automation project with great content from the Ansible community. Galaxy provides pre-packaged units of work such as [roles](../user_guide/playbooks_reuse_roles#playbooks-reuse-roles), and new in Galaxy 3.2, [collections](../user_guide/collections_using#collections). You can find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day. The collection format provides a comprehensive package of automation that may include multiple playbooks, roles, modules, and plugins.
* [Finding collections on Galaxy](#finding-collections-on-galaxy)
* [Installing collections](#installing-collections)
+ [Installing a collection from Galaxy](#installing-a-collection-from-galaxy)
+ [Downloading a collection from Automation Hub](#downloading-a-collection-from-automation-hub)
+ [Installing an older version of a collection](#installing-an-older-version-of-a-collection)
+ [Install multiple collections with a requirements file](#install-multiple-collections-with-a-requirements-file)
+ [Downloading a collection for offline use](#downloading-a-collection-for-offline-use)
+ [Installing a collection from a git repository](#installing-a-collection-from-a-git-repository)
+ [Listing installed collections](#listing-installed-collections)
+ [Configuring the `ansible-galaxy` client](#configuring-the-ansible-galaxy-client)
* [Finding roles on Galaxy](#finding-roles-on-galaxy)
+ [Get more information about a role](#get-more-information-about-a-role)
* [Installing roles from Galaxy](#installing-roles-from-galaxy)
+ [Installing roles](#installing-roles)
+ [Installing a specific version of a role](#installing-a-specific-version-of-a-role)
+ [Installing multiple roles from a file](#installing-multiple-roles-from-a-file)
+ [Installing roles and collections from the same requirements.yml file](#installing-roles-and-collections-from-the-same-requirements-yml-file)
+ [Installing multiple roles from multiple files](#installing-multiple-roles-from-multiple-files)
+ [Dependencies](#dependencies)
+ [List installed roles](#list-installed-roles)
+ [Remove an installed role](#remove-an-installed-role)
Finding collections on Galaxy
-----------------------------
To find collections on Galaxy:
1. Click the Search icon in the left-hand navigation.
2. Set the filter to *collection*.
3. Set other filters and press enter.
Galaxy presents a list of collections that match your search criteria.
Installing collections
----------------------
### Installing a collection from Galaxy
By default, `ansible-galaxy collection install` uses <https://galaxy.ansible.com> as the Galaxy server (as listed in the `ansible.cfg` file under [GALAXY\_SERVER](../reference_appendices/config#galaxy-server)). You do not need any further configuration.
See [Configuring the ansible-galaxy client](../user_guide/collections_using#galaxy-server-config) if you are using any other Galaxy server, such as Red Hat Automation Hub.
To install a collection hosted in Galaxy:
```
ansible-galaxy collection install my_namespace.my_collection
```
To upgrade a collection to the latest available version from the Galaxy server you can use the `--upgrade` option:
```
ansible-galaxy collection install my_namespace.my_collection --upgrade
```
You can also directly use the tarball from your build:
```
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
```
You can build and install a collection from a local source directory. The `ansible-galaxy` utility builds the collection using the `MANIFEST.json` or `galaxy.yml` metadata in the directory.
```
ansible-galaxy collection install /path/to/collection -p ./collections
```
You can also install multiple collections in a namespace directory.
```
ns/
├── collection1/
│ ├── MANIFEST.json
│ └── plugins/
└── collection2/
├── galaxy.yml
└── plugins/
```
```
ansible-galaxy collection install /path/to/ns -p ./collections
```
Note
The install command automatically appends the path `ansible_collections` to the one specified with the `-p` option unless the parent directory is already in a folder called `ansible_collections`.
When using the `-p` option to specify the install path, use one of the values configured in [COLLECTIONS\_PATHS](../reference_appendices/config#collections-paths), as this is where Ansible itself will expect to find collections. If you don’t specify a path, `ansible-galaxy collection install` installs the collection to the first path defined in [COLLECTIONS\_PATHS](../reference_appendices/config#collections-paths), which by default is `~/.ansible/collections`.
You can also keep a collection adjacent to the current playbook, under a `collections/ansible_collections/` directory structure.
```
./
├── play.yml
├── collections/
│ └── ansible_collections/
│ └── my_namespace/
│ └── my_collection/<collection structure lives here>
```
See [Collection structure](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections_structure.html#collection-structure) for details on the collection directory structure.
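A play next to that directory can then reference the collection’s content by its fully qualified name; a minimal sketch (the namespace, collection, and module names are placeholders):
```
# play.yml
- hosts: all
  tasks:
    - name: Use a module from the adjacent collection
      my_namespace.my_collection.my_module:
```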
### Downloading a collection from Automation Hub
You can download collections from Automation Hub at the command line. Automation Hub content is available to subscribers only, so you must download an API token and configure your local environment to provide it before you can download collections. To download a collection from Automation Hub with the `ansible-galaxy` command:
1. Get your Automation Hub API token. Go to <https://cloud.redhat.com/ansible/automation-hub/token/> and click Get API token from the version dropdown to copy your API token.
2. Configure Red Hat Automation Hub server in the `server_list` option under the `[galaxy]` section in your `ansible.cfg` file.
```
[galaxy]
server_list = automation_hub
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
```
3. Download the collection hosted in Automation Hub.
```
ansible-galaxy collection install my_namespace.my_collection
```
See also
[Getting started with Automation Hub](https://www.ansible.com/blog/getting-started-with-ansible-hub)
An introduction to Automation Hub
### Installing an older version of a collection
You can only have one version of a collection installed at a time. By default `ansible-galaxy` installs the latest available version. If you want to install a specific version, you can add a version range identifier. For example, to install the 1.0.0-beta.1 version of the collection:
```
ansible-galaxy collection install my_namespace.my_collection:==1.0.0-beta.1
```
You can specify multiple range identifiers separated by `,`. Use single quotes so the shell passes the entire argument, including `>`, `!`, and other operators, to the command unchanged. For example, to install the most recent version that is greater than or equal to 1.0.0 and less than 2.0.0:
```
ansible-galaxy collection install 'my_namespace.my_collection:>=1.0.0,<2.0.0'
```
Ansible will always install the most recent version that meets the range identifiers you specify. You can use the following range identifiers:
* `*`: The most recent version. This is the default.
* `!=`: Not equal to the version specified.
* `==`: Exactly the version specified.
* `>=`: Greater than or equal to the version specified.
* `>`: Greater than the version specified.
* `<=`: Less than or equal to the version specified.
* `<`: Less than the version specified.
Note
By default `ansible-galaxy` ignores pre-release versions. To install a pre-release version, you must use the `==` range identifier to require it explicitly.
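For example (the collection name and versions are illustrative):
```
# Any 1.x release except 1.0.2
ansible-galaxy collection install 'my_namespace.my_collection:>=1.0.0,<2.0.0,!=1.0.2'
# A pre-release must be requested explicitly with ==
ansible-galaxy collection install my_namespace.my_collection:==2.0.0-beta.1
```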
### Install multiple collections with a requirements file
You can set up a `requirements.yml` file to install multiple collections in one command. This file is a YAML file in the format:
```
---
collections:
# With just the collection name
- my_namespace.my_collection
# With the collection name, version, and source options
- name: my_namespace.my_other_collection
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
```
You can specify four keys for each collection entry:
* `name`
* `version`
* `source`
* `type`
The `version` key uses the same range identifier format documented in [Installing an older version of a collection](../user_guide/collections_using#collections-older-version).
The `type` key can be set to `galaxy`, `url`, `file`, and `git`. If `type` is omitted, the `name` key is used to implicitly determine the source of the collection.
When you install a collection with `type: git`, the `version` key can refer to a branch or to a [git commit-ish](https://git-scm.com/docs/gitglossary#def_commit-ish) object (commit or tag). For example:
```
collections:
- name: https://github.com/organization/repo_name.git
type: git
version: devel
```
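For the `file` and `url` types, the `name` key points at a collection tarball by local path or URL; a short sketch (the paths and URLs are placeholders):
```
collections:
  - name: /tmp/my_namespace-my_collection-1.0.0.tar.gz
    type: file
  - name: https://example.com/my_namespace-my_collection-1.0.0.tar.gz
    type: url
```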
You can also add roles to a `requirements.yml` file, under the `roles` key. The values follow the same format as a requirements file used in older Ansible releases.
```
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.6
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.3
source: https://galaxy.ansible.com
```
To install both roles and collections at the same time with one command, run the following:
```
$ ansible-galaxy install -r requirements.yml
```
Running `ansible-galaxy collection install -r` or `ansible-galaxy role install -r` will install only collections or only roles, respectively.
Note
Installing both roles and collections from the same requirements file will not work when specifying a custom collection or role install path. In this scenario the collections will be skipped and the command will process each like `ansible-galaxy role install` would.
### Downloading a collection for offline use
To download the collection tarball from Galaxy for offline use:
1. Navigate to the collection page.
2. Click on Download tarball.
You may also need to manually download any dependent collections.
### Installing a collection from a git repository
You can install a collection from a git repository instead of from Galaxy or Automation Hub. As a developer, installing from a git repository lets you review your collection before you create the tarball and publish the collection. As a user, installing from a git repository lets you use collections or versions that are not in Galaxy or Automation Hub yet.
The repository must contain a `galaxy.yml` or `MANIFEST.json` file. This file provides metadata such as the version number and namespace of the collection.
#### Installing a collection from a git repository at the command line
To install a collection from a git repository at the command line, use the URI of the repository instead of a collection name or path to a `tar.gz` file. Prefix the URI with `git+` (or with `git@` to use a private repository with ssh authentication). You can specify a branch, commit, or tag using the comma-separated [git commit-ish](https://git-scm.com/docs/gitglossary#def_commit-ish) syntax.
For example:
```
# Install a collection in a repository using the latest commit on the branch 'devel'
ansible-galaxy collection install git+https://github.com/organization/repo_name.git,devel
# Install a collection from a private github repository
ansible-galaxy collection install [email protected]:organization/repo_name.git
# Install a collection from a local git repository
ansible-galaxy collection install git+file:///home/user/path/to/repo_name.git
```
Warning
Embedding credentials into a git URI is not secure. Use safe authentication options to prevent your credentials from being exposed in logs or elsewhere.
* Use [SSH](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh) authentication
* Use [netrc](https://linux.die.net/man/5/netrc) authentication
* Use [http.extraHeader](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httpextraHeader) in your git configuration
* Use [url.<base>.pushInsteadOf](https://git-scm.com/docs/git-config#Documentation/git-config.txt-urlltbasegtpushInsteadOf) in your git configuration
#### Specifying the collection location within the git repository
When you install a collection from a git repository, Ansible uses the collection `galaxy.yml` or `MANIFEST.json` metadata file to build the collection. By default, Ansible searches two paths for collection `galaxy.yml` or `MANIFEST.json` metadata files:
* The top level of the repository.
* Each directory in the repository path (one level deep).
If a `galaxy.yml` or `MANIFEST.json` file exists in the top level of the repository, Ansible uses the collection metadata in that file to install an individual collection.
```
├── galaxy.yml
├── plugins/
│ ├── lookup/
│ ├── modules/
│ └── module_utils/
└─── README.md
```
If a `galaxy.yml` or `MANIFEST.json` file exists in one or more directories in the repository path (one level deep), Ansible installs each directory with a metadata file as a collection. For example, Ansible installs both collection1 and collection2 from this repository structure by default:
```
├── collection1
│ ├── docs/
│ ├── galaxy.yml
│ └── plugins/
│ ├── inventory/
│ └── modules/
└── collection2
├── docs/
├── galaxy.yml
├── plugins/
| ├── filter/
| └── modules/
└── roles/
```
If you have a different repository structure or only want to install a subset of collections, you can add a fragment to the end of your URI (before the optional comma-separated version) to indicate the location of the metadata file or files. The path should be a directory, not the metadata file itself. For example, to install only collection2 from the example repository with two collections:
```
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/collection2/
```
In some repositories, the main directory corresponds to the namespace:
```
namespace/
├── collectionA/
| ├── docs/
| ├── galaxy.yml
| ├── plugins/
| │ ├── README.md
| │ └── modules/
| ├── README.md
| └── roles/
└── collectionB/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── connection/
│ └── modules/
├── README.md
└── roles/
```
You can install all collections in this repository, or install one collection from a specific commit:
```
# Install all collections in the namespace
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/
# Install an individual collection using a specific commit
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/collectionA/,7b60ddc245bc416b72d8ea6ed7b799885110f5e5
```
### Listing installed collections
To list installed collections, run `ansible-galaxy collection list`. See [Listing collections](../user_guide/collections_using#collections-listing) for more details.
### Configuring the `ansible-galaxy` client
By default, `ansible-galaxy` uses <https://galaxy.ansible.com> as the Galaxy server (as listed in the `ansible.cfg` file under [GALAXY\_SERVER](../reference_appendices/config#galaxy-server)).
You can use either option below to configure `ansible-galaxy collection` to use other servers (such as Red Hat Automation Hub or a custom Galaxy server):
* Set the server list in the [GALAXY\_SERVER\_LIST](../reference_appendices/config#galaxy-server-list) configuration option in [The configuration file](../reference_appendices/config#ansible-configuration-settings-locations).
* Use the `--server` command line argument to limit to an individual server.
To configure a Galaxy server list in `ansible.cfg`:
1. Add the `server_list` option under the `[galaxy]` section to one or more server names.
2. Create a new section for each server name.
3. Set the `url` option for each server name.
4. Optionally, set the API token for each server name. Go to <https://galaxy.ansible.com/me/preferences> and click Show API key.
Note
The `url` option for each server name must end with a forward slash `/`. If you do not set the API token in your Galaxy server list, use the `--api-key` argument to pass in the token to the `ansible-galaxy collection publish` command.
For Automation Hub, you additionally need to:
1. Set the `auth_url` option for each server name.
2. Set the API token for each server name. Go to <https://cloud.redhat.com/ansible/automation-hub/token/> and click Get API token from the version dropdown to copy your API token.
The following example shows how to configure multiple servers:
```
[galaxy]
server_list = automation_hub, my_org_hub, release_galaxy, test_galaxy
[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=my_ah_token
[galaxy_server.my_org_hub]
url=https://automation.my_org/
username=my_user
password=my_pass
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=my_token
[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/
token=my_test_token
```
Note
You can use the `--server` command line argument to select an explicit Galaxy server in the `server_list`, and the value of this argument should match the name of the server. To use a server not in the server list, set the value to the URL to access that server (all servers in the server list will be ignored). Also, you cannot use the `--api-key` argument for any of the predefined servers. You can only use the `--api-key` argument if you did not define a server list or if you specify a URL in the `--server` argument.
**Galaxy server list configuration options**
The [GALAXY\_SERVER\_LIST](../reference_appendices/config#galaxy-server-list) option is a list of server identifiers in a prioritized order. When searching for a collection, the install process will search in that order, for example, `automation_hub` first, then `my_org_hub`, `release_galaxy`, and finally `test_galaxy` until the collection is found. The actual Galaxy instance is then defined under the section `[galaxy_server.{{ id }}]` where `{{ id }}` is the server identifier defined in the list. This section can then define the following keys:
* `url`: The URL of the Galaxy instance to connect to. Required.
* `token`: An API token key to use for authentication against the Galaxy instance. Mutually exclusive with `username`.
* `username`: The username to use for basic authentication against the Galaxy instance. Mutually exclusive with `token`.
* `password`: The password to use, in conjunction with `username`, for basic authentication.
* `auth_url`: The URL of a Keycloak server ‘token\_endpoint’ if using SSO authentication (for example, Automation Hub). Mutually exclusive with `username`. Requires `token`.
As well as defining these server options in the `ansible.cfg` file, you can also define them as environment variables. The environment variable is in the form `ANSIBLE_GALAXY_SERVER_{{ id }}_{{ key }}` where `{{ id }}` is the upper case form of the server identifier and `{{ key }}` is the key to define. For example, you can define `token` for `release_galaxy` by setting `ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token`.
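For example, the following defines the `release_galaxy` server entirely through the environment:
```
$ export ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_URL=https://galaxy.ansible.com/
$ export ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token
```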
For operations that use only one Galaxy server (for example, the `publish`, `info`, or `install` commands), the `ansible-galaxy collection` command uses the first entry in the `server_list`, unless you pass in an explicit server with the `--server` argument.
Note
Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent collection. The install process will not search for a collection requirement in a different Galaxy instance.
Finding roles on Galaxy
-----------------------
Search the Galaxy database by tags, platforms, author and multiple keywords. For example:
```
$ ansible-galaxy search elasticsearch --author geerlingguy
```
The search command will return a list of the first 1000 results matching your search:
```
Found 2 roles matching your search:
Name Description
---- -----------
geerlingguy.elasticsearch Elasticsearch for Linux.
geerlingguy.elasticsearch-curator Elasticsearch curator for Linux.
```
### Get more information about a role
Use the `info` command to view more detail about a specific role:
```
$ ansible-galaxy info username.role_name
```
This returns everything found in Galaxy for the role:
```
Role: username.role_name
description: Installs and configures a thing, a distributed, highly available NoSQL thing.
active: True
commit: c01947b7bc89ebc0b8a2e298b87ab416aed9dd57
commit_message: Adding travis
commit_url: https://github.com/username/repo_name/commit/c01947b7bc89ebc0b8a2e298b87ab
company: My Company, Inc.
created: 2015-12-08T14:17:52.773Z
download_count: 1
forks_count: 0
github_branch:
github_repo: repo_name
github_user: username
id: 6381
is_valid: True
issue_tracker_url:
license: Apache
min_ansible_version: 1.4
modified: 2015-12-08T18:43:49.085Z
namespace: username
open_issues_count: 0
path: /Users/username/projects/roles
scm: None
src: username.repo_name
stargazers_count: 0
travis_status_url: https://travis-ci.org/username/repo_name.svg?branch=main
version:
watchers_count: 1
```
Installing roles from Galaxy
----------------------------
The `ansible-galaxy` command comes bundled with Ansible, and you can use it to install roles from Galaxy or directly from a git based SCM. You can also use it to create a new role, remove roles, or perform tasks on the Galaxy website.
The command line tool by default communicates with the Galaxy website API using the server address *https://galaxy.ansible.com*. If you run your own internal Galaxy server and want to use it instead of the default one, pass the `--server` option followed by the address of that Galaxy server. You can make this setting permanent by setting the Galaxy server value in your `ansible.cfg` file. For information on setting the value in *ansible.cfg* see [GALAXY\_SERVER](../reference_appendices/config#galaxy-server).
### Installing roles
Use the `ansible-galaxy` command to download roles from the [Galaxy website](https://galaxy.ansible.com):
```
$ ansible-galaxy install namespace.role_name
```
#### Setting where to install roles
By default, Ansible downloads roles to the first writable directory in the default list of paths `~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles`. This installs roles in the home directory of the user running `ansible-galaxy`.
You can override this with one of the following options:
* Set the environment variable [`ANSIBLE_ROLES_PATH`](../reference_appendices/config#envvar-ANSIBLE_ROLES_PATH) in your session.
* Use the `--roles-path` option for the `ansible-galaxy` command.
* Define `roles_path` in an `ansible.cfg` file.
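For example, the first and third options can be set as follows (the `./roles` path is illustrative):
```
$ export ANSIBLE_ROLES_PATH=./roles
```
```
[defaults]
roles_path = ./roles
```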
The following provides an example of using `--roles-path` to install the role into the current working directory:
```
$ ansible-galaxy install --roles-path . geerlingguy.apache
```
See also
[Configuring Ansible](../installation_guide/intro_configuration#intro-configuration)
All about configuration files
### Installing a specific version of a role
When the Galaxy server imports a role, it imports any git tags matching the [Semantic Version](https://semver.org/) format as versions. In turn, you can download a specific version of a role by specifying one of the imported tags.
To see the available versions for a role:
1. Locate the role on the Galaxy search page.
2. Click on the name to view more details, including the available versions.
You can also navigate directly to the role using the URL /<namespace>/<role name>. For example, to view the role geerlingguy.apache, go to <https://galaxy.ansible.com/geerlingguy/apache>.
To install a specific version of a role from Galaxy, append a comma and the value of a GitHub release tag. For example:
```
$ ansible-galaxy install geerlingguy.apache,1.0.0
```
It is also possible to point directly to the git repository and specify a branch name or commit hash as the version. For example, the following will install a specific commit:
```
$ ansible-galaxy install git+https://github.com/geerlingguy/ansible-role-apache.git,0b7cd353c0250e87a26e0499e59e7fd265cc2f25
```
### Installing multiple roles from a file
You can install multiple roles by including the roles in a `requirements.yml` file. The format of the file is YAML, and the file extension must be either *.yml* or *.yaml*.
Use the following command to install roles included in `requirements.yml`:
```
$ ansible-galaxy install -r requirements.yml
```
Again, the extension is important. If the *.yml* extension is left off, the `ansible-galaxy` CLI assumes the file is in an older, now deprecated, “basic” format.
Each role in the file will have one or more of the following attributes:
src
The source of the role. Use the format *namespace.role\_name*, if downloading from Galaxy; otherwise, provide a URL pointing to a repository within a git based SCM. See the examples below. This is a required attribute.
scm
Specify the SCM. As of this writing only *git* or *hg* are allowed. See the examples below. Defaults to *git*.
version
The version of the role to download. Provide a release tag value, commit hash, or branch name. Defaults to the branch set as a default in the repository, otherwise to *master*.
name
Download the role to a specific name. Defaults to the Galaxy name when downloading from Galaxy, otherwise it defaults to the name of the repository.
Use the following example as a guide for specifying roles in *requirements.yml*:
```
# from galaxy
- name: yatesr.timezone
# from locally cloned git repository (git+file:// requires full paths)
- src: git+file:///home/bennojoy/nginx
# from GitHub
- src: https://github.com/bennojoy/nginx
# from GitHub, overriding the name and specifying a specific tag
- name: nginx_role
src: https://github.com/bennojoy/nginx
version: main
# from GitHub, specifying a specific commit hash
- src: https://github.com/bennojoy/nginx
version: "ee8aa41"
# from a webserver, where the role is packaged in a tar.gz
- name: http-role-gz
src: https://some.webserver.example.com/files/main.tar.gz
# from a webserver, where the role is packaged in a tar.bz2
- name: http-role-bz2
src: https://some.webserver.example.com/files/main.tar.bz2
# from a webserver, where the role is packaged in a tar.xz (Python 3.x only)
- name: http-role-xz
src: https://some.webserver.example.com/files/main.tar.xz
# from Bitbucket
- src: git+https://bitbucket.org/willthames/git-ansible-galaxy
version: v1.4
# from Bitbucket, alternative syntax and caveats
- src: https://bitbucket.org/willthames/hg-ansible-galaxy
scm: hg
# from GitLab or other git-based scm, using git+ssh
- src: [email protected]:mygroup/ansible-core.git
scm: git
version: "0.1" # quoted, so YAML doesn't parse this as a floating-point value
```
Warning
Embedding credentials into a SCM URL is not secure. Make sure to use safe auth options for security reasons. For example, use [SSH](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh), [netrc](https://linux.die.net/man/5/netrc) or [http.extraHeader](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httpextraHeader)/[url.<base>.pushInsteadOf](https://git-scm.com/docs/git-config#Documentation/git-config.txt-urlltbasegtpushInsteadOf) in Git config to prevent your creds from being exposed in logs.
### Installing roles and collections from the same requirements.yml file
You can install roles and collections from the same requirements file:
```
---
roles:
# Install a role from Ansible Galaxy.
- name: geerlingguy.java
version: 1.9.6
collections:
# Install a collection from Ansible Galaxy.
- name: geerlingguy.php_roles
version: 0.9.3
source: https://galaxy.ansible.com
```
### Installing multiple roles from multiple files
For large projects, the `include` directive in a `requirements.yml` file provides the ability to split a large file into multiple smaller files.
For example, a project may have a `requirements.yml` file, and a `webserver.yml` file.
Below are the contents of the `webserver.yml` file:
```
# from github
- src: https://github.com/bennojoy/nginx
# from Bitbucket
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
version: v1.4
```
The following shows the contents of the `requirements.yml` file that now includes the `webserver.yml` file:
```
# from galaxy
- name: yatesr.timezone
- include: <path_to_requirements>/webserver.yml
```
To install all the roles from both files, pass the root file, in this case `requirements.yml`, on the command line, as follows:
```
$ ansible-galaxy install -r requirements.yml
```
### Dependencies
Roles can also be dependent on other roles, and when you install a role that has dependencies, those dependencies will automatically be installed to the `roles_path`.
There are two ways to define the dependencies of a role:
* using `meta/requirements.yml`
* using `meta/main.yml`
#### Using `meta/requirements.yml`
New in version 2.10.
You can create the file `meta/requirements.yml` and define dependencies in the same format used for `requirements.yml` described in the [Installing multiple roles from a file](#installing-multiple-roles-from-a-file) section.
From there, you can import or include the specified roles in your tasks.
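A minimal sketch of such a file (the role name and version are illustrative):
```
---
- name: geerlingguy.java
  version: 1.9.6
```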
#### Using `meta/main.yml`
Alternatively, you can specify role dependencies in the `meta/main.yml` file by providing a list of roles under the `dependencies` section. If the source of a role is Galaxy, you can simply specify the role in the format `namespace.role_name`. You can also use the more complex format in `requirements.yml`, allowing you to provide `src`, `scm`, `version`, and `name`.
Dependencies installed that way, depending on other factors described below, will also be executed **before** this role is executed during play execution. To better understand how dependencies are handled during play execution, see [Roles](../user_guide/playbooks_reuse_roles#playbooks-reuse-roles).
The following shows an example `meta/main.yml` file with dependent roles:
```
---
dependencies:
- geerlingguy.java
galaxy_info:
author: geerlingguy
description: Elasticsearch for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: 2.4
platforms:
- name: EL
versions:
- all
- name: Debian
versions:
- all
- name: Ubuntu
versions:
- all
galaxy_tags:
- web
- system
- monitoring
- logging
- lucene
- elk
- elasticsearch
```
Tags are inherited *down* the dependency chain. In order for tags to be applied to a role and all its dependencies, the tag should be applied to the role, not to all the tasks within a role.
Roles listed as dependencies are subject to conditionals and tag filtering, and may not execute fully depending on what tags and conditionals are applied.
If the source of a role is Galaxy, specify the role in the format *namespace.role\_name*:
```
dependencies:
- geerlingguy.apache
- geerlingguy.ansible
```
Alternately, you can specify the role dependencies in the complex form used in `requirements.yml` as follows:
```
dependencies:
- name: geerlingguy.ansible
- name: composer
src: git+https://github.com/geerlingguy/ansible-role-composer.git
version: 775396299f2da1f519f0d8885022ca2d6ee80ee8
```
Note
Galaxy expects all role dependencies to exist in Galaxy, and therefore dependencies to be specified in the `namespace.role_name` format. If you import a role with a dependency where the `src` value is a URL, the import process will fail.
### List installed roles
Use `list` to show the name and version of each role installed in the *roles\_path*.
```
$ ansible-galaxy list
- ansible-network.network-engine, v2.7.2
- ansible-network.config_manager, v2.6.2
- ansible-network.cisco_nxos, v2.7.1
- ansible-network.vyos, v2.7.3
- ansible-network.cisco_ios, v2.7.0
```
### Remove an installed role
Use `remove` to delete a role from *roles\_path*:
```
$ ansible-galaxy remove namespace.role_name
```
See also
[Using collections](../user_guide/collections_using#collections)
Shareable collections of modules, playbooks and roles
[Roles](../user_guide/playbooks_reuse_roles#playbooks-reuse-roles)
Reusable tasks, handlers, and other files in a known directory structure
| programming_docs |
ansible Implicit ‘localhost’ Implicit ‘localhost’
====================
When you try to reference a `localhost` and you don’t have it defined in inventory, Ansible will create an implicit one for you:
```
- hosts: all
tasks:
- name: check that i have log file for all hosts on my local machine
stat: path=/var/log/hosts/{{inventory_hostname}}.log
delegate_to: localhost
```
In a case like this (or `local_action`), when Ansible needs to contact a ‘localhost’ but you did not supply one, we create one for you. This host is defined with specific connection variables equivalent to this in an inventory:
```
...
hosts:
localhost:
vars:
ansible_connection: local
ansible_python_interpreter: "{{ansible_playbook_python}}"
```
This ensures that the proper connection and Python are used to execute your tasks locally. You can override the built-in implicit version by creating a `localhost` host entry in your inventory. At that point, all implicit behaviors are ignored; the `localhost` in inventory is treated just like any other host. Group and host vars will apply, including connection vars, which includes the `ansible_python_interpreter` setting. This will also affect `delegate_to: localhost` and `local_action`, the latter being an alias to the former.
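For example, a minimal explicit override in a YAML inventory might look like this (the interpreter path is illustrative):
```
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
```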
Note
* This host is not targetable via any group, however it will use vars from `host_vars` and from the ‘all’ group.
* Implicit localhost does not appear in the `hostvars` magic variable unless demanded, such as by `"{{ hostvars['localhost'] }}"`.
* The `inventory_file` and `inventory_dir` magic variables are not available for the implicit localhost as they are dependent on **each inventory host**.
* This implicit host also gets triggered by using `127.0.0.1` or `::1` as they are the IPv4 and IPv6 representations of ‘localhost’.
* Even though there are many ways to create it, there will only ever be ONE implicit localhost, using the name first used to create it.
* Having `connection: local` does NOT trigger an implicit localhost, you are just changing the connection for the `inventory_hostname`.
ansible VMware Guide VMware Guide
============
Welcome to the Ansible for VMware Guide!
The purpose of this guide is to teach you everything you need to know about using Ansible with VMware.
To get started, please select one of the following topics.
* [Introduction to Ansible for VMware](vmware_scenarios/vmware_intro)
* [Ansible for VMware Concepts](vmware_scenarios/vmware_concepts)
* [VMware Prerequisites](vmware_scenarios/vmware_requirements)
* [Using VMware dynamic inventory plugin](vmware_scenarios/vmware_inventory)
* [Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_scenarios/vmware_inventory_vm_attributes)
* [Using VMware dynamic inventory plugin - Hostnames](vmware_scenarios/vmware_inventory_hostnames)
* [Using VMware dynamic inventory plugin - Filters](vmware_scenarios/vmware_inventory_filters)
* [Ansible for VMware Scenarios](vmware_scenarios/vmware_scenarios)
* [Troubleshooting Ansible for VMware](vmware_scenarios/vmware_troubleshooting)
* [Other useful VMware resources](vmware_scenarios/vmware_external_doc_links)
* [Ansible VMware FAQ](vmware_scenarios/faq)
ansible Microsoft Azure Guide Microsoft Azure Guide
=====================
Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create and orchestrate infrastructure on the Microsoft Azure Cloud.
Requirements
------------
Using the Azure Resource Manager modules requires having specific Azure SDK modules installed on the host running Ansible.
```
$ pip install 'ansible[azure]'
```
If you are running Ansible from source, you can install the dependencies from the root directory of the Ansible repo.
```
$ pip install .[azure]
```
You can also directly run Ansible in [Azure Cloud Shell](https://shell.azure.com), where Ansible is pre-installed.
Authenticating with Azure
-------------------------
Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
* Active Directory Username/Password
* Service Principal Credentials
Follow the directions for the strategy you wish to use, then proceed to [Providing Credentials to Azure Modules](#providing-credentials-to-azure-modules) for instructions on how to actually use the modules and authenticate with the Azure API.
### Using Service Principal
There is now a detailed official tutorial describing [how to create a service principal](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal).
After stepping through the tutorial you will have:
* Your Client ID, which is found in the “client id” box in the “Configure” page of your application in the Azure portal
* Your Secret key, generated when you created the application. You cannot show the key after creation. If you lose the key, you must create a new one in the “Configure” page of your application.
* And finally, a tenant ID. It’s a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your application. You will find it in the URL from within the Azure portal, or in the “view endpoints” of any given URL.
### Using Active Directory Username/Password
To create an Active Directory username/password:
* Connect to the Azure Classic Portal with your admin account
* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
* Go to Settings - Administrators
* Click on Add and enter the email of the new user.
* Check the checkbox of the subscription you want to test with this user.
* Log in to the Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the temporary password for OAuth login.
### Providing Credentials to Azure Modules
The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible AWX or Jenkins, you will most likely want to use environment variables. For local development you may wish to store your credentials in a file within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The order of precedence is parameters, then environment variables, and finally a file found in your home directory.
#### Using Environment Variables
To pass service principal credentials via the environment, define the following variables:
* AZURE\_CLIENT\_ID
* AZURE\_SECRET
* AZURE\_SUBSCRIPTION\_ID
* AZURE\_TENANT
To pass Active Directory username/password via the environment, define the following variables:
* AZURE\_AD\_USER
* AZURE\_PASSWORD
* AZURE\_SUBSCRIPTION\_ID
To pass Active Directory username/password in ADFS via the environment, define the following variables:
* AZURE\_AD\_USER
* AZURE\_PASSWORD
* AZURE\_CLIENT\_ID
* AZURE\_TENANT
* AZURE\_ADFS\_AUTHORITY\_URL
“AZURE\_ADFS\_AUTHORITY\_URL” is optional. It’s necessary only when you have your own ADFS authority, like <https://yourdomain.com/adfs>.
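Whichever variable set you choose, these are typically exported in the shell before running `ansible-playbook`. As a sketch, they can also be supplied through Ansible’s play-level `environment` keyword (placeholder values shown, using the service principal set):
```
- name: Use service principal credentials from the environment
  hosts: localhost
  connection: local
  environment:
    AZURE_CLIENT_ID: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    AZURE_SECRET: xxxxxxxxxxxxxxxxx
    AZURE_SUBSCRIPTION_ID: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    AZURE_TENANT: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  tasks:
    - name: Ensure a resource group exists
      azure.azcollection.azure_rm_resourcegroup:
        name: Testing
        location: westus
```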
#### Storing in a File
When working in a development environment, it may be desirable to store credentials in a file. The modules will look for credentials in `$HOME/.azure/credentials`. This file is an ini style file. It will look as follows:
```
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```
Note
If your secret values contain non-ASCII characters, you must [URL Encode](https://www.w3schools.com/tags/ref_urlencode.asp) them to avoid login errors.
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each section is considered a profile. The modules look for the [default] profile automatically. Define AZURE\_PROFILE in the environment or pass a profile parameter to select a particular profile.
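As a sketch, a credentials file with an additional profile might look like this (the `dev` profile name and all values are illustrative); setting `AZURE_PROFILE=dev` in the environment would then select the second set:
```
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

[dev]
subscription_id=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
client_id=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
secret=yyyyyyyyyyyyyyyyy
tenant=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
```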
#### Passing as Parameters
If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
* client\_id
* secret
* subscription\_id
* tenant
Or, pass the following parameters for Active Directory username/password:
* ad\_user
* password
* subscription\_id
Or, pass the following parameters for ADFS username/password:
* ad\_user
* password
* client\_id
* tenant
* adfs\_authority\_url
“adfs\_authority\_url” is optional. It’s necessary only when you have your own ADFS authority, like <https://yourdomain.com/adfs>.
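For example, a task that authenticates with service principal parameters might look like the following sketch (the vaulted variable names are illustrative):
```
- name: Ensure a resource group exists
  azure.azcollection.azure_rm_resourcegroup:
    name: Testing
    location: westus
    client_id: "{{ vault_azure_client_id }}"
    secret: "{{ vault_azure_secret }}"
    subscription_id: "{{ vault_azure_subscription_id }}"
    tenant: "{{ vault_azure_tenant }}"
```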
Other Cloud Environments
------------------------
To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud, Azure US Government Cloud, Azure Stack), pass the “cloud\_environment” argument to modules, configure it in a credential profile, or set the “AZURE\_CLOUD\_ENVIRONMENT” environment variable. The value is either a cloud name as defined by the Azure Python SDK (for example, “AzureChinaCloud”, “AzureUSGovernment”; defaults to “AzureCloud”) or an Azure metadata discovery URL (for Azure Stack).
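As a sketch, targeting Azure China Cloud from a single task might look like this (the resource group name and location are illustrative):
```
- name: Ensure a resource group exists in Azure China Cloud
  azure.azcollection.azure_rm_resourcegroup:
    name: Testing
    location: chinaeast
    cloud_environment: AzureChinaCloud
```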
Creating Virtual Machines
-------------------------
There are two ways to create a virtual machine, both involving the azure\_rm\_virtualmachine module. We can either create a storage account, network interface, security group and public IP address and pass the names of these objects to the module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
### Creating Individual Components
An Azure module is available to help you create a storage account, virtual network, subnet, network interface, security group and public IP. Here is a full example of creating each of these and passing the names to the `azure.azcollection.azure_rm_virtualmachine` module at the end:
```
- name: Create storage account
azure.azcollection.azure_rm_storageaccount:
resource_group: Testing
name: testaccount001
account_type: Standard_LRS
- name: Create virtual network
azure.azcollection.azure_rm_virtualnetwork:
resource_group: Testing
name: testvn001
address_prefixes: "10.10.0.0/16"
- name: Add subnet
azure.azcollection.azure_rm_subnet:
resource_group: Testing
name: subnet001
address_prefix: "10.10.0.0/24"
virtual_network: testvn001
- name: Create public ip
azure.azcollection.azure_rm_publicipaddress:
resource_group: Testing
allocation_method: Static
name: publicip001
- name: Create security group that allows SSH
azure.azcollection.azure_rm_securitygroup:
resource_group: Testing
name: secgroup001
rules:
- name: SSH
protocol: Tcp
destination_port_range: 22
access: Allow
priority: 101
direction: Inbound
- name: Create NIC
azure.azcollection.azure_rm_networkinterface:
resource_group: Testing
name: testnic001
virtual_network: testvn001
subnet: subnet001
public_ip_name: publicip001
security_group: secgroup001
- name: Create virtual machine
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm001
vm_size: Standard_D1
storage_account: testaccount001
storage_container: testvm001
storage_blob: testvm001.vhd
admin_username: admin
admin_password: Password!
network_interfaces: testnic001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
```
Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example. See each individual module for further details and examples.
### Creating a Virtual Machine with Default Options
If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a virtual network already with an existing subnet, you can run the following to create a VM:
```
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
vm_size: Standard_D1
admin_username: chouseknecht
ssh_password_enabled: false
ssh_public_keys: "{{ ssh_keys }}"
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
```
### Creating a Virtual Machine in Availability Zones
If you want to create a VM in an availability zone, consider the following (a combined sketch follows this list):
* Both OS disk and data disk must be a ‘managed disk’, not an ‘unmanaged disk’.
* When creating a VM with the `azure.azcollection.azure_rm_virtualmachine` module, you need to explicitly set the `managed_disk_type` parameter to change the OS disk to a managed disk. Otherwise, the OS disk becomes an unmanaged disk.
* When you create a data disk with the `azure.azcollection.azure_rm_manageddisk` module, you need to explicitly specify the `storage_account_type` parameter to make it a managed disk. Otherwise, the data disk will be an unmanaged disk.
* A managed disk does not require a storage account or a storage container, unlike an unmanaged disk. In particular, note that once a VM is created on an unmanaged disk, an unnecessary storage container named “vhds” is automatically created.
* When you create an IP address with the `azure.azcollection.azure_rm_publicipaddress` module, you must set the `sku` parameter to `standard`. Otherwise, the IP address cannot be used in an availability zone.
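Putting these considerations together, a minimal sketch of a zoned VM with a managed OS disk might look like the following (the `zones` parameter is assumed from the VM module; the zone number and names are illustrative):
```
- name: Create a VM with a managed OS disk in availability zone 2
  azure.azcollection.azure_rm_virtualmachine:
    resource_group: Testing
    name: testvm001-zoned
    vm_size: Standard_D1
    managed_disk_type: Standard_LRS
    zones: [2]
    admin_username: azureuser
    ssh_password_enabled: false
    ssh_public_keys: "{{ ssh_keys }}"
    image:
      offer: CentOS
      publisher: OpenLogic
      sku: '7.1'
      version: latest
```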
Dynamic Inventory Script
------------------------
If you are not familiar with Ansible’s dynamic inventory scripts, check out [Intro to Dynamic Inventory](../user_guide/intro_dynamic_inventory#intro-dynamic-inventory).
The Azure Resource Manager inventory script is called [azure\_rm.py](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py). It authenticates with the Azure API exactly the same as the Azure modules, which means you will either define the same environment variables described above in [Using Environment Variables](#using-environment-variables), create a `$HOME/.azure/credentials` file (also described above in [Storing in a File](#storing-in-a-file)), or pass command line parameters. To see available command line options execute the following:
```
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
$ ./azure_rm.py --help
```
As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command, or passed directly to ansible-playbook using the -i option. No matter how it is executed the script produces JSON representing all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of Azure resource groups, or even down to a specific host.
For a given host, the inventory script provides the following host variables:
```
{
"ansible_host": "XXX.XXX.XXX.XXX",
"computer_name": "computer_name2",
"fqdn": null,
"id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
"image": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "latest"
},
"location": "westus",
"mac_address": "00-00-5E-00-53-FE",
"name": "object-name",
"network_interface": "interface-name",
"network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
"network_security_group": null,
"network_security_group_id": null,
"os_disk": {
"name": "object-name",
"operating_system_type": "Linux"
},
"plan": null,
"powerstate": "running",
"private_ip": "172.26.3.6",
"private_ip_alloc_method": "Static",
"provisioning_state": "Succeeded",
"public_ip": "XXX.XXX.XXX.XXX",
"public_ip_alloc_method": "Static",
"public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
"public_ip_name": "object-name",
"resource_group": "galaxy-production",
"security_group": "object-name",
"security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
"tags": {
"db": "mysql"
},
"type": "Microsoft.Compute/virtualMachines",
"virtual_machine_size": "Standard_DS4"
}
```
### Host Groups
By default hosts are grouped by:
* azure (all hosts)
* location name
* resource group name
* security group name
* tag key
* tag key\_value
* os\_disk operating\_system\_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables or creating an azure\_rm.ini file in your current working directory.
NOTE: An .ini file will take precedence over environment variables.
NOTE: The name of the .ini file is the basename of the inventory script (in other words, ‘azure\_rm’) with a ‘.ini’ extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in the same directory.
Control grouping using the following variables defined in the environment:
* AZURE\_GROUP\_BY\_RESOURCE\_GROUP=yes
* AZURE\_GROUP\_BY\_LOCATION=yes
* AZURE\_GROUP\_BY\_SECURITY\_GROUP=yes
* AZURE\_GROUP\_BY\_TAG=yes
* AZURE\_GROUP\_BY\_OS\_FAMILY=yes
Select hosts within specific resource groups by assigning a comma separated list to:
* AZURE\_RESOURCE\_GROUPS=resource\_group\_a,resource\_group\_b
Select hosts for specific tag key by assigning a comma separated list of tag keys to:
* AZURE\_TAGS=key1,key2,key3
Select hosts for specific locations by assigning a comma separated list of locations to:
* AZURE\_LOCATIONS=eastus,eastus2,westus
Or, select hosts for specific tag key:value pairs by assigning a comma separated list key:value pairs to:
* AZURE\_TAGS=key1:value1,key2:value2
If you don’t need the powerstate, you can improve performance by turning off powerstate fetching:
* AZURE\_INCLUDE\_POWERSTATE=no
A sample azure\_rm.ini file is available [here](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.ini), alongside the inventory script. The .ini file contains the following:
```
[azure]
# Control which resource groups are included. By default all resources groups are included.
# Set resource_groups to a comma separated list of resource groups names.
#resource_groups=
# Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
#tags=
# Control which locations are included. Set locations to a comma separated list of locations.
#locations=
# Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
# Valid values: yes, no, true, false, True, False, 0, 1.
include_powerstate=yes
# Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
```
### Examples
Here are some examples using the inventory script:
```
# Download inventory script
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping
# Execute ping on all Linux instances
$ ansible -i azure_rm.py linux -m ping
# Use the inventory script to print instance specific information
$ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
# Use the inventory script with ansible-playbook
$ ansible-playbook -i ./azure_rm.py test_playbook.yml
```
Here is a simple playbook to exercise the Azure inventory script:
```
- name: Test the inventory script
hosts: azure
connection: local
gather_facts: no
tasks:
- debug:
msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
```
You can execute the playbook with something like:
```
$ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml
```
### Disabling certificate validation on Azure endpoints
When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting the “cert\_validation\_mode” value in a credential profile, via the “AZURE\_CERT\_VALIDATION\_MODE” environment variable, or by passing the “cert\_validation\_mode” argument to any Azure module. The default value is “validate”; setting the value to “ignore” will prevent all certificate validation. The module argument takes precedence over a credential profile value, which takes precedence over the environment value.
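As a sketch, disabling validation for a single task could look like this (assuming the `azure.azcollection.azure_rm_resourcegroup_info` module):
```
- name: Gather resource group info against an endpoint with an untrusted certificate
  azure.azcollection.azure_rm_resourcegroup_info:
    name: Testing
    cert_validation_mode: ignore
```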
Virtualization and Containerization Guides
==========================================
The guides in this section cover integrating Ansible with popular tools for creating virtual machines and containers. They explore particular use cases in greater depth and provide a more “top-down” explanation of some basic features.
* [Docker Guide](guide_docker)
* [Kubernetes Guide](guide_kubernetes)
* [Vagrant Guide](guide_vagrant)
* [VMware Guide](guide_vmware)
* [VMware REST Scenarios](guide_vmware_rest)
CloudStack Cloud Guide
======================
Introduction
------------
The purpose of this section is to explain how to put Ansible modules together to use Ansible in a CloudStack context. You will find more usage examples in the details section of each module.
Ansible contains a number of extra modules for interacting with CloudStack based clouds. All modules support check mode, are designed to be idempotent, and are created, tested, and maintained by the community.
Note
Some of the modules will require domain admin or root admin privileges.
Prerequisites
-------------
Prerequisites for using the CloudStack modules are minimal. In addition to Ansible itself, all of the modules require the Python library [`cs`](https://pypi.org/project/cs/).
You’ll need this Python module installed on the execution host, usually your workstation.
```
$ pip install cs
```
Or alternatively starting with Debian 9 and Ubuntu 16.04:
```
$ sudo apt install python-cs
```
Note
cs also includes a command line interface for ad hoc interaction with the CloudStack API, for example `$ cs listVirtualMachines state=Running`.
Limitations and Known Issues
----------------------------
VPC support has been improved since Ansible 2.3 but is still not yet fully implemented. The community is working on the VPC integration.
Credentials File
----------------
You can pass credentials and the endpoint of your cloud as module arguments; however, in most cases it is far less work to store your credentials in the cloudstack.ini file.
The Python library `cs` looks for the credentials file in the following order (last one wins):
* A `.cloudstack.ini` (note the dot) file in the home directory.
* A `CLOUDSTACK_CONFIG` environment variable pointing to an .ini file.
* A `cloudstack.ini` (without the dot) file in the current working directory, the same directory your playbooks are located in.
The structure of the ini file must look like this:
```
$ cat $HOME/.cloudstack.ini
[cloudstack]
endpoint = https://cloud.example.com/client/api
key = api key
secret = api secret
timeout = 30
```
Note
The section `[cloudstack]` is the default section. `CLOUDSTACK_REGION` environment variable can be used to define the default section.
New in version 2.4.
The `CLOUDSTACK_*` environment variables documented for the `cs` library, such as `CLOUDSTACK_TIMEOUT` and `CLOUDSTACK_METHOD`, are also supported by Ansible. It is even possible to have an incomplete config in your cloudstack.ini:
```
$ cat $HOME/.cloudstack.ini
[cloudstack]
endpoint = https://cloud.example.com/client/api
timeout = 30
```
and supply the missing data by either setting environment variables or task parameters:
```
---
- name: provision our VMs
hosts: cloud-vm
tasks:
- name: ensure VMs are created and running
delegate_to: localhost
cs_instance:
api_key: your api key
api_secret: your api secret
...
```
Regions
-------
If you use more than one CloudStack region, you can define as many sections as you want and name them as you like, for example:
```
$ cat $HOME/.cloudstack.ini
[exoscale]
endpoint = https://api.exoscale.ch/compute
key = api key
secret = api secret
[example_cloud_one]
endpoint = https://cloud-one.example.com/client/api
key = api key
secret = api secret
[example_cloud_two]
endpoint = https://cloud-two.example.com/client/api
key = api key
secret = api secret
```
Hint
Sections can also be used to log in to the same region using different accounts.
By passing the `api_region` argument to the CloudStack modules, the desired region will be selected.
```
- name: ensure my ssh public key exists on Exoscale
cs_sshkeypair:
name: my-ssh-key
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
api_region: exoscale
delegate_to: localhost
```
Or by looping over a regions list if you want to do the task in every region:
```
- name: ensure my ssh public key exists in all CloudStack regions
local_action: cs_sshkeypair
name: my-ssh-key
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
api_region: "{{ item }}"
loop:
- exoscale
- example_cloud_one
- example_cloud_two
```
Environment Variables
---------------------
New in version 2.3.
Since Ansible 2.3 it is possible to use environment variables for domain (`CLOUDSTACK_DOMAIN`), account (`CLOUDSTACK_ACCOUNT`), project (`CLOUDSTACK_PROJECT`), VPC (`CLOUDSTACK_VPC`) and zone (`CLOUDSTACK_ZONE`). This simplifies the tasks by not repeating the arguments for every task.
Below is an example of how this can be used in combination with Ansible’s block feature:
```
- hosts: cloud-vm
tasks:
- block:
- name: ensure my ssh public key
cs_sshkeypair:
name: my-ssh-key
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
- name: ensure my ssh public key
cs_instance:
display_name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
ssh_key: my-ssh-key
state: running
delegate_to: localhost
environment:
CLOUDSTACK_DOMAIN: root/customers
CLOUDSTACK_PROJECT: web-app
CLOUDSTACK_ZONE: sf-1
```
Note
You can still override the environment variables using module arguments, for example `zone: sf-2`.
Note
Unlike `CLOUDSTACK_REGION` these additional environment variables are ignored in the CLI `cs`.
Use Cases
---------
The following should give you some ideas about how to use the modules to provision VMs in the cloud. As always, there isn’t only one way to do it, but keeping it simple at the beginning is always a good start.
### Use Case: Provisioning in an Advanced Networking CloudStack setup
Our CloudStack cloud has an advanced networking setup. We would like to provision web servers that get a static NAT and open firewall ports 80 and 443. Further, we provision database servers, to which we do not give any outside access. For accessing the VMs by SSH we use an SSH jump host.
This is how our inventory looks:
```
[cloud-vm:children]
webserver
db-server
jumphost
[webserver]
web-01.example.com public_ip=198.51.100.20
web-02.example.com public_ip=198.51.100.21
[db-server]
db-01.example.com
db-02.example.com
[jumphost]
jump.example.com public_ip=198.51.100.22
```
As you can see, the public IPs for our web servers and jumphost have been assigned as the variable `public_ip` directly in the inventory.
To configure the jumphost, web servers and database servers, we use `group_vars`. The `group_vars` directory contains four files, one for each of the groups: cloud-vm, jumphost, webserver and db-server. The cloud-vm file specifies the defaults of our cloud infrastructure.
```
# file: group_vars/cloud-vm
---
cs_offering: Small
cs_firewall: []
```
Our database servers should get more CPU and RAM, so we define a `Large` offering for them.
```
# file: group_vars/db-server
---
cs_offering: Large
```
The web servers should get a `Small` offering, our default, as we would scale them horizontally. We also ensure the known web ports are opened for the world.
```
# file: group_vars/webserver
---
cs_firewall:
- { port: 80 }
- { port: 443 }
```
Further, we provision a jump host, which has only port 22 open, for accessing the VMs from our office IPv4 network.
```
# file: group_vars/jumphost
---
cs_firewall:
- { port: 22, cidr: "17.17.17.0/24" }
```
Now to the fun part. We create a playbook to build our infrastructure; we call it `infra.yaml`:
```
# file: infra.yaml
---
- name: provision our VMs
hosts: cloud-vm
tasks:
- name: run all enclosed tasks from localhost
delegate_to: localhost
block:
- name: ensure VMs are created and running
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
state: running
- name: ensure firewall ports opened
cs_firewall:
ip_address: "{{ public_ip }}"
port: "{{ item.port }}"
cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
loop: "{{ cs_firewall }}"
when: public_ip is defined
- name: ensure static NATs
cs_staticnat: vm="{{ inventory_hostname_short }}" ip_address="{{ public_ip }}"
when: public_ip is defined
```
In the above play we defined 3 tasks and used the group `cloud-vm` as the target to handle all VMs in the cloud. But instead of SSHing to these VMs, we use `delegate_to: localhost` to execute the API calls locally from our workstation.
In the first task, we ensure we have a running VM created with the Debian template. If the VM is already created but stopped, this would just start it. If you would like to change the offering on an existing VM, you must add `force: yes` to the task, which would stop the VM, change the offering and start the VM again.
In the second task we ensure the ports are opened if we give a public IP to the VM.
In the third task we add static NAT to the VMs having a public IP defined.
Note
The public IP addresses must have been acquired in advance; also see `cs_ip_address`.
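As a sketch, acquiring such an address with the `cs_ip_address` module might look like this (the network name is illustrative):
```
- name: ensure a public IP address is acquired for the web network
  cs_ip_address:
    network: my-network
  register: ip_address
  delegate_to: localhost
```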
Note
For some modules, for example `cs_sshkeypair`, you usually want this to be executed only once, not for every VM. Therefore, you would make a separate play for it targeting localhost. You will find an example in the use cases below.
### Use Case: Provisioning on a Basic Networking CloudStack setup
A basic networking CloudStack setup is slightly different: Every VM gets a public IP directly assigned and security groups are used for access restriction policy.
This is how our inventory looks:
```
[cloud-vm:children]
webserver
[webserver]
web-01.example.com
web-02.example.com
```
The default for your VMs looks like this:
```
# file: group_vars/cloud-vm
---
cs_offering: Small
cs_securitygroups: [ 'default']
```
Our webserver will also be in security group `web`:
```
# file: group_vars/webserver
---
cs_securitygroups: [ 'default', 'web' ]
```
The playbook looks like the following:
```
# file: infra.yaml
---
- name: cloud base setup
hosts: localhost
tasks:
- name: upload ssh public key
cs_sshkeypair:
name: defaultkey
public_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
- name: ensure security groups exist
cs_securitygroup:
name: "{{ item }}"
loop:
- default
- web
- name: add inbound SSH to security group default
cs_securitygroup_rule:
security_group: default
start_port: "{{ item }}"
end_port: "{{ item }}"
loop:
- 22
- name: add inbound TCP rules to security group web
cs_securitygroup_rule:
security_group: web
start_port: "{{ item }}"
end_port: "{{ item }}"
loop:
- 80
- 443
- name: install VMs in the cloud
hosts: cloud-vm
tasks:
- delegate_to: localhost
block:
- name: create and run VMs on CloudStack
cs_instance:
name: "{{ inventory_hostname_short }}"
template: Linux Debian 7 64-bit 20GB Disk
service_offering: "{{ cs_offering }}"
security_groups: "{{ cs_securitygroups }}"
ssh_key: defaultkey
state: Running
register: vm
- name: show VM IP
debug: msg="VM {{ inventory_hostname }} {{ vm.default_ip }}"
- name: assign IP to the inventory
set_fact: ansible_ssh_host={{ vm.default_ip }}
- name: waiting for SSH to come up
wait_for: port=22 host={{ vm.default_ip }} delay=5
```
In the first play we set up the security groups; in the second play the VMs will be created and assigned to these groups. Further, you see that we assign the public IP returned from the modules to the host inventory. This is needed as we do not know the IPs in advance. In a next step you would configure the DNS servers with these IPs for accessing the VMs with their DNS names.
In the last task we wait for SSH to be accessible, so any later play would be able to access the VM by SSH without failure.
Network Technology Guides
=========================
The guides in this section cover using Ansible with specific network technologies. They explore particular use cases in greater depth and provide a more “top-down” explanation of some basic features.
* [Cisco ACI Guide](guide_aci)
* [Cisco Meraki Guide](guide_meraki)
* [Infoblox Guide](guide_infoblox)
To learn more about Network Automation with Ansible, see [Network Getting Started](../network/getting_started/index#network-getting-started) and [Network Advanced Topics](../network/user_guide/index#network-advanced).
Cisco ACI Guide
===============
What is Cisco ACI?
-------------------
### Application Centric Infrastructure (ACI)
The Cisco Application Centric Infrastructure (ACI) allows application requirements to define the network. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
### Application Policy Infrastructure Controller (APIC)
The APIC manages the scalable ACI multi-tenant fabric. The APIC provides a unified point of automation and management, policy programming, application deployment, and health monitoring for the fabric. The APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports any application anywhere, and provides unified operation of the physical and virtual infrastructure.
The APIC enables network administrators to easily define the optimal network for applications. Data center operators can clearly see how applications consume network resources, easily isolate and troubleshoot application and infrastructure problems, and monitor and profile resource usage patterns.
The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities.
### ACI Fabric
The Cisco Application Centric Infrastructure (ACI) Fabric includes Cisco Nexus 9000 Series switches with the APIC to run in the leaf/spine ACI fabric mode. These switches form a “fat-tree” network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI fabric.
The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally, and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system. The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure. The concrete model is analogous to compiled software; it is the form of the model that the switch operating system can execute.
All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete model is updated.
The APIC is responsible for fabric activation, switch firmware management, network policy configuration, and instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the APIC is lost.
### More information
Various resources exist to start learning ACI, here is a list of interesting articles from the community.
* [Adam Raffe: Learning ACI](https://adamraffe.com/learning-aci/)
* [Luca Relandini: ACI for dummies](https://lucarelandini.blogspot.be/2015/03/aci-for-dummies.html)
* [Cisco DevNet Learning Labs about ACI](https://learninglabs.cisco.com/labs/tags/ACI)
Using the ACI modules
---------------------
The Ansible ACI modules provide a user-friendly interface to managing your ACI environment using Ansible playbooks.
For instance, ensuring that a specific tenant exists is done with the following Ansible task, using the aci\_tenant module:
```
- name: Ensure tenant customer-xyz exists
aci_tenant:
host: my-apic-1
username: admin
password: my-password
tenant: customer-xyz
description: Customer XYZ
state: present
```
A complete list of existing ACI modules is available on the content tab of the [ACI collection on Ansible Galaxy](https://galaxy.ansible.com/cisco/aci).
If you want to learn how to write your own ACI modules to contribute, look at the [Developing Cisco ACI modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_aci.html#aci-dev-guide) section.
### Querying ACI configuration
A module can also be used to query a specific object.
```
- name: Query tenant customer-xyz
aci_tenant:
host: my-apic-1
username: admin
password: my-password
tenant: customer-xyz
state: query
register: my_tenant
```
Or query all objects.
```
- name: Query all tenants
aci_tenant:
host: my-apic-1
username: admin
password: my-password
state: query
register: all_tenants
```
After registering the return values of the aci\_tenant task as shown above, you can access all tenant information from variable `all_tenants`.
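For example, looping over the registered result might look like the following sketch (this assumes the `current` return value documented below, with each entry keyed by its object class):
```
- name: Show the name of every tenant
  debug:
    msg: "{{ item.fvTenant.attributes.name }}"
  loop: "{{ all_tenants.current }}"
```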
### Running on the controller locally
As originally designed, Ansible modules are shipped to and run on the remote target(s). However, the ACI modules (like most network-related modules) do not run on the network devices or controller (in this case the APIC); they talk directly to the APIC’s REST interface.
For this very reason, the modules need to run on the local Ansible controller (or are delegated to another system that *can* connect to the APIC).
#### Gathering facts
Because we run the modules on the Ansible controller, gathering facts will not work. That is why, when using these ACI modules, it is mandatory to disable fact gathering. You can do this globally in your `ansible.cfg` or by adding `gather_facts: no` to every play.
```
- name: Another play in my playbook
hosts: my-apic-1
gather_facts: no
tasks:
- name: Create a tenant
aci_tenant:
...
```
#### Delegating to localhost
So let us assume we have our target configured in the inventory, using the FQDN as the `ansible_host` value, as shown below.
```
apics:
my-apic-1:
ansible_host: apic01.fqdn.intra
ansible_user: admin
ansible_password: my-password
```
One way to set this up is to add to every task the directive: `delegate_to: localhost`.
```
- name: Query all tenants
aci_tenant:
host: '{{ ansible_host }}'
username: '{{ ansible_user }}'
password: '{{ ansible_password }}'
state: query
delegate_to: localhost
register: all_tenants
```
If you forget to add this directive, Ansible will attempt to connect to the APIC using SSH, copy the module over, and run it remotely. This will fail with a clear error, yet may be confusing to some.
#### Using the local connection method
Another frequently used option is to tie the `local` connection method to this target so that every subsequent task for this target will use the local connection method (hence run locally, rather than over SSH).
In this case the inventory may look like this:
```
apics:
my-apic-1:
ansible_host: apic01.fqdn.intra
ansible_user: admin
ansible_password: my-password
ansible_connection: local
```
But then the tasks do not need anything special added.
```
- name: Query all tenants
aci_tenant:
host: '{{ ansible_host }}'
username: '{{ ansible_user }}'
password: '{{ ansible_password }}'
state: query
register: all_tenants
```
Hint
For clarity we have added `delegate_to: localhost` to all the examples in the module documentation. This helps to ensure first-time users can easily copy&paste parts and make them work with a minimum of effort.
### Common parameters
Every Ansible ACI module accepts the following parameters that influence the module’s communication with the APIC REST API:
host
Hostname or IP address of the APIC.
port
Port to use for communication. (Defaults to `443` for HTTPS, and `80` for HTTP)
username
User name used to log on to the APIC. (Defaults to `admin`)
password
Password for `username` to log on to the APIC, using password-based authentication.
private\_key
Private key for `username` to log on to APIC, using signature-based authentication. This could either be the raw private key content (include header/footer) or a file that stores the key content. *New in version 2.5*
certificate\_name
Name of the certificate in the ACI Web GUI. (Defaults to either the `username` value or the `private_key` file basename.) *New in version 2.5*
timeout
Timeout value for socket-level communication.
use\_proxy
Use system proxy settings. (Defaults to `yes`)
use\_ssl
Use HTTPS or HTTP for APIC REST communication. (Defaults to `yes`)
validate\_certs
Validate certificate when using HTTPS communication. (Defaults to `yes`)
output\_level
Influence the level of detail ACI modules return to the user. (One of `normal`, `info` or `debug`) *New in version 2.5*
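As a sketch, several of these parameters combined in a single task might look like this (the port and timeout values are illustrative):
```
- name: Query all tenants over HTTP on a non-standard port
  aci_tenant:
    host: my-apic-1
    username: admin
    password: my-password
    use_ssl: no
    port: 8080
    use_proxy: no
    timeout: 60
    output_level: info
    state: query
  delegate_to: localhost
```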
### Proxy support
By default, if an environment variable `<protocol>_proxy` is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see [Setting the remote environment](../user_guide/playbooks_environment#playbooks-environment)), or by using the `use_proxy` module parameter.
HTTP redirects can redirect from HTTP to HTTPS so ensure that the proxy environment for both protocols is correctly configured.
If proxy support is not needed, but the system may have it configured nevertheless, use the parameter `use_proxy: no` to avoid accidental system proxy usage.
Hint
Selective proxy support using the `no_proxy` environment variable is also supported.
### Return values
New in version 2.5.
The following values are always returned:
current
The resulting state of the managed object, or results of your query.
The following values are returned when `output_level: info`:
previous
The original state of the managed object (before any change was made).
proposed
The proposed config payload, based on user-supplied values.
sent
The sent config payload, based on user-supplied values and the existing configuration.
The following values are returned when `output_level: debug` or `ANSIBLE_DEBUG=1`:
filter\_string
The filter used for specific APIC queries.
method
The HTTP method used for the sent payload. (Either `GET` for queries, `DELETE` or `POST` for changes)
response
The HTTP response from the APIC.
status
The HTTP status code for the request.
url
The url used for the request.
Note
The module return values are documented in detail as part of each module’s documentation.
### More information
Various resources exist to learn more about ACI programmability; we recommend the following links:
* [Developing Cisco ACI modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_aci.html#aci-dev-guide)
* [Jacob McGill: Automating Cisco ACI with Ansible](https://blogs.cisco.com/developer/automating-cisco-aci-with-ansible-eliminates-repetitive-day-to-day-tasks)
* [Cisco DevNet Learning Labs about ACI and Ansible](https://learninglabs.cisco.com/labs/tags/ACI,Ansible)
ACI authentication
------------------
### Password-based authentication
If you want to log on using a username and password, you can use the following parameters with your ACI modules:
```
username: admin
password: my-password
```
Password-based authentication is very simple to work with, but it is not the most efficient form of authentication from ACI’s point-of-view as it requires a separate login-request and an open session to work. To avoid having your session time-out and requiring another login, you can use the more efficient Signature-based authentication.
Note
Password-based authentication also may trigger anti-DoS measures in ACI v3.1+ that causes session throttling and results in HTTP 503 errors and login failures.
Warning
Never store passwords in plain text.
The “Vault” feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See [Using encrypted variables and files](../user_guide/vault#playbooks-vault) for more information.
### Signature-based authentication using certificates
New in version 2.5.
Using signature-based authentication is more efficient and more reliable than password-based authentication.
#### Generate certificate and private key
Signature-based authentication requires a (self-signed) X.509 certificate with private key, and a configuration step for your AAA user in ACI. To generate a working X.509 certificate and private key, use the following procedure:
```
$ openssl req -new -newkey rsa:1024 -days 36500 -nodes -x509 -keyout admin.key -out admin.crt -subj '/CN=Admin/O=Your Company/C=US'
```
#### Configure your local user
Perform the following steps:
* Add the X.509 certificate to your ACI AAA local user at ADMIN » AAA
* Click AAA Authentication
* Check that in the Authentication field the Realm field displays Local
* Expand Security Management » Local Users
* Click the name of the user you want to add a certificate to, in the User Certificates area
* Click the + sign, and in the Create X509 Certificate dialog enter a certificate name in the Name field
+ If you use the basename of your private key here, you don’t need to enter `certificate_name` in Ansible
* Copy and paste your X.509 certificate in the Data field.
You can automate this by using the following Ansible task:
```
- name: Ensure we have a certificate installed
aci_aaa_user_certificate:
host: my-apic-1
username: admin
password: my-password
aaa_user: admin
certificate_name: admin
certificate: "{{ lookup('file', 'pki/admin.crt') }}" # This will read the certificate data from a local file
```
Note
Signature-based authentication only works with local users.
#### Use signature-based authentication with Ansible
You need the following parameters with your ACI module(s) for it to work:
```
username: admin
private_key: pki/admin.key
certificate_name: admin # This could be left out !
```
or you can use the private key content:
```
username: admin
private_key: |
-----BEGIN PRIVATE KEY-----
<<your private key content>>
-----END PRIVATE KEY-----
certificate_name: admin # This could be left out !
```
Hint
If you use a certificate name in ACI that matches the private key’s basename, you can leave out the `certificate_name` parameter like the example above.
#### Using Ansible Vault to encrypt the private key
New in version 2.8.
To start, encrypt the private key and give it a strong password.
```
ansible-vault encrypt admin.key
```
Use a text editor to open the private key. You should see that it is now encrypted:
```
$ANSIBLE_VAULT;1.1;AES256
56484318584354658465121889743213151843149454864654151618131547984132165489484654
45641818198456456489479874513215489484843614848456466655432455488484654848489498
....
```
Copy and paste the newly encrypted key into your playbook as a new variable.
```
private_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
56484318584354658465121889743213151843149454864654151618131547984132165489484654
45641818198456456489479874513215489484843614848456466655432455488484654848489498
....
```
Use the new variable for the private\_key:
```
username: admin
private_key: "{{ private_key }}"
certificate_name: admin # This could be left out !
```
When running the playbook, use `--ask-vault-pass` to decrypt the private key.
```
ansible-playbook site.yaml --ask-vault-pass
```
#### More information
* Detailed information about Signature-based Authentication is available from [Cisco APIC Signature-Based Transactions](https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_KB_Signature_Based_Transactions.html).
* More information on Ansible Vault can be found on the [Ansible Vault](../user_guide/vault#vault) page.
Using ACI REST with Ansible
---------------------------
While a lot of ACI modules already exist in the Ansible distribution, and the most common actions can be performed with these existing modules, there’s always something that may not be possible with off-the-shelf modules.
The aci\_rest module provides you with direct access to the APIC REST API and enables you to perform any task not already covered by the existing modules. This may seem like a complex undertaking, but you can generate the needed REST payload for any action performed in the ACI web interface effortlessly.
### Built-in idempotency
Because the APIC REST API is intrinsically idempotent and can report whether a change was made, the aci\_rest module automatically inherits both capabilities and is a first-class solution for automating your ACI infrastructure. As a result, users that require more powerful low-level access to their ACI infrastructure don’t have to give up on idempotency and don’t have to guess whether a change was performed when using the aci\_rest module.
### Using the aci\_rest module
The aci\_rest module accepts the native XML and JSON payloads, but additionally accepts inline YAML payload (structured like JSON). The XML payload requires you to use a path ending with `.xml` whereas JSON or YAML require the path to end with `.json`.
When you’re making modifications, you can use the POST or DELETE methods, whereas queries require the GET method.
For instance, if you would like to ensure a specific tenant exists on ACI, the four examples below are functionally identical:
**XML** (Native ACI REST)
```
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.xml
content: |
<fvTenant name="customer-xyz" descr="Customer XYZ"/>
```
**JSON** (Native ACI REST)
```
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.json
content:
{
"fvTenant": {
"attributes": {
"name": "customer-xyz",
"descr": "Customer XYZ"
}
}
}
```
**YAML** (Ansible-style REST)
```
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.json
content:
fvTenant:
attributes:
name: customer-xyz
descr: Customer XYZ
```
**Ansible task** (Dedicated module)
```
- aci_tenant:
host: my-apic-1
private_key: pki/admin.key
tenant: customer-xyz
description: Customer XYZ
state: present
```
Hint
The XML format is more practical when there is a need to template the REST payload (inline), but the YAML format is more convenient for maintaining your infrastructure-as-code and feels more naturally integrated with Ansible playbooks. The dedicated modules offer a more simple, abstracted, but also a more limited experience. Use what feels best for your use-case.
### More information
Plenty of resources exist to learn about ACI’s APIC REST interface, we recommend the links below:
* [The ACI collection on Ansible Galaxy](https://galaxy.ansible.com/cisco/aci)
* [APIC REST API Configuration Guide](https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide.html) – Detailed guide on how the APIC REST API is designed and used, incl. many examples
* [APIC Management Information Model reference](https://developer.cisco.com/docs/apic-mim-ref/) – Complete reference of the APIC object model
* [Cisco DevNet Learning Labs about ACI and REST](https://learninglabs.cisco.com/labs/tags/ACI,REST)
Operational examples
--------------------
Here is a small overview of useful operational tasks to reuse in your playbooks.
Feel free to contribute more useful snippets.
### Waiting for all controllers to be ready
You can use the task below, after you have started to build your APICs and configured the cluster, to wait until all the APICs have come online. It will wait until the number of controllers equals the number listed in the `apic` inventory group.
```
- name: Waiting for all controllers to be ready
aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: get
path: /api/node/class/topSystem.json?query-target-filter=eq(topSystem.role,"controller")
register: topsystem
until: topsystem|success and topsystem.totalCount|int >= groups['apic']|count >= 3
retries: 20
delay: 30
```
### Waiting for cluster to be fully-fit
The below example waits until the cluster is fully-fit. In this example you know the number of APICs in the cluster and you verify each APIC reports a ‘fully-fit’ status.
```
- name: Waiting for cluster to be fully-fit
aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: get
path: /api/node/class/infraWiNode.json?query-target-filter=wcard(infraWiNode.dn,"topology/pod-1/node-1/av")
register: infrawinode
until: >
infrawinode|success and
infrawinode.totalCount|int >= groups['apic']|count >= 3 and
infrawinode.imdata[0].infraWiNode.attributes.health == 'fully-fit' and
infrawinode.imdata[1].infraWiNode.attributes.health == 'fully-fit' and
infrawinode.imdata[2].infraWiNode.attributes.health == 'fully-fit'
retries: 30
delay: 30
```
APIC error messages
-------------------
The following error messages may occur and this section can help you understand what exactly is going on and how to fix/avoid them.
APIC Error 122: unknown managed object class ‘polUni’
In case you receive this error while you are certain your aci\_rest payload and object classes are seemingly correct, the issue might be that your payload is not in fact correct JSON (for example, the sent payload is using single quotes, rather than double quotes), and as a result the APIC is not correctly parsing your object classes from the payload. One way to avoid this is by using a YAML or an XML formatted payload, which are easier to construct correctly and modify later.
APIC Error 400: invalid data at line ‘1’. Attributes are missing, tag ‘attributes’ must be specified first, before any other tag
Although the JSON specification allows unordered elements, the APIC REST API requires that the JSON `attributes` element precede the `children` array or other elements. So you need to ensure that your payload conforms to this requirement. Sorting your dictionary keys will do the trick just fine. If you don’t have any attributes, it may be necessary to add: `attributes: {}` as the APIC does expect the entry to precede any `children`.
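For example, a payload whose `attributes` entry precedes the `children` array might look like the following sketch (the tenant and application profile names are illustrative):
```
- aci_rest:
    host: my-apic-1
    private_key: pki/admin.key
    method: post
    path: /api/mo/uni.json
    content:
      fvTenant:
        attributes:
          name: customer-xyz
        children:
          - fvAp:
              attributes:
                name: intranet
```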
APIC Error 801: property descr of uni/tn-TENANT/ap-AP failed validation for value ‘A “legacy” network’
Some values in the APIC have strict format-rules to comply to, and the internal APIC validation check for the provided value failed. In the above case, the `description` parameter (internally known as `descr`) only accepts values conforming to [Regex: [a-zA-Z0-9\!#$%()\*,-./:;@ \_{|}~?&+]+](https://pubhub-prod.s3.amazonaws.com/media/apic-mim-ref/docs/MO-fvAp.html#descr), in general it must not include quotes or square brackets.
Known issues
------------
The aci\_rest module is a wrapper around the APIC REST API. As a result any issues related to the APIC will be reflected in the use of this module.
All below issues either have been reported to the vendor, and most can simply be avoided.
Too many consecutive API calls may result in connection throttling
Starting with ACI v3.1, the APIC will actively throttle password-based authenticated connection rates over a specific threshold. This is part of an anti-DDoS measure, but it can act up when using Ansible with ACI and password-based authentication. Currently, one solution is to increase this threshold within the nginx configuration, but using signature-based authentication is recommended.
**NOTE:** It is advisable to use signature-based authentication with ACI as it not only prevents connection-throttling, but also improves general performance when using the ACI modules.
Specific requests may not reflect changes correctly ([#35041](https://github.com/ansible/ansible/issues/35041))
There is a known issue where specific requests to the APIC do not properly reflect changes in the resulting output, even when we request those changes explicitly from the APIC. In one instance, using the path `api/node/mo/uni/infra.xml` fails, whereas `api/node/mo/uni/infra/.xml` does work correctly.
**NOTE:** A workaround is to register the task return values (for example, `register: this`) and influence when the task should report a change by adding: `changed_when: this.imdata != []`.
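A sketch of that workaround, reusing the expression quoted above (the payload is illustrative):
```
- aci_rest:
    host: my-apic-1
    private_key: pki/admin.key
    method: post
    path: api/node/mo/uni/infra.xml
    content: |
      <infraInfra/>
  register: this
  changed_when: this.imdata != []
```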
Specific requests are known to not be idempotent ([#35050](https://github.com/ansible/ansible/issues/35050))
The behaviour of the APIC is inconsistent to the use of `status="created"` and `status="deleted"`. The result is that when you use `status="created"` in your payload the resulting tasks are not idempotent and creation will fail when the object was already created. However this is not the case with `status="deleted"` where such call to an non-existing object does not cause any failure whatsoever.
**NOTE:** A workaround is to avoid using `status="created"` and instead use `status="modified"` when idempotency is essential to your workflow..
Setting user password is not idempotent ([#35544](https://github.com/ansible/ansible/issues/35544))
Due to an inconsistency in the APIC REST API, a task that sets the password of a locally-authenticated user is not idempotent. The APIC will complain with message `Password history check: user dag should not use previous 5 passwords`.
**NOTE:** There is no workaround for this issue.
ACI Ansible community
---------------------
If you have specific issues with the ACI modules, have a feature request, or would like to contribute to the ACI project by proposing changes or documentation updates, look at the Ansible Community wiki ACI page at: <https://github.com/ansible/community/wiki/Network:-ACI>
You will find our roadmap, an overview of open ACI issues and pull-requests, and more information about who we are. If you have an interest in using ACI with Ansible, feel free to join! We occasionally meet online to track progress and prepare for new Ansible releases.
See also
[ACI collection on Ansible Galaxy](https://galaxy.ansible.com/cisco/aci)
View the content tab for a complete list of supported ACI modules.
[Developing Cisco ACI modules](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general_aci.html#aci-dev-guide)
A walkthrough on how to develop new Cisco ACI modules to contribute back.
[ACI community](https://github.com/ansible/community/wiki/Network:-ACI)
The Ansible ACI community wiki page, includes roadmap, ideas and development documentation.
[Ansible for Network Automation](../network/index#network-guide)
A detailed guide on how to use Ansible for automating network infrastructure.
[Network Working Group](https://github.com/ansible/community/tree/master/group-network)
The Ansible Network community page, includes contact information and meeting information.
`#ansible-network on` [irc.libera.chat](https://libera.chat/)
The #ansible-network IRC chat channel on libera.chat.
[User Mailing List](https://groups.google.com/group/ansible-project)
Have a question? Stop by the google group!
Infoblox Guide
==============
* [Prerequisites](#prerequisites)
* [Credentials and authenticating](#credentials-and-authenticating)
* [NIOS lookup plugins](#nios-lookup-plugins)
+ [Retrieving all network views](#retrieving-all-network-views)
+ [Retrieving a host record](#retrieving-a-host-record)
* [Use cases with modules](#use-cases-with-modules)
+ [Configuring an IPv4 network](#configuring-an-ipv4-network)
+ [Creating a host record](#creating-a-host-record)
+ [Creating a forward DNS zone](#creating-a-forward-dns-zone)
+ [Creating a reverse DNS zone](#creating-a-reverse-dns-zone)
* [Dynamic inventory script](#dynamic-inventory-script)
This guide describes how to use Ansible with the Infoblox Network Identity Operating System (NIOS). With Ansible integration, you can use Ansible playbooks to automate Infoblox Core Network Services for IP address management (IPAM), DNS, and inventory tracking.
You can review simple example tasks in the documentation for any of the [NIOS modules](https://docs.ansible.com/ansible/2.9/modules/list_of_net_tools_modules.html#nios-net-tools-modules "(in Ansible v2.9)") or look at the [Use cases with modules](#use-cases-with-modules) section for more elaborate examples. See the [Infoblox](https://www.infoblox.com/) website for more information on the Infoblox product.
Note
You can retrieve most of the example playbooks used in this guide from the [network-automation/infoblox\_ansible](https://github.com/network-automation/infoblox_ansible) GitHub repository.
Prerequisites
-------------
Before using Ansible `nios` modules with Infoblox, you must install the `infoblox-client` on your Ansible control node:
```
$ sudo pip install infoblox-client
```
Note
You need an NIOS account with the WAPI feature enabled to use Ansible with Infoblox.
Credentials and authenticating
------------------------------
To use Infoblox `nios` modules in playbooks, you need to configure the credentials to access your Infoblox system. The examples in this guide use credentials stored in `<playbookdir>/group_vars/nios.yml`. Replace these values with your Infoblox credentials:
```
---
nios_provider:
host: 192.0.0.2
username: admin
password: ansible
```
NIOS lookup plugins
-------------------
Ansible includes the following lookup plugins for NIOS:
* [nios](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios.html#nios-lookup "(in Ansible v2.9)") Uses the Infoblox WAPI to fetch specified NIOS objects, for example network views, DNS views, and host records.
* [nios\_next\_ip](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios_next_ip.html#nios-next-ip-lookup "(in Ansible v2.9)") Provides the next available IP address from a network. You’ll see an example of this in [Creating a host record](#creating-a-host-record).
* [nios\_next\_network](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios_next_network.html#nios-next-network-lookup "(in Ansible v2.9)") Returns the next available network range for a network-container.
You must run the NIOS lookup plugins locally by specifying `connection: local`. See [lookup plugins](../plugins/lookup#lookup-plugins) for more detail.
### Retrieving all network views
To retrieve all network views and save them in a variable, use the [set\_fact](../collections/ansible/builtin/set_fact_module#set-fact-module) module with the [nios](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios.html#nios-lookup "(in Ansible v2.9)") lookup plugin:
```
---
- hosts: nios
connection: local
tasks:
- name: fetch all networkview objects
set_fact:
networkviews: "{{ lookup('nios', 'networkview', provider=nios_provider) }}"
- name: check the networkviews
debug:
var: networkviews
```
### Retrieving a host record
To retrieve a set of host records, use the `set_fact` module with the `nios` lookup plugin and include a filter for the specific hosts you want to retrieve:
```
---
- hosts: nios
connection: local
tasks:
- name: fetch host leaf01
set_fact:
host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf01.ansible.com'}, provider=nios_provider) }}"
- name: check the leaf01 return variable
debug:
var: host
- name: debug specific variable (ipv4 address)
debug:
var: host.ipv4addrs[0].ipv4addr
- name: fetch host leaf02
set_fact:
host: "{{ lookup('nios', 'record:host', filter={'name': 'leaf02.ansible.com'}, provider=nios_provider) }}"
- name: check the leaf02 return variable
debug:
var: host
```
If you save this playbook as `get_host_record.yml` and run it, you should see results similar to the following:
```
$ ansible-playbook get_host_record.yml
PLAY [localhost] ***************************************************************************************
TASK [fetch host leaf01] ******************************************************************************
ok: [localhost]
TASK [check the leaf01 return variable] *************************************************************
ok: [localhost] => {
< ...output shortened...>
"host": {
"ipv4addrs": [
{
"configure_for_dhcp": false,
"host": "leaf01.ansible.com",
}
],
"name": "leaf01.ansible.com",
"view": "default"
}
}
TASK [debug specific variable (ipv4 address)] ******************************************************
ok: [localhost] => {
"host.ipv4addrs[0].ipv4addr": "192.168.1.11"
}
TASK [fetch host leaf02] ******************************************************************************
ok: [localhost]
TASK [check the leaf02 return variable] *************************************************************
ok: [localhost] => {
< ...output shortened...>
"host": {
"ipv4addrs": [
{
"configure_for_dhcp": false,
"host": "leaf02.example.com",
"ipv4addr": "192.168.1.12"
}
],
}
}
PLAY RECAP ******************************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0
```
The output above shows the host records for `leaf01.ansible.com` and `leaf02.ansible.com` that were retrieved by the `nios` lookup plugin. This playbook saves the information in variables which you can use in other playbooks. This allows you to use Infoblox as a single source of truth to gather and use information that changes dynamically. See [Using Variables](../user_guide/playbooks_variables#playbooks-variables) for more information on using Ansible variables. See the [nios](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios.html#nios-lookup "(in Ansible v2.9)") examples for more data options that you can retrieve.
You can access these playbooks at [Infoblox lookup playbooks](https://github.com/network-automation/infoblox_ansible/tree/master/lookup_playbooks).
Use cases with modules
----------------------
You can use the `nios` modules in tasks to simplify common Infoblox workflows. Be sure to set up your [NIOS credentials](#credentials-and-authenticating) before following these examples.
### Configuring an IPv4 network
To configure an IPv4 network, use the [nios\_network](https://docs.ansible.com/ansible/2.9/modules/nios_network_module.html#nios-network-module "(in Ansible v2.9)") module:
```
---
- hosts: nios
connection: local
tasks:
- name: Create a network on the default network view
nios_network:
network: 192.168.100.0/24
comment: sets the IPv4 network
options:
- name: domain-name
value: ansible.com
state: present
provider: "{{nios_provider}}"
```
Notice the last parameter, `provider`, uses the variable `nios_provider` defined in the `group_vars/` directory.
### Creating a host record
To create a host record named `leaf03.ansible.com` on the newly-created IPv4 network:
```
---
- hosts: nios
connection: local
tasks:
- name: configure an IPv4 host record
nios_host_record:
name: leaf03.ansible.com
ipv4addrs:
- ipv4addr:
"{{ lookup('nios_next_ip', '192.168.100.0/24', provider=nios_provider)[0] }}"
state: present
provider: "{{nios_provider}}"
```
Notice the IPv4 address in this example uses the [nios\_next\_ip](https://docs.ansible.com/ansible/2.9/plugins/lookup/nios_next_ip.html#nios-next-ip-lookup "(in Ansible v2.9)") lookup plugin to find the next available IPv4 address on the network.
### Creating a forward DNS zone
To configure a forward DNS zone, use the `nios_zone` module:
```
---
- hosts: nios
connection: local
tasks:
- name: Create a forward DNS zone called ansible-test.com
nios_zone:
name: ansible-test.com
comment: local DNS zone
state: present
provider: "{{ nios_provider }}"
```
### Creating a reverse DNS zone
To configure a reverse DNS zone:
```
---
- hosts: nios
connection: local
tasks:
- name: configure a reverse mapping zone on the system using IPV6 zone format
nios_zone:
name: 100::1/128
zone_format: IPV6
state: present
provider: "{{ nios_provider }}"
```
Dynamic inventory script
------------------------
You can use the Infoblox dynamic inventory script to import your network node inventory with Infoblox NIOS. To gather the inventory from Infoblox, you need two files:
* [infoblox.yaml](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/infoblox.yaml) - A file that specifies the NIOS provider arguments and optional filters.
* [infoblox.py](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/infoblox.py) - The python script that retrieves the NIOS inventory.
Note
Please note that the inventory script only works when Ansible 2.9, 2.10, or 3 is installed. The inventory script will eventually be removed from [community.general](https://galaxy.ansible.com/community/general), and will not work if `community.general` is only installed with `ansible-galaxy collection install`. Please use the inventory plugin from [infoblox.nios\_modules](https://galaxy.ansible.com/infoblox/nios_modules) instead.
To use the Infoblox dynamic inventory script:
1. Download the `infoblox.yaml` file and save it in the `/etc/ansible` directory.
2. Modify the `infoblox.yaml` file with your NIOS credentials.
3. Download the `infoblox.py` file and save it in the `/etc/ansible/hosts` directory.
4. Change the permissions on the `infoblox.py` file to make the file an executable:
```
$ sudo chmod +x /etc/ansible/hosts/infoblox.py
```
You can optionally use `./infoblox.py --list` to test the script. After a few minutes, you should see your Infoblox inventory in JSON format. You can explicitly use the Infoblox dynamic inventory script as follows:
```
$ ansible -i infoblox.py all -m ping
```
You can also implicitly use the Infoblox dynamic inventory script by including it in your inventory directory (`/etc/ansible/hosts` by default). See [Working with dynamic inventory](../user_guide/intro_dynamic_inventory#dynamic-inventory) for more details.
See also
[Infoblox website](https://www.infoblox.com/)
The Infoblox website
[Infoblox and Ansible Deployment Guide](https://www.infoblox.com/resources/deployment-guides/infoblox-and-ansible-integration)
The deployment guide for Ansible integration provided by Infoblox.
[Infoblox Integration in Ansible 2.5](https://www.ansible.com/blog/infoblox-integration-in-ansible-2.5)
Ansible blog post about Infoblox.
[Ansible NIOS modules](https://docs.ansible.com/ansible/2.9/modules/list_of_net_tools_modules.html#nios-net-tools-modules "(in Ansible v2.9)")
The list of supported NIOS modules, with examples.
[Infoblox Ansible Examples](https://github.com/network-automation/infoblox_ansible)
Infoblox example playbooks.
Vultr Guide
===========
Ansible offers a set of modules to interact with [Vultr](https://www.vultr.com) cloud platform.
This set of modules forms a framework that lets you easily manage and orchestrate your infrastructure on the Vultr cloud platform.
Requirements
------------
There are no technical requirements beyond an already created Vultr account.
Configuration
-------------
Vultr modules offer flexible configuration.
Configuration is read in the following order:
* Environment variables (for example `VULTR_API_KEY`, `VULTR_API_TIMEOUT`; see the example below this list)
* File specified by environment variable `VULTR_API_CONFIG`
* `vultr.ini` file located in current working directory
* `$HOME/.vultr.ini`
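For example, to configure the modules entirely through environment variables, you could export the values before running a playbook (a minimal sketch; substitute your own key):
```
$ export VULTR_API_KEY=MY_API_KEY
$ export VULTR_API_TIMEOUT=60
```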
The INI file is structured this way:
```
[default]
key = MY_API_KEY
timeout = 60
[personal_account]
key = MY_PERSONAL_ACCOUNT_API_KEY
timeout = 30
```
If `VULTR_API_ACCOUNT` environment variable or `api_account` module parameter is not specified, modules will look for the section named “default”.
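For example, to make the modules read the `[personal_account]` section shown above instead of `[default]`, you could export (a minimal sketch):
```
$ export VULTR_API_ACCOUNT=personal_account
```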
Authentication
--------------
Before using the Ansible modules to interact with Vultr, you need an API key. If you do not have one yet, log in to [Vultr](https://www.vultr.com), go to Account, then API, and enable the API; the API key should then show up.
Ensure you allow the usage of the API key from the proper IP addresses.
Refer to the Configuration section to find out where to put this information.
To check that everything is working properly run the following command:
```
#> VULTR_API_KEY=XXX ansible -m vultr_account_info localhost
localhost | SUCCESS => {
"changed": false,
"vultr_account_info": {
"balance": -8.9,
"last_payment_amount": -10.0,
"last_payment_date": "2018-07-21 11:34:46",
"pending_charges": 6.0
},
"vultr_api": {
"api_account": "default",
"api_endpoint": "https://api.vultr.com",
"api_retries": 5,
"api_timeout": 60
}
}
```
If similar output displays, everything is set up properly; otherwise, ensure that the proper `VULTR_API_KEY` has been specified and that the Access Control settings on the Vultr > Account > API page are accurate.
Usage
-----
Since [Vultr](https://www.vultr.com) offers a public API, the modules that manage infrastructure on the platform execute on localhost. This translates to:
```
---
- hosts: localhost
tasks:
- name: Create a 10G volume
vultr_block_storage:
name: my_disk
size: 10
region: New Jersey
```
From that point on, your creativity is the only limit. Make sure to read the documentation of the [available modules](../modules/list_of_cloud_modules#vultr).
Dynamic Inventory
-----------------
Ansible provides a dynamic inventory plugin for [Vultr](https://www.vultr.com). The configuration process is exactly the same as the one for the modules.
To be able to use it you need to enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins=vultr
```
Then provide a configuration file to be used with the plugin. A minimal configuration file looks like this:
```
---
plugin: vultr
```
To list the available hosts one can simply run:
```
#> ansible-inventory -i vultr.yml --list
```
For example, this allows you to take action on nodes grouped by location or OS name:
```
---
- hosts: Amsterdam
tasks:
- name: Rebooting the machine
shell: reboot
become: True
```
Integration tests
-----------------
Ansible includes integration tests for all Vultr modules.
These tests run against the public Vultr API, which is why they require a valid key to access the API.
Prepare the test setup:
```
$ cd ansible # location of the ansible source
$ source ./hacking/env-setup
```
Set the Vultr API key:
```
$ cd test/integration
$ cp cloud-config-vultr.ini.template cloud-config-vultr.ini
$ vi cloud-config-vultr.ini
```
Run all Vultr tests:
```
$ ansible-test integration cloud/vultr/ -v --diff --allow-unsupported
```
To run a specific test, for example vultr\_account\_info:
```
$ ansible-test integration cloud/vultr/vultr_account_info -v --diff --allow-unsupported
```
Scaleway Guide
==============
Introduction
------------
[Scaleway](https://scaleway.com) is a cloud provider supported by Ansible (version 2.6 or higher) via a dynamic inventory plugin and modules. Those modules are:
* [scaleway\_sshkey – Scaleway SSH keys management module](https://docs.ansible.com/ansible/2.9/modules/scaleway_sshkey_module.html#scaleway-sshkey-module "(in Ansible v2.9)"): adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized\_keys.
* [scaleway\_compute – Scaleway compute management module](https://docs.ansible.com/ansible/2.9/modules/scaleway_compute_module.html#scaleway-compute-module "(in Ansible v2.9)"): manages servers on Scaleway. You can use this module to create, restart and delete servers.
* [scaleway\_volume – Scaleway volumes management module](https://docs.ansible.com/ansible/2.9/modules/scaleway_volume_module.html#scaleway-volume-module "(in Ansible v2.9)"): manages volumes on Scaleway.
Note
This guide assumes you are familiar with Ansible and how it works. If you’re not, have a look at [ansible\_documentation](https://docs.ansible.com/ansible/3/index.html#ansible-documentation "(in Ansible v3)") before getting started.
Requirements
------------
The Scaleway modules and inventory script connect to the Scaleway API using [Scaleway REST API](https://developer.scaleway.com). To use the modules and inventory script you’ll need a Scaleway API token. You can generate an API token via the Scaleway console [here](https://cloud.scaleway.com/#/credentials). The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
```
$ export SCW_TOKEN=00000000-1111-2222-3333-444444444444
```
If you’re not comfortable exporting your API token, you can pass it as a parameter to the modules using the `api_token` argument.
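For example, a task can pass the token explicitly (a minimal sketch; `scw_token` is a hypothetical variable name for wherever you store the secret, ideally an encrypted vars file):
```
- name: Add an SSH key using an explicit API token
  scaleway_sshkey:
    ssh_pub_key: "ssh-rsa AAAA..."
    state: present
    api_token: "{{ scw_token }}"
```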
If you want to use a new SSH keypair in this tutorial, you can generate it to `./id_rsa` and `./id_rsa.pub` as:
```
$ ssh-keygen -t rsa -f ./id_rsa
```
If you want to use an existing keypair, just copy the private and public key over to the playbook directory.
How to add an SSH key?
----------------------
Connections to Scaleway Compute nodes use Secure Shell (SSH). SSH keys are stored at the account level, which means that you can re-use the same SSH key in multiple nodes. The first step in configuring Scaleway compute resources is to have at least one SSH key configured.
[scaleway\_sshkey – Scaleway SSH keys management module](https://docs.ansible.com/ansible/2.9/modules/scaleway_sshkey_module.html#scaleway-sshkey-module "(in Ansible v2.9)") is a module that manages SSH keys on your Scaleway account. You can add an SSH key to your account by including the following task in a playbook:
```
- name: "Add SSH key"
scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAA..."
state: "present"
```
The `ssh_pub_key` parameter contains your ssh public key as a string. Here is an example inside a playbook:
```
- name: Test SSH key lifecycle on a Scaleway account
hosts: localhost
gather_facts: no
environment:
SCW_API_KEY: ""
tasks:
- scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAAB...424242 [email protected]"
state: present
register: result
- assert:
that:
- result is success and result is changed
```
How to create a compute instance?
---------------------------------
Now that we have an SSH key configured, the next step is to spin up a server! [scaleway\_compute – Scaleway compute management module](https://docs.ansible.com/ansible/2.9/modules/scaleway_compute_module.html#scaleway-compute-module "(in Ansible v2.9)") is a module that can create, update and delete Scaleway compute instances:
```
- name: Create a server
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
```
Here are the parameter details for the example shown above:
* `name` is the name of the instance (the one that will show up in your web console).
* `image` is the UUID of the system image you would like to use. A list of all images is available for each availability zone.
* `organization` represents the organization that your account is attached to.
* `region` represents the Availability Zone in which your instance resides (for this example, par1 or ams1).
* `commercial_type` represents the name of the commercial offer. You can check out the Scaleway pricing page to find which instance is right for you.
Take a look at this short playbook to see a working example using `scaleway_compute`:
```
- name: Test compute instance lifecycle on a Scaleway account
hosts: localhost
gather_facts: no
environment:
SCW_API_KEY: ""
tasks:
- name: Create a server
register: server_creation_task
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
- debug: var=server_creation_task
- assert:
that:
- server_creation_task is success
- server_creation_task is changed
- name: Run it
scaleway_compute:
name: foobar
state: running
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
tags:
- web_server
register: server_run_task
- debug: var=server_run_task
- assert:
that:
- server_run_task is success
- server_run_task is changed
```
Dynamic Inventory Script
------------------------
Ansible ships with [scaleway – Scaleway inventory source](https://docs.ansible.com/ansible/2.9/plugins/inventory/scaleway.html#scaleway-inventory "(in Ansible v2.9)"). You can now get a complete inventory of your Scaleway resources through this plugin and filter it on different parameters (`regions` and `tags` are currently supported).
Let’s create an example! Suppose that we want to get all hosts that have the tag `web_server`. Create a file named `scaleway_inventory.yml` with the following content:
```
plugin: scaleway
regions:
- ams1
- par1
tags:
- web_server
```
This inventory means that we want all hosts that have the tag `web_server` in the zones `ams1` and `par1`. Once you have configured this file, you can get the information using the following command:
```
$ ansible-inventory --list -i scaleway_inventory.yml
```
The output will be:
```
{
"_meta": {
"hostvars": {
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d": {
"ansible_verbosity": 6,
"arch": "x86_64",
"commercial_type": "START1-S",
"hostname": "foobar",
"ipv4": "192.0.2.1",
"organization": "00000000-1111-2222-3333-444444444444",
"state": "running",
"tags": [
"web_server"
]
}
}
},
"all": {
"children": [
"ams1",
"par1",
"ungrouped",
"web_server"
]
},
"ams1": {},
"par1": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
},
"ungrouped": {},
"web_server": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
}
}
```
As you can see, we get different groups of hosts. `par1` and `ams1` are groups based on location. `web_server` is a group based on a tag.
If a filter parameter is not defined, the plugin assumes all possible values are wanted. This means that for each tag that exists on your Scaleway compute nodes, a group based on that tag will be created.
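Once the inventory file is in place, a play can target any of these groups directly. A minimal sketch, assuming the `web_server` tag group from the output above:
```
---
- hosts: web_server
  tasks:
    - name: Check connectivity to the tagged hosts
      ping:
```
Run it against the dynamic inventory, for example with `ansible-playbook -i scaleway_inventory.yml play.yml`.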
Scaleway S3 object storage
--------------------------
[Object Storage](https://www.scaleway.com/object-storage) allows you to store any kind of objects (documents, images, videos, and so on). As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: [s3\_bucket – Manage S3 buckets in AWS, DigitalOcean, Ceph, Walrus and FakeS3](https://docs.ansible.com/ansible/2.9/modules/s3_bucket_module.html#s3-bucket-module "(in Ansible v2.9)"), [aws\_s3 – manage objects in S3](https://docs.ansible.com/ansible/2.9/modules/aws_s3_module.html#aws-s3-module "(in Ansible v2.9)").
You can find many examples in the [scaleway\_s3 integration tests](https://github.com/ansible/ansible-legacy-tests/tree/devel/test/legacy/roles/scaleway_s3).
```
- hosts: myserver
vars:
scaleway_region: nl-ams
s3_url: https://s3.nl-ams.scw.cloud
environment:
# AWS_ACCESS_KEY matches your scaleway organization id available at https://cloud.scaleway.com/#/account
AWS_ACCESS_KEY: 00000000-1111-2222-3333-444444444444
# AWS_SECRET_KEY matches a secret token that you can retrieve at https://cloud.scaleway.com/#/credentials
AWS_SECRET_KEY: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
module_defaults:
group/aws:
s3_url: '{{ s3_url }}'
region: '{{ scaleway_region }}'
tasks:
# use a fact instead of a variable, otherwise the template is evaluated each time the variable is used
- set_fact:
bucket_name: "{{ 99999999 | random | to_uuid }}"
# "requester_pays:" is mandatory because Scaleway doesn't implement related API
# another way is to use aws_s3 and "mode: create" !
- s3_bucket:
name: '{{ bucket_name }}'
requester_pays:
- name: Another way to create the bucket
aws_s3:
bucket: '{{ bucket_name }}'
mode: create
encrypt: false
register: bucket_creation_check
- name: add something in the bucket
aws_s3:
mode: put
bucket: '{{ bucket_name }}'
src: /tmp/test.txt # needs to be created before
object: test.txt
encrypt: false # server side encryption must be disabled
```
| programming_docs |
Packet.net Guide
================
Introduction
------------
[Packet.net](https://packet.net) is a bare metal infrastructure host that’s supported by Ansible (>=2.3) via a dynamic inventory script and two cloud modules. The two modules are:
* packet\_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized\_keys.
* packet\_device: manages servers on Packet. You can use this module to create, restart and delete devices.
Note, this guide assumes you are familiar with Ansible and how it works. If you’re not, have a look at the Ansible [docs](../index#ansible-documentation) before getting started.
Requirements
------------
The Packet modules and inventory script connect to the Packet API using the packet-python package. You can install it with pip:
```
$ pip install packet-python
```
In order to check the state of devices created by Ansible on Packet, it’s a good idea to install one of the [Packet CLI clients](https://www.packet.net/developers/integrations/). Otherwise you can check them via the [Packet portal](https://app.packet.net/portal).
To use the modules and inventory script you’ll need a Packet API token. You can generate an API token via the Packet portal [here](https://app.packet.net/portal#/api-keys). The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
```
$ export PACKET_API_TOKEN=Bfse9F24SFtfs423Gsd3ifGsd43sSdfs
```
If you’re not comfortable exporting your API token, you can pass it as a parameter to the modules.
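For example, a task can set the token directly through the `auth_token` module parameter (a minimal sketch; `packet_api_token` is a hypothetical variable name, ideally stored in an encrypted vars file):
```
- packet_sshkey:
    auth_token: "{{ packet_api_token }}"
    key_file: ./id_rsa.pub
    label: tutorial key
```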
On Packet, devices and reserved IP addresses belong to [projects](https://www.packet.com/developers/api/#projects). In order to use the packet\_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project’s UUID in the Packet portal [here](https://app.packet.net/portal#/projects/list/table/) (it’s just under the project table) or via one of the available [CLIs](https://www.packet.net/developers/integrations/).
If you want to use a new SSH keypair in this tutorial, you can generate it to `./id_rsa` and `./id_rsa.pub` as:
```
$ ssh-keygen -t rsa -f ./id_rsa
```
If you want to use an existing keypair, just copy the private and public key over to the playbook directory.
Device Creation
---------------
The following code block is a simple playbook that creates one [Type 0](https://www.packet.com/cloud/servers/t1-small/) server (the ‘plan’ parameter). You have to supply ‘plan’ and ‘operating\_system’; ‘facility’ defaults to ‘ewr1’ (Parsippany, NJ). You can find all the possible values for the parameters via a [CLI client](https://www.packet.net/developers/integrations/).
```
# playbook_create.yml
- name: create ubuntu device
hosts: localhost
tasks:
- packet_sshkey:
key_file: ./id_rsa.pub
label: tutorial key
- packet_device:
project_id: <your_project_id>
hostnames: myserver
operating_system: ubuntu_16_04
plan: baremetal_0
facility: sjc1
```
After running `ansible-playbook playbook_create.yml`, you should have a server provisioned on Packet. You can verify via a CLI or in the [Packet portal](https://app.packet.net/portal#/projects/list/table).
If you get an error with the message “failed to set machine state present, error: Error 404: Not Found”, please verify your project UUID.
Updating Devices
----------------
The two parameters used to uniquely identify Packet devices are: “device\_ids” and “hostnames”. Both parameters accept either a single string (later converted to a one-element list), or a list of strings.
The ‘device\_ids’ and ‘hostnames’ parameters are mutually exclusive. The following values are all acceptable:
* device\_ids: a27b7a83-fc93-435b-a128-47a5b04f2dcf
* hostnames: mydev1
* device\_ids: [a27b7a83-fc93-435b-a128-47a5b04f2dcf, 4887130f-0ccd-49a0-99b0-323c1ceb527b]
* hostnames: [mydev1, mydev2]
In addition, hostnames can contain a special ‘%d’ formatter along with a ‘count’ parameter that lets you easily expand hostnames that follow a simple name and number pattern; in other words, `hostnames: "mydev%d", count: 2` will expand to [mydev1, mydev2].
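A minimal sketch of that pattern in a task (parameter values reused from the creation example above):
```
- packet_device:
    project_id: <your_project_id>
    hostnames: "mydev%d"
    count: 2
    operating_system: ubuntu_16_04
    plan: baremetal_0
    facility: sjc1
```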
If your playbook acts on existing Packet devices, you can only pass the ‘hostname’ and ‘device\_ids’ parameters. The following playbook shows how you can reboot a specific Packet device by setting the ‘hostname’ parameter:
```
# playbook_reboot.yml
- name: reboot myserver
hosts: localhost
tasks:
- packet_device:
project_id: <your_project_id>
hostnames: myserver
state: rebooted
```
You can also identify specific Packet devices with the ‘device\_ids’ parameter. The device’s UUID can be found in the [Packet Portal](https://app.packet.net/portal) or by using a [CLI](https://www.packet.net/developers/integrations/). The following playbook removes a Packet device using the ‘device\_ids’ field:
```
# playbook_remove.yml
- name: remove a device
hosts: localhost
tasks:
- packet_device:
project_id: <your_project_id>
device_ids: <myserver_device_id>
state: absent
```
More Complex Playbooks
----------------------
In this example, we’ll create a CoreOS cluster with [user data](https://packet.com/developers/docs/servers/key-features/user-data/).
The CoreOS cluster will use [etcd](https://etcd.io/) for discovery of other servers in the cluster. Before provisioning your servers, you’ll need to generate a discovery token for your cluster:
```
$ curl -w "\n" 'https://discovery.etcd.io/new?size=3'
```
The following playbook will create an SSH key, 3 Packet servers, and then wait until SSH is ready (or until 5 minutes have passed). Make sure to substitute the discovery token URL in ‘user\_data’, and the ‘project\_id’ before running `ansible-playbook`. Also, feel free to change ‘plan’ and ‘facility’.
```
# playbook_coreos.yml
- name: Start 3 CoreOS nodes in Packet and wait until SSH is ready
hosts: localhost
tasks:
- packet_sshkey:
key_file: ./id_rsa.pub
label: new
- packet_device:
hostnames: [coreos-one, coreos-two, coreos-three]
operating_system: coreos_beta
plan: baremetal_0
facility: ewr1
project_id: <your_project_id>
wait_for_public_IPv: 4
user_data: |
#cloud-config
coreos:
etcd2:
discovery: https://discovery.etcd.io/<token>
advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
initial-advertise-peer-urls: http://$private_ipv4:2380
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
listen-peer-urls: http://$private_ipv4:2380
fleet:
public-ip: $private_ipv4
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
register: newhosts
- name: wait for ssh
wait_for:
delay: 1
host: "{{ item.public_ipv4 }}"
port: 22
state: started
timeout: 500
loop: "{{ newhosts.results[0].devices }}"
```
As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the `packet_sshkey` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
The second module call provisions 3 Packet Type 0 (specified using the ‘plan’ parameter) servers in the project identified via the ‘project\_id’ parameter. The servers are all provisioned with CoreOS beta (the ‘operating\_system’ parameter) and are customized with cloud-config user data passed to the ‘user\_data’ parameter.
The `packet_device` module has a `wait_for_public_IPv` parameter that is used to specify the version of the IP address to wait for (valid values are `4` or `6` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it’s wise to use the `wait_for_public_IPv` parameter, or `state: active` in the packet\_device module call.
Run the playbook:
```
$ ansible-playbook playbook_coreos.yml
```
Once the playbook quits, your new devices should be reachable via SSH. Try to connect to one and check if etcd has started properly:
```
tomk@work $ ssh -i id_rsa core@$one_of_the_servers_ip
core@coreos-one ~ $ etcdctl cluster-health
```
Once you create a couple of devices, you might appreciate the dynamic inventory script…
Dynamic Inventory Script
------------------------
The dynamic inventory script queries the Packet API for a list of hosts, and exposes it to Ansible so you can easily identify and act on Packet devices.
You can find it in Ansible Community General Collection’s git repo at [scripts/inventory/packet\_net.py](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py).
The inventory script is configurable via an [ini file](https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini).
If you want to use the inventory script, you must first export your Packet API token to a PACKET\_API\_TOKEN environment variable.
You can either copy the inventory and ini config out from the cloned git repo, or you can download it to your working directory like so:
```
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py
$ chmod +x packet_net.py
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini
```
In order to understand what the inventory script gives to Ansible you can run:
```
$ ./packet_net.py --list
```
It should print a JSON document looking similar to following trimmed dictionary:
```
{
"_meta": {
"hostvars": {
"147.75.64.169": {
"packet_billing_cycle": "hourly",
"packet_created_at": "2017-02-09T17:11:26Z",
"packet_facility": "ewr1",
"packet_hostname": "coreos-two",
"packet_href": "/devices/d0ab8972-54a8-4bff-832b-28549d1bec96",
"packet_id": "d0ab8972-54a8-4bff-832b-28549d1bec96",
"packet_locked": false,
"packet_operating_system": "coreos_beta",
"packet_plan": "baremetal_0",
"packet_state": "active",
"packet_updated_at": "2017-02-09T17:16:35Z",
"packet_user": "core",
"packet_userdata": "#cloud-config\ncoreos:\n etcd2:\n discovery: https://discovery.etcd.io/e0c8a4a9b8fe61acd51ec599e2a4f68e\n advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001\n initial-advertise-peer-urls: http://$private_ipv4:2380\n listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001\n listen-peer-urls: http://$private_ipv4:2380\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd2.service\n command: start\n - name: fleet.service\n command: start"
}
}
},
"baremetal_0": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249",
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"coreos_beta": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249",
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"ewr1": [
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"sjc1": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249"
],
"coreos-two": [
"147.75.64.169"
],
"d0ab8972-54a8-4bff-832b-28549d1bec96": [
"147.75.64.169"
]
}
```
In the `['_meta']['hostvars']` key, there is a list of devices (uniquely identified by their public IPv4 address) with their parameters. The other keys under `['_meta']` are lists of devices grouped by some parameter. Here, the groupings are plan (all devices are of type baremetal\_0), operating system, and facility (ewr1 and sjc1).
In addition to the parameter groups, there are also one-item groups with the UUID or hostname of the device.
You can now target groups in playbooks! The following playbook will apply a role that supplies resources for an Ansible target to all devices in the “coreos\_beta” group:
```
# playbook_bootstrap.yml
- hosts: coreos_beta
gather_facts: false
roles:
- defunctzombie.coreos-bootstrap
```
Don’t forget to supply the dynamic inventory in the `-i` argument!
```
$ ansible-playbook -u core -i packet_net.py playbook_bootstrap.yml
```
If you have any questions or comments, let us know!
Oracle Cloud Infrastructure Guide
=================================
Introduction
------------
Oracle provides a number of Ansible modules to interact with Oracle Cloud Infrastructure (OCI). In this guide, we will explain how you can use these modules to orchestrate, provision and configure your infrastructure on OCI.
Requirements
------------
To use the OCI Ansible modules, you must have the following prerequisites on your control node, the computer from which Ansible playbooks are executed.
1. [An Oracle Cloud Infrastructure account.](https://cloud.oracle.com/en_US/tryit)
2. A user created in that account, in a security group with a policy that grants the necessary permissions for working with resources in those compartments. For guidance, see [How Policies Work](https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/policies.htm).
3. The necessary credentials and OCID information.
Installation
------------
1. Install the Oracle Cloud Infrastructure Python SDK ([detailed installation instructions](https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/installation.html)):
```
pip install oci
```
2. Install the Ansible OCI Modules in one of two ways:
1. From Galaxy:
```
ansible-galaxy install oracle.oci_ansible_modules
```
2. From GitHub:
```
$ git clone https://github.com/oracle/oci-ansible-modules.git
```
```
$ cd oci-ansible-modules
```
Run one of the following commands:
* If Ansible is installed only for your user:
```
$ ./install.py
```
* If Ansible is installed as root:
```
$ sudo ./install.py
```
Configuration
-------------
When creating and configuring Oracle Cloud Infrastructure resources, Ansible modules use the authentication information outlined [here](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm).
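A typical SDK configuration file lives at `~/.oci/config` and looks roughly like the sketch below (all OCIDs, the fingerprint, the key path, and the region are placeholders):
```
[DEFAULT]
user=ocid1.user.oc1..<unique_id>
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_id>
region=us-ashburn-1
```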
Examples
--------
### Launch a compute instance
This [sample launch playbook](https://github.com/oracle/oci-ansible-modules/tree/master/samples/compute/launch_compute_instance) launches a public Compute instance and then accesses the instance from an Ansible module over an SSH connection. The sample illustrates how to:
* Generate a temporary, host-specific SSH key pair.
* Specify the public key from the key pair for connecting to the instance, and then launch the instance (a trimmed sketch of this step follows the list).
* Connect to the newly launched instance using SSH.
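A heavily trimmed sketch of the launch step, following the sample’s pattern (module parameters and variable names mirror the sample; treat this as illustrative and refer to the linked playbook for the full, working version):
```
- name: Launch a compute instance
  oci_instance:
    availability_domain: "{{ instance_ad }}"
    compartment_id: "{{ instance_compartment }}"
    name: my_test_instance
    image_id: "{{ instance_image }}"
    shape: "{{ instance_shape }}"
    metadata:
      ssh_authorized_keys: "{{ lookup('file', ssh_public_key_path) }}"
```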
### Create and manage Autonomous Data Warehouses
This [sample warehouse playbook](https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_data_warehouse) creates an Autonomous Data Warehouse and manages its lifecycle. The sample shows how to:
* Set up an Autonomous Data Warehouse.
* List all of the Autonomous Data Warehouse instances available in a compartment, filtered by the display name.
* Get the “facts” for a specified Autonomous Data Warehouse.
* Stop and start an Autonomous Data Warehouse instance.
* Delete an Autonomous Data Warehouse instance.
### Create and manage Autonomous Transaction Processing
This [sample playbook](https://github.com/oracle/oci-ansible-modules/tree/master/samples/database/autonomous_database) creates an Autonomous Transaction Processing database and manages its lifecycle. The sample shows how to:
* Set up an Autonomous Transaction Processing database instance.
* List all of the Autonomous Transaction Processing instances in a compartment, filtered by the display name.
* Get the “facts” for a specified Autonomous Transaction Processing instance.
* Delete an Autonomous Transaction Processing database instance.
You can find more examples here: [Sample Ansible Playbooks](https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/ansiblesamples.htm).
Vagrant Guide
=============
Introduction
------------
[Vagrant](https://www.vagrantup.com/) is a tool to manage virtual machine environments, and allows you to configure and use reproducible work environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner for these virtual machines, and the two tools work together well.
This guide will describe how to use Vagrant 1.7+ and Ansible together.
If you’re not familiar with Vagrant, you should visit [the documentation](https://www.vagrantup.com/docs/).
This guide assumes that you already have Ansible installed and working. Running from a Git checkout is fine. Follow the [Installing Ansible](../installation_guide/intro_installation#installation-guide) guide for more information.
Vagrant Setup
-------------
The first step once you’ve installed Vagrant is to create a `Vagrantfile` and customize it to suit your needs. This is covered in detail in the Vagrant documentation, but here is a quick example that includes a section to use the Ansible provisioner to manage a single machine:
```
# This guide is optimized for Vagrant 1.8 and above.
# Older versions of Vagrant put less info in the inventory they generate.
Vagrant.require_version ">= 1.8.0"
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/bionic64"
config.vm.provision "ansible" do |ansible|
ansible.verbose = "v"
ansible.playbook = "playbook.yml"
end
end
```
Notice the `config.vm.provision` section that refers to an Ansible playbook called `playbook.yml` in the same directory as the `Vagrantfile`. Vagrant runs the provisioner once the virtual machine has booted and is ready for SSH access.
There are a lot of Ansible options you can configure in your `Vagrantfile`. Visit the [Ansible Provisioner documentation](https://www.vagrantup.com/docs/provisioning/ansible.html) for more information.
```
$ vagrant up
```
This will start the VM, and run the provisioning playbook (on the first VM startup).
To re-run a playbook on an existing VM, just run:
```
$ vagrant provision
```
This will re-run the playbook against the existing VM.
Note that having the `ansible.verbose` option enabled will instruct Vagrant to show the full `ansible-playbook` command used behind the scene, as illustrated by this example:
```
$ PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/someone/coding-in-a-project/.vagrant/provisioners/ansible/inventory -v playbook.yml
```
This information can be quite useful to debug integration issues and can also be used to manually execute Ansible from a shell, as explained in the next section.
Running Ansible Manually
------------------------
Sometimes you may want to run Ansible manually against the machines. This is faster than kicking `vagrant provision` and pretty easy to do.
With our `Vagrantfile` example, Vagrant automatically creates an Ansible inventory file in `.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory`. This inventory is configured according to the SSH tunnel that Vagrant automatically creates. A typical automatically-created inventory file for a single machine environment may look something like this:
```
# Generated by Vagrant
default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/home/someone/coding-in-a-project/.vagrant/machines/default/virtualbox/private_key'
```
If you want to run Ansible manually, you will want to make sure to pass `ansible` or `ansible-playbook` commands the correct arguments, at least for the *inventory*.
```
$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
```
Advanced Usages
---------------
The “Tips and Tricks” chapter of the [Ansible Provisioner documentation](https://www.vagrantup.com/docs/provisioning/ansible.html) provides detailed information about more advanced Ansible features like:
* how to execute a playbook in parallel within a multi-machine environment (sketched below)
* how to integrate a local `ansible.cfg` configuration file
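As an illustration of the first point, here is a minimal multi-machine `Vagrantfile` sketch that provisions every machine with a single Ansible run (the box choice and machine count are arbitrary; the pattern follows Vagrant’s Tips and Tricks):
```
Vagrant.configure(2) do |config|
  N = 2
  (1..N).each do |machine_id|
    config.vm.define "machine#{machine_id}" do |machine|
      machine.vm.box = "ubuntu/bionic64"
      # Trigger the provisioner only once, after the last machine is up,
      # and let it target all machines in parallel via "limit = all".
      if machine_id == N
        machine.vm.provision "ansible" do |ansible|
          ansible.limit = "all"
          ansible.playbook = "playbook.yml"
        end
      end
    end
  end
end
```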
See also
[Vagrant Home](https://www.vagrantup.com/)
The Vagrant homepage with downloads
[Vagrant Documentation](https://www.vagrantup.com/docs/)
Vagrant Documentation
[Ansible Provisioner](https://www.vagrantup.com/docs/provisioning/ansible.html)
The Vagrant documentation for the Ansible provisioner
[Vagrant Issue Tracker](https://github.com/hashicorp/vagrant/issues?q=is%3Aopen+is%3Aissue+label%3Aprovisioners%2Fansible)
The open issues for the Ansible provisioner in the Vagrant project
[Working with playbooks](../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
Amazon Web Services Guide
=========================
Introduction
------------
Ansible contains a number of modules for controlling Amazon Web Services (AWS). The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in AWS context.
Requirements for the AWS modules are minimal.
All of the modules require and are tested against recent versions of botocore and boto3. Starting with the 2.0 AWS collection releases, it is generally the policy of the collections to support the versions of these libraries released 12 months prior to the most recent major collection revision. Individual modules may require a more recent library version to support specific features, or may require the boto library; check the module documentation for the minimum required version for each module. You must have the boto3 Python module installed on your control machine. You can install these modules from your OS distribution or using the Python package installer: `pip install boto3`.
Starting with the 2.0 releases of both collections, Python 2.7 support will be ended in accordance with AWS’ [end of Python 2.7 support](https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/) and Python 3.6 or greater will be required.
Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.
In your playbook steps we’ll typically be using the following pattern for provisioning steps:
```
- hosts: localhost
gather_facts: False
tasks:
- ...
```
Authentication
--------------
Authentication with the AWS-related modules is handled by either specifying your access and secret key as ENV variables or module arguments.
For environment variables:
```
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
```
For storing these in a vars\_file, ideally encrypted with ansible-vault:
```
---
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
```
Note that if you store your credentials in vars\_file, you need to refer to them in each AWS-module. For example:
```
- ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    image: "..."
```
Provisioning
------------
The ec2 module provisions and de-provisions instances within EC2.
An example of making sure there are only 5 instances tagged ‘Demo’ in EC2 follows.
In the example below, the “exact\_count” of instances is set to 5. This means if there are 0 instances already existing, then 5 new instances would be created. If there were 2 instances, only 3 would be created, and if there were 8 instances, 3 instances would be terminated.
What is being counted is specified by the “count\_tag” parameter. The parameter “instance\_tags” is used to apply tags to the newly created instance:
```
# demo_setup.yml
- hosts: localhost
gather_facts: False
tasks:
- name: Provision a set of instances
ec2:
key_name: my_key
group: test
instance_type: t2.micro
image: "{{ ami_id }}"
wait: true
exact_count: 5
count_tag:
Name: Demo
instance_tags:
Name: Demo
register: ec2
```
The data about what instances are created is being saved by the “register” keyword in the variable named “ec2”.
From this, we’ll use the add\_host module to dynamically create a host group consisting of these new instances. This facilitates performing configuration actions on the hosts immediately in a subsequent task:
```
# demo_setup.yml
- hosts: localhost
gather_facts: False
tasks:
- name: Provision a set of instances
ec2:
key_name: my_key
group: test
instance_type: t2.micro
image: "{{ ami_id }}"
wait: true
exact_count: 5
count_tag:
Name: Demo
instance_tags:
Name: Demo
register: ec2
- name: Add all instance public IPs to host group
add_host: hostname={{ item.public_ip }} groups=ec2hosts
loop: "{{ ec2.instances }}"
```
With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps:
```
# demo_setup.yml
- name: Provision a set of instances
hosts: localhost
# ... AS ABOVE ...
- hosts: ec2hosts
name: configuration play
user: ec2-user
gather_facts: true
tasks:
- name: Check NTP service
service: name=ntpd state=started
```
Security Groups
---------------
Security groups on AWS are stateful. The response of a request from your instance is allowed to flow in regardless of inbound security group rules and vice-versa. If you want to allow traffic only to the AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as an egress rule:
```
- name: fetch raw ip ranges for aws s3
set_fact:
raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"
- name: prepare list structure for ec2_group module
set_fact:
s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
loop: "{{ raw_s3_ranges }}"
- name: set S3 IP ranges to egress rules
ec2_group:
name: aws_s3_ip_ranges
description: allow outgoing traffic to aws S3 service
region: eu-central-1
state: present
vpc_id: vpc-123456
purge_rules: true
purge_rules_egress: true
rules: []
rules_egress: "{{ s3_ranges }}"
tags:
Name: aws_s3_ip_ranges
```
Host Inventory
--------------
Once your nodes are spun up, you’ll probably want to talk to them again. With a cloud setup, it’s best to not maintain a static list of cloud hostnames in text files. Rather, the best way to handle this is to use the aws\_ec2 inventory plugin. See [Working with dynamic inventory](../user_guide/intro_dynamic_inventory#dynamic-inventory).
The plugin will also return instances that were created outside of Ansible and allow Ansible to manage them.
Tags And Groups And Variables
-----------------------------
When using the inventory plugin, you can configure extra inventory structure based on the metadata returned by AWS.
For instance, you might use `keyed_groups` to create groups from instance tags:
```
plugin: aws_ec2
keyed_groups:
- prefix: tag
key: tags
```
You can then target all instances with a “class” tag where the value is “webserver” in a play:
```
- hosts: tag_class_webserver
tasks:
- ping
```
You can also use these groups with ‘group\_vars’ to set variables that are automatically applied to matching instances. See [Organizing host and group variables](../user_guide/intro_inventory#splitting-out-vars).
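For example, a hypothetical `group_vars/tag_class_webserver.yml` file would apply its variables to every instance in that keyed group (the file name follows the group shown above; the variables are illustrative):
```
---
# applied automatically to all hosts in the tag_class_webserver group
http_port: 80
max_clients: 200
```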
Autoscaling with Ansible Pull
-----------------------------
Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that can configure autoscaling policy.
When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.
To do this, pre-bake machine images which contain the necessary ansible-pull invocation. Ansible-pull is a command line tool that fetches a playbook from a git server and runs it locally.
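The baked-in invocation might look something like the following (the repository URL and playbook name are placeholders):
```
$ ansible-pull -U https://github.com/example/ansible-config.git local.yml
```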
One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context. For this reason, the autoscaling solution provided below in the next section can be a better approach.
Read [ansible-pull](../cli/ansible-pull#ansible-pull) for more information on pull-mode playbooks.
Autoscaling with Ansible Tower
------------------------------
[Red Hat Ansible Tower](https://docs.ansible.com/ansible/3/reference_appendices/tower.html#ansible-tower "(in Ansible v3)") also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts.
Ansible With (And Versus) CloudFormation
----------------------------------------
CloudFormation is an Amazon technology for defining a cloud stack as a JSON or YAML document.
Ansible modules provide an easier-to-use interface than CloudFormation in many cases, without defining a complex JSON/YAML document. This is recommended for most users.
However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template to Amazon.
When using Ansible with CloudFormation, typically Ansible will be used with a tool like Packer to build images, and CloudFormation will launch those images, or Ansible will be invoked through user data once the image comes online, or a combination of the two.
Please see the examples in the Ansible CloudFormation module for more details.
AWS Image Building With Ansible
-------------------------------
Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this, one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with the ec2 module or other Ansible AWS modules such as ec2\_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible’s ec2\_ami module.
Generally speaking, we find most users using Packer.
See the Packer documentation of the [Ansible local Packer provisioner](https://www.packer.io/docs/provisioners/ansible-local.html) and [Ansible remote Packer provisioner](https://www.packer.io/docs/provisioners/ansible.html).
If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.
Next Steps: Explore Modules
---------------------------
Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the “Cloud” category of the module documentation for a full list with examples.
See also
[Collection Index](../collections/index#list-of-collections)
Browse existing collections, modules, and plugins
[Working with playbooks](../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Controlling where tasks run: delegation and local actions](../user_guide/playbooks_delegation#playbooks-delegation)
Delegation, useful for working with load balancers, clouds, and locally executed steps.
[User Mailing List](https://groups.google.com/group/ansible-devel)
Have a question? Stop by the google group!
[irc.libera.chat](https://libera.chat/)
#ansible IRC chat channel
Google Cloud Platform Guide
===========================
Introduction
------------
Ansible + Google have been working together on a set of auto-generated Ansible modules designed to consistently and comprehensively cover the entirety of the Google Cloud Platform (GCP).
Ansible contains modules for managing Google Cloud Platform resources, including creating instances, controlling network access, working with persistent disks, managing load balancers, and a lot more.
These new modules can be found under a new consistent name scheme “gcp\_\*” (Note: gcp\_target\_proxy and gcp\_url\_map are legacy modules, despite the “gcp\_\*” name. Please use gcp\_compute\_target\_proxy and gcp\_compute\_url\_map instead).
Additionally, the gcp\_compute inventory plugin can discover all Google Compute Engine (GCE) instances and make them automatically available in your Ansible inventory.
You may see a collection of other GCP modules that do not conform to this naming convention. These are the original modules primarily developed by the Ansible community. You will find some overlapping functionality such as with the “gce” module and the new “gcp\_compute\_instance” module. Either can be used, but you may experience issues trying to use them together.
While the community GCP modules are not going away, Google is investing effort into the new “gcp\_\*” modules. Google is committed to ensuring the Ansible community has a great experience with GCP and therefore recommends adopting these new modules if possible.
Requirements
------------
The GCP modules require both the `requests` and the `google-auth` libraries to be installed.
```
$ pip install requests google-auth
```
Alternatively for RHEL / CentOS, the `python-requests` package is also available to satisfy the `requests` library requirement.
```
$ yum install python-requests
```
Credentials
-----------
It’s easy to create a GCP account with credentials for Ansible. You have multiple options to get your credentials - here are two of the most common options:
* Service Accounts (Recommended): Use JSON service accounts with specific permissions.
* Machine Accounts: Use the permissions associated with the GCP Instance you’re using Ansible on.
For the following examples, we’ll be using service account credentials.
To work with the GCP modules, you’ll first need to get some credentials in the JSON format:
1. [Create a Service Account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount)
2. [Download JSON credentials](https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts)
Once you have your credentials, there are two different ways to provide them to Ansible:
* by specifying them directly as module parameters
* by setting environment variables
### Providing Credentials as Module Parameters
For the GCP modules you can specify the credentials as arguments:
* `auth_kind`: type of authentication being used (choices: machineaccount, serviceaccount, application)
* `service_account_email`: email associated with the project
* `service_account_file`: path to the JSON credentials file
* `project`: id of the project
* `scopes`: The specific scopes that you want the actions to use.
For example, to create a new IP address using the `gcp_compute_address` module, you can use the following configuration:
```
- name: Create IP address
hosts: localhost
gather_facts: no
vars:
service_account_file: /home/my_account.json
project: my-project
auth_kind: serviceaccount
scopes:
- https://www.googleapis.com/auth/compute
tasks:
- name: Allocate an IP Address
gcp_compute_address:
state: present
name: 'test-address1'
region: 'us-west1'
project: "{{ project }}"
auth_kind: "{{ auth_kind }}"
service_account_file: "{{ service_account_file }}"
scopes: "{{ scopes }}"
```
### Providing Credentials as Environment Variables
Set the following environment variables before running Ansible in order to configure your credentials:
```
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
```
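For example, when using service account credentials, the variables might be set like this before invoking `ansible-playbook` (the values shown are illustrative):
```
$ export GCP_AUTH_KIND=serviceaccount
$ export GCP_SERVICE_ACCOUNT_FILE=/home/my_account.json
$ export GCP_SCOPES=https://www.googleapis.com/auth/compute
```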
GCE Dynamic Inventory
---------------------
The best way to interact with your hosts is to use the gcp\_compute inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
To be able to use this GCE dynamic inventory plugin, you need to enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins = gcp_compute
```
Then, create a file that ends in `.gcp.yml` in your root directory.
The gcp\_compute inventory plugin accepts the same authentication information as any GCP module.
Here’s an example of a valid inventory file:
```
plugin: gcp_compute
projects:
- graphite-playground
auth_kind: serviceaccount
service_account_file: /home/alexstephen/my_account.json
```
Executing `ansible-inventory --list -i <filename>.gcp.yml` will create a list of GCP instances that are ready to be configured using Ansible.
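The plugin also supports the usual constructed-inventory options such as `keyed_groups` and `filters`. The following sketch (values illustrative) groups discovered instances by zone and limits discovery to running instances; check the gcp\_compute plugin documentation for the full option list:
```
plugin: gcp_compute
projects:
  - graphite-playground
auth_kind: serviceaccount
service_account_file: /home/alexstephen/my_account.json
# Create groups such as "zone_us-central1-a" from each instance's zone
keyed_groups:
  - key: zone
    prefix: zone
# GCP API filter expression; only discover running instances
filters:
  - status = RUNNING
```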
### Create an instance
The full range of GCP modules provide the ability to create a wide variety of GCP resources with the full support of the entire GCP API.
The following playbook creates a GCE Instance. This instance relies on other GCP resources, such as a Disk. By creating those resources separately, we can give as much detail as necessary about how we want to configure them, for example the formatting of the Disk. By registering each resource to a variable, we can simply insert the variable into the instance task. The gcp\_compute\_instance module will figure out the rest.
```
- name: Create an instance
hosts: localhost
gather_facts: no
vars:
gcp_project: my-project
gcp_cred_kind: serviceaccount
gcp_cred_file: /home/my_account.json
zone: "us-central1-a"
region: "us-central1"
tasks:
- name: create a disk
gcp_compute_disk:
name: 'disk-instance'
size_gb: 50
source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
zone: "{{ zone }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
state: present
register: disk
    - name: create an address
gcp_compute_address:
name: 'address-instance'
region: "{{ region }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
state: present
register: address
    - name: create an instance
gcp_compute_instance:
state: present
name: test-vm
machine_type: n1-standard-1
disks:
- auto_delete: true
boot: true
source: "{{ disk }}"
network_interfaces:
- network: null # use default
access_configs:
- name: 'External NAT'
nat_ip: "{{ address }}"
type: 'ONE_TO_ONE_NAT'
zone: "{{ zone }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
register: instance
- name: Wait for SSH to come up
wait_for: host={{ address.address }} port=22 delay=10 timeout=60
- name: Add host to groupname
add_host: hostname={{ address.address }} groupname=new_instances
- name: Manage new instances
hosts: new_instances
connection: ssh
become: True
roles:
- base_configuration
- production_server
```
Note that use of the “add\_host” module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines in the ‘new\_instances’ group, if so desired. Any sort of arbitrary configuration is possible at this point.
For more information about Google Cloud, please visit the [Google Cloud website](https://cloud.google.com).
Migration Guides
----------------
### gce.py -> gcp\_compute\_instance.py
As of Ansible 2.8, we’re encouraging everyone to move from the `gce` module to the `gcp_compute_instance` module. The `gcp_compute_instance` module has better support for all of GCP’s features, fewer dependencies, more flexibility, and better supports GCP’s authentication systems.
The `gcp_compute_instance` module supports all of the features of the `gce` module (and more!). Below is a mapping of `gce` fields over to `gcp_compute_instance` fields.
| gce.py | gcp\_compute\_instance.py | Notes |
| --- | --- | --- |
| state | state/status | State on gce has multiple values: “present”, “absent”, “stopped”, “started”, “terminated”. State on gcp\_compute\_instance is used to describe if the instance exists (present) or does not (absent). Status is used to describe if the instance is “started”, “stopped” or “terminated”. |
| image | disks[].initialize\_params.source\_image | You’ll need to create a single disk using the disks[] parameter and set it to be the boot disk (disks[].boot = true) |
| image\_family | disks[].initialize\_params.source\_image | See above. |
| external\_projects | disks[].initialize\_params.source\_image | The name of the source\_image will include the name of the project. |
| instance\_names | Use a loop or multiple tasks. | Using loops is a more Ansible-centric way of creating multiple instances and gives you the most flexibility. |
| service\_account\_email | service\_accounts[].email | This is the service\_account email address that you want the instance to be associated with. It is not the service\_account email address that is used for the credentials necessary to create the instance. |
| service\_account\_permissions | service\_accounts[].scopes | These are the permissions you want to grant to the instance. |
| pem\_file | Not supported. | We recommend using JSON service account credentials instead of PEM files. |
| credentials\_file | service\_account\_file | |
| project\_id | project | |
| name | name | This field does not accept an array of names. Use a loop to create multiple instances. |
| num\_instances | Use a loop | For maximum flexibility, we’re encouraging users to use Ansible features to create multiple instances, rather than letting the module do it for you. |
| network | network\_interfaces[].network | |
| subnetwork | network\_interfaces[].subnetwork | |
| persistent\_boot\_disk | disks[].type = ‘PERSISTENT’ | |
| disks | disks[] | |
| ip\_forward | can\_ip\_forward | |
| external\_ip | network\_interfaces[].access\_configs.nat\_ip | This field takes multiple types of values. You can create an IP address with `gcp_compute_address` and place the name/output of the address here. You can also place the string value of the IP address’s GCP name or the actual IP address. |
| disks\_auto\_delete | disks[].auto\_delete | |
| preemptible | scheduling.preemptible | |
| disk\_size | disks[].initialize\_params.disk\_size\_gb | |
An example task using a loop is below:
```
gcp_compute_instance:
name: "{{ item }}"
machine_type: n1-standard-1
... # any other settings
zone: us-central1-a
project: "my-project"
auth_kind: "service_account_file"
service_account_file: "~/my_account.json"
state: present
loop:
- instance-1
- instance-2
```
Online.net Guide
================
Introduction
------------
Online is a French hosting company mainly known for providing bare-metal servers named Dedibox. Check it out: <https://www.online.net/en>
### Dynamic inventory for Online resources
Ansible has a dynamic inventory plugin that can list your resources.
1. Create a YAML configuration such as `online_inventory.yml` with this content:
```
plugin: online
```
2. Set your `ONLINE_TOKEN` environment variable with your token.
You need to open an account and log into it before you can get a token. You can find your token at the following page: <https://console.online.net/en/api/access>
3. You can test that your inventory is working by running:
```
$ ansible-inventory -v -i online_inventory.yml --list
```
4. Now you can run your playbook or any other module with this inventory:
```
$ ansible all -i online_inventory.yml -m ping
sd-96735 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
VMware REST Scenarios
=====================
These scenarios teach you how to accomplish common VMware tasks using the REST API and the Ansible `vmware.vmware_rest` collection. To get started, please select the task you want to accomplish.
* [How to install the vmware\_rest collection](vmware_rest_scenarios/installation)
* [How to configure the vmware\_rest collection](vmware_rest_scenarios/authentication)
* [How to collect information about your environment](vmware_rest_scenarios/collect_information)
* [How to create a Virtual Machine](vmware_rest_scenarios/create_vm)
* [Retrieve information from a specific VM](vmware_rest_scenarios/vm_info)
* [How to modify a virtual machine](vmware_rest_scenarios/vm_hardware_tuning)
* [How to run a virtual machine](vmware_rest_scenarios/run_a_vm)
* [How to get information from a running virtual machine](vmware_rest_scenarios/vm_tool_information)
* [How to configure the VMware tools of a running virtual machine](vmware_rest_scenarios/vm_tool_configuration)
Rackspace Cloud Guide
=====================
Introduction
------------
Note
This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud in [ansible-examples](https://github.com/ansible/ansible-examples/).
Ansible contains a number of core modules for interacting with Rackspace Cloud.
The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible in a Rackspace Cloud context.
Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are tested against pyrax 1.5 or higher. You’ll need this Python module installed on the execution host.
`pyrax` is not currently available in many operating system package repositories, so you will likely need to install it via pip:
```
$ pip install pyrax
```
Ansible creates an implicit localhost that executes in the same context as the `ansible-playbook` and the other CLI tools. If for any reason you need or want to have it in your inventory you should do something like the following:
```
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
```
For more information see [Implicit Localhost](../inventory/implicit_localhost#implicit-localhost)
In playbook steps, we’ll typically be using the following pattern:
```
- hosts: localhost
gather_facts: False
tasks:
```
Credentials File
----------------
The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like:
```
[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e
```
Setting the environment variable `RAX_CREDS_FILE` to the path of this file tells Ansible where to load this information from.
More information about this credentials file can be found at <https://github.com/pycontribs/pyrax/blob/master/docs/getting_started.md#authenticating>
### Running from a Python Virtual Environment (Optional)
Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules, however when instructed by setting the inventory variable ‘ansible\_python\_interpreter’, Ansible will use this specified path instead to find Python. This can be a cause of confusion as one may assume that modules running on ‘localhost’, or perhaps running via ‘local\_action’, are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
```
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
```
Note
pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
Provisioning
------------
Now for the fun parts.
The ‘rax’ module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:
* Avoiding installing the pyrax library on remote nodes
* No need to encrypt and distribute credentials to remote nodes
* Speed and simplicity
Note
Authentication with the Rackspace-related modules is handled by either specifying your username and API key as environment variables or passing them as module arguments, or by specifying the location of a credentials file.
Here is a basic example of provisioning an instance in ad hoc mode:
```
$ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"
```
Here’s what it would look like in a playbook, assuming the parameters were defined in variables:
```
tasks:
- name: Provision a set of instances
rax:
name: "{{ rax_name }}"
flavor: "{{ rax_flavor }}"
image: "{{ rax_image }}"
count: "{{ rax_count }}"
group: "{{ group }}"
wait: yes
register: rax
delegate_to: localhost
```
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called “raxhosts”, with each node's hostname, IP address, and root password being added to the inventory.
```
- name: Add the instances we created (by public IP) to the group 'raxhosts'
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
groups: raxhosts
loop: "{{ rax.success }}"
when: rax.action == 'create'
```
With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
```
- name: Configuration play
hosts: raxhosts
user: root
roles:
- ntp
- webserver
```
The method above ties the configuration of a host with the provisioning step. This isn’t always what you want, and leads us to the next section.
Host Inventory
--------------
Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the “rax” inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in “rax” and can provide an easy way to sort between host groups and roles. If you don’t want to use the `rax.py` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
### rax.py
To use the Rackspace dynamic inventory script, copy `rax.py` into your inventory directory and make it executable. You can specify a credentials file for `rax.py` utilizing the `RAX_CREDS_FILE` environment variable.
Note
Dynamic inventory scripts (like `rax.py`) are saved in `/usr/share/ansible/inventory` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to `$VIRTUALENV/share/inventory`.
Note
Users of [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html#ansible-platform) will note that dynamic inventory is natively supported by the controller in the platform, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps:
```
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
```
`rax.py` also accepts a `RAX_REGION` environment variable, which can contain an individual region, or a comma separated list of regions.
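For example, to limit the inventory to the DFW and ORD regions (the regions shown are illustrative):
```
$ RAX_CREDS_FILE=~/.raxpub RAX_REGION=DFW,ORD ansible all -i rax.py -m setup
```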
When using `rax.py`, you will not have a ‘localhost’ defined in the inventory.
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need ‘localhost’ defined. The recommended way to do this would be to create an `inventory` directory, and place both the `rax.py` script and a file containing `localhost` in it.
Executing `ansible` or `ansible-playbook` and specifying the `inventory` directory instead of an individual file, will cause ansible to evaluate each file in that directory for inventory.
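For example, the `inventory` directory could hold the `rax.py` script next to a static file (named, say, `localhost`) with the following contents:
```
[localhost]
localhost ansible_connection=local
```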
Let’s test our inventory script to see if it can talk to Rackspace Cloud.
```
$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
```
Assuming things are properly configured, the `rax.py` inventory script will output information similar to the following, which will be utilized for inventory and variables.
```
{
"ORD": [
"test"
],
"_meta": {
"hostvars": {
"test": {
"ansible_host": "198.51.100.1",
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
}
}
}
}
```
### Standard Inventory
When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.
This can be achieved with the `rax_facts` module and an inventory file similar to the following:
```
[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD
```
```
- name: Gather info about servers
hosts: test_servers
gather_facts: False
tasks:
- name: Get facts about servers
rax_facts:
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: "{{ rax_region }}"
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
```
While you don’t need to know how it works, it may be interesting to know what kind of variables are returned.
The `rax_facts` module provides facts as follows, which match the `rax.py` inventory script:
```
{
"ansible_facts": {
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
},
"changed": false
}
```
Use Cases
---------
This section covers some additional usage examples built around a specific use case.
### Network and Server
Create an isolated cloud network and build a server
```
- name: Build Servers on an Isolated Network
hosts: localhost
gather_facts: False
tasks:
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
region: IAD
state: present
delegate_to: localhost
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: 2
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- my-net
region: IAD
state: present
count: 5
exact_count: yes
group: web
wait: yes
wait_timeout: 360
register: rax
delegate_to: localhost
```
### Complete Environment
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html
```
---
- name: Build environment
hosts: localhost
gather_facts: False
tasks:
- name: Load Balancer create request
rax_clb:
credentials: ~/.raxpub
name: my-lb
port: 80
protocol: HTTP
algorithm: ROUND_ROBIN
type: PUBLIC
timeout: 30
region: IAD
wait: yes
state: present
meta:
app: my-cool-app
register: clb
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
region: IAD
register: network
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- private
- my-net
region: IAD
state: present
count: 5
exact_count: yes
group: web
wait: yes
register: rax
- name: Add servers to web host group
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
groups: web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Add servers to Load balancer
rax_clb_nodes:
credentials: ~/.raxpub
load_balancer_id: "{{ clb.balancer.id }}"
address: "{{ item.rax_networks.private|first }}"
port: 80
condition: enabled
type: primary
wait: yes
region: IAD
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Configure servers
hosts: web
handlers:
- name: restart nginx
service: name=nginx state=restarted
tasks:
- name: Install nginx
apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
notify:
- restart nginx
- name: Ensure nginx starts on boot
service: name=nginx state=started enabled=yes
- name: Create custom index.html
copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
owner=root group=root mode=0644
```
### RackConnect and Managed Cloud
When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.
These examples show creating servers, and ensuring that the Rackspace automation has completed before Ansible continues onwards.
For simplicity, these examples are joined; however, both are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.
The RackConnect portions only apply to RackConnect version 2.
#### Using a Control Machine
```
- name: Create an exact count of servers
hosts: localhost
gather_facts: False
tasks:
- name: Server build requests
rax:
credentials: ~/.raxpub
name: web%03d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
region: DFW
state: present
count: 1
exact_count: yes
group: web
wait: yes
register: rax
- name: Add servers to in memory groups
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
rax_id: "{{ item.rax_id }}"
groups: web,new_web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Wait for rackconnect and managed cloud automation to complete
hosts: new_web
gather_facts: false
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
        - name: Wait for rackconnect automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
retries: 30
delay: 10
- name: Update new_web hosts with IP that RackConnect assigns
hosts: new_web
gather_facts: false
tasks:
- name: Get facts about servers
rax_facts:
name: "{{ inventory_hostname }}"
region: DFW
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
- name: Base Configure Servers
hosts: web
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
```
#### Using Ansible Pull
```
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Get region
command: xenstore-read vm-data/provider_data/region
register: rax_region
when: bootstrap.stat.exists != True
- name: Wait for rackconnect automation to complete
uri:
url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
return_content: yes
register: automation_status
when: bootstrap.stat.exists != True
until: automation_status['automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
wait_for:
path: /tmp/rs_managed_cloud_automation_complete
delay: 10
when: bootstrap.stat.exists != True
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
```
#### Using Ansible Pull with XenStore
```
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Wait for rackconnect_automation_status xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
register: rcas_exists
when: bootstrap.stat.exists != True
failed_when: rcas_exists.rc|int > 1
until: rcas_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for rackconnect automation to complete
command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
register: rcas
when: bootstrap.stat.exists != True
until: rcas.stdout|replace('"', '') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for rax_service_level_automation xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
register: rsla_exists
when: bootstrap.stat.exists != True
failed_when: rsla_exists.rc|int > 1
until: rsla_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
      command: xenstore-read vm-data/user-metadata/rax_service_level_automation
register: rsla
when: bootstrap.stat.exists != True
      until: rsla.stdout|replace('"', '') == 'Complete'
retries: 30
delay: 10
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
```
Advanced Usage
--------------
### Autoscaling with AWX or Red Hat Ansible Automation Platform
The GUI component of [Red Hat Ansible Automation Platform](https://docs.ansible.com/ansible/3/reference_appendices/tower.html#ansible-tower "(in Ansible v3)") also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes. See [the documentation on provisioning callbacks](https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks) for more details.
A benefit of using the callback approach over pull mode is that job results are still centrally recorded and less information has to be shared with remote hosts.
### Orchestration in the Rackspace Cloud
Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
Alibaba Cloud Compute Services Guide
====================================
Introduction
------------
Ansible contains several modules for controlling and managing Alibaba Cloud Compute Services (Alicloud). This guide explains how to use the Alicloud Ansible modules together.
All Alicloud modules require `footmark` - install it on your control machine with `pip install footmark`.
Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with `connection: local`, rather than on remote machines defined in your hosts.
Normally, you’ll use the following pattern for plays that provision Alicloud resources:
```
- hosts: localhost
connection: local
vars:
- ...
tasks:
- ...
```
Authentication
--------------
You can specify your Alicloud authentication credentials (access key and secret key) by passing them as environment variables or by storing them in a vars file.
To pass authentication credentials as environment variables:
```
export ALICLOUD_ACCESS_KEY='Alicloud123'
export ALICLOUD_SECRET_KEY='AlicloudSecret123'
```
To store authentication credentials in a vars\_file, encrypt them with [Ansible Vault](../user_guide/vault#vault) to keep them secure, then list them:
```
---
alicloud_access_key: "--REMOVED--"
alicloud_secret_key: "--REMOVED--"
```
Note that if you store your credentials in a vars\_file, you need to refer to them in each Alicloud module. For example:
```
- ali_instance:
alicloud_access_key: "{{alicloud_access_key}}"
alicloud_secret_key: "{{alicloud_secret_key}}"
image_id: "..."
```
Provisioning
------------
Alicloud modules create Alicloud ECS instances, disks, virtual private clouds, virtual switches, security groups and other resources.
You can use the `count` parameter to control the number of resources you create or terminate. For example, if you want exactly 5 instances tagged `NewECS`, set the `count` of instances to 5 and the `count_tag` to `NewECS`, as shown in the last task of the example playbook below. If there are no instances with the tag `NewECS`, the task creates 5 new instances. If there are 2 instances with that tag, the task creates 3 more. If there are 8 instances with that tag, the task terminates 3 of those instances.
If you do not specify a `count_tag`, the task creates the number of instances you specify in `count` with the `instance_name` you provide.
```
# alicloud_setup.yml
- hosts: localhost
connection: local
tasks:
- name: Create VPC
ali_vpc:
cidr_block: '{{ cidr_block }}'
vpc_name: new_vpc
register: created_vpc
- name: Create VSwitch
ali_vswitch:
alicloud_zone: '{{ alicloud_zone }}'
cidr_block: '{{ vsw_cidr }}'
vswitch_name: new_vswitch
vpc_id: '{{ created_vpc.vpc.id }}'
register: created_vsw
- name: Create security group
ali_security_group:
name: new_group
vpc_id: '{{ created_vpc.vpc.id }}'
rules:
- proto: tcp
port_range: 22/22
cidr_ip: 0.0.0.0/0
priority: 1
rules_egress:
- proto: tcp
port_range: 80/80
cidr_ip: 192.168.0.54/32
priority: 1
register: created_group
- name: Create a set of instances
ali_instance:
security_groups: '{{ created_group.group_id }}'
instance_type: ecs.n4.small
image_id: "{{ ami_id }}"
instance_name: "My-new-instance"
instance_tags:
Name: NewECS
Version: 0.0.1
count: 5
count_tag:
Name: NewECS
allocate_public_ip: true
max_bandwidth_out: 50
vswitch_id: '{{ created_vsw.vswitch.id}}'
register: create_instance
```
In the example playbook above, data about the vpc, vswitch, group, and instances created by this playbook are saved in the variables defined by the “register” keyword in each task.
Each Alicloud module offers a variety of parameter options. Not all options are demonstrated in the above example. See each individual module for further details and examples.
Cisco Meraki Guide
==================
* [What is Cisco Meraki?](#what-is-cisco-meraki)
+ [MS Switches](#ms-switches)
+ [MX Firewalls](#mx-firewalls)
+ [MR Wireless Access Points](#mr-wireless-access-points)
* [Using the Meraki modules](#using-the-meraki-modules)
* [Common Parameters](#common-parameters)
* [Meraki Authentication](#meraki-authentication)
* [Returned Data Structures](#returned-data-structures)
* [Handling Returned Data](#handling-returned-data)
* [Merging Existing and New Data](#merging-existing-and-new-data)
* [Error Handling](#error-handling)
What is Cisco Meraki?
---------------------
Cisco Meraki is an easy-to-use, cloud-based, network infrastructure platform for enterprise environments. While most network hardware uses command-line interfaces (CLIs) for configuration, Meraki uses an easy-to-use Dashboard hosted in the Meraki cloud. No on-premises management hardware or software is required - only the network infrastructure to run your business.
### MS Switches
Meraki MS switches come in multiple flavors and form factors. Meraki switches support 10/100/1000/10000 ports, as well as Cisco’s mGig technology for 2.5/5/10Gbps copper connectivity. 8, 24, and 48 port flavors are available with PoE (802.3af/802.3at/UPoE) available on many models.
### MX Firewalls
Meraki’s MX firewalls support full layer 3-7 deep packet inspection. MX firewalls are compatible with a variety of VPN technologies including IPSec, SSL VPN, and Meraki’s easy-to-use AutoVPN.
### MR Wireless Access Points
MR access points are enterprise-class, high-performance access points for the enterprise. MR access points have MIMO technology and integrated beamforming built-in for high performance applications. BLE allows for advanced location applications to be developed with no on-premises analytics platforms.
Using the Meraki modules
------------------------
Meraki modules provide a user-friendly interface to manage your Meraki environment using Ansible. For example, details about SNMP settings for a particular organization can be discovered using the `meraki_snmp` module.
```
- name: Query SNMP settings
meraki_snmp:
api_key: abc123
org_name: AcmeCorp
state: query
delegate_to: localhost
```
Information about a particular object can be queried. For example, the `meraki_admin` module supports querying details about a specific administrator:
```
- name: Gather information about Jane Doe
meraki_admin:
api_key: abc123
org_name: AcmeCorp
state: query
email: [email protected]
delegate_to: localhost
```
Common Parameters
-----------------
All Ansible Meraki modules support the following parameters which affect communication with the Meraki Dashboard API. Most of these should only be used by Meraki developers and not the general public.
host
Hostname or IP of Meraki Dashboard.
use\_https
Specifies whether communication should be over HTTPS. (Defaults to `yes`)
use\_proxy
Whether to use a proxy for any communication.
validate\_certs
Determine whether certificates should be validated or trusted. (Defaults to `yes`)
These are the common parameters which are used for almost every module.
org\_name
Name of organization to perform actions in.
org\_id
ID of organization to perform actions in.
net\_name
Name of network to perform actions in.
net\_id
ID of network to perform actions in.
state
General specification of what action to take. `query` does lookups. `present` creates or edits. `absent` deletes.
Hint
Use the `org_id` and `net_id` parameters when possible. `org_name` and `net_name` require additional behind-the-scenes API calls to learn the ID values. `org_id` and `net_id` will perform faster.
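For example, the SNMP query shown earlier could be rewritten to use an organization ID instead of a name, skipping the extra lookup (the ID shown is illustrative):
```
- name: Query SNMP settings by organization ID
  meraki_snmp:
    api_key: abc123
    org_id: 123456
    state: query
  delegate_to: localhost
```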
Meraki Authentication
---------------------
All API access with the Meraki Dashboard requires an API key. An API key can be generated from the organization’s settings page. Each play in a playbook requires the `api_key` parameter to be specified.
The “Vault” feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See [Using encrypted variables and files](../user_guide/vault#playbooks-vault) for more information.
Meraki’s API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first.
Returned Data Structures
------------------------
Meraki and its related Ansible modules return most information in the form of a list. For example, this is returned information by `meraki_admin` querying administrators. It returns a list even though there’s only one.
```
[
{
"orgAccess": "full",
"name": "John Doe",
"tags": [],
"networks": [],
"email": "[email protected]",
"id": "12345677890"
}
]
```
Handling Returned Data
----------------------
Since Meraki’s response data uses lists instead of properly keyed dictionaries for responses, certain strategies should be used when querying data for particular information. For many situations, use the `selectattr()` Jinja2 function.
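For example, assuming the administrator query shown earlier had been registered to a variable named `admins` (a hypothetical name), `selectattr()` can pick a single administrator out of the returned list:
```
- name: Extract John Doe from the list of administrators
  set_fact:
    john: "{{ admins.data | selectattr('email', 'equalto', '[email protected]') | list | first }}"
```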
Merging Existing and New Data
-----------------------------
Ansible’s Meraki modules do not allow for manipulating data. For example, you may need to insert a rule in the middle of a firewall ruleset. Ansible and the Meraki modules lack a way to directly merge a new entry into an existing list. However, a playbook can use a few tasks to split the list where you need to insert a rule and then merge the pieces together again with the new rule added. The steps involved are as follows:
1. Create blank “front” and “back” lists.
```
vars:
- front_rules: []
- back_rules: []
```
2. Get existing firewall rules from Meraki and create a new variable.
```
- name: Get firewall rules
meraki_mx_l3_firewall:
auth_key: abc123
org_name: YourOrg
net_name: YourNet
state: query
delegate_to: localhost
register: rules
- set_fact:
original_ruleset: '{{rules.data}}'
```
3. Write the new rule. The new rule needs to be in a list so it can be merged with the other lists in an upcoming step; the leading `-` puts the rule in a list.
```
- set_fact:
    new_rule:
      - comment: Block traffic to server
        src_cidr: 192.0.1.0/24
        src_port: any
        dst_cidr: 192.0.1.2/32
        dst_port: any
        protocol: any
        policy: deny
```
4. Split the rules into two lists. This assumes the existing ruleset is 2 rules long.
```
- set_fact:
    front_rules: '{{front_rules + original_ruleset[:1]}}'
- set_fact:
    back_rules: '{{back_rules + original_ruleset[1:]}}'
```
5. Merge rules with the new rule in the middle.
```
- set_fact:
new_ruleset: '{{front_rules + new_rule + back_rules}}'
```
6. Upload new ruleset to Meraki.
```
- name: Set two firewall rules
meraki_mx_l3_firewall:
auth_key: abc123
org_name: YourOrg
net_name: YourNet
state: present
rules: '{{ new_ruleset }}'
delegate_to: localhost
```
Error Handling
--------------
Ansible’s Meraki modules will often fail if improper or incompatible parameters are specified. However, there will likely be scenarios where the module accepts the information but the Meraki API rejects the data. If this happens, the error details will be returned in the `body` field, along with an HTTP 400 status code.
Meraki’s API returns a 404 error if the API key is not correct. It does not provide any specific error saying the key is incorrect. If you receive a 404 error, check the API key first. 404 errors can also occur if improper object IDs (ex. `org_id`) are specified.
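One way to act on such failures is to register the result and wrap the task in a block with a rescue section. The sketch below reuses the firewall task from the previous section and simply prints whatever the Meraki API rejected:
```
- name: Upload the ruleset, reporting any API rejection
  block:
    - name: Set firewall rules
      meraki_mx_l3_firewall:
        auth_key: abc123
        org_name: YourOrg
        net_name: YourNet
        state: present
        rules: '{{ new_ruleset }}'
      delegate_to: localhost
      register: fw_result
  rescue:
    - name: Inspect the error returned by the Meraki API
      debug:
        var: fw_result
```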
Docker Guide
============
The content on this page has moved. Please see the updated [Docker Guide](../collections/community/docker/docsite/scenario_guide#ansible-collections-community-docker-docsite-scenario-guide) in the [community.docker collection](https://galaxy.ansible.com/community/docker).
Public Cloud Guides
===================
The guides in this section cover using Ansible with a range of public cloud platforms. They explore particular use cases in greater depth and provide a more “top-down” explanation of some basic features.
* [Alibaba Cloud Compute Services Guide](guide_alicloud)
* [Amazon Web Services Guide](guide_aws)
* [CloudStack Cloud Guide](guide_cloudstack)
* [Google Cloud Platform Guide](guide_gce)
* [Microsoft Azure Guide](guide_azure)
* [Online.net Guide](guide_online)
* [Oracle Cloud Infrastructure Guide](guide_oracle)
* [Packet.net Guide](guide_packet)
* [Rackspace Cloud Guide](guide_rax)
* [Scaleway Guide](guide_scaleway)
* [Vultr Guide](guide_vultr)
Kubernetes Guide
================
Welcome to the Ansible for Kubernetes Guide!
The purpose of this guide is to teach you everything you need to know about using Ansible with Kubernetes.
To get started, please select one of the following topics.
* [Introduction to Ansible for Kubernetes](kubernetes_scenarios/k8s_intro)
* [Using Kubernetes dynamic inventory plugin](kubernetes_scenarios/k8s_inventory)
* [Ansible for Kubernetes Scenarios](kubernetes_scenarios/k8s_scenarios)
Creating K8S object
===================
* [Introduction](#introduction)
* [Scenario Requirements](#scenario-requirements)
* [Assumptions](#assumptions)
* [Caveats](#caveats)
* [Example Description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize Ansible to create Kubernetes objects such as Pods, Deployments, and Secrets.
Scenario Requirements
---------------------
* Software
+ Ansible 2.9.10 or later must be installed
+ The Python modules `openshift` and `kubernetes` must be installed on the Ansible controller (or Target host if not executing against localhost)
+ Kubernetes Cluster
+ Kubectl binary installed on the Ansible controller
* Access / Credentials
+ Kubeconfig configured with the given Kubernetes cluster
Assumptions
-----------
* User has required level of authorization to create, delete and update resources on the given Kubernetes cluster.
Caveats
-------
* community.kubernetes 1.1.0 is going to migrate to [kubernetes.core](https://github.com/ansible-collections/kubernetes.core)
Example Description
-------------------
In this use case / example, we will create a Pod in the given Kubernetes Cluster. The following Ansible playbook showcases the basic parameters that are needed for this.
```
---
- hosts: localhost
collections:
- kubernetes.core
tasks:
- name: Create a pod
k8s:
state: present
definition:
apiVersion: v1
kind: Pod
metadata:
name: "utilitypod-1"
namespace: default
labels:
app: galaxy
spec:
containers:
- name: utilitypod
image: busybox
```
Since Ansible utilizes the Kubernetes API to perform actions, in this use case we will be connecting directly to the Kubernetes cluster.
To begin, there are a few bits of information we will need. Here you are using a Kubeconfig which is pre-configured on your machine. The Kubeconfig is generally located at `~/.kube/config`. It is highly recommended to store sensitive information such as passwords and user certificates in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
Now you need to supply the information about the Pod which will be created. Using the `definition` parameter of the `kubernetes.core.k8s` module, you specify a [PodTemplate](https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates). This PodTemplate is identical to what you provide to the `kubectl` command.
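If you already maintain manifests as standalone files, the module can apply them directly through its `src` parameter instead of an inline `definition`. A minimal sketch, assuming a manifest stored at `~/manifests/pod.yml`:
```
---
- hosts: localhost
  collections:
    - kubernetes.core
  tasks:
    - name: Create a pod from a manifest file
      k8s:
        state: present
        src: ~/manifests/pod.yml
```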
### What to expect
* You will see a bit of JSON output after this playbook completes. This output shows various parameters that are returned from the module and from cluster about the newly created Pod.
```
{
"changed": true,
"method": "create",
"result": {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2020-10-03T15:36:25Z",
"labels": {
"app": "galaxy"
},
"name": "utilitypod-1",
"namespace": "default",
"resourceVersion": "4511073",
"selfLink": "/api/v1/namespaces/default/pods/utilitypod-1",
"uid": "c7dec819-09df-4efd-9d78-67cf010b4f4e"
},
"spec": {
"containers": [{
"image": "busybox",
"imagePullPolicy": "Always",
"name": "utilitypod",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-6j842",
"readOnly": true
}]
}],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [{
"name": "default-token-6j842",
"secret": {
"defaultMode": 420,
"secretName": "default-token-6j842"
}
}]
},
"status": {
"phase": "Pending",
"qosClass": "BestEffort"
}
}
}
```
* In the above example, ‘changed’ is `True`, which indicates that the Pod creation started on the given cluster. This can take some time depending on your environment.
### Troubleshooting
Things to inspect
* Check if the values provided for username and password are correct
* Check if the Kubeconfig is populated with correct values
See also
[Kubernetes Python client](https://github.com/kubernetes-client/python)
The GitHub Page of Kubernetes Python client
[Kubernetes Python client - Issue Tracker](https://github.com/kubernetes-client/python/issues)
The issue tracker for Kubernetes Python client
[OpenShift Python client](https://github.com/openshift/openshift-restclient-python)
The GitHub Page of OpenShift Dynamic API client
[OpenShift Python client - Issue Tracker](https://github.com/openshift/openshift-restclient-python/issues)
The issue tracker for OpenShift Dynamic API client
[Kubectl installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
Installation guide for installing Kubectl
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
Introduction to Ansible for Kubernetes
======================================
* [Introduction](#introduction)
* [Requirements](#requirements)
* [Installation](#installation)
* [Authenticating with the API](#authenticating-with-the-api)
* [Reporting an issue](#reporting-an-issue)
Introduction
------------
Modules for interacting with the Kubernetes (K8s) and OpenShift API are under development, and can be used in preview mode. To use them, review the requirements, and then follow the installation and use instructions.
Requirements
------------
To use the modules, you’ll need the following:
* Run Ansible from source. For assistance, view [Running the devel branch from a clone](../../installation_guide/intro_installation#from-source).
* [OpenShift Rest Client](https://github.com/openshift/openshift-restclient-python) installed on the host that will execute the modules.
Installation
------------
The Kubernetes modules are part of the [Ansible Kubernetes collection](https://github.com/ansible-collections/kubernetes.core).
To install the collection, run the following:
```
$ ansible-galaxy collection install kubernetes.core
```
Authenticating with the API
---------------------------
By default the OpenShift Rest Client will look for `~/.kube/config`, and if found, connect using the active context. You can override the location of the file using the `kubeconfig` parameter, and the context using the `context` parameter.
Basic authentication is also supported using the `username` and `password` options. You can override the URL using the `host` parameter. Certificate authentication works through the `ssl_ca_cert`, `cert_file`, and `key_file` parameters, and for token authentication, use the `api_key` parameter.
To disable SSL certificate verification, set `verify_ssl` to false.
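Putting these options together, a task that authenticates with a token against an explicit API endpoint might look like the following sketch (the host, token variable, and certificate path are placeholders):
```
- name: List pods using token authentication
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: default
    host: https://k8s.example.com:6443
    api_key: "{{ my_api_token }}"
    ssl_ca_cert: /path/to/ca.crt
```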
Reporting an issue
------------------
If you find a bug or have a suggestion regarding modules, please file issues at [Ansible Kubernetes collection](https://github.com/ansible-collections/kubernetes.core). If you find a bug regarding OpenShift client, please file issues at [OpenShift REST Client issues](https://github.com/openshift/openshift-restclient-python/issues). If you find a bug regarding Kubectl binary, please file issues at [Kubectl issue tracker](https://github.com/kubernetes/kubectl)
Using Kubernetes dynamic inventory plugin
=========================================
* [Kubernetes dynamic inventory plugin](#kubernetes-dynamic-inventory-plugin)
+ [Requirements](#requirements)
* [Using vaulted configuration files](#using-vaulted-configuration-files)
Kubernetes dynamic inventory plugin
-----------------------------------
The best way to interact with your Pods is to use the Kubernetes dynamic inventory plugin, which dynamically queries Kubernetes APIs using the `kubectl` command line available on the controller node and tells Ansible what Pods can be managed.
### Requirements
To use the Kubernetes dynamic inventory plugins, you must install [Kubernetes Python client](https://github.com/kubernetes-client/python), [kubectl](https://github.com/kubernetes/kubectl) and [OpenShift Python client](https://github.com/openshift/openshift-restclient-python) on your control node (the host running Ansible).
```
$ pip install kubernetes openshift
```
Please refer to Kubernetes official documentation for [installing kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on the given operating systems.
To use this Kubernetes dynamic inventory plugin, you need to enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins = kubernetes.core.k8s
```
Then, create a file that ends in `.k8s.yml` or `.k8s.yaml` in your working directory.
The `kubernetes.core.k8s` inventory plugin takes in the same authentication information as any other Kubernetes modules.
Here’s an example of a valid inventory file:
```
plugin: kubernetes.core.k8s
```
Executing `ansible-inventory --list -i <filename>.k8s.yml` will create a list of Pods that are ready to be configured using Ansible.
You can also provide the namespace to gather information about specific pods from the given namespace. For example, to gather information about Pods under the `test` namespace you will specify the `namespaces` parameter:
```
plugin: kubernetes.core.k8s
connections:
- namespaces:
- test
```
Using vaulted configuration files
---------------------------------
Since the inventory configuration file contains Kubernetes-related sensitive information in plain text, which is a security risk, you may want to encrypt your entire inventory configuration file.
You can encrypt a valid inventory configuration file as follows:
```
$ ansible-vault encrypt <filename>.k8s.yml
New Vault password:
Confirm New Vault password:
Encryption successful
$ echo "MySuperSecretPassw0rd!" > /path/to/vault_password_file
```
And you can use this vaulted inventory configuration file using:
```
$ ansible-inventory -i <filename>.k8s.yml --list --vault-password-file=/path/to/vault_password_file
```
See also
[Kubernetes Python client](https://github.com/kubernetes-client/python)
The GitHub Page of Kubernetes Python client
[Kubernetes Python client - Issue Tracker](https://github.com/kubernetes-client/python/issues)
The issue tracker for Kubernetes Python client
[OpenShift Python client](https://github.com/openshift/openshift-restclient-python)
The GitHub Page of OpenShift Dynamic API client
[OpenShift Python client - Issue Tracker](https://github.com/openshift/openshift-restclient-python/issues)
The issue tracker for OpenShift Dynamic API client
[Kubectl installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
Installation guide for installing Kubectl
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
Ansible for Kubernetes Scenarios
================================
These scenarios teach you how to accomplish common Kubernetes tasks using Ansible. To get started, please select the task you want to accomplish.
* [Creating K8S object](scenario_k8s_object)
Other useful VMware resources
=============================
* [VMware API and SDK Documentation](https://www.vmware.com/support/pubs/sdk_pubs.html)
* [VCSIM test container image](https://quay.io/repository/ansible/vcenter-test-container)
* [Ansible VMware community wiki page](https://github.com/ansible/community/wiki/VMware)
* [VMware’s official Guest Operating system customization matrix](https://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf)
* [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php)
Ansible for VMware Concepts
===========================
Some of these concepts are common to all uses of Ansible, including VMware automation; some are specific to VMware. You need to understand them to use Ansible for VMware automation. This introduction provides the background you need to follow the [scenarios](vmware_scenarios#vmware-scenarios) in this guide.
* [Control Node](#control-node)
* [Delegation](#delegation)
* [Modules](#modules)
* [Playbooks](#playbooks)
* [pyVmomi](#pyvmomi)
Control Node
------------
Any machine with Ansible installed. You can run commands and playbooks, invoking `/usr/bin/ansible` or `/usr/bin/ansible-playbook`, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.
Delegation
----------
Delegation allows you to select the system that executes a given task. If you do not have `pyVmomi` installed on your control node, use the `delegate_to` keyword on VMware-specific tasks to execute them on any host where you have `pyVmomi` installed.
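For example, here is a minimal sketch of a delegated VMware task, assuming a hypothetical `helper.example.com` host that has `pyVmomi` installed and the usual vCenter connection variables defined:
```
- name: Gather info about a virtual machine, delegated to a host with pyVmomi
  community.vmware.vmware_guest_info:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    name: "{{ vm_name }}"
  delegate_to: helper.example.com
  register: vm_info
```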
Modules
-------
The units of code Ansible executes. Each module has a particular use, from creating virtual machines on vCenter to managing distributed virtual switches in the vCenter environment. You can invoke a single module with a task, or invoke several different modules in a playbook. For an idea of how many modules Ansible includes, take a look at the [list of cloud modules](https://docs.ansible.com/ansible/2.9/modules/list_of_cloud_modules.html#cloud-modules "(in Ansible v2.9)"), which includes VMware modules.
Playbooks
---------
Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand.
pyVmomi
-------
Ansible VMware modules are written on top of [pyVmomi](https://github.com/vmware/pyvmomi). `pyVmomi` is the official Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi, and vCenter infrastructure.
You need to install this Python SDK on the host from which you want to invoke VMware automation. For example, if you are using the control node, then `pyVmomi` must be installed on the control node.
If you are using a `delegate_to` host that is different from your control node, you need to install `pyVmomi` on that `delegate_to` node.
You can install pyVmomi using pip:
```
$ pip install pyvmomi
```
Using VMware HTTP API using Ansible
===================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to use Ansible with the VMware HTTP API to automate various tasks.
Scenario requirements
---------------------
* Software
+ Ansible 2.5 or later must be installed.
+ If you are planning to use any existing VMware modules, we recommend installing the latest version of `pyVmomi` with pip on the Ansible control node: `pip install pyvmomi` (as the OS packages are usually out of date and incompatible).
* Hardware
+ vCenter Server 6.5 and above with at least one ESXi server
* Access / Credentials
+ Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+ Username and Password for vCenter
Caveats
-------
* All variable names and VMware object names are case sensitive.
* You need Python 2.7.9 or a later version to use the `validate_certs` option, as earlier versions cannot change the SSL verification behaviour.
* The VMware HTTP API was introduced in vSphere 6.5, so the minimum vSphere level required is 6.5.
* Only a very limited number of APIs are exposed, so you may need to rely on the XMLRPC-based VMware modules for some tasks.
Example description
-------------------
With the following Ansible playbook you can find the VMware ESXi host system(s) and can perform various tasks depending on the list of host systems. This is a generic example to show how Ansible can be utilized to consume VMware HTTP APIs.
```
---
- name: Example showing VMware HTTP API utilization
hosts: localhost
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
tasks:
- name: Login into vCenter and get cookies
uri:
url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
force_basic_auth: yes
validate_certs: no
method: POST
user: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
register: login
- name: Get all hosts from vCenter using cookies from last task
uri:
url: https://{{ vcenter_server }}/rest/vcenter/host
force_basic_auth: yes
validate_certs: no
headers:
Cookie: "{{ login.set_cookie }}"
register: vchosts
- name: Change Log level configuration of the given hostsystem
vmware_host_config_manager:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
esxi_hostname: "{{ item.name }}"
options:
'Config.HostAgent.log.level': 'error'
validate_certs: no
loop: "{{ vchosts.json.value }}"
register: host_config_results
```
Since Ansible utilizes the VMware HTTP API using the `uri` module to perform actions, in this use case it will be connecting directly to the VMware HTTP API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the `gather_facts` parameter, since you don’t want to collect facts about localhost.
Before you begin, make sure you have:
* Hostname of the vCenter server
* Username and password for the vCenter server
* Version of vCenter is at least 6.5
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
If your vCenter server is not set up with proper CA certificates that can be verified from the Ansible server, it is necessary to disable validation of these certificates by using the `validate_certs` parameter. To do this, set `validate_certs=False` in your playbook.
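For reference, the `vcenter_vars.yml` file loaded by the playbook above only needs the three variables the tasks reference. A minimal sketch with placeholder values:
```
---
# vcenter_vars.yml - placeholder values; encrypt this file with
# ansible-vault in any real environment
vcenter_server: vcenter.example.com
vcenter_user: administrator@vsphere.local
vcenter_pass: SuperSecretPassword
```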
As you can see, the first task uses the `uri` module to log in to the vCenter server and stores the result in the `login` variable using `register`. The second task uses the cookie from the first task to gather information about the ESXi host systems.
Using this information, the third task changes the advanced configuration of each ESXi host system.
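One detail the example glosses over: the first task opens a vCenter API session that is never closed. A minimal sketch of a cleanup task you could append, assuming the same variables as above (the session endpoint accepts DELETE to end a session):
```
- name: Logout from vCenter to close the API session
  uri:
    url: https://{{ vcenter_server }}/rest/com/vmware/cis/session
    validate_certs: no
    method: DELETE
    headers:
      Cookie: "{{ login.set_cookie }}"
```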
### What to expect
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
```
"results": [
{
...
"invocation": {
"module_args": {
"cluster_name": null,
"esxi_hostname": "10.76.33.226",
"hostname": "10.65.223.114",
"options": {
"Config.HostAgent.log.level": "error"
},
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"username": "[email protected]",
"validate_certs": false
}
},
"item": {
"connection_state": "CONNECTED",
"host": "host-21",
"name": "10.76.33.226",
"power_state": "POWERED_ON"
},
"msg": "Config.HostAgent.log.level changed."
...
}
]
```
### Troubleshooting
If your playbook fails:
* Check if the values provided for username and password are correct.
* Check that you are using vCenter 6.5 or later, which is required for the HTTP API.
See also
[VMware vSphere and Ansible From Zero to Useful by @arielsanchezmor](https://www.youtube.com/watch?v=0_qwOKlBlo8)
vBrownBag session video related to VMware HTTP APIs
[Sample Playbooks for using VMware HTTP APIs](https://github.com/Akasurde/ansible-vmware-http)
GitHub repo for examples of Ansible playbook to manage VMware using HTTP APIs
VMware Prerequisites
====================
* [Installing vCenter SSL certificates for Ansible](#installing-vcenter-ssl-certificates-for-ansible)
* [Installing ESXi SSL certificates for Ansible](#installing-esxi-ssl-certificates-for-ansible)
* [Using custom path for SSL certificates](#using-custom-path-for-ssl-certificates)
Installing SSL Certificates
---------------------------
All vCenter and ESXi servers require SSL encryption on all connections to enforce secure communication. You must enable SSL encryption for Ansible by installing the server’s SSL certificates on your Ansible control node or delegate node.
If the SSL certificate of your vCenter or ESXi server is not correctly installed on your Ansible control node, you will see the following warning when using Ansible VMware modules:
`Unable to connect to vCenter or ESXi API at xx.xx.xx.xx on TCP/443: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)`
To install the SSL certificate for your VMware server, and run your Ansible VMware modules in encrypted mode, please follow the instructions for the server you are running with VMware.
### Installing vCenter SSL certificates for Ansible
* From any web browser, go to the base URL of the vCenter Server without a port number, for example `https://vcenter-domain.example.com`.
* Click the “Download trusted root CA certificates” link at the bottom of the grey box on the right and download the file.
* Change the extension of the file to .zip. The file is a ZIP file of all root certificates and all CRLs.
* Extract the contents of the zip file. The extracted directory contains a `.certs` directory that contains two types of files. Files with a number as the extension (.0, .1, and so on) are root certificates.
* Install the certificate files as trusted certificates by the process that is appropriate for your operating system.
### Installing ESXi SSL certificates for Ansible
* Enable SSH Service on ESXi either by using Ansible VMware module [vmware\_host\_service\_manager](https://github.com/ansible-collections/vmware/blob/main/plugins/modules/vmware_host_config_manager.py) or manually using vSphere Web interface.
* SSH to the ESXi server using administrative credentials, and navigate to the `/etc/vmware/ssl` directory.
* Secure copy (SCP) the `rui.crt` file located in the `/etc/vmware/ssl` directory to the Ansible control node.
* Install the certificate file by the process that is appropriate for your operating system.
### Using custom path for SSL certificates
If you need to use a custom path for SSL certificates, you can set the `REQUESTS_CA_BUNDLE` environment variable in your playbook.
For example, if `/var/vmware/certs/vcenter1.crt` is the SSL certificate for your vCenter Server, you can use the [environment](../../user_guide/playbooks_environment#playbooks-environment) keyword to pass it to the modules:
```
- name: Gather all tags from vCenter
community.vmware.vmware_tag_info:
validate_certs: True
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
environment:
REQUESTS_CA_BUNDLE: /var/vmware/certs/vcenter1.crt
```
There is a [known issue](https://github.com/psf/requests/issues/3829) in the `requests` library (version 2) that you may want to consider when using this environment variable. Setting the `REQUESTS_CA_BUNDLE` environment variable on managed nodes overrides the `validate_certs` value, which may result in unexpected behavior while running the playbook. Please see [community.vmware issue 601](https://github.com/ansible-collections/community.vmware/issues/601) and [vmware issue 254](https://github.com/vmware/vsphere-automation-sdk-python/issues/254) for more information.
Rename an existing virtual machine
==================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize Ansible to rename an existing virtual machine.
Scenario requirements
---------------------
* Software
+ Ansible 2.5 or later must be installed.
+ The Python module `Pyvmomi` must be installed on the Ansible control node (or Target host if not executing against localhost).
+ We recommend installing the latest version with pip: `pip install pyvmomi` (as the OS packages are usually out of date and incompatible).
* Hardware
+ At least one standalone ESXi server or
+ vCenter Server with at least one ESXi server
* Access / Credentials
+ Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+ Username and Password for vCenter or ESXi server
+ Hosts in the ESXi cluster must have access to the datastore that the template resides on.
Caveats
-------
* All variable names and VMware object names are case sensitive.
* You need Python 2.7.9 or a later version to use the `validate_certs` option, as earlier versions cannot change the SSL verification behaviour.
Example description
-------------------
With the following Ansible playbook you can rename an existing virtual machine, identifying it by its UUID.
```
---
- name: Rename virtual machine from old name to new name using UUID
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
hosts: localhost
tasks:
- set_fact:
vm_name: "old_vm_name"
new_vm_name: "new_vm_name"
datacenter: "DC1"
cluster_name: "DC1_C1"
- name: Get VM "{{ vm_name }}" uuid
vmware_guest_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
datacenter: "{{ datacenter }}"
folder: "/{{datacenter}}/vm"
name: "{{ vm_name }}"
register: vm_facts
- name: Rename "{{ vm_name }}" to "{{ new_vm_name }}"
vmware_guest:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
cluster: "{{ cluster_name }}"
uuid: "{{ vm_facts.instance.hw_product_uuid }}"
name: "{{ new_vm_name }}"
```
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the `gather_facts` parameter, since you don’t want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: `pip install pyvmomi` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
* Hostname of the ESXi server or vCenter server
* Username and password for the ESXi or vCenter server
* The UUID of the existing Virtual Machine you want to rename
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
If your vCenter or ESXi server is not setup with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the `validate_certs` parameter. To do this you need to set `validate_certs=False` in your playbook.
Now you need to supply the information about the existing virtual machine that will be renamed. To rename a virtual machine, the `vmware_guest` module uses the VMware UUID, which is unique across the vCenter environment. This value is autogenerated and cannot be changed. You will use the `vmware_guest_facts` module to find the virtual machine and retrieve its VMware UUID.
This value is then used as input for the `vmware_guest` module: specify the new virtual machine name (which must conform to all VMware naming-convention requirements) as the `name` parameter, and provide the VMware UUID as the `uuid` parameter.
### What to expect
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output similar to the following:
```
{
"changed": true,
"instance": {
"annotation": "",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsNotRunning",
"guest_tools_version": "10247",
"hw_cores_per_socket": 1,
"hw_datastores": ["ds_204_2"],
"hw_esxi_host": "10.x.x.x",
"hw_eth0": {
"addresstype": "assigned",
"ipaddresses": [],
"label": "Network adapter 1",
"macaddress": "00:50:56:8c:b8:42",
"macaddress_dash": "00-50-56-8c-b8-42",
"portgroup_key": "dvportgroup-31",
"portgroup_portkey": "15",
"summary": "DVSwitch: 50 0c 3a 69 df 78 2c 7b-6e 08 0a 89 e3 a6 31 17"
},
"hw_files": ["[ds_204_2] old_vm_name/old_vm_name.vmx", "[ds_204_2] old_vm_name/old_vm_name.nvram", "[ds_204_2] old_vm_name/old_vm_name.vmsd", "[ds_204_2] old_vm_name/vmware.log", "[ds_204_2] old_vm_name/old_vm_name.vmdk"],
"hw_folder": "/DC1/vm",
"hw_guest_full_name": null,
"hw_guest_ha_state": null,
"hw_guest_id": null,
"hw_interfaces": ["eth0"],
"hw_is_template": false,
"hw_memtotal_mb": 1024,
"hw_name": "new_vm_name",
"hw_power_status": "poweredOff",
"hw_processor_count": 1,
"hw_product_uuid": "420cbebb-835b-980b-7050-8aea9b7b0a6d",
"hw_version": "vmx-13",
"instance_uuid": "500c60a6-b7b4-8ae5-970f-054905246a6f",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"snapshots": []
}
}
```
confirming that you’ve renamed the virtual machine.
### Troubleshooting
If your playbook fails:
* Check if the values provided for username and password are correct.
* Check if the datacenter you provided is available.
* Check if the virtual machine specified exists and you have permissions to access the datastore.
* Ensure the full folder path you specified already exists.
Using Virtual machine attributes in VMware dynamic inventory plugin
===================================================================
* [capability](#capability)
+ [snapshotOperationsSupported (bool)](#snapshotoperationssupported-bool)
+ [multipleSnapshotsSupported (bool)](#multiplesnapshotssupported-bool)
+ [snapshotConfigSupported (bool)](#snapshotconfigsupported-bool)
+ [poweredOffSnapshotsSupported (bool)](#poweredoffsnapshotssupported-bool)
+ [memorySnapshotsSupported (bool)](#memorysnapshotssupported-bool)
+ [revertToSnapshotSupported (bool)](#reverttosnapshotsupported-bool)
+ [quiescedSnapshotsSupported (bool)](#quiescedsnapshotssupported-bool)
+ [disableSnapshotsSupported (bool)](#disablesnapshotssupported-bool)
+ [lockSnapshotsSupported (bool)](#locksnapshotssupported-bool)
+ [consolePreferencesSupported (bool)](#consolepreferencessupported-bool)
+ [cpuFeatureMaskSupported (bool)](#cpufeaturemasksupported-bool)
+ [s1AcpiManagementSupported (bool)](#s1acpimanagementsupported-bool)
+ [settingScreenResolutionSupported (bool)](#settingscreenresolutionsupported-bool)
+ [toolsAutoUpdateSupported (bool)](#toolsautoupdatesupported-bool)
+ [vmNpivWwnSupported (bool)](#vmnpivwwnsupported-bool)
+ [npivWwnOnNonRdmVmSupported (bool)](#npivwwnonnonrdmvmsupported-bool)
+ [vmNpivWwnDisableSupported (bool)](#vmnpivwwndisablesupported-bool)
+ [vmNpivWwnUpdateSupported (bool)](#vmnpivwwnupdatesupported-bool)
+ [swapPlacementSupported (bool)](#swapplacementsupported-bool)
+ [toolsSyncTimeSupported (bool)](#toolssynctimesupported-bool)
+ [virtualMmuUsageSupported (bool)](#virtualmmuusagesupported-bool)
+ [diskSharesSupported (bool)](#disksharessupported-bool)
+ [bootOptionsSupported (bool)](#bootoptionssupported-bool)
+ [bootRetryOptionsSupported (bool)](#bootretryoptionssupported-bool)
+ [settingVideoRamSizeSupported (bool)](#settingvideoramsizesupported-bool)
+ [settingDisplayTopologySupported (bool)](#settingdisplaytopologysupported-bool)
+ [recordReplaySupported (bool)](#recordreplaysupported-bool)
+ [changeTrackingSupported (bool)](#changetrackingsupported-bool)
+ [multipleCoresPerSocketSupported (bool)](#multiplecorespersocketsupported-bool)
+ [hostBasedReplicationSupported (bool)](#hostbasedreplicationsupported-bool)
+ [guestAutoLockSupported (bool)](#guestautolocksupported-bool)
+ [memoryReservationLockSupported (bool)](#memoryreservationlocksupported-bool)
+ [featureRequirementSupported (bool)](#featurerequirementsupported-bool)
+ [poweredOnMonitorTypeChangeSupported (bool)](#poweredonmonitortypechangesupported-bool)
+ [seSparseDiskSupported (bool)](#sesparsedisksupported-bool)
+ [nestedHVSupported (bool)](#nestedhvsupported-bool)
+ [vPMCSupported (bool)](#vpmcsupported-bool)
* [config](#config)
+ [changeVersion (str)](#changeversion-str)
+ [modified (datetime)](#modified-datetime)
+ [name (str)](#name-str)
+ [guestFullName (str)](#guestfullname-str)
+ [version (str)](#version-str)
+ [uuid (str)](#uuid-str)
+ [instanceUuid (str, optional)](#instanceuuid-str-optional)
+ [npivNodeWorldWideName (long, optional)](#npivnodeworldwidename-long-optional)
+ [npivPortWorldWideName (long, optional)](#npivportworldwidename-long-optional)
+ [npivWorldWideNameType (str, optional)](#npivworldwidenametype-str-optional)
+ [npivDesiredNodeWwns (short, optional)](#npivdesirednodewwns-short-optional)
+ [npivDesiredPortWwns (short, optional)](#npivdesiredportwwns-short-optional)
+ [npivTemporaryDisabled (bool, optional)](#npivtemporarydisabled-bool-optional)
+ [npivOnNonRdmDisks (bool, optional)](#npivonnonrdmdisks-bool-optional)
+ [locationId (str, optional)](#locationid-str-optional)
+ [template (bool)](#template-bool)
+ [guestId (str)](#guestid-str)
+ [alternateGuestName (str)](#alternateguestname-str)
+ [annotation (str, optional)](#annotation-str-optional)
+ [files (vim.vm.FileInfo)](#files-vim-vm-fileinfo)
+ [tools (vim.vm.ToolsConfigInfo, optional)](#tools-vim-vm-toolsconfiginfo-optional)
+ [flags (vim.vm.FlagInfo)](#flags-vim-vm-flaginfo)
+ [consolePreferences (vim.vm.ConsolePreferences, optional)](#consolepreferences-vim-vm-consolepreferences-optional)
+ [defaultPowerOps (vim.vm.DefaultPowerOpInfo)](#defaultpowerops-vim-vm-defaultpoweropinfo)
+ [hardware (vim.vm.VirtualHardware)](#hardware-vim-vm-virtualhardware)
+ [cpuAllocation (vim.ResourceAllocationInfo, optional)](#cpuallocation-vim-resourceallocationinfo-optional)
+ [memoryAllocation (vim.ResourceAllocationInfo, optional)](#memoryallocation-vim-resourceallocationinfo-optional)
+ [latencySensitivity (vim.LatencySensitivity, optional)](#latencysensitivity-vim-latencysensitivity-optional)
+ [memoryHotAddEnabled (bool, optional)](#memoryhotaddenabled-bool-optional)
+ [cpuHotAddEnabled (bool, optional)](#cpuhotaddenabled-bool-optional)
+ [cpuHotRemoveEnabled (bool, optional)](#cpuhotremoveenabled-bool-optional)
+ [hotPlugMemoryLimit (long, optional)](#hotplugmemorylimit-long-optional)
+ [hotPlugMemoryIncrementSize (long, optional)](#hotplugmemoryincrementsize-long-optional)
+ [cpuAffinity (vim.vm.AffinityInfo, optional)](#cpuaffinity-vim-vm-affinityinfo-optional)
+ [memoryAffinity (vim.vm.AffinityInfo, optional)](#memoryaffinity-vim-vm-affinityinfo-optional)
+ [networkShaper (vim.vm.NetworkShaperInfo, optional)](#networkshaper-vim-vm-networkshaperinfo-optional)
+ [extraConfig (vim.option.OptionValue, optional)](#extraconfig-vim-option-optionvalue-optional)
+ [cpuFeatureMask (vim.host.CpuIdInfo, optional)](#cpufeaturemask-vim-host-cpuidinfo-optional)
+ [datastoreUrl (vim.vm.ConfigInfo.DatastoreUrlPair, optional)](#datastoreurl-vim-vm-configinfo-datastoreurlpair-optional)
+ [swapPlacement (str, optional)](#swapplacement-str-optional)
+ [bootOptions (vim.vm.BootOptions, optional)](#bootoptions-vim-vm-bootoptions-optional)
+ [ftInfo (vim.vm.FaultToleranceConfigInfo, optional)](#ftinfo-vim-vm-faulttoleranceconfiginfo-optional)
+ [vAppConfig (vim.vApp.VmConfigInfo, optional)](#vappconfig-vim-vapp-vmconfiginfo-optional)
+ [vAssertsEnabled (bool, optional)](#vassertsenabled-bool-optional)
+ [changeTrackingEnabled (bool, optional)](#changetrackingenabled-bool-optional)
+ [firmware (str, optional)](#firmware-str-optional)
+ [maxMksConnections (int, optional)](#maxmksconnections-int-optional)
+ [guestAutoLockEnabled (bool, optional)](#guestautolockenabled-bool-optional)
+ [managedBy (vim.ext.ManagedByInfo, optional)](#managedby-vim-ext-managedbyinfo-optional)
+ [memoryReservationLockedToMax (bool, optional)](#memoryreservationlockedtomax-bool-optional)
+ [initialOverhead (vim.vm.ConfigInfo.OverheadInfo, optional)](#initialoverhead-vim-vm-configinfo-overheadinfo-optional)
+ [nestedHVEnabled (bool, optional)](#nestedhvenabled-bool-optional)
+ [vPMCEnabled (bool, optional)](#vpmcenabled-bool-optional)
+ [scheduledHardwareUpgradeInfo (vim.vm.ScheduledHardwareUpgradeInfo, optional)](#scheduledhardwareupgradeinfo-vim-vm-scheduledhardwareupgradeinfo-optional)
+ [vFlashCacheReservation (long, optional)](#vflashcachereservation-long-optional)
* [layout](#layout)
+ [configFile (str, optional)](#configfile-str-optional)
+ [logFile (str, optional)](#logfile-str-optional)
+ [disk (vim.vm.FileLayout.DiskLayout, optional)](#disk-vim-vm-filelayout-disklayout-optional)
+ [snapshot (vim.vm.FileLayout.SnapshotLayout, optional)](#snapshot-vim-vm-filelayout-snapshotlayout-optional)
+ [swapFile (str, optional)](#swapfile-str-optional)
* [layoutEx](#layoutex)
+ [file (vim.vm.FileLayoutEx.FileInfo, optional)](#file-vim-vm-filelayoutex-fileinfo-optional)
+ [disk (vim.vm.FileLayoutEx.DiskLayout, optional)](#disk-vim-vm-filelayoutex-disklayout-optional)
+ [snapshot (vim.vm.FileLayoutEx.SnapshotLayout, optional)](#snapshot-vim-vm-filelayoutex-snapshotlayout-optional)
+ [timestamp (datetime)](#timestamp-datetime)
* [storage (vim.vm.StorageInfo)](#storage-vim-vm-storageinfo)
+ [perDatastoreUsage (vim.vm.StorageInfo.UsageOnDatastore, optional)](#perdatastoreusage-vim-vm-storageinfo-usageondatastore-optional)
+ [timestamp (datetime)](#id1)
* [environmentBrowser (vim.EnvironmentBrowser)](#environmentbrowser-vim-environmentbrowser)
+ [datastoreBrowser (vim.host.DatastoreBrowser)](#datastorebrowser-vim-host-datastorebrowser)
* [resourcePool (vim.ResourcePool)](#resourcepool-vim-resourcepool)
+ [summary (vim.ResourcePool.Summary)](#summary-vim-resourcepool-summary)
+ [runtime (vim.ResourcePool.RuntimeInfo)](#runtime-vim-resourcepool-runtimeinfo)
+ [owner (vim.ComputeResource)](#owner-vim-computeresource)
+ [resourcePool (vim.ResourcePool)](#id2)
+ [vm (vim.VirtualMachine)](#vm-vim-virtualmachine)
+ [config (vim.ResourceConfigSpec)](#config-vim-resourceconfigspec)
+ [childConfiguration (vim.ResourceConfigSpec)](#childconfiguration-vim-resourceconfigspec)
* [parentVApp (vim.ManagedEntity)](#parentvapp-vim-managedentity)
+ [parent (vim.ManagedEntity)](#parent-vim-managedentity)
+ [customValue (vim.CustomFieldsManager.Value)](#customvalue-vim-customfieldsmanager-value)
+ [overallStatus (vim.ManagedEntity.Status)](#overallstatus-vim-managedentity-status)
+ [configStatus (vim.ManagedEntity.Status)](#configstatus-vim-managedentity-status)
+ [configIssue (vim.event.Event)](#configissue-vim-event-event)
+ [effectiveRole (int)](#effectiverole-int)
+ [permission (vim.AuthorizationManager.Permission)](#permission-vim-authorizationmanager-permission)
+ [name (str)](#id3)
+ [disabledMethod (str)](#disabledmethod-str)
+ [recentTask (vim.Task)](#recenttask-vim-task)
+ [declaredAlarmState (vim.alarm.AlarmState)](#declaredalarmstate-vim-alarm-alarmstate)
+ [triggeredAlarmState (vim.alarm.AlarmState)](#triggeredalarmstate-vim-alarm-alarmstate)
+ [alarmActionsEnabled (bool)](#alarmactionsenabled-bool)
+ [tag (vim.Tag)](#tag-vim-tag)
* [resourceConfig (vim.ResourceConfigSpec)](#resourceconfig-vim-resourceconfigspec)
+ [entity (vim.ManagedEntity, optional)](#entity-vim-managedentity-optional)
+ [changeVersion (str, optional)](#changeversion-str-optional)
+ [lastModified (datetime, optional)](#lastmodified-datetime-optional)
+ [cpuAllocation (vim.ResourceAllocationInfo)](#cpuallocation-vim-resourceallocationinfo)
+ [memoryAllocation (vim.ResourceAllocationInfo)](#memoryallocation-vim-resourceallocationinfo)
* [runtime (vim.vm.RuntimeInfo)](#runtime-vim-vm-runtimeinfo)
+ [device (vim.vm.DeviceRuntimeInfo, optional)](#device-vim-vm-deviceruntimeinfo-optional)
+ [host (vim.HostSystem, optional)](#host-vim-hostsystem-optional)
+ [connectionState (vim.VirtualMachine.ConnectionState)](#connectionstate-vim-virtualmachine-connectionstate)
+ [powerState (vim.VirtualMachine.PowerState)](#powerstate-vim-virtualmachine-powerstate)
+ [faultToleranceState (vim.VirtualMachine.FaultToleranceState)](#faulttolerancestate-vim-virtualmachine-faulttolerancestate)
+ [dasVmProtection (vim.vm.RuntimeInfo.DasProtectionState, optional)](#dasvmprotection-vim-vm-runtimeinfo-dasprotectionstate-optional)
+ [toolsInstallerMounted (bool)](#toolsinstallermounted-bool)
+ [suspendTime (datetime, optional)](#suspendtime-datetime-optional)
+ [bootTime (datetime, optional)](#boottime-datetime-optional)
+ [suspendInterval (long, optional)](#suspendinterval-long-optional)
+ [question (vim.vm.QuestionInfo, optional)](#question-vim-vm-questioninfo-optional)
+ [memoryOverhead (long, optional)](#memoryoverhead-long-optional)
+ [maxCpuUsage (int, optional)](#maxcpuusage-int-optional)
+ [maxMemoryUsage (int, optional)](#maxmemoryusage-int-optional)
+ [numMksConnections (int)](#nummksconnections-int)
+ [recordReplayState (vim.VirtualMachine.RecordReplayState)](#recordreplaystate-vim-virtualmachine-recordreplaystate)
+ [cleanPowerOff (bool, optional)](#cleanpoweroff-bool-optional)
+ [needSecondaryReason (str, optional)](#needsecondaryreason-str-optional)
+ [onlineStandby (bool)](#onlinestandby-bool)
+ [minRequiredEVCModeKey (str, optional)](#minrequiredevcmodekey-str-optional)
+ [consolidationNeeded (bool)](#consolidationneeded-bool)
+ [offlineFeatureRequirement (vim.vm.FeatureRequirement, optional)](#offlinefeaturerequirement-vim-vm-featurerequirement-optional)
+ [featureRequirement (vim.vm.FeatureRequirement, optional)](#featurerequirement-vim-vm-featurerequirement-optional)
+ [featureMask (vim.host.FeatureMask, optional)](#featuremask-vim-host-featuremask-optional)
+ [vFlashCacheAllocation (long, optional)](#vflashcacheallocation-long-optional)
* [guest (vim.vm.GuestInfo)](#guest-vim-vm-guestinfo)
+ [toolsStatus (vim.vm.GuestInfo.ToolsStatus, optional)](#toolsstatus-vim-vm-guestinfo-toolsstatus-optional)
+ [toolsVersionStatus (str, optional)](#toolsversionstatus-str-optional)
+ [toolsVersionStatus2 (str, optional)](#toolsversionstatus2-str-optional)
+ [toolsRunningStatus (str, optional)](#toolsrunningstatus-str-optional)
+ [toolsVersion (str, optional)](#toolsversion-str-optional)
+ [guestId (str, optional)](#guestid-str-optional)
+ [guestFamily (str, optional)](#guestfamily-str-optional)
+ [guestFullName (str, optional)](#guestfullname-str-optional)
+ [hostName (str, optional)](#hostname-str-optional)
+ [ipAddress (str, optional)](#ipaddress-str-optional)
+ [net (vim.vm.GuestInfo.NicInfo, optional)](#net-vim-vm-guestinfo-nicinfo-optional)
+ [ipStack (vim.vm.GuestInfo.StackInfo, optional)](#ipstack-vim-vm-guestinfo-stackinfo-optional)
+ [disk (vim.vm.GuestInfo.DiskInfo, optional)](#disk-vim-vm-guestinfo-diskinfo-optional)
+ [screen (vim.vm.GuestInfo.ScreenInfo, optional)](#screen-vim-vm-guestinfo-screeninfo-optional)
+ [guestState (str)](#gueststate-str)
+ [appHeartbeatStatus (str, optional)](#appheartbeatstatus-str-optional)
+ [appState (str, optional)](#appstate-str-optional)
+ [guestOperationsReady (bool, optional)](#guestoperationsready-bool-optional)
+ [interactiveGuestOperationsReady (bool, optional)](#interactiveguestoperationsready-bool-optional)
+ [generationInfo (vim.vm.GuestInfo.NamespaceGenerationInfo, privilege: VirtualMachine.Namespace.EventNotify, optional)](#generationinfo-vim-vm-guestinfo-namespacegenerationinfo-privilege-virtualmachine-namespace-eventnotify-optional)
* [summary (vim.vm.Summary)](#summary-vim-vm-summary)
+ [vm (vim.VirtualMachine, optional)](#vm-vim-virtualmachine-optional)
+ [runtime (vim.vm.RuntimeInfo)](#id4)
+ [guest (vim.vm.Summary.GuestSummary, optional)](#guest-vim-vm-summary-guestsummary-optional)
+ [config (vim.vm.Summary.ConfigSummary)](#config-vim-vm-summary-configsummary)
+ [storage (vim.vm.Summary.StorageSummary, optional)](#storage-vim-vm-summary-storagesummary-optional)
+ [quickStats (vim.vm.Summary.QuickStats)](#quickstats-vim-vm-summary-quickstats)
+ [overallStatus (vim.ManagedEntity.Status)](#id5)
+ [customValue (vim.CustomFieldsManager.Value, optional)](#customvalue-vim-customfieldsmanager-value-optional)
* [datastore (vim.Datastore)](#datastore-vim-datastore)
+ [info (vim.Datastore.Info)](#info-vim-datastore-info)
+ [summary (vim.Datastore.Summary)](#summary-vim-datastore-summary)
+ [host (vim.Datastore.HostMount)](#host-vim-datastore-hostmount)
+ [vm (vim.VirtualMachine)](#id6)
+ [browser (vim.host.DatastoreBrowser)](#browser-vim-host-datastorebrowser)
+ [capability (vim.Datastore.Capability)](#capability-vim-datastore-capability)
+ [iormConfiguration (vim.StorageResourceManager.IORMConfigInfo)](#iormconfiguration-vim-storageresourcemanager-iormconfiginfo)
* [network (vim.Network)](#network-vim-network)
+ [name (str)](#id7)
+ [summary (vim.Network.Summary)](#summary-vim-network-summary)
+ [host (vim.HostSystem)](#host-vim-hostsystem)
+ [vm (vim.VirtualMachine)](#id8)
* [snapshot (vim.vm.SnapshotInfo)](#snapshot-vim-vm-snapshotinfo)
+ [currentSnapshot (vim.vm.Snapshot, optional)](#currentsnapshot-vim-vm-snapshot-optional)
+ [rootSnapshotList (vim.vm.SnapshotTree)](#rootsnapshotlist-vim-vm-snapshottree)
* [rootSnapshot (vim.vm.Snapshot)](#rootsnapshot-vim-vm-snapshot)
+ [config (vim.vm.ConfigInfo)](#config-vim-vm-configinfo)
+ [childSnapshot (vim.vm.Snapshot)](#childsnapshot-vim-vm-snapshot)
* [guestHeartbeatStatus (vim.ManagedEntity.Status)](#guestheartbeatstatus-vim-managedentity-status)
Virtual machine attributes
--------------------------
You can use the following virtual machine properties to populate `hostvars` for a given virtual machine in a VMware dynamic inventory plugin.
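For example, here is a minimal sketch of a `community.vmware.vmware_vm_inventory` configuration that pulls a few of the properties described below into `hostvars` and builds groups from one of them (the hostname and credentials are placeholders, and the exact hostvar layout can vary by plugin version):
```
plugin: community.vmware.vmware_vm_inventory
strict: no
hostname: vcenter.example.com
username: administrator@vsphere.local
password: SuperSecretPassword
validate_certs: no
with_tags: no
properties:
  - name
  - config.uuid
  - config.guestId
  - summary.runtime.powerState
keyed_groups:
  # for example: groups such as 'powerstate_poweredOn' and 'powerstate_poweredOff'
  - key: summary.runtime.powerState
    prefix: powerstate
```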
### capability
This section describes settings for the runtime capabilities of the virtual machine.
#### snapshotOperationsSupported (bool)
Indicates whether or not a virtual machine supports snapshot operations.
#### multipleSnapshotsSupported (bool)
Indicates whether or not a virtual machine supports multiple snapshots. This value is not set when the virtual machine is unavailable, for instance, when it is being created or deleted.
#### snapshotConfigSupported (bool)
Indicates whether or not a virtual machine supports snapshot config.
#### poweredOffSnapshotsSupported (bool)
Indicates whether or not a virtual machine supports snapshot operations in `poweredOff` state.
#### memorySnapshotsSupported (bool)
Indicates whether or not a virtual machine supports memory snapshots.
#### revertToSnapshotSupported (bool)
Indicates whether or not a virtual machine supports reverting to a snapshot.
#### quiescedSnapshotsSupported (bool)
Indicates whether or not a virtual machine supports quiesced snapshots.
#### disableSnapshotsSupported (bool)
Indicates whether or not snapshots can be disabled.
#### lockSnapshotsSupported (bool)
Indicates whether or not the snapshot tree can be locked.
#### consolePreferencesSupported (bool)
Indicates whether console preferences can be set for the virtual machine.
#### cpuFeatureMaskSupported (bool)
Indicates whether CPU feature requirements masks can be set for the virtual machine.
#### s1AcpiManagementSupported (bool)
Indicates whether or not a virtual machine supports ACPI S1 settings management.
#### settingScreenResolutionSupported (bool)
Indicates whether or not the virtual machine supports setting the screen resolution of the console window.
#### toolsAutoUpdateSupported (bool)
Supports tools auto-update.
#### vmNpivWwnSupported (bool)
Supports virtual machine NPIV WWN.
#### npivWwnOnNonRdmVmSupported (bool)
Supports assigning NPIV WWN to virtual machines that do not have RDM disks.
#### vmNpivWwnDisableSupported (bool)
Indicates whether the NPIV disabling operation is supported on the virtual machine.
#### vmNpivWwnUpdateSupported (bool)
Indicates whether the update of NPIV WWNs is supported on the virtual machine.
#### swapPlacementSupported (bool)
Flag indicating whether the virtual machine has a configurable swapfile placement policy.
#### toolsSyncTimeSupported (bool)
Indicates whether asking tools to sync time with the host is supported.
#### virtualMmuUsageSupported (bool)
Indicates whether or not the use of nested page table hardware support can be explicitly set.
#### diskSharesSupported (bool)
Indicates whether resource settings for disks can be applied to the virtual machine.
#### bootOptionsSupported (bool)
Indicates whether boot options can be configured for the virtual machine.
#### bootRetryOptionsSupported (bool)
Indicates whether automatic boot retry can be configured for the virtual machine.
#### settingVideoRamSizeSupported (bool)
Flag indicating whether the video RAM size of the virtual machine can be configured.
#### settingDisplayTopologySupported (bool)
Indicates whether or not the virtual machine supports setting the display topology of the console window.
#### recordReplaySupported (bool)
Indicates whether record and replay functionality is supported on the virtual machine.
#### changeTrackingSupported (bool)
Indicates that change tracking is supported for virtual disks of the virtual machine. However, even if change tracking is supported, it might not be available for all disks of the virtual machine. For example, passthru raw disk mappings or disks backed by any Ver1BackingInfo cannot be tracked.
#### multipleCoresPerSocketSupported (bool)
Indicates whether multiple virtual cores per socket is supported on the virtual machine.
#### hostBasedReplicationSupported (bool)
Indicates that host based replication is supported on the virtual machine. However, even if host based replication is supported, it might not be available for all disk types. For example, passthru raw disk mappings can not be replicated.
#### guestAutoLockSupported (bool)
Indicates whether or not guest autolock is supported on the virtual machine.
#### memoryReservationLockSupported (bool)
Indicates whether [memoryReservationLockedToMax (bool, optional)](#memory-reservation-locked-to-max) may be set to true for the virtual machine.
#### featureRequirementSupported (bool)
Indicates whether the featureRequirement feature is supported.
#### poweredOnMonitorTypeChangeSupported (bool)
Indicates whether a monitor type change is supported while the virtual machine is in the `poweredOn` state.
#### seSparseDiskSupported (bool)
Indicates whether the virtual machine supports the Flex-SE (space-efficient, sparse) format for virtual disks.
#### nestedHVSupported (bool)
Indicates whether the virtual machine supports nested hardware-assisted virtualization.
#### vPMCSupported (bool)
Indicates whether the virtual machine supports virtualized CPU performance counters.
### config
This section describes the configuration settings of the virtual machine, including the name and UUID. This property is set when a virtual machine is created or when the `reconfigVM` method is called. The virtual machine configuration is not guaranteed to be available. For example, the configuration information would be unavailable if the server is unable to access the virtual machine files on disk, and is often also unavailable during the initial phases of virtual machine creation.
#### changeVersion (str)
The changeVersion is a unique identifier for a given version of the configuration. Each change to the configuration updates this value. This is typically implemented as an ever increasing count or a time-stamp. However, a client should always treat this as an opaque string.
#### modified (datetime)
Last time a virtual machine’s configuration was modified.
#### name (str)
Display name of the virtual machine. Any / (slash) or \ (backslash) character used in this name element is escaped. Similarly, any % (percent) character used in this name element is escaped, unless it is used to start an escape sequence. A slash is escaped as %2F or %2f, a backslash is escaped as %5C or %5c, and a percent is escaped as %25.
#### guestFullName (str)
This is the full name of the guest operating system for the virtual machine. For example: Windows 2000 Professional. See [alternateGuestName (str)](#alternate-guest-name).
#### version (str)
The version string for the virtual machine.
#### uuid (str)
128-bit SMBIOS UUID of a virtual machine represented as a hexadecimal string in “12345678-abcd-1234-cdef-123456789abc” format.
#### instanceUuid (str, optional)
VirtualCenter-specific 128-bit UUID of a virtual machine, represented as a hexadecimal string. This identifier is used by VirtualCenter to uniquely identify all virtual machine instances, including those that may share the same SMBIOS UUID.
#### npivNodeWorldWideName (long, optional)
A 64-bit node WWN (World Wide Name).
#### npivPortWorldWideName (long, optional)
A 64-bit port WWN (World Wide Name).
#### npivWorldWideNameType (str, optional)
The source that provides/generates the assigned WWNs.
#### npivDesiredNodeWwns (short, optional)
The NPIV node WWNs to be extended from the original list of WWN numbers.
#### npivDesiredPortWwns (short, optional)
The NPIV port WWNs to be extended from the original list of WWN numbers.
#### npivTemporaryDisabled (bool, optional)
This property is used to enable or disable the NPIV capability on a desired virtual machine on a temporary basis.
#### npivOnNonRdmDisks (bool, optional)
This property is used to check whether NPIV can be enabled on a virtual machine with non-RDM disks in its configuration (so NPIV is potentially not enabled on VMFS disks). It is also used to check whether RDM is required to generate WWNs for a virtual machine.
#### locationId (str, optional)
Hash incorporating the virtual machine’s config file location and the UUID of the host assigned to run the virtual machine.
#### template (bool)
Flag indicating whether or not a virtual machine is a template.
#### guestId (str)
Guest operating system configured on a virtual machine.
#### alternateGuestName (str)
Used as the display name for the operating system if guestId is `other` or `other-64`. See [guestFullName (str)](#guest-full-name).
#### annotation (str, optional)
Description for the virtual machine.
#### files (vim.vm.FileInfo)
Information about the files associated with a virtual machine. This information does not include files for specific virtual disks or snapshots.
#### tools (vim.vm.ToolsConfigInfo, optional)
Configuration of VMware Tools running in the guest operating system.
#### flags (vim.vm.FlagInfo)
Additional flags for a virtual machine.
#### consolePreferences (vim.vm.ConsolePreferences, optional)
Legacy console viewer preferences when doing power operations.
#### defaultPowerOps (vim.vm.DefaultPowerOpInfo)
Configuration of default power operations.
#### hardware (vim.vm.VirtualHardware)
Processor, memory, and virtual devices for a virtual machine.
#### cpuAllocation (vim.ResourceAllocationInfo, optional)
Resource limits for CPU.
#### memoryAllocation (vim.ResourceAllocationInfo, optional)
Resource limits for memory.
#### latencySensitivity (vim.LatencySensitivity, optional)
The latency-sensitivity of the virtual machine.
#### memoryHotAddEnabled (bool, optional)
Whether memory can be added while the virtual machine is running.
#### cpuHotAddEnabled (bool, optional)
Whether virtual processors can be added while the virtual machine is running.
#### cpuHotRemoveEnabled (bool, optional)
Whether virtual processors can be removed while the virtual machine is running.
#### hotPlugMemoryLimit (long, optional)
The maximum amount of memory, in MB, that can be added to a running virtual machine.
#### hotPlugMemoryIncrementSize (long, optional)
The increment size, in MB, in which memory can be added to a running virtual machine.
#### cpuAffinity (vim.vm.AffinityInfo, optional)
Affinity settings for CPU.
#### memoryAffinity (vim.vm.AffinityInfo, optional)
Affinity settings for memory.
#### networkShaper (vim.vm.NetworkShaperInfo, optional)
Resource limits for network.
#### extraConfig (vim.option.OptionValue, optional)
Additional configuration information for the virtual machine.
#### cpuFeatureMask (vim.host.CpuIdInfo, optional)
Specifies CPU feature compatibility masks that override the defaults from the `GuestOsDescriptor` of the virtual machine’s guest OS.
#### datastoreUrl (vim.vm.ConfigInfo.DatastoreUrlPair, optional)
Enumerates the set of datastores that the virtual machine is stored on, as well as the URL identification for each of these.
#### swapPlacement (str, optional)
Virtual machine swapfile placement policy.
#### bootOptions (vim.vm.BootOptions, optional)
Configuration options for the boot behavior of the virtual machine.
#### ftInfo (vim.vm.FaultToleranceConfigInfo, optional)
Fault tolerance settings for the virtual machine.
#### vAppConfig (vim.vApp.VmConfigInfo, optional)
vApp meta-data for the virtual machine.
#### vAssertsEnabled (bool, optional)
Indicates whether user-configured virtual asserts will be triggered during virtual machine replay.
#### changeTrackingEnabled (bool, optional)
Indicates whether changed block tracking for the virtual machine’s disks is active.
#### firmware (str, optional)
Information about firmware type for the virtual machine.
#### maxMksConnections (int, optional)
Indicates the maximum number of active remote display connections that the virtual machine will support.
#### guestAutoLockEnabled (bool, optional)
Indicates whether the guest operating system will log out any active sessions whenever there are no remote display connections open to the virtual machine.
#### managedBy (vim.ext.ManagedByInfo, optional)
Specifies that the virtual machine is managed by a VC Extension.
#### memoryReservationLockedToMax (bool, optional)
If set to true, the memory resource reservation for the virtual machine will always be equal to the virtual machine's memory size; increases in memory size will be rejected when a corresponding reservation increase is not possible.
#### initialOverhead (vim.vm.ConfigInfo.OverheadInfo, optional)
Set of values to be used only to perform admission control when determining if a host has sufficient resources for the virtual machine to power on.
#### nestedHVEnabled (bool, optional)
Indicates whether the virtual machine is configured to use nested hardware-assisted virtualization.
#### vPMCEnabled (bool, optional)
Indicates whether the virtual machine has virtual CPU performance counters enabled.
#### scheduledHardwareUpgradeInfo (vim.vm.ScheduledHardwareUpgradeInfo, optional)
Configuration of scheduled hardware upgrades and result from last attempt to run scheduled hardware upgrade.
#### vFlashCacheReservation (long, optional)
Specifies the total vFlash resource reservation for the vFlash caches associated with the virtual machine’s virtual disks, in bytes.
### layout
Detailed information about the files that comprise the virtual machine.
#### configFile (str, optional)
A list of files that makes up the configuration of the virtual machine (excluding the .vmx file, since that file is represented in the FileInfo). These are relative paths from the configuration directory. A slash is always used as a separator. This list will typically include the NVRAM file, but could also include other meta-data files.
#### logFile (str, optional)
A list of files stored in the virtual machine’s log directory. These are relative paths from the `logDirectory`. A slash is always used as a separator.
#### disk (vim.vm.FileLayout.DiskLayout, optional)
Files making up each virtual disk.
#### snapshot (vim.vm.FileLayout.SnapshotLayout, optional)
Files of each snapshot.
#### swapFile (str, optional)
The swapfile specific to the virtual machine, if any. This is a complete datastore path, not a relative path.
### layoutEx
Detailed information about the files that comprise the virtual machine.
#### file (vim.vm.FileLayoutEx.FileInfo, optional)
Information about all the files that constitute the virtual machine including configuration files, disks, swap file, suspend file, log files, core files, memory file and so on.
#### disk (vim.vm.FileLayoutEx.DiskLayout, optional)
Layout of each virtual disk attached to the virtual machine. For a virtual machine with snapshots, this property gives only those disks that are attached to it at the current point of running.
#### snapshot (vim.vm.FileLayoutEx.SnapshotLayout, optional)
Layout of each snapshot of the virtual machine.
#### timestamp (datetime)
Time when values in this structure were last updated.
### storage (vim.vm.StorageInfo)
Storage space used by the virtual machine, split by datastore.
#### perDatastoreUsage (vim.vm.StorageInfo.UsageOnDatastore, optional)
Storage space used by the virtual machine on all datastores that it is located on. The total storage space committed to the virtual machine across all datastores is simply an aggregate of the `committed` property.
#### timestamp (datetime)
Time when values in this structure were last updated.
### environmentBrowser (vim.EnvironmentBrowser)
The current virtual machine’s environment browser object. This contains information on all the configurations that can be used on the virtual machine. This is identical to the environment browser on the ComputeResource to which the virtual machine belongs.
#### datastoreBrowser (vim.host.DatastoreBrowser)
DatastoreBrowser to browse datastores that are available on this entity.
### resourcePool (vim.ResourcePool)
The current resource pool that specifies resource allocation for the virtual machine. This property is set when a virtual machine is created or associated with a different resource pool. Returns null if the virtual machine is a template or the session has no access to the resource pool.
#### summary (vim.ResourcePool.Summary)
Basic information about a resource pool.
#### runtime (vim.ResourcePool.RuntimeInfo)
Runtime information about a resource pool.
#### owner (vim.ComputeResource)
The ComputeResource to which this set of one or more nested resource pools belong.
#### resourcePool (vim.ResourcePool)
The set of child resource pools.
#### vm (vim.VirtualMachine)
The set of virtual machines associated with this resource pool.
#### config (vim.ResourceConfigSpec)
Configuration of this resource pool.
#### childConfiguration (vim.ResourceConfigSpec)
The resource configuration of all direct children (VirtualMachine and ResourcePool) of this resource group.
### parentVApp (vim.ManagedEntity)
Reference to the parent vApp.
#### parent (vim.ManagedEntity)
Parent of this entity. This value is null for the root object and for (VirtualMachine) objects that are part of a (VirtualApp).
#### customValue (vim.CustomFieldsManager.Value)
Custom field values.
#### overallStatus (vim.ManagedEntity.Status)
General health of this managed entity.
#### configStatus (vim.ManagedEntity.Status)
The configStatus indicates whether or not the system has detected a configuration issue involving this entity. For example, it might have detected a duplicate IP address or MAC address, or a host in a cluster might be out of compliance.
#### configIssue (vim.event.Event)
Current configuration issues that have been detected for this entity.
#### effectiveRole (int)
Access rights the current session has to this entity.
#### permission (vim.AuthorizationManager.Permission)
List of permissions defined for this entity.
#### name (str)
Name of this entity, unique relative to its parent. Any / (slash) or \ (backslash) character used in this name element will be escaped. Similarly, any % (percent) character used in this name element will be escaped, unless it is used to start an escape sequence. A slash is escaped as %2F or %2f, a backslash is escaped as %5C or %5c, and a percent is escaped as %25.
#### disabledMethod (str)
List of operations that are disabled, given the current runtime state of the entity. For example, a power-on operation always fails if a virtual machine is already powered on.
#### recentTask (vim.Task)
The set of recent tasks operating on this managed entity. A task in this list could be in one of the four states: pending, running, success or error.
#### declaredAlarmState (vim.alarm.AlarmState)
A set of alarm states for alarms that apply to this managed entity.
#### triggeredAlarmState (vim.alarm.AlarmState)
A set of alarm states for alarms triggered by this entity or by its descendants.
#### alarmActionsEnabled (bool)
Whether alarm actions are enabled for this entity. True if enabled; false otherwise.
#### tag (vim.Tag)
The set of tags associated with this managed entity. Experimental. Subject to change.
### resourceConfig (vim.ResourceConfigSpec)
The resource configuration for a virtual machine.
#### entity (vim.ManagedEntity, optional)
Reference to the entity with this resource specification: either a VirtualMachine or a ResourcePool.
#### changeVersion (str, optional)
The changeVersion is a unique identifier for a given version of the configuration. Each change to the configuration will update this value. This is typically implemented as an ever increasing count or a time-stamp.
#### lastModified (datetime, optional)
Timestamp when the resources were last modified. This is ignored when the object is used to update a configuration.
#### cpuAllocation (vim.ResourceAllocationInfo)
Resource allocation for CPU.
#### memoryAllocation (vim.ResourceAllocationInfo)
Resource allocation for memory.
### runtime (vim.vm.RuntimeInfo)
Execution state and history for the virtual machine.
#### device (vim.vm.DeviceRuntimeInfo, optional)
Per-device runtime info. This array will be empty if the host software does not provide runtime info for any of the device types currently in use by the virtual machine.
#### host (vim.HostSystem, optional)
The host that is responsible for running a virtual machine. This property is null if the virtual machine is not running and is not assigned to run on a particular host.
#### connectionState (vim.VirtualMachine.ConnectionState)
Indicates whether or not the virtual machine is available for management.
#### powerState (vim.VirtualMachine.PowerState)
The current power state of the virtual machine.
#### faultToleranceState (vim.VirtualMachine.FaultToleranceState)
The fault tolerance state of the virtual machine.
#### dasVmProtection (vim.vm.RuntimeInfo.DasProtectionState, optional)
The vSphere HA protection state for a virtual machine. Property is unset if vSphere HA is not enabled.
#### toolsInstallerMounted (bool)
Flag to indicate whether or not the VMware Tools installer is mounted as a CD-ROM.
#### suspendTime (datetime, optional)
The timestamp when the virtual machine was most recently suspended. This property is updated every time the virtual machine is suspended.
#### bootTime (datetime, optional)
The timestamp when the virtual machine was most recently powered on. This property is updated when the virtual machine is powered on from the poweredOff state, and is cleared when the virtual machine is powered off. This property is not updated when a virtual machine is resumed from a suspended state.
#### suspendInterval (long, optional)
The total time the virtual machine has been suspended since it was initially powered on. This time excludes the current period, if the virtual machine is currently suspended. This property is updated when the virtual machine resumes, and is reset to zero when the virtual machine is powered off.
#### question (vim.vm.QuestionInfo, optional)
The current question, if any, that is blocking the virtual machine’s execution.
#### memoryOverhead (long, optional)
The amount of memory resource (in bytes) that will be used by the virtual machine above its guest memory requirements. This value is set if and only if the virtual machine is registered on a host that supports memory resource allocation features. For powered off VMs, this is the minimum overhead required to power on the VM on the registered host. For powered on VMs, this is the current overhead reservation, a value which is almost always larger than the minimum overhead, and which grows with time.
#### maxCpuUsage (int, optional)
Current upper-bound on CPU usage. The upper-bound is based on the host the virtual machine is current running on, as well as limits configured on the virtual machine itself or any parent resource pool. Valid while the virtual machine is running.
#### maxMemoryUsage (int, optional)
Current upper-bound on memory usage. The upper-bound is based on memory configuration of the virtual machine, as well as limits configured on the virtual machine itself or any parent resource pool. Valid while the virtual machine is running.
#### numMksConnections (int)
Number of active MKS connections to the virtual machine.
#### recordReplayState (vim.VirtualMachine.RecordReplayState)
Record / replay state of the virtual machine.
#### cleanPowerOff (bool, optional)
For a powered off virtual machine, indicates whether the virtual machine’s last shutdown was an orderly power off or not. Unset if the virtual machine is running or suspended.
#### needSecondaryReason (str, optional)
If set, indicates the reason the virtual machine needs a secondary.
#### onlineStandby (bool)
This property indicates whether the guest has gone into one of the s1, s2 or s3 standby modes. False indicates the guest is awake.
#### minRequiredEVCModeKey (str, optional)
For a powered-on or suspended virtual machine in a cluster with Enhanced VMotion Compatibility (EVC) enabled, this identifies the least-featured EVC mode (among those for the appropriate CPU vendor) that could admit the virtual machine. This property will be unset if the virtual machine is powered off or is not in an EVC cluster. This property may be used as a general indicator of the CPU feature baseline currently in use by the virtual machine. However, the virtual machine may be suppressing some of the features present in the CPU feature baseline of the indicated mode, either explicitly (in the virtual machine’s configured `cpuFeatureMask`) or implicitly (in the default masks for the `GuestOsDescriptor` appropriate for the virtual machine’s configured guest OS).
#### consolidationNeeded (bool)
Whether any disk of the virtual machine requires consolidation. This can happen for example when a snapshot is deleted but its associated disk is not committed back to the base disk.
#### offlineFeatureRequirement (vim.vm.FeatureRequirement, optional)
These requirements must have equivalent host capabilities `featureCapability` in order to power on.
#### featureRequirement (vim.vm.FeatureRequirement, optional)
These requirements must have equivalent host capabilities `featureCapability` in order to power on, resume, or migrate to the host.
#### featureMask (vim.host.FeatureMask, optional)
The masks applied to an individual virtual machine as a result of its configuration.
#### vFlashCacheAllocation (long, optional)
Specifies the total allocated vFlash resource for the vFlash caches associated with VM’s VMDKs when VM is powered on, in bytes.
### guest (vim.vm.GuestInfo)
Information about VMware Tools and about the virtual machine from the perspective of VMware Tools. Information about the guest operating system is available in VirtualCenter. Guest operating system information reflects the last known state of the virtual machine. For powered on machines, this is current information. For powered off machines, this is the last recorded state before the virtual machine was powered off.
#### toolsStatus (vim.vm.GuestInfo.ToolsStatus, optional)
Current status of VMware Tools in the guest operating system, if known.
#### toolsVersionStatus (str, optional)
Current version status of VMware Tools in the guest operating system, if known.
#### toolsVersionStatus2 (str, optional)
Current version status of VMware Tools in the guest operating system, if known.
#### toolsRunningStatus (str, optional)
Current running status of VMware Tools in the guest operating system, if known.
#### toolsVersion (str, optional)
Current version of VMware Tools, if known.
#### guestId (str, optional)
Guest operating system identifier (short name), if known.
#### guestFamily (str, optional)
Guest operating system family, if known.
#### guestFullName (str, optional)
See [guestFullName (str)](#guest-full-name).
#### hostName (str, optional)
Hostname of the guest operating system, if known.
#### ipAddress (str, optional)
Primary IP address assigned to the guest operating system, if known.
#### net (vim.vm.GuestInfo.NicInfo, optional)
Guest information about network adapters, if known.
#### ipStack (vim.vm.GuestInfo.StackInfo, optional)
Guest information about IP networking stack, if known.
#### disk (vim.vm.GuestInfo.DiskInfo, optional)
Guest information about disks. You can obtain Linux guest disk information for the following file system types only: Ext2, Ext3, Ext4, ReiserFS, ZFS, NTFS, VFAT, UFS, PCFS, HFS, and MS-DOS.
#### screen (vim.vm.GuestInfo.ScreenInfo, optional)
Guest screen resolution info, if known.
#### guestState (str)
Operation mode of guest operating system.
#### appHeartbeatStatus (str, optional)
Application heartbeat status.
#### appState (str, optional)
Application state. If vSphere HA is enabled, the VM is configured for Application Monitoring, and this field's value is `appStateNeedReset`, then HA will attempt to reset the virtual machine immediately. Some system conditions may delay the immediate reset; the reset will be performed as soon as allowed by vSphere HA and ESX. If the value is changed to `appStateOk` while the reset is pending, the reset will be cancelled.
#### guestOperationsReady (bool, optional)
Guest Operations availability. If true, the virtual machine is ready to process guest operations.
#### interactiveGuestOperationsReady (bool, optional)
Interactive Guest Operations availability. If true, the virtual machine is ready to process guest operations as a user interacting with the guest desktop.
#### generationInfo (vim.vm.GuestInfo.NamespaceGenerationInfo, privilege: VirtualMachine.Namespace.EventNotify, optional)
A list of namespaces and their corresponding generation numbers. Only namespaces with non-zero `maxSizeEventsFromGuest` are guaranteed to be present here.
### summary (vim.vm.Summary)
Basic information about the virtual machine.
#### vm (vim.VirtualMachine, optional)
Reference to the virtual machine managed object.
#### runtime (vim.vm.RuntimeInfo)
Runtime and state information of a running virtual machine. Most of this information is also available when a virtual machine is powered off. In that case, it contains information from the last run, if available.
#### guest (vim.vm.Summary.GuestSummary, optional)
Guest operating system and VMware Tools information.
#### config (vim.vm.Summary.ConfigSummary)
Basic configuration information about the virtual machine. This information is not available when the virtual machine is unavailable, for instance, when it is being created or deleted.
#### storage (vim.vm.Summary.StorageSummary, optional)
Storage information of the virtual machine.
#### quickStats (vim.vm.Summary.QuickStats)
A set of statistics that are typically updated with near real-time regularity.
#### overallStatus (vim.ManagedEntity.Status)
Overall alarm status on this node.
#### customValue (vim.CustomFieldsManager.Value, optional)
Custom field values.
### datastore (vim.Datastore)
A collection of references to the subset of datastore objects in the datacenter that is used by the virtual machine.
#### info (vim.Datastore.Info)
Specific information about the datastore.
#### summary (vim.Datastore.Summary)
Global properties of the datastore.
#### host (vim.Datastore.HostMount)
Hosts attached to this datastore.
#### vm (vim.VirtualMachine)
Virtual machines stored on this datastore.
#### browser (vim.host.DatastoreBrowser)
DatastoreBrowser used to browse this datastore.
#### capability (vim.Datastore.Capability)
Capabilities of this datastore.
#### iormConfiguration (vim.StorageResourceManager.IORMConfigInfo)
Configuration of storage I/O resource management for the datastore. Currently, VMware only supports storage I/O resource management on VMFS volumes of a datastore. This configuration may not be available if the datastore is not accessible from any host, or if the datastore does not have a VMFS volume.
### network (vim.Network)
A collection of references to the subset of network objects in the datacenter that is used by the virtual machine.
#### name (str)
Name of this network.
#### summary (vim.Network.Summary)
Properties of a network.
#### host (vim.HostSystem)
Hosts attached to this network.
#### vm (vim.VirtualMachine)
Virtual machines using this network.
### snapshot (vim.vm.SnapshotInfo)
Current snapshot and tree. The property is valid if snapshots have been created for the virtual machine.
#### currentSnapshot (vim.vm.Snapshot, optional)
Current snapshot of the virtual machine. This property is set by calling `Snapshot.revert` or `VirtualMachine.createSnapshot`. This property will be empty when the working snapshot is at the root of the snapshot tree.
#### rootSnapshotList (vim.vm.SnapshotTree)
Data for the entire set of snapshots for one virtual machine.
### rootSnapshot (vim.vm.Snapshot)
The roots of all snapshot trees for the virtual machine.
#### config (vim.vm.ConfigInfo)
Information about the configuration of the virtual machine when this snapshot was taken. The datastore paths for the virtual machine disks point to the head of the disk chain that represents the disk at this given snapshot.
#### childSnapshot (vim.vm.Snapshot)
All snapshots for which this snapshot is the parent.
### guestHeartbeatStatus (vim.ManagedEntity.Status)
The guest heartbeat.
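These attribute paths can be used directly in the `properties`, `hostnames`, and `filters` options of the `vmware_vm_inventory` plugin described in the scenario guides below. As a minimal sketch (the hostname and credentials are the same placeholders used throughout this guide), an inventory file that pulls a few of the attributes documented above might look like:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
hostnames:
  - 'config.name'
properties:
  - 'config.name'
  - 'config.guestId'
  - 'guest.ipAddress'
  - 'summary.runtime.powerState'
```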
See also
[pyVmomi](https://github.com/vmware/pyvmomi)
The GitHub Page of pyVmomi
[pyVmomi Issue Tracker](https://github.com/vmware/pyvmomi/issues)
The issue tracker for the pyVmomi project
[vSphere Automation SDK GitHub Page](https://github.com/vmware/vsphere-automation-sdk-python)
The GitHub Page of vSphere Automation SDK for Python
[vSphere Automation SDK Issue Tracker](https://github.com/vmware/vsphere-automation-sdk-python/issues)
The issue tracker for vSphere Automation SDK for Python
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
Ansible for VMware Scenarios
============================
These scenarios teach you how to accomplish common VMware tasks using Ansible. To get started, please select the task you want to accomplish.
* [Deploy a virtual machine from a template](scenario_clone_template)
* [Rename an existing virtual machine](scenario_rename_vm)
* [Remove an existing VMware virtual machine](scenario_remove_vm)
* [Find folder path of an existing VMware virtual machine](scenario_find_vm_folder)
* [Using VMware HTTP API using Ansible](scenario_vmware_http)
* [Using vmware\_tools connection plugin](scenario_vmware_tools_connection)
Using VMware dynamic inventory plugin
=====================================
* [VMware Dynamic Inventory Plugin](#vmware-dynamic-inventory-plugin)
+ [Requirements](#requirements)
* [Using vaulted configuration files](#using-vaulted-configuration-files)
VMware Dynamic Inventory Plugin
-------------------------------
The best way to interact with your hosts is to use the VMware dynamic inventory plugin, which dynamically queries VMware APIs and tells Ansible what nodes can be managed.
### Requirements
To use the VMware dynamic inventory plugins, you must install [pyVmomi](https://github.com/vmware/pyvmomi) on your control node (the host running Ansible).
To include tag-related information for the virtual machines in your dynamic inventory, you also need the [vSphere Automation SDK](https://code.vmware.com/web/sdk/65/vsphere-automation-python), which supports REST API features like tagging and content libraries, on your control node. You can install the `vSphere Automation SDK` following [these instructions](https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages).
```
$ pip install pyvmomi
```
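If you need the tag support described above, the vSphere Automation SDK itself can be installed with pip directly from its GitHub repository (this is the same command used in the introduction section of this guide):
```
$ pip install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git
```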
To use this VMware dynamic inventory plugin, you need to enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins = vmware_vm_inventory
```
Then, create a file that ends in `.vmware.yml` or `.vmware.yaml` in your working directory.
The `vmware_vm_inventory` plugin takes in the same authentication information as any VMware module.
Here’s an example of a valid inventory file:
```
plugin: vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: True
```
Executing `ansible-inventory --list -i <filename>.vmware.yml` will create a list of VMware instances that are ready to be configured using Ansible.
Using vaulted configuration files
---------------------------------
Since the inventory configuration file contains the vCenter password in plain text, which is a security risk, you may want to encrypt your entire inventory configuration file.
You can encrypt a valid inventory configuration file as follows:
```
$ ansible-vault encrypt <filename>.vmware.yml
New Vault password:
Confirm New Vault password:
Encryption successful
```
And you can use this vaulted inventory configuration file using:
```
$ ansible-inventory -i filename.vmware.yml --list --vault-password-file=/path/to/vault_password_file
```
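The same vaulted inventory file can also be passed to `ansible-playbook`; for example (the playbook name `site.yml` is a placeholder):
```
$ ansible-playbook -i filename.vmware.yml --vault-password-file=/path/to/vault_password_file site.yml
```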
See also
[pyVmomi](https://github.com/vmware/pyvmomi)
The GitHub Page of pyVmomi
[pyVmomi Issue Tracker](https://github.com/vmware/pyvmomi/issues)
The issue tracker for the pyVmomi project
[vSphere Automation SDK GitHub Page](https://github.com/vmware/vsphere-automation-sdk-python)
The GitHub Page of vSphere Automation SDK for Python
[vSphere Automation SDK Issue Tracker](https://github.com/vmware/vsphere-automation-sdk-python/issues)
The issue tracker for vSphere Automation SDK for Python
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
Find folder path of an existing VMware virtual machine
======================================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize Ansible to find the folder path of an existing VMware virtual machine.
Scenario requirements
---------------------
* Software
+ Ansible 2.5 or later must be installed.
+ The Python module `Pyvmomi` must be installed on the Ansible control node (or Target host if not executing against localhost).
+ We recommend installing the latest version with pip: `pip install Pyvmomi` (as the OS packages are usually out of date and incompatible).
* Hardware
+ At least one standalone ESXi server or
+ vCenter Server with at least one ESXi server
* Access / Credentials
+ Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+ Username and Password for vCenter or ESXi server
Caveats
-------
* All variable names and VMware object names are case sensitive.
* You need Python 2.7.9 or later in order to use the `validate_certs` option, as this version is capable of changing the SSL verification behaviours.
Example description
-------------------
With the following Ansible playbook you can find the folder path of an existing virtual machine by name.
```
---
- name: Find folder path of an existing virtual machine
hosts: localhost
gather_facts: False
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
tasks:
- set_fact:
vm_name: "DC0_H0_VM0"
- name: "Find folder for VM - {{ vm_name }}"
vmware_guest_find:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: False
name: "{{ vm_name }}"
delegate_to: localhost
register: vm_facts
```
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the `gather_facts` parameter, since you don’t want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: `pip install Pyvmomi` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
* Hostname of the ESXi server or vCenter server
* Username and password for the ESXi or vCenter server
* Name of the existing virtual machine for which you want to collect the folder path
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the `validate_certs` parameter. To do this you need to set `validate_certs=False` in your playbook.
The name of the existing virtual machine is used as input for the `vmware_guest_find` module via the `name` parameter.
### What to expect
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see output like this:
```
"vm_facts": {
"changed": false,
"failed": false,
...
"folders": [
"/F0/DC0/vm/F0"
]
}
```
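The folder path is returned in the `folders` list of the registered variable, so a follow-up task can consume it directly. A minimal sketch:
```
- name: Print the folder path of the virtual machine
  debug:
    msg: "Folder of {{ vm_name }} is {{ vm_facts.folders[0] }}"
```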
### Troubleshooting
If your playbook fails:
* Check if the values provided for username and password are correct.
* Check if the datacenter you provided is available.
* Check if the virtual machine specified exists and that you have the required permissions to access the VMware object.
* Ensure the full folder path you specified already exists.
Remove an existing VMware virtual machine
=========================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize Ansible to remove an existing VMware virtual machine.
Scenario requirements
---------------------
* Software
+ Ansible 2.5 or later must be installed.
+ The Python module `Pyvmomi` must be installed on the Ansible control node (or Target host if not executing against localhost).
+ We recommend installing the latest version with pip: `pip install Pyvmomi` (as the OS packages are usually out of date and incompatible).
* Hardware
+ At least one standalone ESXi server or
+ vCenter Server with at least one ESXi server
* Access / Credentials
+ Ansible (or the target server) must have network access to either the vCenter server or the ESXi server
+ Username and Password for vCenter or ESXi server
+ Hosts in the ESXi cluster must have access to the datastore that the template resides on.
Caveats
-------
* All variable names and VMware object names are case sensitive.
* You need Python 2.7.9 or later in order to use the `validate_certs` option, as this version is capable of changing the SSL verification behaviours.
* The `vmware_guest` module tries to mimic the VMware Web UI workflow, so the virtual machine must be in a powered-off state in order to remove it from the VMware inventory.
Warning
Removing a VMware virtual machine with the `vmware_guest` module is a destructive operation that cannot be reverted, so it is strongly recommended to back up the virtual machine and its related files (vmx and vmdk files) before proceeding.
Example description
-------------------
In this use case / example, we will remove a virtual machine by name. The following Ansible playbook showcases the basic parameters that are needed for this.
```
---
- name: Remove virtual machine
gather_facts: no
vars_files:
- vcenter_vars.yml
vars:
ansible_python_interpreter: "/usr/bin/env python3"
hosts: localhost
tasks:
- set_fact:
vm_name: "VM_0003"
datacenter: "DC1"
- name: Remove "{{ vm_name }}"
vmware_guest:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: no
cluster: "DC1_C1"
name: "{{ vm_name }}"
state: absent
delegate_to: localhost
register: facts
```
Since Ansible utilizes the VMware API to perform actions, in this use case it will be connecting directly to the API from localhost.
This means that playbooks will not be running from the vCenter or ESXi Server.
Note that this play disables the `gather_facts` parameter, since you don’t want to collect facts about localhost.
You can run these modules against another server that would then connect to the API if localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server. We recommend installing the latest version with pip: `pip install Pyvmomi` (as the OS packages are usually out of date and incompatible).
Before you begin, make sure you have:
* Hostname of the ESXi server or vCenter server
* Username and password for the ESXi or vCenter server
* Name of the existing virtual machine you want to remove
For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the `validate_certs` parameter. To do this you need to set `validate_certs=False` in your playbook.
The name of the existing virtual machine is used as input for the `vmware_guest` module via the `name` parameter.
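Because the virtual machine must be powered off before removal (see the caveats above), you may want to ensure that state first. A minimal sketch reusing the same connection variables:
```
- name: Power off "{{ vm_name }}" before removal
  vmware_guest:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
    name: "{{ vm_name }}"
    state: poweredoff
  delegate_to: localhost
```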
### What to expect
* Compared to other operations performed using the `vmware_guest` module, you will see only minimal JSON output after this playbook completes:
```
{
"changed": true
}
```
* The `changed` state is set to `True`, which indicates that the virtual machine was removed from the VMware inventory. This can take some time depending upon your environment and network connectivity.
### Troubleshooting
If your playbook fails:
* Check if the values provided for username and password are correct.
* Check if the datacenter you provided is available.
* Check if the virtual machine specified exists and you have permissions to access the datastore.
* Ensure the full folder path you specified already exists. It will not create folders automatically for you.
Using VMware dynamic inventory plugin - Hostnames
=================================================
* [Requirements](#requirements)
* [What to expect](#what-to-expect)
* [Troubleshooting](#troubleshooting)
VMware dynamic inventory plugin - customizing hostnames
-------------------------------------------------------
The VMware inventory plugin allows you to configure hostnames using the `hostnames` configuration parameter.
In this scenario guide we will see how to configure hostnames for the given VMware guests in the inventory.
### Requirements
To use the VMware dynamic inventory plugins, you must install [pyVmomi](https://github.com/vmware/pyvmomi) on your control node (the host running Ansible).
To include tag-related information for the virtual machines in your dynamic inventory, you also need the [vSphere Automation SDK](https://code.vmware.com/web/sdk/65/vsphere-automation-python), which supports REST API features such as tagging and content libraries, on your control node. You can install the `vSphere Automation SDK` following [these instructions](https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages).
```
$ pip install pyvmomi
```
Starting in Ansible 2.10, the VMware dynamic inventory plugin is available in the `community.vmware` collection included in Ansible. To install the latest `community.vmware` collection:
```
$ ansible-galaxy collection install community.vmware
```
To use this VMware dynamic inventory plugin:
1. Enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins = community.vmware.vmware_vm_inventory
```
2. Create a file that ends in `vmware.yml` or `vmware.yaml` in your working directory.
The `vmware_vm_inventory` inventory plugin takes in the same authentication information as any other VMware module does.
Here’s an example of a valid inventory file with custom hostname for the given VMware guest:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
hostnames:
- config.name
```
Here, we have configured a custom hostname by setting the `hostnames` parameter to `config.name`. This will retrieve the `config.name` property from the virtual machine and populate it in the inventory.
You can check all allowed properties for the given virtual machine at [Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes).
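The `hostnames` parameter accepts a list of templates that are evaluated in order of precedence, so you can fall back to another property when the preferred one is unavailable. For example, replacing the `hostnames` section of the inventory file above with the following sketch (assuming `guest.hostName` is populated by VMware Tools in your environment) prefers the in-guest hostname and falls back to `config.name`:
```
hostnames:
  - 'guest.hostName'
  - 'config.name'
```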
Executing `ansible-inventory --list -i <filename>.vmware.yml` creates a list of the virtual machines that are ready to be configured using Ansible.
### What to expect
You will notice that instead of the default behavior of representing the hostname as `config.name + _ + config.uuid`, the inventory hosts show the value of `config.name`.
```
{
"_meta": {
"hostvars": {
"template_001": {
"config.name": "template_001",
"guest.toolsRunningStatus": "guestToolsNotRunning",
...
"guest.toolsStatus": "toolsNotInstalled",
"name": "template_001"
},
"vm_8046": {
"config.name": "vm_8046",
"guest.toolsRunningStatus": "guestToolsNotRunning",
...
"guest.toolsStatus": "toolsNotInstalled",
"name": "vm_8046"
},
...
}
```
### Troubleshooting
If the custom property specified in `hostnames` fails:
* Check if the values provided for username and password are correct.
* Make sure it is a valid property, see [Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes).
* Use `strict: True` to get more information about the error.
* Please make sure that you are using the latest version of the VMware collection.
See also
[pyVmomi](https://github.com/vmware/pyvmomi)
The GitHub Page of pyVmomi
[pyVmomi Issue Tracker](https://github.com/vmware/pyvmomi/issues)
The issue tracker for the pyVmomi project
[vSphere Automation SDK GitHub Page](https://github.com/vmware/vsphere-automation-sdk-python)
The GitHub Page of vSphere Automation SDK for Python
[vSphere Automation SDK Issue Tracker](https://github.com/vmware/vsphere-automation-sdk-python/issues)
The issue tracker for vSphere Automation SDK for Python
[Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes)
Using Virtual machine attributes in VMware dynamic inventory plugin
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
Introduction to Ansible for VMware
==================================
* [Introduction](#introduction)
* [Requirements](#requirements)
* [vmware\_guest module](#vmware-guest-module)
Introduction
------------
Ansible provides various modules to manage VMware infrastructure, which includes datacenter, cluster, host system and virtual machine.
Requirements
------------
Ansible VMware modules are written on top of [pyVmomi](https://github.com/vmware/pyvmomi). pyVmomi is the Python SDK for the VMware vSphere API that allows users to manage ESX, ESXi, and vCenter infrastructure. You can install pyVmomi using pip (you may need to use pip3, depending on your OS/distro):
```
$ pip install pyvmomi
```
Ansible VMware modules that leverage the latest vSphere (6.0+) features use the [vSphere Automation Python SDK](https://github.com/vmware/vsphere-automation-sdk-python). The vSphere Automation Python SDK also has client libraries, documentation, and sample code for VMware Cloud on AWS Console APIs, NSX VMware Cloud on AWS integration APIs, VMware Cloud on AWS site recovery APIs, and NSX-T APIs.
You can install vSphere Automation Python SDK using pip:
```
$ pip install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git
```
Note:
Installing vSphere Automation Python SDK also installs `pyvmomi`. A separate installation of `pyvmomi` is not required.
vmware\_guest module
--------------------
The [vmware\_guest](https://docs.ansible.com/ansible/2.9/modules/vmware_guest_module.html#vmware-guest-module "(in Ansible v2.9)") module manages various operations related to virtual machines in the given ESXi or vCenter server.
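For example, a minimal task that ensures an existing virtual machine is powered on might look like this (the server details and VM name are placeholders):
```
- name: Ensure a virtual machine is powered on
  vmware_guest:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
    name: testvm_2
    state: poweredon
  delegate_to: localhost
```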
See also
[pyVmomi](https://github.com/vmware/pyvmomi)
The GitHub Page of pyVmomi
[pyVmomi Issue Tracker](https://github.com/vmware/pyvmomi/issues)
The issue tracker for the pyVmomi project
[govc](https://github.com/vmware/govmomi/tree/master/govc)
govc is a vSphere CLI built on top of govmomi
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
| programming_docs |
Deploy a virtual machine from a template
========================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Assumptions](#assumptions)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize Ansible to clone a virtual machine from already existing VMware template or existing VMware guest.
Scenario requirements
---------------------
* Software
+ Ansible 2.5 or later must be installed
+ The Python module `Pyvmomi` must be installed on the Ansible control node (or target host if not executing against localhost)
+ Installing the latest `Pyvmomi` via `pip` is recommended (as the OS-provided packages are usually out of date and incompatible)
* Hardware
+ vCenter Server with at least one ESXi server
* Access / Credentials
+ Ansible (or the target server) must have network access to either the vCenter server or the ESXi server you will be deploying to
+ Username and Password
+ Administrator user with following privileges
- `Datastore.AllocateSpace` on the destination datastore or datastore folder
- `Network.Assign` on the network to which the virtual machine will be assigned
- `Resource.AssignVMToPool` on the destination host, cluster, or resource pool
- `VirtualMachine.Config.AddNewDisk` on the datacenter or virtual machine folder
- `VirtualMachine.Config.AddRemoveDevice` on the datacenter or virtual machine folder
- `VirtualMachine.Interact.PowerOn` on the datacenter or virtual machine folder
- `VirtualMachine.Inventory.CreateFromExisting` on the datacenter or virtual machine folder
- `VirtualMachine.Provisioning.Clone` on the virtual machine you are cloning
- `VirtualMachine.Provisioning.Customize` on the virtual machine or virtual machine folder if you are customizing the guest operating system
- `VirtualMachine.Provisioning.DeployTemplate` on the template you are using
- `VirtualMachine.Provisioning.ReadCustSpecs` on the root vCenter Server if you are customizing the guest operating system. Depending on your requirements, you may also need one or more of the following privileges:
- `VirtualMachine.Config.CPUCount` on the datacenter or virtual machine folder
- `VirtualMachine.Config.Memory` on the datacenter or virtual machine folder
- `VirtualMachine.Config.DiskExtend` on the datacenter or virtual machine folder
- `VirtualMachine.Config.Annotation` on the datacenter or virtual machine folder
- `VirtualMachine.Config.AdvancedConfig` on the datacenter or virtual machine folder
- `VirtualMachine.Config.EditDevice` on the datacenter or virtual machine folder
- `VirtualMachine.Config.Resource` on the datacenter or virtual machine folder
- `VirtualMachine.Config.Settings` on the datacenter or virtual machine folder
- `VirtualMachine.Config.UpgradeVirtualHardware` on the datacenter or virtual machine folder
- `VirtualMachine.Interact.SetCDMedia` on the datacenter or virtual machine folder
- `VirtualMachine.Interact.SetFloppyMedia` on the datacenter or virtual machine folder
- `VirtualMachine.Interact.DeviceConnection` on the datacenter or virtual machine folder
Assumptions
-----------
* All variable names and VMware object names are case sensitive
* VMware allows the creation of virtual machines and templates with the same name across and within datacenters
* You need Python 2.7.9 or later in order to use the `validate_certs` option, as this version is capable of changing the SSL verification behaviours
Caveats
-------
* Hosts in the ESXi cluster must have access to the datastore that the template resides on.
* Multiple templates with the same name will cause module failures.
* In order to utilize Guest Customization, VMware Tools must be installed on the template. For Linux, the `open-vm-tools` package is recommended, and it requires that `Perl` be installed.
Example description
-------------------
In this use case / example, we will be selecting a virtual machine template and cloning it into a specific folder in our Datacenter / Cluster. The following Ansible playbook showcases the basic parameters that are needed for this.
```
---
- name: Create a VM from a template
hosts: localhost
gather_facts: no
tasks:
- name: Clone the template
vmware_guest:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
name: testvm_2
template: template_el7
datacenter: "{{ datacenter_name }}"
folder: /DC1/vm
state: poweredon
cluster: "{{ cluster_name }}"
wait_for_ip_address: yes
```
Since Ansible utilizes the VMware API to perform actions, in this use case we will be connecting directly to the API from our localhost. This means that our playbooks will not be running from the vCenter or ESXi Server. We do not necessarily need to collect facts about our localhost, so the `gather_facts` parameter will be disabled. You can run these modules against another server that would then connect to the API if your localhost does not have access to vCenter. If so, the required Python modules will need to be installed on that target server.
To begin, there are a few bits of information we will need. First and foremost is the hostname of the ESXi server or vCenter server. After this, you will need the username and password for this server. For now, you will be entering these directly, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html). If your vCenter or ESXi server is not set up with proper CA certificates that can be verified from the Ansible server, then it is necessary to disable validation of these certificates by using the `validate_certs` parameter. To do this you need to set `validate_certs=False` in your playbook.
Now you need to supply the information about the virtual machine which will be created. Give your virtual machine a name, one that conforms to all VMware requirements for naming conventions. Next, select the display name of the template from which you want to clone the new virtual machine. This must match what’s displayed in the VMware Web UI exactly. Then you can specify a folder to place this new virtual machine in. This path can either be a relative path or a full path to the folder including the Datacenter. You may need to specify a state for the virtual machine. This simply tells the module which action you want to take; in this case you will ensure that the virtual machine exists and is powered on. An optional parameter is `wait_for_ip_address`; this tells Ansible to wait until the virtual machine has fully booted and VMware Tools is running before completing this task.
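If you register the module result, the returned `instance` dictionary (shown under “What to expect” below) can be consumed by later tasks, for example to print the IP address once `wait_for_ip_address` has done its work. A minimal sketch:
```
- name: Clone the template and keep the result
  vmware_guest:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    name: testvm_2
    template: template_el7
    datacenter: "{{ datacenter_name }}"
    folder: /DC1/vm
    state: poweredon
    cluster: "{{ cluster_name }}"
    wait_for_ip_address: yes
  register: deploy_result

- name: Show the assigned IP address
  debug:
    msg: "{{ deploy_result.instance.ipv4 }}"
```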
### What to expect
* You will see a bit of JSON output after this playbook completes. This output shows various parameters that are returned from the module and from vCenter about the newly created VM.
```
{
"changed": true,
"instance": {
"annotation": "",
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": "guestToolsNotRunning",
"guest_tools_version": "0",
"hw_cores_per_socket": 1,
"hw_datastores": [
"ds_215"
],
"hw_esxi_host": "192.0.2.44",
"hw_eth0": {
"addresstype": "assigned",
"ipaddresses": null,
"label": "Network adapter 1",
"macaddress": "00:50:56:8c:19:f4",
"macaddress_dash": "00-50-56-8c-19-f4",
"portgroup_key": "dvportgroup-17",
"portgroup_portkey": "0",
"summary": "DVSwitch: 50 0c 5b 22 b6 68 ab 89-fc 0b 59 a4 08 6e 80 fa"
},
"hw_files": [
"[ds_215] testvm_2/testvm_2.vmx",
"[ds_215] testvm_2/testvm_2.vmsd",
"[ds_215] testvm_2/testvm_2.vmdk"
],
"hw_folder": "/DC1/vm",
"hw_guest_full_name": null,
"hw_guest_ha_state": null,
"hw_guest_id": null,
"hw_interfaces": [
"eth0"
],
"hw_is_template": false,
"hw_memtotal_mb": 512,
"hw_name": "testvm_2",
"hw_power_status": "poweredOff",
"hw_processor_count": 2,
"hw_product_uuid": "420cb25b-81e8-8d3b-dd2d-a439ee54fcc5",
"hw_version": "vmx-13",
"instance_uuid": "500cd53b-ed57-d74e-2da8-0dc0eddf54d5",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"snapshots": []
},
"invocation": {
"module_args": {
"annotation": null,
"cdrom": {},
"cluster": "DC1_C1",
"customization": {},
"customization_spec": null,
"customvalues": [],
"datacenter": "DC1",
"disk": [],
"esxi_hostname": null,
"folder": "/DC1/vm",
"force": false,
"guest_id": null,
"hardware": {},
"hostname": "192.0.2.44",
"is_template": false,
"linked_clone": false,
"name": "testvm_2",
"name_match": "first",
"networks": [],
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"resource_pool": null,
"snapshot_src": null,
"state": "present",
"state_change_timeout": 0,
"template": "template_el7",
"username": "[email protected]",
"uuid": null,
"validate_certs": false,
"vapp_properties": [],
"wait_for_ip_address": true
}
}
}
```
* The `changed` state is set to `True`, which indicates that the virtual machine was built using the given template. The module will not complete until the clone task in VMware is finished. This can take some time depending on your environment.
* If you utilize the `wait_for_ip_address` parameter, it will also increase the clone time, as it waits until the virtual machine boots into the OS and an IP address has been assigned to the given NIC.
### Troubleshooting
Things to inspect
* Check if the values provided for username and password are correct
* Check if the datacenter you provided is available
* Check if the template specified exists and you have permissions to access the datastore
* Ensure the full folder path you specified already exists. It will not create folders automatically for you
Ansible VMware FAQ
==================
vmware\_guest
-------------
### Can I deploy a virtual machine on a standalone ESXi server?
Yes. `vmware_guest` can deploy a virtual machine with required settings on a standalone ESXi server. However, you must have a paid license to deploy virtual machines this way. If you are using the free version, the API is read-only.
### Is `/vm` required for the `vmware_guest` module?
Prior to Ansible version 2.5, `folder` was an optional parameter with a default value of `/vm`.
The folder parameter was used to discover information about virtual machines in the given infrastructure.
Starting with Ansible version 2.5, `folder` is still an optional parameter, but with no default value. This parameter is now used to identify a user’s virtual machine if multiple virtual machines or virtual machine templates are found with the same name. VMware does not restrict the system administrator from creating virtual machines with the same name.
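For example, if two virtual machines named `testvm_2` exist in different folders, a sketch that pins down the one under `/DC1/vm/prod` (the folder path and server details are placeholders) might be:
```
- name: Manage the virtual machine in a specific folder
  vmware_guest:
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
    name: testvm_2
    folder: /DC1/vm/prod
    state: present
  delegate_to: localhost
```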
Using vmware\_tools connection plugin
=====================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [Caveats](#caveats)
* [Example description](#example-description)
+ [What to expect](#what-to-expect)
+ [Troubleshooting](#troubleshooting)
Introduction
------------
This guide will show you how to utilize VMware Connection plugin to communicate and automate various tasks on VMware guest machines.
Scenario requirements
---------------------
* Software
+ Ansible 2.9 or later must be installed.
+ We recommend installing the latest version with pip: `pip install Pyvmomi` on the Ansible control node (as the OS packages are usually out of date and incompatible) if you are planning to use any existing VMware modules.
* Hardware
+ vCenter Server 6.5 and above
* Access / Credentials
+ Ansible (or the target server) must have network access to the vCenter server
+ Username and Password for vCenter with required permissions
+ VMware Tools or open-vm-tools, with required dependencies such as Perl, installed on the given virtual machine
Caveats
-------
* All variable names and VMware object names are case sensitive.
* You need Python 2.7.9 or later in order to use the `validate_certs` option, as this version is capable of changing the SSL verification behaviors.
Example description
-------------------
You can run playbooks against VMware virtual machines using the `vmware_tools` connection plugin.
In order to work with the `vmware_tools` connection plugin, you will need to specify hostvars for the given virtual machine.
For example, if you want to run a playbook on a virtual machine called `centos_7` located at `/Asia-Datacenter1/prod/centos_7` in the given vCenter, you will need to specify hostvars as follows:
```
[centos7]
host1
[centos7:vars]
# vmware_tools related variables
ansible_connection=vmware_tools
ansible_vmware_host=10.65.201.128
[email protected]
ansible_vmware_password=Esxi@123$%
ansible_vmware_validate_certs=no
# Location of the virtual machine
ansible_vmware_guest_path=Asia-Datacenter1/vm/prod/centos_7
# Credentials
ansible_vmware_tools_user=root
ansible_vmware_tools_password=Secret123
```
Here, we are providing vCenter details and credentials for the given virtual machine to run the playbook on. If your virtual machine path is `Asia-Datacenter1/prod/centos_7`, you specify `ansible_vmware_guest_path` as `Asia-Datacenter1/vm/prod/centos_7`. Note that `/vm` is added to the virtual machine path, since this is a logical folder structure in the VMware inventory.
Let us now run the following playbook:
```
---
- name: Example showing VMware Connection plugin
hosts: centos7
tasks:
- name: Gather information about temporary directory inside VM
shell: ls /tmp
```
Since Ansible utilizes the `vmware-tools` or `open-vm-tools` service capabilities running in the virtual machine to perform actions, in this use case it will be connecting directly to the guest machine.
For now, you will be entering credentials in plain text, but in a more advanced playbook this can be abstracted out and stored in a more secure fashion using [ansible-vault](../../cli/ansible-vault#ansible-vault) or using [Ansible Tower credentials](https://docs.ansible.com/ansible-tower/latest/html/userguide/credentials.html).
### What to expect
Running this playbook can take some time, depending on your environment and network connectivity. When the run is complete you will see:
```
{
"changed": true,
"cmd": "ls /tmp",
"delta": "0:00:00.005440",
"end": "2020-10-01 07:30:56.940813",
"rc": 0,
"start": "2020-10-01 07:30:56.935373",
"stderr": "",
"stderr_lines": [],
"stdout": "ansible_command_payload_JzWiL9\niso",
"stdout_lines": ["ansible_command_payload_JzWiL9", "iso", "vmware-root"]
}
```
### Troubleshooting
If your playbook fails:
* Check if the values provided for username and password are correct.
* Check if the path of the virtual machine is correct. Note that `/vm/` needs to be included when specifying the virtual machine location.
Troubleshooting Ansible for VMware
==================================
* [Debugging Ansible for VMware](#debugging-ansible-for-vmware)
* [Known issues with Ansible for VMware](#known-issues-with-ansible-for-vmware)
+ [Network settings with vmware\_guest in Ubuntu 18.04](#network-settings-with-vmware-guest-in-ubuntu-18-04)
- [Potential Workarounds](#potential-workarounds)
This section lists things that can go wrong and possible ways to fix them.
Debugging Ansible for VMware
----------------------------
When debugging or creating a new issue, you will need information about your VMware infrastructure. You can get this information using [govc](https://github.com/vmware/govmomi/tree/master/govc). For example:
```
$ export GOVC_USERNAME=ESXI_OR_VCENTER_USERNAME
$ export GOVC_PASSWORD=ESXI_OR_VCENTER_PASSWORD
$ export GOVC_URL=https://ESXI_OR_VCENTER_HOSTNAME:443
$ govc find /
```
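govc can also dump per-object details. For instance, to inspect a single virtual machine (the VM name is a placeholder):
```
$ govc vm.info -json DC0_H0_VM0
```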
Known issues with Ansible for VMware
------------------------------------
### Network settings with vmware\_guest in Ubuntu 18.04
Setting the network with `vmware_guest` in Ubuntu 18.04 is known to be broken, due to missing support for `netplan` in the `open-vm-tools`. This issue is tracked via:
* <https://github.com/vmware/open-vm-tools/issues/240>
* <https://github.com/ansible/ansible/issues/41133>
#### Potential Workarounds
There are several workarounds for this issue.
1. Modify the Ubuntu 18.04 images and install `ifupdown` in them via `sudo apt install ifupdown`. If so, you need to remove `netplan` via `sudo apt remove netplan.io` and you need to stop `systemd-networkd` via `sudo systemctl disable systemd-networkd`.
2. Generate the `systemd-networkd` files with a task in your VMware Ansible role:
```
- name: make sure cache directory exists
file: path="{{ inventory_dir }}/cache" state=directory
delegate_to: localhost
- name: generate network templates
template: src=network.j2 dest="{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
delegate_to: localhost
- name: copy generated files to vm
vmware_guest_file_operation:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
validate_certs: "{{ vmware_general.validate_certs }}"
vm_id: "{{ inventory_hostname }}"
vm_username: root
vm_password: "{{ template_password }}"
copy:
src: "{{ inventory_dir }}/cache/{{ inventory_hostname }}.network"
dest: "/etc/systemd/network/ens160.network"
overwrite: False
delegate_to: localhost
- name: restart systemd-networkd
vmware_vm_shell:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
folder: /vm
vm_id: "{{ inventory_hostname}}"
vm_username: root
vm_password: "{{ template_password }}"
vm_shell: /bin/systemctl
vm_shell_args: " restart systemd-networkd"
delegate_to: localhost
- name: restart systemd-resolved
vmware_vm_shell:
hostname: "{{ vmware_general.hostname }}"
username: "{{ vmware_username }}"
password: "{{ vmware_password }}"
datacenter: "{{ vmware_general.datacenter }}"
folder: /vm
vm_id: "{{ inventory_hostname}}"
vm_username: root
vm_password: "{{ template_password }}"
vm_shell: /bin/systemctl
vm_shell_args: " restart systemd-resolved"
delegate_to: localhost
```
3. Wait for `netplan` support in `open-vm-tools`
Using VMware dynamic inventory plugin - Filters
===============================================
* [Requirements](#requirements)
+ [Using `or` conditions in filters](#using-or-conditions-in-filters)
+ [Using regular expression in filters](#using-regular-expression-in-filters)
* [What to expect](#what-to-expect)
* [Troubleshooting filters](#troubleshooting-filters)
VMware dynamic inventory plugin - filtering VMware guests
---------------------------------------------------------
The VMware inventory plugin allows you to filter VMware guests using the `filters` configuration parameter.
This section shows how to configure `filters` for the given VMware guests in the inventory.
### Requirements
To use the VMware dynamic inventory plugins, you must install [pyVmomi](https://github.com/vmware/pyvmomi) on your control node (the host running Ansible).
To include tag-related information for the virtual machines in your dynamic inventory, you also need the [vSphere Automation SDK](https://code.vmware.com/web/sdk/65/vsphere-automation-python), which supports REST API features such as tagging and content libraries, on your control node. You can install the `vSphere Automation SDK` following [these instructions](https://github.com/vmware/vsphere-automation-sdk-python#installing-required-python-packages).
```
$ pip install pyvmomi
```
Starting in Ansible 2.10, the VMware dynamic inventory plugin is available in the `community.vmware` collection included in Ansible. Alternatively, to install the latest `community.vmware` collection:
```
$ ansible-galaxy collection install community.vmware
```
To use this VMware dynamic inventory plugin:
1. Enable it first by specifying the following in the `ansible.cfg` file:
```
[inventory]
enable_plugins = community.vmware.vmware_vm_inventory
```
2. Create a file that ends in `vmware.yml` or `vmware.yaml` in your working directory.
The `vmware_vm_inventory` inventory plugin takes in the same authentication information as any other VMware module does.
Let us assume we want to list all RHEL7 VMs with the power state as “poweredOn”. A valid inventory file with filters for the given VMware guest looks as follows:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
hostnames:
- config.name
filters:
- config.guestId == "rhel7_64Guest"
- summary.runtime.powerState == "poweredOn"
```
Here, we have configured two filters:
* `config.guestId` is equal to `rhel7_64Guest`
* `summary.runtime.powerState` is equal to `poweredOn`
This retrieves all the VMs which satisfy these two conditions and populates them in the inventory. Notice that the conditions are combined using an `and` operation.
#### Using `or` conditions in filters
Let us assume you want to filter RHEL7 and Ubuntu VMs. You can combine multiple filters using an `or` condition in your inventory file.
A valid filter in the VMware inventory file for this example is:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
hostnames:
- config.name
filters:
- config.guestId == "rhel7_64Guest" or config.guestId == "ubuntu64Guest"
```
You can check all allowed properties for filters for the given virtual machine at [Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes).
If you are using the `properties` parameter with custom VM properties, make sure that you include all the properties used by filters as well in your VM property list.
For example, if we want all RHEL7 and Ubuntu VMs that are poweredOn, you can use this inventory file:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
hostnames:
- 'config.name'
properties:
- 'config.name'
- 'config.guestId'
- 'guest.ipAddress'
- 'summary.runtime.powerState'
filters:
- config.guestId == "rhel7_64Guest" or config.guestId == "ubuntu64Guest"
- summary.runtime.powerState == "poweredOn"
```
Here, we are using a minimal set of VM properties, that is `config.name`, `config.guestId`, `summary.runtime.powerState`, and `guest.ipAddress`.
* `config.name` is used by the `hostnames` parameter.
* `config.guestId` and `summary.runtime.powerState` are used by the `filters` parameter.
* `guest.ipAddress` is used internally by the inventory plugin to set `ansible_host`.
#### Using regular expression in filters
Let us assume you want to filter VMs within a specific IP range. You can use a regular expression in `filters` in your inventory file.
For example, to select only VMs whose IP address falls in the `192.168.*` range, you can use this inventory file:
```
plugin: community.vmware.vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
hostnames:
- 'config.name'
properties:
- 'config.name'
- 'config.guestId'
- 'guest.ipAddress'
- 'summary.runtime.powerState'
filters:
- guest.ipAddress is defined and guest.ipAddress is match('192.168.*')
```
Here, we are using the `guest.ipAddress` VM property. This property is optional and depends upon VMware Tools being installed on the VM. We are using `match` to validate the regular expression for the given IP range.
Executing `ansible-inventory --list -i <filename>.vmware.yml` creates a list of the virtual machines that are ready to be configured using Ansible.
### What to expect
You will notice that the inventory hosts are filtered depending on your `filters` section.
```
{
"_meta": {
"hostvars": {
"template_001": {
"config.name": "template_001",
"config.guestId": "ubuntu64Guest",
...
"guest.toolsStatus": "toolsNotInstalled",
"summary.runtime.powerState": "poweredOn",
},
"vm_8046": {
"config.name": "vm_8046",
"config.guestId": "rhel7_64Guest",
...
"guest.toolsStatus": "toolsNotInstalled",
"summary.runtime.powerState": "poweredOn",
},
...
}
```
### Troubleshooting filters
If the custom property specified in `filters` fails:
* Check if the values provided for username and password are correct.
* Make sure it is a valid property, see [Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes).
* Use `strict: True` to get more information about the error.
* Please make sure that you are using the latest version of the VMware collection.
See also
[pyVmomi](https://github.com/vmware/pyvmomi)
The GitHub Page of pyVmomi
[pyVmomi Issue Tracker](https://github.com/vmware/pyvmomi/issues)
The issue tracker for the pyVmomi project
[vSphere Automation SDK GitHub Page](https://github.com/vmware/vsphere-automation-sdk-python)
The GitHub Page of vSphere Automation SDK for Python
[vSphere Automation SDK Issue Tracker](https://github.com/vmware/vsphere-automation-sdk-python/issues)
The issue tracker for vSphere Automation SDK for Python
[Using Virtual machine attributes in VMware dynamic inventory plugin](vmware_inventory_vm_attributes#vmware-inventory-vm-attributes)
Using Virtual machine attributes in VMware dynamic inventory plugin
[Working with playbooks](../../user_guide/playbooks#working-with-playbooks)
An introduction to playbooks
[Using encrypted variables and files](../../user_guide/vault#playbooks-vault)
Using Vault in playbooks
How to modify a virtual machine
===============================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to add a CDROM drive to a virtual machine](#how-to-add-a-cdrom-drive-to-a-virtual-machine)
+ [Add a new SATA adapter](#add-a-new-sata-adapter)
+ [Result](#result)
+ [Add a CDROM drive](#add-a-cdrom-drive)
+ [Result](#id1)
* [How to attach a VM to a network](#how-to-attach-a-vm-to-a-network)
+ [Attach a new NIC](#attach-a-new-nic)
+ [Result](#id2)
+ [Adjust the configuration of the NIC](#adjust-the-configuration-of-the-nic)
+ [Result](#id3)
* [Increase the memory of the VM](#increase-the-memory-of-the-vm)
+ [Result](#id4)
* [Upgrade the hardware version of the VM](#upgrade-the-hardware-version-of-the-vm)
+ [Result](#id5)
* [Adjust the number of CPUs of the VM](#adjust-the-number-of-cpus-of-the-vm)
+ [Result](#id6)
* [Remove a SATA controller](#remove-a-sata-controller)
+ [Result](#id7)
* [Attach a floppy drive](#attach-a-floppy-drive)
+ [Result](#id8)
* [Attach a new disk](#attach-a-new-disk)
+ [Result](#id9)
Introduction
------------
This section shows you how to use Ansible to modify an existing virtual machine.
Scenario requirements
---------------------
You’ve already followed [How to create a Virtual Machine](create_vm#vmware-rest-create-vm) and created a VM.
How to add a CDROM drive to a virtual machine
---------------------------------------------
In this example, we use the `vcenter_vm_hardware_*` modules to add a new CDROM to an existing VM.
### Add a new SATA adapter
First we create a new SATA adapter. We specify the `pci_slot_number`. This way if we run the task again it won’t do anything if there is already an adapter there.
```
- name: Create a SATA adapter at PCI slot 34
vmware.vmware_rest.vcenter_vm_hardware_adapter_sata:
vm: '{{ test_vm1_info.id }}'
pci_slot_number: 34
register: _sata_adapter_result_1
```
### Result
```
{
"value": {
"bus": 0,
"pci_slot_number": 34,
"label": "SATA controller 0",
"type": "AHCI"
},
"id": "15000",
"changed": true
}
```
### Add a CDROM drive
Now we can create the CDROM drive:
```
- name: Attach an ISO image to a guest VM
vmware.vmware_rest.vcenter_vm_hardware_cdrom:
vm: '{{ test_vm1_info.id }}'
type: SATA
sata:
bus: 0
unit: 2
start_connected: true
backing:
iso_file: '[ro_datastore] fedora.iso'
type: ISO_FILE
register: _result
```
### Result
```
{
"value": {
"start_connected": true,
"backing": {
"iso_file": "[ro_datastore] fedora.iso",
"type": "ISO_FILE"
},
"allow_guest_control": false,
"label": "CD/DVD drive 1",
"state": "NOT_CONNECTED",
"type": "SATA",
"sata": {
"bus": 0,
"unit": 2
}
},
"id": "16002",
"changed": true
}
```
How to attach a VM to a network
-------------------------------
### Attach a new NIC
Here we attach the VM to the network (through the portgroup). We specify a `pci_slot_number` for the same reason.
The second task adjusts the NIC configuration.
```
- name: Attach a VM to a dvswitch
vmware.vmware_rest.vcenter_vm_hardware_ethernet:
vm: '{{ test_vm1_info.id }}'
pci_slot_number: 4
backing:
type: DISTRIBUTED_PORTGROUP
network: "{{ my_portgroup_info.dvs_portgroup_info.dvswitch1[0].key }}"
start_connected: false
register: vm_hardware_ethernet_1
```
### Result
```
{
"value": {
"start_connected": false,
"pci_slot_number": 4,
"backing": {
"connection_cookie": 2145337177,
"distributed_switch_uuid": "50 33 88 3a 8c 6e f9 02-7a fd c2 c0 2c cf f2 ac",
"distributed_port": "2",
"type": "DISTRIBUTED_PORTGROUP",
"network": "dvportgroup-1649"
},
"mac_address": "00:50:56:b3:49:5c",
"mac_type": "ASSIGNED",
"allow_guest_control": false,
"wake_on_lan_enabled": false,
"label": "Network adapter 1",
"state": "NOT_CONNECTED",
"type": "VMXNET3",
"upt_compatibility_enabled": false
},
"id": "4000",
"changed": true
}
```
### Adjust the configuration of the NIC
```
- name: Turn the NIC's start_connected flag on
vmware.vmware_rest.vcenter_vm_hardware_ethernet:
nic: '{{ vm_hardware_ethernet_1.id }}'
start_connected: true
vm: '{{ test_vm1_info.id }}'
```
### Result
```
{
"id": "4000",
"changed": true
}
```
Increase the memory of the VM
-----------------------------
We can also adjust the amount of memory that we dedicate to our VM.
```
- name: Increase the memory of a VM
vmware.vmware_rest.vcenter_vm_hardware_memory:
vm: '{{ test_vm1_info.id }}'
size_MiB: 1080
register: _result
```
### Result
```
{
"id": null,
"changed": true
}
```
Upgrade the hardware version of the VM
--------------------------------------
Here we use the `vcenter_vm_hardware` module to upgrade the version of the hardware:
```
- name: Upgrade the VM hardware version
vmware.vmware_rest.vcenter_vm_hardware:
upgrade_policy: AFTER_CLEAN_SHUTDOWN
upgrade_version: VMX_13
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"id": null,
"changed": true
}
```
Adjust the number of CPUs of the VM
-----------------------------------
You can use `vcenter_vm_hardware_cpu` for that:
```
- name: Dedicate one core to the VM
vmware.vmware_rest.vcenter_vm_hardware_cpu:
vm: '{{ test_vm1_info.id }}'
count: 1
register: _result
```
### Result
```
{
"value": {
"hot_remove_enabled": false,
"count": 1,
"hot_add_enabled": false,
"cores_per_socket": 1
},
"id": null,
"changed": false
}
```
Remove a SATA controller
------------------------
In this example, we remove the SATA controller of the PCI slot 34.
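A minimal sketch of the removal task, assuming the `vcenter_vm_hardware_adapter_sata` module follows the usual `state: absent` convention of the other `vmware_rest` modules and can address the adapter by its PCI slot (both are assumptions here):
```
- name: Remove the SATA controller at PCI slot 34
  vmware.vmware_rest.vcenter_vm_hardware_adapter_sata:
    vm: '{{ test_vm1_info.id }}'
    pci_slot_number: 34    # assumption: the adapter is identified by its PCI slot for removal
    state: absent
```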
### Result
```
{
"changed": true
}
```
Attach a floppy drive
---------------------
Here we attach a floppy drive to a VM.
```
- name: Add a floppy disk drive
vmware.vmware_rest.vcenter_vm_hardware_floppy:
vm: '{{ test_vm1_info.id }}'
allow_guest_control: true
register: my_floppy_drive
```
### Result
```
{
"value": {
"start_connected": false,
"backing": {
"auto_detect": true,
"type": "HOST_DEVICE",
"host_device": ""
},
"allow_guest_control": true,
"label": "Floppy drive 1",
"state": "NOT_CONNECTED"
},
"id": "8000",
"changed": true
}
```
Attach a new disk
-----------------
Here we attach a tiny disk to the VM. The `capacity` is in bytes.
```
- name: Create a new disk
vmware.vmware_rest.vcenter_vm_hardware_disk:
vm: '{{ test_vm1_info.id }}'
type: SATA
new_vmdk:
capacity: 320000
register: my_new_disk
```
### Result
```
{
"value": {
"backing": {
"vmdk_file": "[local] test_vm1_8/test_vm1_1.vmdk",
"type": "VMDK_FILE"
},
"label": "Hard disk 2",
"type": "SATA",
"sata": {
"bus": 0,
"unit": 0
},
"capacity": 320000
},
"id": "16000",
"changed": true
}
```
How to configure the VMware tools of a running virtual machine
==============================================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to change the upgrade policy](#how-to-change-the-upgrade-policy)
+ [Change the upgrade policy to MANUAL](#change-the-upgrade-policy-to-manual)
- [Result](#result)
+ [Change the upgrade policy to UPGRADE\_AT\_POWER\_CYCLE](#change-the-upgrade-policy-to-upgrade-at-power-cycle)
- [Result](#id1)
Introduction
------------
This section shows you how to configure the VMware Tools of a running virtual machine.
Scenario requirements
---------------------
You’ve already followed [How to run a virtual machine](run_a_vm#vmware-rest-run-a-vm) and your virtual machine runs VMware Tools.
How to change the upgrade policy
--------------------------------
### Change the upgrade policy to MANUAL
You can adjust the VMware Tools upgrade policy with the `vcenter_vm_tools` module.
```
- name: Change vm-tools upgrade policy to MANUAL
vmware.vmware_rest.vcenter_vm_tools:
vm: '{{ test_vm1_info.id }}'
upgrade_policy: MANUAL
register: _result
```
#### Result
```
{
"id": null,
"changed": true
}
```
### Change the upgrade policy to UPGRADE\_AT\_POWER\_CYCLE
```
- name: Change vm-tools upgrade policy to UPGRADE_AT_POWER_CYCLE
vmware.vmware_rest.vcenter_vm_tools:
vm: '{{ test_vm1_info.id }}'
upgrade_policy: UPGRADE_AT_POWER_CYCLE
register: _result
```
#### Result
```
{
"id": null,
"changed": true
}
```
How to run a virtual machine
============================
* [Introduction](#introduction)
* [Power information](#power-information)
+ [Result](#result)
* [How to start a virtual machine](#how-to-start-a-virtual-machine)
+ [Result](#id1)
* [How to wait until my virtual machine is ready](#how-to-wait-until-my-virtual-machine-is-ready)
+ [Result](#id2)
Introduction
------------
This section covers the power management of your virtual machine.
Power information
-----------------
Use `vcenter_vm_power_info` to retrieve the power state of the VM.
```
- name: Get guest power information
vmware.vmware_rest.vcenter_vm_power_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"value": {
"state": "POWERED_ON"
},
"changed": false
}
```
How to start a virtual machine
------------------------------
Use the `vcenter_vm_power` module to start your VM:
```
- name: Turn the power of the VM on
vmware.vmware_rest.vcenter_vm_power:
state: start
vm: '{{ test_vm1_info.id }}'
```
### Result
```
{
"changed": false
}
```
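The same module can also power the VM off. This is a hedged sketch that mirrors the task above and assumes `state: stop` is accepted:
```
- name: Turn the power of the VM off
  vmware.vmware_rest.vcenter_vm_power:
    state: stop    # assumption: stop is a valid state, mirroring the start call above
    vm: '{{ test_vm1_info.id }}'
```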
How to wait until my virtual machine is ready
---------------------------------------------
If your virtual machine runs VMware Tools, you can build a loop around the `vcenter_vm_tools_info` module:
```
- name: Wait until my VM is ready
vmware.vmware_rest.vcenter_vm_tools_info:
vm: '{{ test_vm1_info.id }}'
register: vm_tools_info
until:
- vm_tools_info is not failed
- vm_tools_info.value.run_state == "RUNNING"
retries: 60
delay: 5
```
### Result
```
{
"value": {
"auto_update_supported": false,
"upgrade_policy": "MANUAL",
"install_attempt_count": 0,
"version_status": "UNMANAGED",
"version_number": 10346,
"run_state": "RUNNING",
"version": "10346",
"install_type": "OPEN_VM_TOOLS"
},
"changed": false
}
```
How to configure the vmware\_rest collection
============================================
* [Introduction](#introduction)
* [Environment variables](#environment-variables)
* [Module parameters](#module-parameters)
* [Ignore SSL certificate error](#ignore-ssl-certificate-error)
Introduction
------------
The vcenter\_rest modules need to be authenticated. There are two different ways to provide the credentials: environment variables or module parameters.
Environment variables
---------------------
Note
This solution requires that you call the modules from the local machine.
You need to export some environment variables in your shell before you call the modules.
```
$ export VMWARE_HOST=vcenter.test
$ export VMWARE_USER=myvcenter-user
$ export VMWARE_PASSWORD=mypassword
$ ansible-playbook my-playbook.yaml
```
Module parameters
-----------------
All the vcenter\_rest modules accept the following arguments:
* `vcenter_hostname`
* `vcenter_username`
* `vcenter_password`
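For example, a minimal sketch of a task that authenticates through the module parameters instead of the environment (the values are the same placeholders as in the export example above):
```
- name: List the datacenters, authenticating via module parameters
  vmware.vmware_rest.vcenter_datacenter_info:
    vcenter_hostname: vcenter.test
    vcenter_username: myvcenter-user
    vcenter_password: mypassword
  register: my_datacenters
```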
Ignore SSL certificate error
----------------------------
It’s common to run a test environment without a proper SSL certificate configuration.
To ignore the SSL error, you can use the `vcenter_validate_certs: no` argument or `export VMWARE_VALIDATE_CERTS=no` to set the environment variable.
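For example, a hedged sketch of a task that relies on the environment variables for the credentials and only disables the certificate validation:
```
- name: List the datacenters, ignoring the self-signed certificate
  vmware.vmware_rest.vcenter_datacenter_info:
    vcenter_validate_certs: no    # only for test environments without a proper certificate
  register: my_datacenters
```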
How to get information from a running virtual machine
=====================================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to collect information](#how-to-collect-information)
+ [Filesystem](#filesystem)
- [Result](#result)
+ [Guest identity](#guest-identity)
- [Result](#id1)
+ [Network](#network)
- [Result](#id2)
+ [Network interfaces](#network-interfaces)
- [Result](#id3)
+ [Network routes](#network-routes)
- [Result](#id4)
Introduction
------------
This section shows you how to collect information from a running virtual machine.
Scenario requirements
---------------------
You’ve already followed [How to run a virtual machine](run_a_vm#vmware-rest-run-a-vm) and your virtual machine runs VMware Tools.
How to collect information
--------------------------
In this example, we use the `vcenter_vm_guest_*` modules to collect information about the associated resources.
### Filesystem
Here we use `vcenter_vm_guest_localfilesystem_info` to retrieve the details about the filesystem of the guest. In this example we also use a `retries` loop. VMware Tools may take a bit of time to start, and the loop gives the VM that extra time.
```
- name: Get guest filesystem information
vmware.vmware_rest.vcenter_vm_guest_localfilesystem_info:
vm: '{{ test_vm1_info.id }}'
register: _result
until:
- _result is not failed
retries: 60
delay: 5
```
#### Result
```
{
"value": [
{
"value": {
"mappings": [],
"free_space": 774766592,
"capacity": 2515173376
},
"key": "/"
}
],
"changed": false
}
```
### Guest identity
You can use `vcenter_vm_guest_identity_info` to get details like the OS family or the hostname of the running VM.
```
- name: Get guest identity information
vmware.vmware_rest.vcenter_vm_guest_identity_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
#### Result
```
{
"value": {
"full_name": {
"args": [],
"default_message": "Red Hat Fedora (64-bit)",
"id": "vmsg.guestos.fedora64Guest.label"
},
"name": "FEDORA_64",
"ip_address": "192.168.122.242",
"family": "LINUX",
"host_name": "localhost.localdomain"
},
"changed": false
}
```
### Network
`vcenter_vm_guest_networking_info` will return the OS network configuration.
```
- name: Get guest networking information
vmware.vmware_rest.vcenter_vm_guest_networking_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
#### Result
```
{
"value": {
"dns": {
"ip_addresses": [
"10.0.2.3"
],
"search_domains": [
"localdomain"
]
},
"dns_values": {
"domain_name": "localdomain",
"host_name": "localhost.localdomain"
}
},
"changed": false
}
```
### Network interfaces
`vcenter_vm_guest_networking_interfaces_info` will return a list of NIC configurations.
See also [How to attach a VM to a network](vm_hardware_tuning#vmware-rest-attach-a-network).
```
- name: Get guest network interfaces information
vmware.vmware_rest.vcenter_vm_guest_networking_interfaces_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
#### Result
```
{
"value": [
{
"mac_address": "00:50:56:b3:49:5c",
"ip": {
"ip_addresses": [
{
"ip_address": "192.168.122.242",
"prefix_length": 24,
"state": "PREFERRED"
},
{
"ip_address": "fe80::b8d0:511b:897f:65a2",
"prefix_length": 64,
"state": "UNKNOWN"
}
]
},
"nic": "4000"
}
],
"changed": false
}
```
### Network routes
Use `vcenter_vm_guest_networking_routes_info` to explore the route table of your virtual machine.
```
- name: Get guest network routes information
vmware.vmware_rest.vcenter_vm_guest_networking_routes_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
#### Result
```
{
"value": [
{
"gateway_address": "192.168.122.1",
"interface_index": 0,
"prefix_length": 0,
"network": "0.0.0.0"
},
{
"interface_index": 0,
"prefix_length": 24,
"network": "192.168.122.0"
},
{
"interface_index": 0,
"prefix_length": 64,
"network": "fe80::"
},
{
"interface_index": 0,
"prefix_length": 128,
"network": "fe80::b8d0:511b:897f:65a2"
},
{
"interface_index": 0,
"prefix_length": 8,
"network": "ff00::"
}
],
"changed": false
}
```
How to create a Virtual Machine
===============================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to create a virtual machine](#id1)
+ [Result](#result)
Introduction
------------
This section shows you how to use Ansible to create a virtual machine.
Scenario requirements
---------------------
You’ve already followed [How to collect information about your environment](collect_information#vmware-rest-collect-info) and you’ve got the following variables defined:
* `my_cluster_info`
* `my_datastore`
* `my_virtual_machine_folder`
How to create a virtual machine
-------------------------------
In this example, we will use the `vcenter_vm` module to create a new guest.
```
- name: Create a VM
vmware.vmware_rest.vcenter_vm:
placement:
cluster: "{{ my_cluster_info.id }}"
datastore: "{{ my_datastore.datastore }}"
folder: "{{ my_virtual_machine_folder.folder }}"
resource_pool: "{{ my_cluster_info.value.resource_pool }}"
name: test_vm1
guest_OS: DEBIAN_8_64
hardware_version: VMX_11
memory:
hot_add_enabled: true
size_MiB: 1024
register: _result
```
### Result
```
{
"value": {
"instant_clone_frozen": false,
"cdroms": [],
"memory": {
"size_MiB": 1024,
"hot_add_enabled": true
},
"disks": [
{
"value": {
"scsi": {
"bus": 0,
"unit": 0
},
"backing": {
"vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
"type": "VMDK_FILE"
},
"label": "Hard disk 1",
"type": "SCSI",
"capacity": 17179869184
},
"key": "2000"
}
],
"parallel_ports": [],
"sata_adapters": [],
"cpu": {
"hot_remove_enabled": false,
"count": 1,
"hot_add_enabled": false,
"cores_per_socket": 1
},
"scsi_adapters": [
{
"value": {
"scsi": {
"bus": 0,
"unit": 7
},
"label": "SCSI controller 0",
"sharing": "NONE",
"type": "PVSCSI"
},
"key": "1000"
}
],
"power_state": "POWERED_OFF",
"floppies": [],
"identity": {
"name": "test_vm1",
"instance_uuid": "5033c296-6954-64df-faca-d001de53763d",
"bios_uuid": "42330d17-e603-d925-fa4b-18827dbc1409"
},
"nvme_adapters": [],
"name": "test_vm1",
"nics": [],
"boot": {
"delay": 0,
"retry_delay": 10000,
"enter_setup_mode": false,
"type": "BIOS",
"retry": false
},
"serial_ports": [],
"boot_devices": [],
"guest_OS": "DEBIAN_8_64",
"hardware": {
"upgrade_policy": "NEVER",
"upgrade_status": "NONE",
"version": "VMX_11"
}
},
"id": "vm-1650",
"changed": true
}
```
Note
`vcenter_vm` accepts more parameters, however you may prefer to start with a simple VM and use the `vcenter_vm_hardware` modules to tune it up afterwards. This makes it easier to identify a potentially problematic step.
How to install the vmware\_rest collection
==========================================
* [Requirements](#requirements)
* [aiohttp](#aiohttp)
* [Installation](#installation)
Requirements
------------
The collection depends on:
* Ansible 2.9.10 or greater
* Python 3.6 or greater
aiohttp
-------
[aiohttp](https://docs.aiohttp.org/en/stable/) is the only dependency of the collection. You can install it with `pip` if you use a virtualenv to run Ansible.
```
$ pip install aiohttp
```
Or using an RPM.
```
$ sudo dnf install python3-aiohttp
```
Installation
------------
The best option to install the collection is to use the `ansible-galaxy` command:
```
$ ansible-galaxy collection install vmware.vmware_rest
```
How to collect information about your environment
=================================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to collect information](#how-to-collect-information)
+ [Datacenter](#datacenter)
- [Result](#result)
+ [Cluster](#cluster)
- [Result](#id1)
- [Result](#id2)
+ [Datastore](#datastore)
- [Result](#id3)
+ [Folder](#folder)
- [Result](#id4)
- [Result](#id5)
Introduction
------------
This section shows you how to utilize Ansible to collect information about your environment. This information is useful for the other tutorials.
Scenario requirements
---------------------
In this scenario we’ve got a vCenter with an ESXi host.
Our environment is pre-initialized with the following elements:
* A datacenter called `my_dc`
* A cluster called `my_cluster`
* An ESXi host called `esxi1` in the cluster
* Two datastores on the ESXi: `rw_datastore` and `ro_datastore`
* A dvswitch based guest network
Finally, we use the environment variables to authenticate ourselves as explained in [How to configure the vmware\_rest collection](authentication#vmware-rest-authentication).
How to collect information
--------------------------
In these examples, we use the `vcenter_*_info` modules to collect information about the associated resources.
All these modules return a `value` key. Depending on the context, this `value` key will be either a list or a dictionary.
### Datacenter
Here we use the `vcenter_datacenter_info` module to list all the datacenters:
```
- name: collect a list of the datacenters
vmware.vmware_rest.vcenter_datacenter_info:
register: my_datacenters
```
#### Result
As expected, the `value` key of the output is a list.
```
{
"value": [
{
"name": "my_dc",
"datacenter": "datacenter-1630"
}
],
"changed": false
}
```
### Cluster
Here we do the same with `vcenter_cluster_info`:
```
- name: Build a list of all the clusters
vmware.vmware_rest.vcenter_cluster_info:
register: all_the_clusters
```
#### Result
```
{
"value": [
{
"drs_enabled": false,
"cluster": "domain-c1636",
"name": "my_cluster",
"ha_enabled": false
}
],
"changed": false
}
```
And we can also fetch the details about a specific cluster, with the `cluster` parameter:
```
- name: Retrieve details about the first cluster
vmware.vmware_rest.vcenter_cluster_info:
cluster: "{{ all_the_clusters.value[0].cluster }}"
register: my_cluster_info
```
#### Result
And the `value` key of the output is this time a dictionary.
```
{
"value": {
"name": "my_cluster",
"resource_pool": "resgroup-1637"
},
"id": "domain-c1636",
"changed": false
}
```
### Datastore
Here we use `vcenter_datastore_info` to get a list of all the datastores:
```
- name: Retrieve a list of all the datastores
vmware.vmware_rest.vcenter_datastore_info:
register: my_datastores
```
#### Result
```
{
"value": [
{
"datastore": "datastore-1644",
"name": "local",
"type": "VMFS",
"free_space": 13523484672,
"capacity": 15032385536
},
{
"datastore": "datastore-1645",
"name": "ro_datastore",
"type": "NFS",
"free_space": 24638349312,
"capacity": 26831990784
},
{
"datastore": "datastore-1646",
"name": "rw_datastore",
"type": "NFS",
"free_space": 24638349312,
"capacity": 26831990784
}
],
"changed": false
}
```
### Folder
And here again, you use the `vcenter_folder_info` module to retrieve a list of all the folders.
```
- name: Build a list of all the folders
vmware.vmware_rest.vcenter_folder_info:
register: my_folders
```
#### Result
```
{
"value": [
{
"folder": "group-d1",
"name": "Datacenters",
"type": "DATACENTER"
}
],
"changed": false
}
```
Most of the time, you will just want one type of folder. In this case we can use filters to reduce the amount of data collected. Most of the `_info` modules come with similar filters.
```
- name: Build a list of all the folders with the type VIRTUAL_MACHINE and called vm
vmware.vmware_rest.vcenter_folder_info:
filter_type: VIRTUAL_MACHINE
filter_names:
- vm
register: my_folders
```
#### Result
```
{
"value": [
{
"folder": "group-v1631",
"name": "vm",
"type": "VIRTUAL_MACHINE"
}
],
"changed": false
}
```
Retrieve information from a specific VM
=======================================
* [Introduction](#introduction)
* [Scenario requirements](#scenario-requirements)
* [How to collect virtual machine information](#how-to-collect-virtual-machine-information)
+ [List the VM](#list-the-vm)
+ [Result](#result)
+ [Collect the details about a specific VM](#collect-the-details-about-a-specific-vm)
+ [Result](#id1)
+ [Get the hardware version of a specific VM](#get-the-hardware-version-of-a-specific-vm)
+ [Result](#id2)
+ [List the SCSI adapter(s) of a specific VM](#list-the-scsi-adapter-s-of-a-specific-vm)
+ [Result](#id3)
+ [List the CDROM drive(s) of a specific VM](#list-the-cdrom-drive-s-of-a-specific-vm)
+ [Result](#id4)
+ [Get the memory information of the VM](#get-the-memory-information-of-the-vm)
+ [Result](#id5)
- [Get the storage policy of the VM](#get-the-storage-policy-of-the-vm)
+ [Result](#id6)
- [Get the disk information of the VM](#get-the-disk-information-of-the-vm)
+ [Result](#id7)
Introduction
------------
This section shows you how to use Ansible to retrieve information about a specific virtual machine.
Scenario requirements
---------------------
You’ve already followed [How to create a Virtual Machine](create_vm#vmware-rest-create-vm) and created a new VM called `test_vm1`.
How to collect virtual machine information
------------------------------------------
### List the VM
In this example, we use the `vcenter_vm_info` module to collect information about our new VM.
In this example, we start by asking for a list of VMs. We use a filter to limit the results to just the VM called `test_vm1`. So we are in a list context, with a single entry in the `value` key.
```
- name: Look up the VM called test_vm1 in the inventory
register: search_result
vmware.vmware_rest.vcenter_vm_info:
filter_names:
- test_vm1
```
### Result
As expected, we get a list. And thanks to our filter, we just get one entry.
```
{
"value": [
{
"memory_size_MiB": 1024,
"vm": "vm-1650",
"name": "test_vm1",
"power_state": "POWERED_OFF",
"cpu_count": 1
}
],
"changed": false
}
```
### Collect the details about a specific VM
For the next steps, we pass the ID of the VM through the `vm` parameter. This allows us to collect more details about this specific VM.
```
- name: Collect information about a specific VM
vmware.vmware_rest.vcenter_vm_info:
vm: '{{ search_result.value[0].vm }}'
register: test_vm1_info
```
### Result
The result is a structure with all the details about our VM. You will note this is actually the same information that we got when we created the VM.
```
{
"value": {
"instant_clone_frozen": false,
"cdroms": [],
"memory": {
"size_MiB": 1024,
"hot_add_enabled": true
},
"disks": [
{
"value": {
"scsi": {
"bus": 0,
"unit": 0
},
"backing": {
"vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
"type": "VMDK_FILE"
},
"label": "Hard disk 1",
"type": "SCSI",
"capacity": 17179869184
},
"key": "2000"
}
],
"parallel_ports": [],
"sata_adapters": [],
"cpu": {
"hot_remove_enabled": false,
"count": 1,
"hot_add_enabled": false,
"cores_per_socket": 1
},
"scsi_adapters": [
{
"value": {
"scsi": {
"bus": 0,
"unit": 7
},
"label": "SCSI controller 0",
"sharing": "NONE",
"type": "PVSCSI"
},
"key": "1000"
}
],
"power_state": "POWERED_OFF",
"floppies": [],
"identity": {
"name": "test_vm1",
"instance_uuid": "5033c296-6954-64df-faca-d001de53763d",
"bios_uuid": "42330d17-e603-d925-fa4b-18827dbc1409"
},
"nvme_adapters": [],
"name": "test_vm1",
"nics": [],
"boot": {
"delay": 0,
"retry_delay": 10000,
"enter_setup_mode": false,
"type": "BIOS",
"retry": false
},
"serial_ports": [],
"boot_devices": [],
"guest_OS": "DEBIAN_8_64",
"hardware": {
"upgrade_policy": "NEVER",
"upgrade_status": "NONE",
"version": "VMX_11"
}
},
"id": "vm-1650",
"changed": false
}
```
### Get the hardware version of a specific VM
We can also use all the `vcenter_vm_*_info` modules to retrieve a smaller amount of information. Here we use `vcenter_vm_hardware_info` to retrieve the hardware version of the VM.
```
- name: Collect the hardware information
vmware.vmware_rest.vcenter_vm_hardware_info:
vm: '{{ search_result.value[0].vm }}'
register: my_vm1_hardware_info
```
### Result
```
{
"value": {
"upgrade_policy": "NEVER",
"upgrade_status": "NONE",
"version": "VMX_11"
},
"changed": false
}
```
### List the SCSI adapter(s) of a specific VM
Here for instance, we list the SCSI adapter(s) of the VM:
```
- name: List the SCSI adapter of a given VM
vmware.vmware_rest.vcenter_vm_hardware_adapter_scsi_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
You can do the same for the SATA controllers with `vcenter_vm_hardware_adapter_sata_info`.
### Result
```
{
"value": [
{
"scsi": {
"bus": 0,
"unit": 7
},
"label": "SCSI controller 0",
"type": "PVSCSI",
"sharing": "NONE"
}
],
"changed": false
}
```
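For example, a minimal sketch of the equivalent SATA lookup, assuming the info module mirrors the SCSI one:
```
- name: List the SATA adapter of a given VM
  vmware.vmware_rest.vcenter_vm_hardware_adapter_sata_info:
    vm: '{{ test_vm1_info.id }}'
  register: _result
```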
### List the CDROM drive(s) of a specific VM
And we list its CDROM drives.
```
- name: List the cdrom devices on the guest
vmware.vmware_rest.vcenter_vm_hardware_cdrom_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"value": [],
"changed": false
}
```
### Get the memory information of the VM
Here we collect the memory information of the VM:
```
- name: Retrieve the memory information from the VM
vmware.vmware_rest.vcenter_vm_hardware_memory_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"value": {
"size_MiB": 1024,
"hot_add_enabled": true
},
"changed": false
}
```
#### Get the storage policy of the VM
We use the `vcenter_vm_storage_policy_info` module for that:
```
- name: Get VM storage policy
vmware.vmware_rest.vcenter_vm_storage_policy_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"value": {
"disks": []
},
"changed": false
}
```
#### Get the disk information of the VM
We use the `vcenter_vm_hardware_disk_info` for this operation:
```
- name: Retrieve the disk information from the VM
vmware.vmware_rest.vcenter_vm_hardware_disk_info:
vm: '{{ test_vm1_info.id }}'
register: _result
```
### Result
```
{
"value": [
{
"scsi": {
"bus": 0,
"unit": 0
},
"backing": {
"vmdk_file": "[local] test_vm1_8/test_vm1.vmdk",
"type": "VMDK_FILE"
},
"label": "Hard disk 1",
"type": "SCSI",
"capacity": 17179869184
}
],
"changed": false
}
```
Collection Index
================
These are the collections with docs hosted on [docs.ansible.com](https://docs.ansible.com/).
* [amazon.aws](amazon/aws/index#plugins-in-amazon-aws)
* [ansible.builtin](ansible/builtin/index#plugins-in-ansible-builtin)
* [ansible.netcommon](ansible/netcommon/index#plugins-in-ansible-netcommon)
* [ansible.posix](ansible/posix/index#plugins-in-ansible-posix)
* [ansible.utils](ansible/utils/index#plugins-in-ansible-utils)
* [ansible.windows](ansible/windows/index#plugins-in-ansible-windows)
* [arista.eos](arista/eos/index#plugins-in-arista-eos)
* [awx.awx](awx/awx/index#plugins-in-awx-awx)
* [azure.azcollection](azure/azcollection/index#plugins-in-azure-azcollection)
* [check\_point.mgmt](check_point/mgmt/index#plugins-in-check-point-mgmt)
* [chocolatey.chocolatey](chocolatey/chocolatey/index#plugins-in-chocolatey-chocolatey)
* [cisco.aci](cisco/aci/index#plugins-in-cisco-aci)
* [cisco.asa](cisco/asa/index#plugins-in-cisco-asa)
* [cisco.intersight](cisco/intersight/index#plugins-in-cisco-intersight)
* [cisco.ios](cisco/ios/index#plugins-in-cisco-ios)
* [cisco.iosxr](cisco/iosxr/index#plugins-in-cisco-iosxr)
* [cisco.meraki](cisco/meraki/index#plugins-in-cisco-meraki)
* [cisco.mso](cisco/mso/index#plugins-in-cisco-mso)
* [cisco.nso](cisco/nso/index#plugins-in-cisco-nso)
* [cisco.nxos](cisco/nxos/index#plugins-in-cisco-nxos)
* [cisco.ucs](cisco/ucs/index#plugins-in-cisco-ucs)
* [cloudscale\_ch.cloud](cloudscale_ch/cloud/index#plugins-in-cloudscale-ch-cloud)
* [community.aws](community/aws/index#plugins-in-community-aws)
* [community.azure](community/azure/index#plugins-in-community-azure)
* [community.crypto](community/crypto/index#plugins-in-community-crypto)
* [community.digitalocean](community/digitalocean/index#plugins-in-community-digitalocean)
* [community.docker](community/docker/index#plugins-in-community-docker)
* [community.fortios](community/fortios/index#plugins-in-community-fortios)
* [community.general](community/general/index#plugins-in-community-general)
* [community.google](community/google/index#plugins-in-community-google)
* [community.grafana](community/grafana/index#plugins-in-community-grafana)
* [community.hashi\_vault](community/hashi_vault/index#plugins-in-community-hashi-vault)
* [community.hrobot](community/hrobot/index#plugins-in-community-hrobot)
* [community.kubernetes](community/kubernetes/index#plugins-in-community-kubernetes)
* [community.kubevirt](community/kubevirt/index#plugins-in-community-kubevirt)
* [community.libvirt](community/libvirt/index#plugins-in-community-libvirt)
* [community.mongodb](community/mongodb/index#plugins-in-community-mongodb)
* [community.mysql](community/mysql/index#plugins-in-community-mysql)
* [community.network](community/network/index#plugins-in-community-network)
* [community.okd](community/okd/index#plugins-in-community-okd)
* [community.postgresql](community/postgresql/index#plugins-in-community-postgresql)
* [community.proxysql](community/proxysql/index#plugins-in-community-proxysql)
* [community.rabbitmq](community/rabbitmq/index#plugins-in-community-rabbitmq)
* [community.routeros](community/routeros/index#plugins-in-community-routeros)
* [community.skydive](community/skydive/index#plugins-in-community-skydive)
* [community.sops](community/sops/index#plugins-in-community-sops)
* [community.vmware](community/vmware/index#plugins-in-community-vmware)
* [community.windows](community/windows/index#plugins-in-community-windows)
* [community.zabbix](community/zabbix/index#plugins-in-community-zabbix)
* [containers.podman](containers/podman/index#plugins-in-containers-podman)
* [cyberark.conjur](cyberark/conjur/index#plugins-in-cyberark-conjur)
* [cyberark.pas](cyberark/pas/index#plugins-in-cyberark-pas)
* [dellemc.enterprise\_sonic](dellemc/enterprise_sonic/index#plugins-in-dellemc-enterprise-sonic)
* [dellemc.openmanage](dellemc/openmanage/index#plugins-in-dellemc-openmanage)
* [dellemc.os10](dellemc/os10/index#plugins-in-dellemc-os10)
* [dellemc.os6](dellemc/os6/index#plugins-in-dellemc-os6)
* [dellemc.os9](dellemc/os9/index#plugins-in-dellemc-os9)
* [f5networks.f5\_modules](f5networks/f5_modules/index#plugins-in-f5networks-f5-modules)
* [fortinet.fortimanager](fortinet/fortimanager/index#plugins-in-fortinet-fortimanager)
* [fortinet.fortios](fortinet/fortios/index#plugins-in-fortinet-fortios)
* [frr.frr](frr/frr/index#plugins-in-frr-frr)
* [gluster.gluster](gluster/gluster/index#plugins-in-gluster-gluster)
* [google.cloud](google/cloud/index#plugins-in-google-cloud)
* [hetzner.hcloud](hetzner/hcloud/index#plugins-in-hetzner-hcloud)
* [hpe.nimble](hpe/nimble/index#plugins-in-hpe-nimble)
* [ibm.qradar](ibm/qradar/index#plugins-in-ibm-qradar)
* [infinidat.infinibox](infinidat/infinibox/index#plugins-in-infinidat-infinibox)
* [inspur.sm](inspur/sm/index#plugins-in-inspur-sm)
* [junipernetworks.junos](junipernetworks/junos/index#plugins-in-junipernetworks-junos)
* [kubernetes.core](kubernetes/core/index#plugins-in-kubernetes-core)
* [mellanox.onyx](mellanox/onyx/index#plugins-in-mellanox-onyx)
* [netapp.aws](netapp/aws/index#plugins-in-netapp-aws)
* [netapp.azure](netapp/azure/index#plugins-in-netapp-azure)
* [netapp.cloudmanager](netapp/cloudmanager/index#plugins-in-netapp-cloudmanager)
* [netapp.elementsw](netapp/elementsw/index#plugins-in-netapp-elementsw)
* [netapp.ontap](netapp/ontap/index#plugins-in-netapp-ontap)
* [netapp.um\_info](netapp/um_info/index#plugins-in-netapp-um-info)
* [netapp\_eseries.santricity](netapp_eseries/santricity/index#plugins-in-netapp-eseries-santricity)
* [netbox.netbox](netbox/netbox/index#plugins-in-netbox-netbox)
* [ngine\_io.cloudstack](ngine_io/cloudstack/index#plugins-in-ngine-io-cloudstack)
* [ngine\_io.exoscale](ngine_io/exoscale/index#plugins-in-ngine-io-exoscale)
* [ngine\_io.vultr](ngine_io/vultr/index#plugins-in-ngine-io-vultr)
* [openstack.cloud](openstack/cloud/index#plugins-in-openstack-cloud)
* [openvswitch.openvswitch](openvswitch/openvswitch/index#plugins-in-openvswitch-openvswitch)
* [ovirt.ovirt](ovirt/ovirt/index#plugins-in-ovirt-ovirt)
* [purestorage.flasharray](purestorage/flasharray/index#plugins-in-purestorage-flasharray)
* [purestorage.flashblade](purestorage/flashblade/index#plugins-in-purestorage-flashblade)
* [sensu.sensu\_go](sensu/sensu_go/index#plugins-in-sensu-sensu-go)
* [servicenow.servicenow](servicenow/servicenow/index#plugins-in-servicenow-servicenow)
* [splunk.es](splunk/es/index#plugins-in-splunk-es)
* [t\_systems\_mms.icinga\_director](t_systems_mms/icinga_director/index#plugins-in-t-systems-mms-icinga-director)
* [theforeman.foreman](theforeman/foreman/index#plugins-in-theforeman-foreman)
* [vyos.vyos](vyos/vyos/index#plugins-in-vyos-vyos)
* [wti.remote](wti/remote/index#plugins-in-wti-remote)
Collections in the Frr Namespace
================================
These are the collections with docs hosted on [docs.ansible.com](https://docs.ansible.com/) in the **frr** namespace.
* [frr.frr](frr/index#plugins-in-frr-frr)
Frr.Frr
=======
Collection version 1.0.3
Plugin Index
------------
These are the plugins in the frr.frr collection
### Cliconf Plugins
* [frr](frr_cliconf#ansible-collections-frr-frr-frr-cliconf) – Use frr cliconf to run command on Free Range Routing platform
### Modules
* [frr\_bgp](frr_bgp_module#ansible-collections-frr-frr-frr-bgp-module) – Configure global BGP settings on Free Range Routing (FRR).
* [frr\_facts](frr_facts_module#ansible-collections-frr-frr-frr-facts-module) – Collect facts from remote devices running Free Range Routing (FRR).
See also
List of [collections](../../index#list-of-collections) with docs hosted here.
frr.frr.frr – Use frr cliconf to run command on Free Range Routing platform
===========================================================================
Note
This plugin is part of the [frr.frr collection](https://galaxy.ansible.com/frr/frr) (version 1.0.3).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install frr.frr`.
To use it in a playbook, specify: `frr.frr.frr`.
New in version 1.0.0: of frr.frr
Synopsis
--------
* This frr plugin provides low level abstraction APIs for sending and receiving CLI commands from FRR network devices.
### Authors
* Ansible Networking Team
frr.frr.frr\_bgp – Configure global BGP settings on Free Range Routing (FRR).
============================================================================
Note
This plugin is part of the [frr.frr collection](https://galaxy.ansible.com/frr/frr) (version 1.0.3).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install frr.frr`.
To use it in a playbook, specify: `frr.frr.frr_bgp`.
New in version 1.0.0: of frr.frr
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides configuration management of global BGP parameters on devices running Free Range Routing (FRR).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | Specifies the BGP related configuration. |
| | **address\_family** list / elements=dictionary | | Specifies BGP address family related configurations. |
| | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Type of address family to configure. |
| | | **neighbors** list / elements=dictionary | | Specifies BGP neighbor related configurations in Address Family configuration mode. |
| | | | **activate** boolean | **Choices:*** no
* yes
| Enable the address family for this neighbor. |
| | | | **maximum\_prefix** integer | | Maximum number of prefixes to accept from this peer. The range is from 1 to 4294967295. |
| | | | **neighbor** string / required | | Neighbor router address. |
| | | | **next\_hop\_self** boolean | **Choices:*** no
* yes
| Enable/disable the next hop calculation for this neighbor. |
| | | | **remove\_private\_as** boolean | **Choices:*** no
* yes
| Remove the private AS number from outbound updates. |
| | | | **route\_reflector\_client** boolean | **Choices:*** no
* yes
| Specify a neighbor as a route reflector client. |
| | | | **route\_server\_client** boolean | **Choices:*** no
* yes
| Specify a neighbor as a route server client. |
| | | **networks** list / elements=dictionary | | Specify networks to announce via BGP. For operation replace, this option is mutually exclusive with root level networks option. |
| | | | **masklen** integer / required | | Subnet mask length for the network to announce(e.g, 8, 16, 24, etc.). |
| | | | **prefix** string / required | | Network ID to announce via BGP. |
| | | | **route\_map** string | | Route map to modify the attributes. |
| | | **redistribute** list / elements=dictionary | | Specifies the redistribute information from another routing protocol. |
| | | | **id** string | | Specifies the instance ID/table ID for this protocol Valid for ospf and table |
| | | | **metric** integer | | Specifies the metric for redistributed routes. |
| | | | **protocol** string / required | **Choices:*** ospf
* ospf6
* eigrp
* isis
* table
* static
* connected
* sharp
* nhrp
* kernel
* babel
* rip
| Specifies the protocol for configuring redistribute information. |
| | | | **route\_map** string | | Specifies the route map reference. |
| | | **safi** string | **Choices:*** flowspec
* **unicast** ←
* multicast
* labeled-unicast
| Specifies the type of cast for the address family. |
| | **bgp\_as** integer / required | | Specifies the BGP Autonomous System (AS) number to configure on the device. |
| | **log\_neighbor\_changes** boolean | **Choices:*** no
* yes
| Enable/disable logging neighbor up/down and reset reason. |
| | **neighbors** list / elements=dictionary | | Specifies BGP neighbor related configurations. |
| | | **advertisement\_interval** integer | | Minimum interval between sending BGP routing updates for this neighbor. |
| | | **description** string | | Neighbor specific description. |
| | | **ebgp\_multihop** integer | | Specifies the maximum hop count for EBGP neighbors not on directly connected networks. The range is from 1 to 255. |
| | | **enabled** boolean | **Choices:*** no
* yes
| Administratively shutdown or enable a neighbor. |
| | | **local\_as** integer | | The local AS number for the neighbor. |
| | | **neighbor** string / required | | Neighbor router address. |
| | | **password** string | | Password to authenticate the BGP peer connection. |
| | | **peer\_group** string | | Name of the peer group that the neighbor is a member of. |
| | | **port** integer | | The TCP Port number to use for this neighbor. The range is from 0 to 65535. |
| | | **remote\_as** integer / required | | Remote AS of the BGP neighbor to configure. |
| | | **timers** dictionary | | Specifies BGP neighbor timer related configurations. |
| | | | **holdtime** integer / required | | Interval (in seconds) after not receiving a keepalive message that FRR declares a peer dead. The range is from 0 to 65535. |
| | | | **keepalive** integer / required | | Frequency (in seconds) with which the FRR sends keepalive messages to its peer. The range is from 0 to 65535. |
| | | **update\_source** string | | Source of the routing updates. |
| | **networks** list / elements=dictionary | | Specify networks to announce via BGP. For operation replace, this option is mutually exclusive with networks option under address\_family. For operation replace, if the device already has an address family activated, this option is not allowed. |
| | | **masklen** integer / required | | Subnet mask length for the network to announce(e.g, 8, 16, 24, etc.). |
| | | **prefix** string / required | | Network ID to announce via BGP. |
| | | **route\_map** string | | Route map to modify the attributes. |
| | **router\_id** string | | Configures the BGP routing process router-id value. |
| **operation** string | **Choices:*** **merge** ←
* replace
* override
* delete
| Specifies the operation to be performed on the BGP process configured on the device. In case of merge, the input configuration will be merged with the existing BGP configuration on the device. In case of replace, if there is a diff between the existing configuration and the input configuration, the existing configuration will be replaced by the input configuration for every option that has the diff. In case of override, all the existing BGP configuration will be removed from the device and replaced with the input configuration. In case of delete the existing BGP configuration will be removed from the device. |
Notes
-----
Note
* Tested against FRRouting 6.0.
Examples
--------
```
- name: configure global bgp as 64496
frr.frr.frr_bgp:
config:
bgp_as: 64496
router_id: 192.0.2.1
log_neighbor_changes: true
neighbors:
- neighbor: 192.51.100.1
remote_as: 64497
timers:
keepalive: 120
holdtime: 360
- neighbor: 198.51.100.2
remote_as: 64498
networks:
- prefix: 192.0.2.0
masklen: 24
route_map: RMAP_1
- prefix: 198.51.100.0
masklen: 24
address_family:
- afi: ipv4
safi: unicast
redistribute:
- protocol: ospf
id: 223
metric: 10
operation: merge
- name: Configure BGP neighbors
frr.frr.frr_bgp:
config:
bgp_as: 64496
neighbors:
- neighbor: 192.0.2.10
remote_as: 64496
password: ansible
description: IBGP_NBR_1
timers:
keepalive: 120
holdtime: 360
- neighbor: 192.0.2.15
remote_as: 64496
description: IBGP_NBR_2
advertisement_interval: 120
operation: merge
- name: Configure BGP neighbors under address family mode
frr.frr.frr_bgp:
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: multicast
neighbors:
- neighbor: 203.0.113.10
activate: yes
maximum_prefix: 250
- neighbor: 192.0.2.15
activate: yes
route_reflector_client: true
operation: merge
- name: Configure root-level networks for BGP
frr.frr.frr_bgp:
config:
bgp_as: 64496
networks:
- prefix: 203.0.113.0
masklen: 27
route_map: RMAP_1
- prefix: 203.0.113.32
masklen: 27
route_map: RMAP_2
operation: merge
- name: remove bgp as 64496 from config
frr.frr.frr_bgp:
config:
bgp_as: 64496
operation: delete
```
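As a further illustration, a hedged sketch of the `replace` operation using hypothetical values; per the `operation` description above, only the options that differ from the existing device configuration are replaced:
```
- name: Replace the BGP neighbor configuration for as 64496
  frr.frr.frr_bgp:
    config:
      bgp_as: 64496
      router_id: 192.0.2.1
      neighbors:
      - neighbor: 198.51.100.2
        remote_as: 64498
    operation: replace
```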
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['router bgp 64496', 'bgp router-id 192.0.2.1', 'neighbor 192.51.100.1 remote-as 64497', 'neighbor 192.51.100.1 timers 120 360', 'neighbor 198.51.100.2 remote-as 64498', 'address-family ipv4 unicast', 'redistribute ospf 223 metric 10', 'exit-address-family', 'bgp log-neighbor-changes', 'network 192.0.2.0/24 route-map RMAP\_1', 'network 198.51.100.0/24', 'exit'] |
### Authors
* Nilashish Chakraborty (@NilashishC)
frr.frr.frr\_facts – Collect facts from remote devices running Free Range Routing (FRR).
========================================================================================
Note
This plugin is part of the [frr.frr collection](https://galaxy.ansible.com/frr/frr) (version 1.0.3).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install frr.frr`.
To use it in a playbook, specify: `frr.frr.frr_facts`.
New in version 1.0.0: of frr.frr
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Collects a base set of device facts from a remote device that is running FRR. This module prepends all of the base network fact keys with `ansible_net_<fact>`. The facts module will always collect a base set of facts from the device and can enable or disable collection of additional facts.
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **gather\_subset** list / elements=string | **Default:**"!config" | When supplied, this argument restricts the facts collected to a given subset. Possible values for this argument include `all`, `hardware`, `config`, and `interfaces`. Specify a list of values to include a larger subset. Use a value with an initial `!` to collect all facts except that subset. |
Notes
-----
Note
* Tested against FRR 6.0.
Examples
--------
```
- name: Collect all facts from the device
frr.frr.frr_facts:
gather_subset: all
- name: Collect only the config and default facts
frr.frr.frr_facts:
gather_subset:
- config
- name: Collect the config and hardware facts
frr.frr.frr_facts:
gather_subset:
- config
- hardware
- name: Do not collect hardware facts
frr.frr.frr_facts:
gather_subset:
- '!hardware'
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **ansible\_net\_all\_ipv4\_addresses** list / elements=string | when interfaces is configured | All IPv4 addresses configured on the device |
| **ansible\_net\_all\_ipv6\_addresses** list / elements=string | when interfaces is configured | All IPv6 addresses configured on the device |
| **ansible\_net\_api** string | always | The name of the transport |
| **ansible\_net\_config** string | when config is configured | The current active config from the device |
| **ansible\_net\_gather\_subset** list / elements=string | always | The list of fact subsets collected from the device |
| **ansible\_net\_hostname** string | always | The configured hostname of the device |
| **ansible\_net\_interfaces** dictionary | when interfaces is configured | A hash of all interfaces running on the system |
| **ansible\_net\_mem\_stats** dictionary | when hardware is configured | The memory statistics fetched from the device |
| **ansible\_net\_mpls\_ldp\_neighbors** dictionary | when interfaces is configured and LDP daemon is running on the device | The list of MPLS LDP neighbors from the remote device |
| **ansible\_net\_python\_version** string | always | The Python version that the Ansible controller is using |
| **ansible\_net\_version** string | always | The FRR version running on the remote device |
### Authors
* Nilashish Chakraborty (@NilashishC)
Collections in the Arista Namespace
===================================
These are the collections with docs hosted on [docs.ansible.com](https://docs.ansible.com/) in the **arista** namespace.
* [arista.eos](eos/index#plugins-in-arista-eos)
arista.eos.eos\_banner – Manage multiline banners on Arista EOS devices
=======================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_banner`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This will configure both login and motd banners on remote devices running Arista EOS. It allows playbooks to add or remove banner text from the active running configuration.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **banner** string / required | **Choices:*** login
* motd
| Specifies which banner that should be configured on the remote device. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
| Specifies whether or not the configuration is present in the current devices active running configuration. |
| **text** string | | The banner text that should be present in the remote device running configuration. This argument accepts a multiline string. Requires *state=present*. |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure the login banner
arista.eos.eos_banner:
banner: login
text: |
this is my login banner
that contains a multiline
string
state: present
- name: remove the motd banner
arista.eos.eos_banner:
banner: motd
state: absent
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['banner login', 'this is my login banner', 'that contains a multiline', 'string', 'EOF'] |
| **session\_name** string | if changes | The EOS config session name used to load the configuration **Sample:** ansible\_1479315771 |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos\_l3\_interfaces – L3 interfaces resource module
==============================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_l3_interfaces`.
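If you pin collection dependencies in a requirements file, a hypothetical `requirements.yml` equivalent of the install command above might look like this:
```
# requirements.yml -- install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: arista.eos
    version: ">=2.2.0"  # illustrative version pin
```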
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of Layer 3 interfaces on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of dictionaries of Layer 3 interface options. |
| | **ipv4** list / elements=dictionary | | List of IPv4 addresses to be set for the Layer 3 interface mentioned in *name* option. |
| | | **address** string | | IPv4 address to be set in the format <ipv4 address>/<mask>, e.g. 192.0.2.1/24, or `dhcp` to query DHCP for an IP address. |
| | | **secondary** boolean | **Choices:*** no
* yes
| Whether or not this address is a secondary address. |
| | | **virtual** boolean | **Choices:*** no
* yes
| Whether or not this address is a virtual address. |
| | **ipv6** list / elements=dictionary | | List of IPv6 addresses to be set for the Layer 3 interface mentioned in *name* option. |
| | | **address** string | | IPv6 address to be set in the format <ipv6 address>/<mask>, e.g. 2001:db8:2201:1::1/64, or `auto-config` to use SLAAC to choose an address. |
| | **name** string / required | | Full name of the interface, e.g. Ethernet1. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^interface**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* parsed
* gathered
* rendered
| The state of the configuration after module completion |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos). ‘eos\_l2\_interfaces’/‘eos\_interfaces’ should be used to prepare the interfaces before applying L3 configurations with this module (eos\_l3\_interfaces), as sketched below.
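A minimal sketch of that ordering, assuming placeholder interface names and addresses: the interface is enabled with eos\_interfaces first, then addressed with this module.
```
- name: prepare the interface (enable it) before any L3 configuration
  arista.eos.eos_interfaces:
    config:
      - name: Ethernet1  # placeholder interface
        enabled: true
    state: merged

- name: apply the L3 configuration to the prepared interface
  arista.eos.eos_l3_interfaces:
    config:
      - name: Ethernet1
        ipv4:
          - address: 192.0.2.1/24  # placeholder address
    state: merged
```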
Examples
--------
```
# Using deleted
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 192.0.2.12/24
# !
# interface Ethernet2
# ipv6 address 2001:db8::1/64
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
- name: Delete L3 attributes of given interfaces.
arista.eos.eos_l3_interfaces:
config:
- name: Ethernet1
- name: Ethernet2
state: deleted
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# Using merged
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 192.0.2.12/24
# !
# interface Ethernet2
# ipv6 address 2001:db8::1/64
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
- name: Merge provided configuration with device configuration.
arista.eos.eos_l3_interfaces:
config:
- name: Ethernet1
ipv4:
- address: 198.51.100.14/24
- name: Ethernet2
ipv4:
- address: 203.0.113.27/24
state: merged
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 198.51.100.14/24
# !
# interface Ethernet2
# ip address 203.0.113.27/24
# ipv6 address 2001:db8::1/64
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# Using overridden
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 192.0.2.12/24
# !
# interface Ethernet2
# ipv6 address 2001:db8::1/64
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
- name: Override device configuration of all L3 interfaces on device with provided
configuration.
arista.eos.eos_l3_interfaces:
config:
- name: Ethernet1
ipv6:
- address: 2001:db8:feed::1/96
- name: Management1
ipv4:
- address: dhcp
ipv6: auto-config
state: overridden
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ipv6 address 2001:db8:feed::1/96
# !
# interface Ethernet2
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# Using replaced
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 192.0.2.12/24
# !
# interface Ethernet2
# ipv6 address 2001:db8::1/64
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
- name: Replace device configuration of specified L3 interfaces with provided configuration.
arista.eos.eos_l3_interfaces:
config:
- name: Ethernet2
ipv4:
- address: 203.0.113.27/24
state: replaced
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 192.0.2.12/24
# !
# interface Ethernet2
# ip address 203.0.113.27/24
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# Using parsed:
# parsed.cfg
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# ip address 198.51.100.14/24
# !
# interface Ethernet2
# ip address 203.0.113.27/24
# !
- name: Use parsed to convert native configs to structured data
arista.eos.eos_l3_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# parsed:
# - name: Ethernet1
# ipv4:
# - address: 198.51.100.14/24
# - name: Ethernet2
# ipv4:
# - address: 203.0.113.27/24
# Using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_l3_interfaces:
config:
- name: Ethernet1
ipv4:
- address: 198.51.100.14/24
- name: Ethernet2
ipv4:
- address: 203.0.113.27/24
state: rendered
# Output
# ------------
#rendered:
# - "interface Ethernet1"
# - "ip address 198.51.100.14/24"
# - "interface Ethernet2"
# - "ip address 203.0.113.27/24"
# using gathered:
# Native Config:
# veos#show running-config | section interface
# interface Ethernet1
# ip address 198.51.100.14/24
# !
# interface Ethernet2
# ip address 203.0.113.27/24
# !
- name: Gather l3 interfaces facts from the device
arista.eos.eos_l3_interfaces:
state: gathered
# gathered:
# - name: Ethernet1
# ipv4:
# - address: 198.51.100.14/24
# - name: Ethernet2
# ipv4:
# - address: 203.0.113.27/24
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet2', 'ip address 192.0.2.12/24'] |
### Authors
* Nathaniel Case (@qalthos)
arista.eos.eos\_ospfv2 – OSPFv2 resource module
===============================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_ospfv2`.
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module configures and manages the attributes of ospfv2 on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | A dictionary of OSPFv2 configurations. |
| | **processes** list / elements=dictionary | | A list of dictionaries specifying the OSPFv2 processes. |
| | | **adjacency** dictionary | | Configure adjacency options for OSPF instance. |
| | | | **exchange\_start** dictionary | | Configure exchange-start options for OSPF instance. |
| | | | | **threshold** integer | | Number of peers to bring up simultaneously. |
| | | **areas** list / elements=dictionary | | Specifies the configuration for OSPF areas |
| | | | **area\_id** string | | Specifies a 32 bit number expressed in decimal or dotted-decimal notation. |
| | | | **default\_cost** integer | | Specify the cost for default summary route in stub/NSSA area. |
| | | | **filter** dictionary | | Specify the filter for incoming summary LSAs. |
| | | | | **address** string | | IP address. |
| | | | | **prefix\_list** string | | Specify list to filter for incoming LSAs. |
| | | | | **subnet\_address** string | | IP address with mask length |
| | | | | **subnet\_mask** string | | IP subnet mask |
| | | | **not\_so\_stubby** dictionary | | Configures NSSA parameters. |
| | | | | **default\_information\_originate** dictionary | | Originate default Type 7 LSA. |
| | | | | | **metric** integer | | Metric for default route. |
| | | | | | **metric\_type** integer | | Metric type for default route. |
| | | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Limit default advertisement to this NSSA area. |
| | | | | **lsa** boolean | **Choices:*** no
* yes
| lsa parameters |
| | | | | **no\_summary** boolean | **Choices:*** no
* yes
| Filter all type-3 LSAs in the nssa area. |
| | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Disable Type-7 LSA p-bit setting |
| | | | | **set** boolean | **Choices:*** no
* yes
| Set config up to not-so-stubby |
| | | | **nssa** dictionary | | Configures NSSA parameters. |
| | | | | **default\_information\_originate** dictionary | | Originate default Type 7 LSA. |
| | | | | | **metric** integer | | Metric for default route. |
| | | | | | **metric\_type** integer | | Metric type for default route. |
| | | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Limit default advertisement to this NSSA area. |
| | | | | **no\_summary** boolean | **Choices:*** no
* yes
| Filter all type-3 LSAs in the nssa area. |
| | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Disable Type-7 LSA p-bit setting |
| | | | | **set** boolean | **Choices:*** no
* yes
| Set config up to nssa |
| | | | **range** dictionary | | Configure route summarization. |
| | | | | **address** string | | IP address. |
| | | | | **advertise** boolean | **Choices:*** no
* yes
| Enable Advertisement of the range. |
| | | | | **cost** integer | | Configures the metric. |
| | | | | **subnet\_address** string | | IP address with mask length |
| | | | | **subnet\_mask** string | | IP subnet mask |
| | | | **stub** dictionary | | Stub area. |
| | | | | **no\_summary** boolean | **Choices:*** no
* yes
| If false, filter all type-3 LSAs in the stub area. |
| | | | | **set** boolean | **Choices:*** no
* yes
| When true sets the stub config alone. |
| | | **auto\_cost** dictionary | | Set auto-cost. |
| | | | **reference\_bandwidth** integer | | reference bandwidth in megabits per sec. |
| | | **bfd** dictionary | | Enable BFD. |
| | | | **all\_interfaces** boolean | **Choices:*** no
* yes
| Enable BFD on all interfaces. |
| | | **default\_information** dictionary | | Control distribution of default information. |
| | | | **always** boolean | **Choices:*** no
* yes
| Always advertise default route. |
| | | | **metric** integer | | Metric for default route. |
| | | | **metric\_type** integer | | Metric type for default route. |
| | | | **originate** boolean | **Choices:*** no
* yes
| Distribute a default route. |
| | | | **route\_map** string | | Specify which route-map to use. |
| | | **default\_metric** integer | | Configure the default metric for redistributed routes |
| | | **distance** dictionary | | Specifies the administrative distance for routes. |
| | | | **external** integer | | Routes external to the area |
| | | | **inter\_area** integer | | Routes from other areas |
| | | | **intra\_area** integer | | Routes with in an area |
| | | **distribute\_list** dictionary | | Specifies the list of routes to be filtered. |
| | | | **prefix\_list** string | | prefix list to be filtered |
| | | | **route\_map** string | | route map to be filtered |
| | | **dn\_bit\_ignore** boolean | **Choices:*** no
* yes
| If true, disable the dn-bit check for Type-3 LSAs in non-default VRFs. |
| | | **fips\_restrictions** string | | Use FIPS compliant algorithms |
| | | **graceful\_restart** dictionary | | Enable graceful restart mode. |
| | | | **grace\_period** integer | | Specify maximum time to wait for graceful-restart to complete. |
| | | | **set** boolean | **Choices:*** no
* yes
| When true, sets the graceful\_restart config alone. |
| | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| If true, enable the graceful restart helper. |
| | | **log\_adjacency\_changes** dictionary | | To configure link-state changes and transitions of OSPFv2 neighbors. |
| | | | **detail** boolean | **Choices:*** no
* yes
| If true, configures the switch to log all link-state changes. |
| | | **max\_lsa** dictionary | | Specifies the switch behavior on reaching max lsa count. |
| | | | **count** integer | | maximum count of lsas. |
| | | | **ignore\_count** integer | | No. of times the switch can shut down temporarily on warning |
| | | | **ignore\_time** integer | | Time in minutes for which the switch should be shut down on max-lsa warning. |
| | | | **reset\_time** integer | | Time in minutes, after which the shutdown counter resets. |
| | | | **threshold** integer | | Percentage of <count> at which a warning should be raised. |
| | | | **warning** boolean | **Choices:*** no
* yes
| Only give warning message when limit is exceeded |
| | | **max\_metric** dictionary | | Set maximum metric. |
| | | | **router\_lsa** dictionary | | Maximum metric in self-originated router-LSAs. |
| | | | | **external\_lsa** dictionary | | Override external-lsa metric with max-metric value. |
| | | | | | **max\_metric\_value** integer | | Set max metric value for external LSAs. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| Set external-lsa attribute. |
| | | | | **include\_stub** boolean | **Choices:*** no
* yes
| Set maximum metric for stub links in router-LSAs. |
| | | | | **on\_startup** dictionary | | Set maximum metric temporarily after reboot. |
| | | | | | **wait\_period** integer | | Wait period in seconds after startup. |
| | | | | **set** boolean | **Choices:*** no
* yes
| Set router-lsa attribute. |
| | | | | **summary\_lsa** dictionary | | Override summary-lsa metric with max-metric value. |
| | | | | | **max\_metric\_value** integer | | Set max metric value for external LSAs. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| Set external-lsa attribute. |
| | | **maximum\_paths** integer | | Maximum number of next-hops in an ECMP route. |
| | | **mpls\_ldp** boolean | **Choices:*** no
* yes
| mpls ldp sync configuration. |
| | | **networks** list / elements=dictionary | | Configure routing for a network. |
| | | | **area** string | | Configure OSPF area. |
| | | | **mask** string | | Network Wildcard Mask. |
| | | | **network\_address** string | | Network Address. |
| | | | **prefix** string | | Prefix. |
| | | **passive\_interface** dictionary | | Include interface but without actively running OSPF. |
| | | | **default** boolean | **Choices:*** no
* yes
| If true, set all interfaces to passive by default. |
| | | | **interface\_list** string | | Interface range. |
| | | **point\_to\_point** boolean | **Choices:*** no
* yes
| Configure Point-to-point specific features. |
| | | **process\_id** integer | | ID of OSPFV2 process. |
| | | **redistribute** list / elements=dictionary | | Specifies the routes to be redistributed |
| | | | **isis\_level** string | | ISIS levels. |
| | | | **route\_map** string | | Specify which route map to use. |
| | | | **routes** string | | Route types (BGP, ISIS, connected, etc.). |
| | | **retransmission\_threshold** integer | | Configure threshold for retransmission. |
| | | **rfc1583compatibility** boolean | **Choices:*** no
* yes
| Specifies different methods for calculating summary route metrics. |
| | | **router\_id** string | | 32-bit number assigned to a router running OSPFv2. |
| | | **shutdown** boolean | **Choices:*** no
* yes
| Disable the OSPF instance. |
| | | **summary\_address** dictionary | | Summary route configuration. |
| | | | **address** string | | IP summary address. |
| | | | **attribute\_map** string | | Set attributes of summary route. |
| | | | **mask** string | | Summary Mask. |
| | | | **not\_advertise** boolean | **Choices:*** no
* yes
| Do not advertise summary route. |
| | | | **prefix** string | | Prefix. |
| | | | **tag** integer | | Set tag. |
| | | **timers** list / elements=dictionary | | Configure OSPF timers. |
| | | | **lsa** dictionary | | Configure OSPF LSA timers. |
| | | | | **rx** dictionary | | Configure OSPF LSA receiving timers |
| | | | | | **min\_interval** integer | | Configure OSPF LSA arrival timer. |
| | | | | **tx** dictionary | | Configure OSPF LSA transmission timers. |
| | | | | | **delay** dictionary | | Configure OSPF LSA transmission delay. |
| | | | | | | **initial** integer | | Delay to generate first occurrence of LSA in msecs. |
| | | | | | | **max** integer | | Maximum delay between originating the same LSA in msecs. |
| | | | | | | **min** integer | | Min delay between originating the same LSA in msecs. |
| | | | **out\_delay** integer | | Configure out-delay timer. |
| | | | **pacing** integer | | Configure OSPF packet pacing. |
| | | | **spf** dictionary | | Configure SPF timers |
| | | | | **initial** integer | | Initial SPF schedule delay in msecs. |
| | | | | **max** integer | | Max wait time between two SPFs in msecs. |
| | | | | **min** integer | | Min Hold time between two SPFs in msecs |
| | | | | **seconds** integer | | Seconds. |
| | | | **throttle** dictionary | | Configure throttle timers (valid only for EOS versions < 4.23). |
| | | | | **attr** string | | throttle attribute. |
| | | | | **initial** integer | | Initial schedule delay in msecs. |
| | | | | **max** integer | | Max wait time |
| | | | | **min** integer | | Min Hold time |
| | | **traffic\_engineering** boolean | **Choices:*** no
* yes
| Enter traffic engineering config mode |
| | | **vrf** string | | VRF name. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ospf**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
Notes
-----
Note
* Tested against Arista EOS 4.23.0F
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state:
# ------------
# localhost#show running-config | section ospf
# localhost#
- name: merge Ospf configs
arista.eos.eos_ospfv2:
config:
- processes:
- process_id: 1
adjacency:
exchange_start:
threshold: 20045623
areas:
- filter:
address: "10.1.1.0/24"
id: "0.0.0.2"
- id: "0.0.0.50"
range:
address: "172.20.0.0/16"
cost: 34
default_information:
metric: 100
metric_type: 1
originate: True
distance:
intra_area: 85
max_lsa:
count: 8000
ignore_count: 3
ignore_time: 6
reset_time: 20
threshold: 40
networks:
- area: "0.0.0.0"
prefix: 10.10.2.0/24
- area: "0.0.0.0"
prefix: "10.10.3.0/24"
redistribute:
- routes: "static"
router_id: "170.21.0.4"
- process_id: 2
vrf: "vrf01"
areas:
- id: "0.0.0.9"
default_cost: 20
max_lsa:
count: 8000
ignore_count: 3
ignore_time: 6
reset_time: 20
threshold: 40
- process_id: 3
vrf: "vrf02"
redistribute:
- routes: "connected"
# After state:
# localhost#show running-config | section ospf
# router ospf 1
# router-id 170.21.0.4
# distance ospf intra-area 85
# redistribute static
# area 0.0.0.2 filter 10.1.1.0/24
# area 0.0.0.50 range 172.20.0.0/16 cost 34
# network 10.10.2.0/24 area 0.0.0.0
# network 10.10.3.0/24 area 0.0.0.0
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# adjacency exchange-start threshold 20045623
# default-information originate metric 100 metric-type 1
#
# router ospf 2 vrf vrf01
# area 0.0.0.9 default-cost 20
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# !
# router ospf 3 vrf vrf02
# redistribute connected
# max-lsa 12000
# localhost#
#
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
#
# Using replaced:
# --------------
# Before State:
# localhost#show running-config | section ospf
# router ospf 1
# router-id 170.21.0.4
# distance ospf intra-area 85
# redistribute static
# area 0.0.0.2 filter 10.1.1.0/24
# area 0.0.0.50 range 172.20.0.0/16 cost 34
# network 10.10.2.0/24 area 0.0.0.0
# network 10.10.3.0/24 area 0.0.0.0
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# adjacency exchange-start threshold 20045623
# default-information originate metric 100 metric-type 1
# !
# router ospf 2 vrf vrf01
# area 0.0.0.9 default-cost 20
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# !
# router ospf 3 vrf vrf02
# redistribute connected
# max-lsa 12000
# localhost#
#
# "before": [
# {
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
#
- name: replace Ospf configs
arista.eos.eos_ospfv2:
config:
- processes:
- process_id: 2
vrf: "vrf01"
point_to_point: True
redistribute:
- routes: "isis"
isis_level: "level-1"
state: replaced
# After State:
# -----------
# "router ospf 2 vrf vrf01",
# "no area 0.0.0.9 default-cost 20",
# "no max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20",
# "point-to-point routes",
# "redistribute isis level-1"
#
# "after": [
# {
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 2,
# "redistribute": [
# {
# "isis_level": "level-1",
# "routes": "isis"
# }
# ],
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
#
# Using overridden:
# ----------------
# Before State:
# localhost#show running-config | section ospf
# router ospf 1
# router-id 170.21.0.4
# distance ospf intra-area 85
# redistribute static
# area 0.0.0.2 filter 10.1.1.0/24
# area 0.0.0.50 range 172.20.0.0/16 cost 34
# network 10.10.2.0/24 area 0.0.0.0
# network 10.10.3.0/24 area 0.0.0.0
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# adjacency exchange-start threshold 20045623
# default-information originate metric 100 metric-type 1
# !
# router ospf 2 vrf vrf01
# redistribute isis level-1
# max-lsa 12000
# !
# router ospf 3 vrf vrf02
# redistribute connected
# max-lsa 12000
# localhost#
#
# "before": [
# {
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 2,
# "redistribute": [
# {
# "isis_level": "level-1",
# "routes": "isis"
# }
# ],
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
- name: override Ospf configs
arista.eos.eos_ospfv2:
config:
- processes:
- process_id: 2
vrf: "vrf01"
redistribute:
- routes: "connected"
state: overridden
# After State:
# "no router ospf 1",
# "no router ospf 3",
# "router ospf 2 vrf vrf01",
# "no max-lsa 12000",
# "no redistribute isis level-1",
# "redistribute connected"
#
# "after": [
# {
# "processes": [
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 2,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf01"
# }
# ]
# }
# ]
# Using Deleted:
# localhost#show running-config | section ospf
# router ospf 1
# router-id 170.21.0.4
# distance ospf intra-area 85
# redistribute static
# area 0.0.0.2 filter 10.1.1.0/24
# area 0.0.0.50 range 172.20.0.0/16 cost 34
# network 10.10.2.0/24 area 0.0.0.0
# network 10.10.3.0/24 area 0.0.0.0
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# adjacency exchange-start threshold 20045623
# default-information originate metric 100 metric-type 1
# !
# router ospf 2 vrf vrf01
# redistribute connected
# area 0.0.0.9 default-cost 20
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# !
# router ospf 3 vrf vrf02
# redistribute connected
# max-lsa 12000
# localhost#
#
# "before": [
# {
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
- name: Delete Ospf configs
arista.eos.eos_ospfv2:
config:
- processes:
- process_id: 1
state: deleted
# After State:
# Commands:
# "no router ospf 1"
# "after": [
# {
# "processes": [
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
# Using gathered:
# localhost#show running-config | section ospf
# router ospf 2 vrf vrf01
# redistribute connected
# area 0.0.0.9 default-cost 20
# max-lsa 8000 40 ignore-time 6 ignore-count 3 reset-time 20
# !
# router ospf 3 vrf vrf02
# redistribute connected
# max-lsa 12000
# localhost#
- name: gather Ospf configs
arista.eos.eos_ospfv2:
state: gathered
# "gathered": [
# {
# "processes": [
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 8000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf01"
# },
# {
# "max_lsa": {
# "count": 12000
# },
# "process_id": 3,
# "redistribute": [
# {
# "routes": "connected"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
# Using parsed:
# ------------
# parsed.cfg
# router ospf 1
# adjacency exchange-start threshold 20045623
# area 0.0.0.2 filter 10.1.1.0/24
# area 0.0.0.50 range 172.20.0.0/16 cost 34
# default-information originate metric 100 metric-type 1
# distance ospf intra-area 85
# max-lsa 80000 40 ignore-count 3 ignore-time 6 reset-time 20
# network 10.10.2.0/24 area 0.0.0.0
# network 10.10.3.0/24 area 0.0.0.0
# redistribute static
# router-id 170.21.0.4
# router ospf 2 vrf vrf01
# area 0.0.0.9 default-cost 20
# max-lsa 80000 40 ignore-count 3 ignore-time 6 reset-time 20
# router ospf 3 vrf vrf02
# redistribute static
- name: Parse Ospf configs
arista.eos.eos_ospfv2:
running_config: "{{ lookup('file', './parsed.cfg') }}"
state: parsed
# "parsed": [
# {
# "processes": [
# {
# "adjacency": {
# "exchange_start": {
# "threshold": 20045623
# }
# },
# "areas": [
# {
# "filter": {
# "address": "10.1.1.0/24"
# },
# "id": "0.0.0.2"
# },
# {
# "id": "0.0.0.50",
# "range": {
# "address": "172.20.0.0/16",
# "cost": 34
# }
# }
# ],
# "default_information": {
# "metric": 100,
# "metric_type": 1,
# "originate": true
# },
# "distance": {
# "intra_area": 85
# },
# "max_lsa": {
# "count": 80000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "networks": [
# {
# "area": "0.0.0.0",
# "prefix": "10.10.2.0/24"
# },
# {
# "area": "0.0.0.0",
# "prefix": "10.10.3.0/24"
# }
# ],
# "process_id": 1,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "router_id": "170.21.0.4"
# },
# {
# "areas": [
# {
# "default_cost": 20,
# "id": "0.0.0.9"
# }
# ],
# "max_lsa": {
# "count": 80000,
# "ignore_count": 3,
# "ignore_time": 6,
# "reset_time": 20,
# "threshold": 40
# },
# "process_id": 2,
# "vrf": "vrf01"
# },
# {
# "process_id": 3,
# "redistribute": [
# {
# "routes": "static"
# }
# ],
# "vrf": "vrf02"
# }
# ]
# }
# ]
# Using rendered:
# --------------
- name: render Ospf configs
arista.eos.eos_ospfv2:
config:
- processes:
- process_id: 1
adjacency:
exchange_start:
threshold: 20045623
areas:
- filter:
address: 10.1.1.0/24
id: 0.0.0.2
- id: 0.0.0.50
range:
address: 172.20.0.0/16
cost: 34
default_information:
metric: 100
metric_type: 1
originate: true
distance:
intra_area: 85
max_lsa:
count: 8000
ignore_count: 3
ignore_time: 6
reset_time: 20
threshold: 40
networks:
- area: 0.0.0.0
prefix: 10.10.2.0/24
- area: 0.0.0.0
prefix: 10.10.3.0/24
redistribute:
- routes: static
router_id: 170.21.0.4
state: rendered
# "rendered": [
# "router ospf 1",
# "adjacency exchange-start threshold 20045623",
# "area 0.0.0.2 filter 10.1.1.0/24",
# "area 0.0.0.50 range 172.20.0.0/16 cost 34",
# "default-information originate metric 100 metric-type 1",
# "distance ospf intra-area 85",
# "max-lsa 8000 40 ignore-count 3 ignore-time 6 reset-time 20",
# "network 10.10.2.0/24 area 0.0.0.0",
# "network 10.10.3.0/24 area 0.0.0.0",
# "redistribute static",
# "router-id 170.21.0.4"
# ]
#
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The resulting configuration after module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['router ospf 1', 'adjacency exchange-start threshold 20045623', 'area 0.0.0.2 filter 10.1.1.0/24', 'area 0.0.0.50 range 172.20.0.0/16 cost 34', 'default-information originate metric 100 metric-type 1', 'distance ospf intra-area 85', 'max-lsa 8000 40 ignore-count 3 ignore-time 6 reset-time 20', 'network 10.10.2.0/24 area 0.0.0.0', 'network 10.10.3.0/24 area 0.0.0.0', 'redistribute static', 'router-id 170.21.0.4'] |
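One way to preview the `commands` field above without touching the device is to run the task in check mode; the play below is a sketch with illustrative names and values, not part of the module docs.
```
- name: preview OSPFv2 changes without applying them
  arista.eos.eos_ospfv2:
    config:
      - processes:
          - process_id: 1
            router_id: 192.0.2.1  # placeholder router-id
    state: merged
  check_mode: true
  register: ospf_preview  # illustrative variable name

- name: display the commands that would be sent
  ansible.builtin.debug:
    var: ospf_preview.commands
```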
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_vlan – (deprecated, removed after 2022-06-01) Manage VLANs on Arista EOS network devices
========================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_vlan`.
New in version 1.0.0 of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2022-06-01
Why
Updated modules released with more functionality
Alternative
eos\_vlans
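For migration, a rough eos\_vlans equivalent of the first example below might look like this (a sketch, not an exact one-to-one mapping):
```
- name: create vlan 4000 with the replacement eos_vlans module
  arista.eos.eos_vlans:
    config:
      - vlan_id: 4000
        name: vlan-4000
    state: merged
```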
Synopsis
--------
* This module provides declarative management of VLANs on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of VLANs definitions. |
| | **associated\_interfaces** list / elements=string | | This is an intent option that checks the operational state of the given VLAN `name` for associated interfaces. The interface name is case sensitive and should be in expanded format, not abbreviated. If the value of `associated_interfaces` does not match the operational state of the VLAN's interfaces on the device, the module fails. |
| | **delay** integer | **Default:**10 | Time in seconds the play should wait before checking the declarative intent parameter values. |
| | **interfaces** list / elements=string | | List of interfaces that should be associated with the VLAN. The interface name is case sensitive and should be in expanded format, not abbreviated. |
| | **name** string | | Name of the VLAN. |
| | **state** string | **Choices:*** **present** ←
* absent
* active
* suspend
| State of the VLAN configuration. |
| | **vlan\_id** integer / required | | ID of the VLAN. |
| **associated\_interfaces** list / elements=string | | This is an intent option that checks the operational state of the given VLAN `name` for associated interfaces. The interface name is case sensitive and should be in expanded format, not abbreviated. If the value of `associated_interfaces` does not match the operational state of the VLAN's interfaces on the device, the module fails. |
| **delay** integer | **Default:**10 | Time in seconds the play should wait before checking the declarative intent parameter values. |
| **interfaces** list / elements=string | | List of interfaces that should be associated with the VLAN. The interface name is case sensitive and should be in expanded format, not abbreviated. |
| **name** string | | Name of the VLAN. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| If set to `yes`, configures the *transport* to use SSL, but only when `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **purge** boolean | **Choices:*** **no** ←
* yes
| Purge VLANs not defined in the *aggregate* parameter. See the purge sketch after the Examples section below. |
| **state** string | **Choices:*** **present** ←
* absent
* active
* suspend
| State of the VLAN configuration. |
| **vlan\_id** integer | | ID of the VLAN. |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Create vlan
arista.eos.eos_vlan:
vlan_id: 4000
name: vlan-4000
state: present
- name: Add interfaces to vlan
arista.eos.eos_vlan:
vlan_id: 4000
state: present
interfaces:
- Ethernet1
- Ethernet2
- name: Check if interfaces is assigned to vlan
arista.eos.eos_vlan:
vlan_id: 4000
associated_interfaces:
- Ethernet1
- Ethernet2
- name: Suspend vlan
arista.eos.eos_vlan:
vlan_id: 4000
state: suspend
- name: Unsuspend vlan
arista.eos.eos_vlan:
vlan_id: 4000
state: active
- name: Create aggregate of vlans
arista.eos.eos_vlan:
aggregate:
- vlan_id: 4000
- {vlan_id: 4001, name: vlan-4001}
```
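The *purge* option is not covered by the examples above; the following is a minimal sketch (VLAN IDs are placeholders) that removes any VLANs not listed in *aggregate*:
```
- name: ensure only the listed vlans exist, purging all others
  arista.eos.eos_vlan:
    aggregate:
      - vlan_id: 4000  # placeholder VLAN IDs
      - vlan_id: 4001
    purge: true
```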
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['vlan 20', 'name test-vlan'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Ricardo Carrillo Cruz (@rcarrillocruz)
arista.eos.eos\_interface – (deprecated, removed after 2022-06-01) Manage Interface on Arista EOS network devices
=================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_interface`.
New in version 1.0.0 of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2022-06-01
Why
Updated modules released with more functionality
Alternative
eos\_interfaces
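For migration, a rough eos\_interfaces equivalent of the first example below might look like this (a sketch; *speed* maps to separate speed/duplex options in the replacement module and is omitted here):
```
- name: configure the interface with the replacement eos_interfaces module
  arista.eos.eos_interfaces:
    config:
      - name: Ethernet1  # resource modules expect expanded interface names
        description: test-interface
        mtu: 512
    state: merged
```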
Synopsis
--------
* This module provides declarative management of Interfaces on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of interface definitions. Each entry in the aggregate list should define the interface `name` and other options as required. |
| | **delay** integer | **Default:**10 | Time in seconds to wait before checking the operational state on the remote device. This wait applies to the operational state arguments: *state* with values `up`/`down`, *tx\_rate*, and *rx\_rate*. |
| | **description** string | | Description of the interface, up to 240 characters. |
| | **enabled** boolean | **Choices:*** no
* yes
| Interface link status. If the value is *True*, the interface will be enabled; if *False*, the interface will be disabled (shutdown). |
| | **mtu** string | | Set maximum transmission unit size in bytes of transmit packet for the interface given in `name` option. |
| | **name** string / required | | Name of the Interface to be configured on remote device. The name of interface should be in expanded format and not abbreviated. |
| | **neighbors** list / elements=dictionary | | Check the operational state of given interface `name` for LLDP neighbor. The following suboptions are available. |
| | | **host** string | | LLDP neighbor host for given interface `name`. |
| | | **port** string | | LLDP neighbor port to which given interface `name` is connected. |
| | **rx\_rate** string | | Receiver rate in bits per second (bps) for the interface given in `name` option. This is state check parameter only. Supports conditionals, see [Conditionals in Networking Modules](../network/user_guide/network_working_with_command_output)
|
| | **speed** string | | This option configures autoneg and speed/duplex/flowcontrol for the interface given in `name` option. |
| | **state** string | **Choices:*** present
* absent
* up
* down
| State of the Interface configuration: `up` means present and operationally up, and `down` means present and operationally down. |
| | **tx\_rate** string | | Transmit rate in bits per second (bps) for the interface given in `name` option. This is state check parameter only. Supports conditionals, see [Conditionals in Networking Modules](../network/user_guide/network_working_with_command_output)
|
| **delay** integer | **Default:**10 | Time in seconds to wait before checking the operational state on the remote device. This wait applies to the operational state arguments: *state* with values `up`/`down`, *tx\_rate*, and *rx\_rate*. |
| **description** string | | Description of the interface, up to 240 characters. |
| **enabled** boolean | **Choices:*** no
* **yes** ←
| Interface link status. If the value is *True*, the interface will be enabled; if *False*, the interface will be disabled (shutdown). |
| **mtu** string | | Set maximum transmission unit size in bytes of transmit packet for the interface given in `name` option. |
| **name** string | | Name of the Interface to be configured on remote device. The name of interface should be in expanded format and not abbreviated. |
| **neighbors** list / elements=dictionary | | Check the operational state of given interface `name` for LLDP neighbor. The following suboptions are available. |
| | **host** string | | LLDP neighbor host for given interface `name`. |
| | **port** string | | LLDP neighbor port to which given interface `name` is connected. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| If set to `yes`, configures the *transport* to use SSL, but only when `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **rx\_rate** string | | Receiver rate in bits per second (bps) for the interface given in `name` option. This is state check parameter only. Supports conditionals, see [Conditionals in Networking Modules](../network/user_guide/network_working_with_command_output)
|
| **speed** string | | This option configures autoneg and speed/duplex/flowcontrol for the interface given in `name` option. |
| **state** string | **Choices:*** **present** ←
* absent
* up
* down
| State of the Interface configuration: `up` means present and operationally up, and `down` means present and operationally down. |
| **tx\_rate** string | | Transmit rate in bits per second (bps) for the interface given in `name` option. This is state check parameter only. Supports conditionals, see [Conditionals in Networking Modules](../network/user_guide/network_working_with_command_output)
|
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure interface
arista.eos.eos_interface:
name: ethernet1
description: test-interface
speed: 100full
mtu: 512
- name: remove interface
arista.eos.eos_interface:
name: ethernet1
state: absent
- name: make interface up
arista.eos.eos_interface:
name: ethernet1
enabled: true
- name: make interface down
arista.eos.eos_interface:
name: ethernet1
enabled: false
- name: Check intent arguments
arista.eos.eos_interface:
name: ethernet1
state: up
tx_rate: ge(0)
rx_rate: le(0)
- name: Check neighbors intent arguments
arista.eos.eos_interface:
name: ethernet1
neighbors:
- port: eth0
host: netdev
- name: Configure interface in disabled state and check if the operational state is
disabled or not
arista.eos.eos_interface:
name: ethernet1
enabled: false
state: down
- name: Add interface using aggregate
arista.eos.eos_interface:
aggregate:
- {name: ethernet1, mtu: 256, description: test-interface-1}
- {name: ethernet2, mtu: 516, description: test-interface-2}
speed: 100full
state: present
- name: Delete interface using aggregate
arista.eos.eos_interface:
aggregate:
- name: loopback9
- name: loopback10
state: absent
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always, except for the platforms that use Netconf transport to manage the device. | The list of configuration mode commands to send to the device. **Sample:** ['interface ethernet1', 'description test-interface', 'speed 100full', 'mtu 512'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Ganesh Nalawade (@ganeshrn)
Arista.Eos
==========
Collection version 2.2.0
Plugin Index
------------
These are the plugins in the arista.eos collection
### Cliconf Plugins
* [eos](eos_cliconf#ansible-collections-arista-eos-eos-cliconf) – Use eos cliconf to run command on Arista EOS platform
### Httpapi Plugins
* [eos](eos_httpapi#ansible-collections-arista-eos-eos-httpapi) – Use eAPI to run command on eos platform
### Modules
* [eos\_acl\_interfaces](eos_acl_interfaces_module#ansible-collections-arista-eos-eos-acl-interfaces-module) – ACL interfaces resource module
* [eos\_acls](eos_acls_module#ansible-collections-arista-eos-eos-acls-module) – ACLs resource module
* [eos\_banner](eos_banner_module#ansible-collections-arista-eos-eos-banner-module) – Manage multiline banners on Arista EOS devices
* [eos\_bgp](eos_bgp_module#ansible-collections-arista-eos-eos-bgp-module) – (deprecated, removed after 2023-01-29) Configure global BGP protocol settings on Arista EOS.
* [eos\_bgp\_address\_family](eos_bgp_address_family_module#ansible-collections-arista-eos-eos-bgp-address-family-module) – Manages BGP address family resource module
* [eos\_bgp\_global](eos_bgp_global_module#ansible-collections-arista-eos-eos-bgp-global-module) – Manages BGP global resource module
* [eos\_command](eos_command_module#ansible-collections-arista-eos-eos-command-module) – Run arbitrary commands on an Arista EOS device
* [eos\_config](eos_config_module#ansible-collections-arista-eos-eos-config-module) – Manage Arista EOS configuration sections
* [eos\_eapi](eos_eapi_module#ansible-collections-arista-eos-eos-eapi-module) – Manage and configure Arista EOS eAPI.
* [eos\_facts](eos_facts_module#ansible-collections-arista-eos-eos-facts-module) – Collect facts from remote devices running Arista EOS
* [eos\_interface](eos_interface_module#ansible-collections-arista-eos-eos-interface-module) – (deprecated, removed after 2022-06-01) Manage Interface on Arista EOS network devices
* [eos\_interfaces](eos_interfaces_module#ansible-collections-arista-eos-eos-interfaces-module) – Interfaces resource module
* [eos\_l2\_interface](eos_l2_interface_module#ansible-collections-arista-eos-eos-l2-interface-module) – (deprecated, removed after 2022-06-01) Manage L2 interfaces on Arista EOS network devices.
* [eos\_l2\_interfaces](eos_l2_interfaces_module#ansible-collections-arista-eos-eos-l2-interfaces-module) – L2 interfaces resource module
* [eos\_l3\_interface](eos_l3_interface_module#ansible-collections-arista-eos-eos-l3-interface-module) – (deprecated, removed after 2022-06-01) Manage L3 interfaces on Arista EOS network devices.
* [eos\_l3\_interfaces](eos_l3_interfaces_module#ansible-collections-arista-eos-eos-l3-interfaces-module) – L3 interfaces resource module
* [eos\_lacp](eos_lacp_module#ansible-collections-arista-eos-eos-lacp-module) – LACP resource module
* [eos\_lacp\_interfaces](eos_lacp_interfaces_module#ansible-collections-arista-eos-eos-lacp-interfaces-module) – LACP interfaces resource module
* [eos\_lag\_interfaces](eos_lag_interfaces_module#ansible-collections-arista-eos-eos-lag-interfaces-module) – LAG interfaces resource module
* [eos\_linkagg](eos_linkagg_module#ansible-collections-arista-eos-eos-linkagg-module) – (deprecated, removed after 2022-06-01) Manage link aggregation groups on Arista EOS network devices
* [eos\_lldp](eos_lldp_module#ansible-collections-arista-eos-eos-lldp-module) – Manage LLDP configuration on Arista EOS network devices
* [eos\_lldp\_global](eos_lldp_global_module#ansible-collections-arista-eos-eos-lldp-global-module) – LLDP resource module
* [eos\_lldp\_interfaces](eos_lldp_interfaces_module#ansible-collections-arista-eos-eos-lldp-interfaces-module) – LLDP interfaces resource module
* [eos\_logging](eos_logging_module#ansible-collections-arista-eos-eos-logging-module) – Manage logging on network devices
* [eos\_ospf\_interfaces](eos_ospf_interfaces_module#ansible-collections-arista-eos-eos-ospf-interfaces-module) – OSPF Interfaces Resource Module.
* [eos\_ospfv2](eos_ospfv2_module#ansible-collections-arista-eos-eos-ospfv2-module) – OSPFv2 resource module
* [eos\_ospfv3](eos_ospfv3_module#ansible-collections-arista-eos-eos-ospfv3-module) – OSPFv3 resource module
* [eos\_prefix\_lists](eos_prefix_lists_module#ansible-collections-arista-eos-eos-prefix-lists-module) – Manages Prefix lists resource module
* [eos\_route\_maps](eos_route_maps_module#ansible-collections-arista-eos-eos-route-maps-module) – Manages Route Maps resource module
* [eos\_static\_route](eos_static_route_module#ansible-collections-arista-eos-eos-static-route-module) – (deprecated, removed after 2022-06-01) Manage static IP routes on Arista EOS network devices
* [eos\_static\_routes](eos_static_routes_module#ansible-collections-arista-eos-eos-static-routes-module) – Static routes resource module
* [eos\_system](eos_system_module#ansible-collections-arista-eos-eos-system-module) – Manage the system attributes on Arista EOS devices
* [eos\_user](eos_user_module#ansible-collections-arista-eos-eos-user-module) – Manage the collection of local users on EOS devices
* [eos\_vlan](eos_vlan_module#ansible-collections-arista-eos-eos-vlan-module) – (deprecated, removed after 2022-06-01) Manage VLANs on Arista EOS network devices
* [eos\_vlans](eos_vlans_module#ansible-collections-arista-eos-eos-vlans-module) – VLANs resource module
* [eos\_vrf](eos_vrf_module#ansible-collections-arista-eos-eos-vrf-module) – Manage VRFs on Arista EOS network devices
See also
List of [collections](../../index#list-of-collections) with docs hosted here.
arista.eos.eos\_l3\_interface – (deprecated, removed after 2022-06-01) Manage L3 interfaces on Arista EOS network devices.
==========================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_l3_interface`.
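A minimal playbook skeleton showing the fully qualified collection name in use (the host group `eos_switches` and the play-level connection are illustrative assumptions; connection details are usually set in inventory instead):

```
# sketch only - the host group name is an assumption
- name: Manage L3 interfaces on Arista EOS
  hosts: eos_switches
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Set ethernet1 IPv4 address
      arista.eos.eos_l3_interface:
        name: ethernet1
        ipv4: 192.168.0.1/24
```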
New in version 1.0.0: of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in: major release after 2022-06-01
Why: Updated modules released with more functionality
Alternative: eos\_l3\_interfaces
Synopsis
--------
* This module provides declarative management of L3 interfaces on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of L3 interface definitions. Each entry in the aggregate list should define the interface `name` and an optional `ipv4` or `ipv6` address. |
| | **ipv4** string | | IPv4 address to be set for the L3 interface mentioned in the *name* option. The address format is <ipv4 address>/<mask>; the mask is a number in the range 0-32, e.g. 192.168.0.1/24. |
| | **ipv6** string | | IPv6 address to be set for the L3 interface mentioned in the *name* option. The address format is <ipv6 address>/<mask>; the mask is a number in the range 0-128, e.g. fd5d:12c9:2201:1::1/64. |
| | **name** string / required | | Name of the L3 interface to be configured, e.g. ethernet1. |
| | **state** string | **Choices:*** present
* absent
| State of the L3 interface configuration. It indicates if the configuration should be present or absent on the remote device. |
| **ipv4** string | | IPv4 address to be set for the L3 interface mentioned in the *name* option. The address format is <ipv4 address>/<mask>; the mask is a number in the range 0-32, e.g. 192.168.0.1/24. |
| **ipv6** string | | IPv6 address to be set for the L3 interface mentioned in the *name* option. The address format is <ipv6 address>/<mask>; the mask is a number in the range 0-128, e.g. fd5d:12c9:2201:1::1/64. |
| **name** string | | Name of the L3 interface to be configured, e.g. ethernet1. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details (see the inventory sketch after this table). |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL, but only when `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
| State of the L3 interface configuration. It indicates if the configuration should be present or absent on the remote device. |
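As the `provider` row above notes, connection details are better supplied as inventory variables than through the deprecated `provider` dict. A minimal sketch (the file name and vault variable are assumptions):

```
# group_vars/eos.yml - hypothetical file name
ansible_connection: ansible.netcommon.network_cli   # or ansible.netcommon.httpapi for eAPI
ansible_network_os: arista.eos.eos
ansible_user: admin                                 # illustrative credentials
ansible_password: "{{ vault_eos_password }}"        # assumed vault variable
ansible_become: true
ansible_become_method: enable
```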
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Remove ethernet1 IPv4 and IPv6 address
arista.eos.eos_l3_interface:
name: ethernet1
state: absent
- name: Set ethernet1 IPv4 address
arista.eos.eos_l3_interface:
name: ethernet1
ipv4: 192.168.0.1/24
- name: Set ethernet1 IPv6 address
arista.eos.eos_l3_interface:
name: ethernet1
ipv6: fd5d:12c9:2201:1::1/64
- name: Set interface Vlan1 (SVI) IPv4 address
arista.eos.eos_l3_interface:
name: Vlan1
ipv4: 192.168.0.5/24
- name: Set IP addresses on aggregate
arista.eos.eos_l3_interface:
aggregate:
- name: ethernet1
ipv4: 192.168.2.10/24
- name: ethernet1
ipv4: 192.168.3.10/24
ipv6: fd5d:12c9:2201:1::1/64
- name: Remove IP addresses on aggregate
arista.eos.eos_l3_interface:
aggregate:
- name: ethernet1
ipv4: 192.168.2.10/24
- name: ethernet1
ipv4: 192.168.3.10/24
ipv6: fd5d:12c9:2201:1::1/64
state: absent
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always, except for the platforms that use Netconf transport to manage the device. | The list of configuration mode commands to send to the device. **Sample:** ['interface ethernet1', 'ip address 192.168.0.1/24', 'ipv6 address fd5d:12c9:2201:1::1/64'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Ganesh Nalawade (@ganeshrn)
arista.eos.eos\_lag\_interfaces – LAG interfaces resource module
================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lag_interfaces`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages attributes of link aggregation groups on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of link aggregation group configurations. |
| | **members** list / elements=dictionary | | Ethernet interfaces that are part of the group. |
| | | **member** string | | Name of ethernet interface that is a member of the LAG. |
| | | **mode** string | **Choices:*** active
* on
* passive
| LAG mode for this interface. |
| | **name** string / required | | Name of the port-channel interface of the link aggregation group (LAG) e.g., Port-Channel5. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section interfaces**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* rendered
* gathered
* parsed
| The state of the configuration after module completion. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
- name: Merge provided LAG attributes with existing device configuration
arista.eos.eos_lag_interfaces:
config:
- name: 5
members:
- member: Ethernet2
mode: on
state: merged
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
# channel-group 5 mode on
# Using replaced
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
- name: Replace all device configuration of specified LAGs with provided configuration
arista.eos.eos_lag_interfaces:
config:
- name: 5
members:
- member: Ethernet2
mode: on
state: replaced
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# interface Ethernet2
# channel-group 5 mode on
# Using overridden
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
- name: Override all device configuration of all LAG attributes with provided configuration
arista.eos.eos_lag_interfaces:
config:
- name: 10
members:
- member: Ethernet2
mode: on
state: overridden
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# interface Ethernet2
# channel-group 10 mode on
# Using deleted
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
# channel-group 5 mode on
- name: Delete LAG attributes of the given interfaces.
arista.eos.eos_lag_interfaces:
config:
- name: 5
members:
- member: Ethernet1
state: deleted
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# interface Ethernet2
# channel-group 5 mode on
# Using parsed:
# parsed.cfg
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
# channel-group 5 mode on
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_lag_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# parsed:
# - name: 5
# members:
# - member: Ethernet2
# mode: on
# - member: Ethernet1
# mode: on
# using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_lag_interfaces:
config:
- name: 5
members:
- member: Ethernet2
mode: on
- member: Ethernet1
mode: on
state: rendered
# -----------
# Output
# -----------
#
# rendered:
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
# channel-group 5 mode on
# Using gathered:
# native config:
# interface Ethernet1
# channel-group 5 mode on
# interface Ethernet2
# channel-group 5 mode on
- name: Gather lag_interfaces facts from the device
  arista.eos.eos_lag_interfaces:
state: gathered
# Output:
# gathered:
# - name: 5
# members:
# - member: Ethernet2
# mode: on
# - member: Ethernet1
# mode: on
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['command 1', 'command 2', 'command 3'] |
### Authors
* Nathaniel Case (@Qalthos)
arista.eos.eos\_prefix\_lists – Manages Prefix lists resource module
====================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_prefix_lists`.
New in version 2.2.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* This module configures and manages the attributes of Prefix lists on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of dictionaries of prefix-list options. |
| | **afi** string / required | **Choices:*** ipv4
* ipv6
| The Address Family Indicator (AFI) for the prefix list. |
| | **prefix\_lists** list / elements=dictionary | | A list of prefix-lists. |
| | | **entries** list / elements=dictionary | | List of prefix-list entries. |
| | | | **action** string | **Choices:*** deny
* permit
| Action to be performed on the specified prefix. |
| | | | **address** string | | IPv4/IPv6 address in prefix-mask or address-masklen format. |
| | | | **match** dictionary | | Match on mask length. |
| | | | | **masklen** integer | | Mask Length. |
| | | | | **operator** string | **Choices:*** eq
* le
* ge
| Operator: equal to (`eq`), less than or equal (`le`), or greater than or equal (`ge`). |
| | | | **resequence** dictionary | | Resequence the list (see the sketch after this table). |
| | | | | **default** boolean | **Choices:*** no
* yes
| Resequence with default values (10). |
| | | | | **start\_seq** integer | | Starting sequence number. |
| | | | | **step** integer | | Step to increment the sequence number. |
| | | | **sequence** integer | | Sequence number. |
| | | **name** string / required | | Name of the prefix-list. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section access-list**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
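The `resequence` keys documented above are not exercised in the examples below. A hypothetical sketch, assuming resequencing is requested through an entry of the target prefix-list as the argspec suggests:

```
# hedged sketch - key placement follows the argspec above, not an official example
- name: Resequence prefix-list v401 starting at 10 in steps of 5
  arista.eos.eos_prefix_lists:
    config:
      - afi: "ipv4"
        prefix_lists:
          - name: "v401"
            entries:
              - resequence:
                  start_seq: 10
                  step: 5
    state: merged
```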
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state
# veos#show running-config | section prefix-lists
# veos#
- name: Merge provided configuration with device configuration
arista.eos.eos_prefix_lists:
config:
- afi: "ipv4"
prefix_lists:
- name: "v401"
entries:
- sequence: 25
action: "deny"
address: "45.55.4.0/24"
- sequence: 100
action: "permit"
address: "11.11.2.0/24"
match:
masklen: 32
operator: "ge"
- name: "v402"
entries:
- action: "deny"
address: "10.1.1.0/24"
sequence: 10
match:
masklen: 32
operator: "ge"
- afi: "ipv6"
prefix_lists:
- name: "v601"
entries:
- sequence: 125
action: "deny"
address: "5000:1::/64"
# After State
# veos#
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "before": {},
# "changed": true,
# "commands": [
# "ipv6 prefix-list v601",
# "seq 125 deny 5000:1::/64",
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 100 permit 11.11.2.0/24 ge 32",
# "ip prefix-list v402",
# "seq 10 deny 10.1.1.0/24 ge 32"
# ],
#
# using merged:
# Failure scenario : 'merged' should not be used when an existing prefix-list (sequence number)
# is to be modified.
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Merge provided configuration with device configuration
arista.eos.eos_prefix_lists:
config:
- afi: "ipv4"
prefix_lists:
- name: "v401"
entries:
- sequence: 25
action: "deny"
address: "45.55.4.0/24"
match:
masklen: 32
operator: "ge"
- sequence: 100
action: "permit"
address: "11.11.2.0/24"
match:
masklen: 32
operator: "ge"
- name: "v402"
entries:
- action: "deny"
address: "10.1.1.0/24"
sequence: 10
match:
masklen: 32
operator: "ge"
- afi: "ipv6"
prefix_lists:
- name: "v601"
entries:
- sequence: 125
action: "deny"
address: "5000:1::/64"
state: merged
# Module Execution:
# fatal: [192.168.122.113]: FAILED! => {
# "changed": false,
# "invocation": {
# "module_args": {
# "config": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "resequence": null,
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "match": null,
# "resequence": null,
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "running_config": null,
# "state": "merged"
# }
# },
# "msg": "Sequence number 25 is already present. Use replaced/overridden operation to change the configuration"
# }
#
# Using Replaced:
# Before state:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Replace
arista.eos.eos_prefix_lists:
config:
- afi: "ipv4"
prefix_lists:
- name: "v401"
entries:
- sequence: 25
action: "deny"
address: "45.55.4.0/24"
match:
masklen: 32
operator: "ge"
- sequence: 200
action: "permit"
address: "200.11.2.0/24"
match:
masklen: 32
operator: "ge"
state: replaced
# After State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 200 permit 200.11.2.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
#
#
# Module Execution:
#
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "200.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 200
# }
# ],
# "name": "v401"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "ip prefix-list v401",
# "no seq 25",
# "seq 25 deny 45.55.4.0/24 ge 32",
# "seq 200 permit 200.11.2.0/24 ge 32",
# "no seq 100",
# "no ip prefix-list v402"
# ],
# Using overridden:
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 200 permit 200.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Override
arista.eos.eos_prefix_lists:
config:
- afi: "ipv4"
prefix_lists:
- name: "v401"
entries:
- sequence: 25
action: "deny"
address: "45.55.4.0/24"
- sequence: 300
action: "permit"
address: "30.11.2.0/24"
match:
masklen: 32
operator: "ge"
- name: "v403"
entries:
- action: "deny"
address: "10.1.1.0/24"
sequence: 10
state: overridden
# After State
# veos#
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# veos#
#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "200.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 200
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ipv6 prefix-list v601",
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 300 permit 30.11.2.0/24 ge 32",
# "no seq 100",
# "no seq 200",
# "ip prefix-list v403",
# "seq 10 deny 10.1.1.0/24",
# "no ip prefix-list v402"
# ],
#
# Using deleted:
# Before State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: Delete device configuration
arista.eos.eos_prefix_lists:
config:
- afi: "ipv6"
state: deleted
# after State:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
#
#
# Module Execution:
# "after": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ipv6 prefix-list v601"
# ],
#
# Using deleted
# Before state:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24 ge 32
# seq 100 permit 11.11.2.0/24 ge 32
# seq 300 permit 30.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ip prefix-list v403
# seq 10 deny 10.1.1.0/24
# veos#
- name: Delete device configuration
arista.eos.eos_prefix_lists:
state: deleted
# After State:
# veos#show running-config | section prefix-list
# veos#
#
# Module Execution:
# "after": {},
# "before": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# },
# {
# "action": "permit",
# "address": "30.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 300
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v403"
# }
# ]
# }
# ],
# "changed": true,
# "commands": [
# "no ip prefix-list v401",
# "no ip prefix-list v402",
# "no ip prefix-list v403"
# ],
#
# Using parsed:
# parse_prefix_lists.cfg
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
#
- name: parse configs
arista.eos.eos_prefix_lists:
running_config: "{{ lookup('file', './parsed_prefix_lists.cfg') }}"
state: parsed
# Module Execution:
# "parsed": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ]
# Using rendered:
- name: Render provided configuration
arista.eos.eos_prefix_lists:
config:
- afi: "ipv4"
prefix_lists:
- name: "v401"
entries:
- sequence: 25
action: "deny"
address: "45.55.4.0/24"
- sequence: 200
action: "permit"
address: "200.11.2.0/24"
match:
masklen: 32
operator: "ge"
- name: "v403"
entries:
- action: "deny"
address: "10.1.1.0/24"
sequence: 10
state: rendered
# Module Execution:
# "rendered": [
# "ip prefix-list v401",
# "seq 25 deny 45.55.4.0/24",
# "seq 200 permit 200.11.2.0/24 ge 32",
# "ip prefix-list v403",
# "seq 10 deny 10.1.1.0/24"
# ]
#
# using gathered:
# Device config:
# veos#show running-config | section prefix-list
# ip prefix-list v401
# seq 25 deny 45.55.4.0/24
# seq 100 permit 11.11.2.0/24 ge 32
# !
# ip prefix-list v402
# seq 10 deny 10.1.1.0/24 ge 32
# !
# ipv6 prefix-list v601
# seq 125 deny 5000:1::/64
# veos#
- name: gather configs
arista.eos.eos_prefix_lists:
state: gathered
# Module Execution:
#
# "gathered": [
# {
# "afi": "ipv4",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "45.55.4.0/24",
# "sequence": 25
# },
# {
# "action": "permit",
# "address": "11.11.2.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 100
# }
# ],
# "name": "v401"
# },
# {
# "entries": [
# {
# "action": "deny",
# "address": "10.1.1.0/24",
# "match": {
# "masklen": 32,
# "operator": "ge"
# },
# "sequence": 10
# }
# ],
# "name": "v402"
# }
# ]
# },
# {
# "afi": "ipv6",
# "prefix_lists": [
# {
# "entries": [
# {
# "action": "deny",
# "address": "5000:1::/64",
# "sequence": 125
# }
# ],
# "name": "v601"
# }
# ]
# }
# ],
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_interfaces – Interfaces resource module
=======================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_interfaces`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages the interface attributes of Arista EOS interfaces.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | The provided configuration |
| | **description** string | | Interface description |
| | **duplex** string | | Duplex mode of the interface. Applicable for Ethernet interfaces only. Values other than `auto` must also set *speed*; this option is ignored when *speed* is set above `1000`. A combined example follows this table. |
| | **enabled** boolean | **Choices:*** no
* **yes** ←
| Administrative state of the interface. Set the value to `true` to administratively enable the interface or `false` to disable it. |
| | **mode** string | **Choices:*** layer2
* layer3
| Manage Layer2 or Layer3 state of the interface. Applicable for Ethernet and port channel interfaces only. |
| | **mtu** integer | | MTU for a specific interface. Must be an even number between 576 and 9216. Applicable for Ethernet interfaces only. |
| | **name** string / required | | Full name of the interface, e.g. Ethernet1. |
| | **speed** string | | Interface link speed. Applicable for Ethernet interfaces only. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^interface**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* parsed
* rendered
* gathered
| The state of the configuration after module completion. |
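Because *duplex* values other than `auto` must be paired with *speed* (see the duplex row above), the two are normally set together. A short sketch with illustrative values (valid speed strings vary by platform):

```
# illustrative values - check your platform for supported speed strings
- name: Set speed, duplex and MTU together
  arista.eos.eos_interfaces:
    config:
      - name: Ethernet3
        speed: '100'
        duplex: full
        mtu: 1500
    state: merged
```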
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# description "Interface 1"
# !
# interface Ethernet2
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
- name: Merge provided configuration with device configuration
arista.eos.eos_interfaces:
config:
- name: Ethernet1
enabled: true
mode: layer3
- name: Ethernet2
description: Configured by Ansible
enabled: false
state: merged
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# description "Interface 1"
# no switchport
# !
# interface Ethernet2
# description "Configured by Ansible"
# shutdown
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
# Using replaced
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# description "Interface 1"
# !
# interface Ethernet2
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
- name: Replaces device configuration of listed interfaces with provided configuration
arista.eos.eos_interfaces:
config:
- name: Ethernet1
enabled: true
- name: Ethernet2
description: Configured by Ansible
enabled: false
state: replaced
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# description "Configured by Ansible"
# shutdown
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
# Using overridden
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# description "Interface 1"
# !
# interface Ethernet2
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
- name: Overrides all device configuration with provided configuration
arista.eos.eos_interfaces:
config:
- name: Ethernet1
enabled: true
- name: Ethernet2
description: Configured by Ansible
enabled: false
state: overridden
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# description "Configured by Ansible"
# shutdown
# !
# interface Management1
# ip address dhcp
# !
# Using deleted
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# description "Interface 1"
# no switchport
# !
# interface Ethernet2
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
- name: Delete or return interface parameters to default settings
arista.eos.eos_interfaces:
config:
- name: Ethernet1
state: deleted
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# !
# interface Management1
# description "Management interface"
# ip address dhcp
# !
# Using rendered
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_interfaces:
config:
- name: Ethernet1
enabled: true
mode: layer3
- name: Ethernet2
description: Configured by Ansible
enabled: false
    state: rendered
# Output:
# ------------
# - "interface Ethernet1"
# - "description "Interface 1""
# - "no swithcport"
# - "interface Ethernet2"
# - "description "Configured by Ansible""
# - "shutdown"
# - "interface Management1"
# - "description "Management interface""
# - "ip address dhcp"
# Using parsed
# parsed.cfg
# interface Ethernet1
# description "Interface 1"
# !
# interface Ethernet2
# description "Configured by Ansible"
# shutdown
# !
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output
# parsed:
# - name: Ethernet1
# enabled: True
# mode: layer2
# - name: Ethernet2
# description: 'Configured by Ansible'
# enabled: False
# mode: layer2
# Using gathered:
# Existing config on the device
# -----------------------------
# interface Ethernet1
# description "Interface 1"
# !
# interface Ethernet2
# description "Configured by Ansible"
# shutdown
# !
- name: Gather interfaces facts from the device
  arista.eos.eos_interfaces:
state: gathered
# output
# gathered:
# - name: Ethernet1
# enabled: True
# mode: layer2
# - name: Ethernet2
# description: 'Configured by Ansible'
# enabled: False
# mode: layer2
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** dictionary | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** dictionary | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet2', 'shutdown', 'speed 10full'] |
### Authors
* Nathaniel Case (@qalthos)
arista.eos.eos\_vrf – Manage VRFs on Arista EOS network devices
===============================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_vrf`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of VRFs on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of VRF definitions. |
| | **associated\_interfaces** list / elements=string | | This is an intent option that checks the operational state of the given VRF `name` for associated interfaces. If the value of `associated_interfaces` does not match the operational state of the VRF interfaces on the device, the task fails. |
| | **delay** integer | **Default:**10 | Time in seconds to wait before checking for the operational state on the remote device. This wait is applicable for operational state arguments. |
| | **interfaces** list / elements=string | | Identifies the set of interfaces that should be configured in the VRF. Interfaces must be routed interfaces in order to be placed into a VRF. The name of the interface should be in expanded format, not abbreviated. |
| | **name** string / required | | Name of the VRF. |
| | **rd** string | | Route distinguisher of the VRF |
| | **state** string | **Choices:*** **present** ←
* absent
| State of the VRF configuration. |
| **associated\_interfaces** list / elements=string | | This is an intent option that checks the operational state of the given VRF `name` for associated interfaces. If the value of `associated_interfaces` does not match the operational state of the VRF interfaces on the device, the task fails (see the sketch after this table). |
| **delay** integer | **Default:**10 | Time in seconds to wait before checking for the operational state on the remote device. This wait is applicable for operational state arguments. |
| **interfaces** list / elements=string | | Identifies the set of interfaces that should be configured in the VRF. Interfaces must be routed interfaces in order to be placed into a VRF. The name of the interface should be in expanded format, not abbreviated. |
| **name** string | | Name of the VRF. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL, but only when `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **purge** boolean | **Choices:*** **no** ←
* yes
| Purge VRFs not defined in the *aggregate* parameter. |
| **rd** string | | Route distinguisher of the VRF |
| **state** string | **Choices:*** **present** ←
* absent
| State of the VRF configuration. |
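The intent-check options `associated_interfaces` and `delay` are not exercised in the examples below; a hedged sketch of verifying that the VRF's interfaces are operationally associated (all values are illustrative):

```
# illustrative intent check - fails if Ethernet2 is not associated with the VRF
- name: Create vrf and verify its associated interfaces
  arista.eos.eos_vrf:
    name: test
    rd: 1:200
    interfaces:
      - Ethernet2
    associated_interfaces:
      - Ethernet2
    delay: 15
    state: present
```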
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Create vrf
arista.eos.eos_vrf:
name: test
rd: 1:200
interfaces:
- Ethernet2
state: present
- name: Delete VRFs
arista.eos.eos_vrf:
name: test
state: absent
- name: Create aggregate of VRFs with purge
arista.eos.eos_vrf:
aggregate:
- name: test4
rd: 1:204
- name: test5
rd: 1:205
state: present
purge: yes
- name: Delete aggregate of VRFs
arista.eos.eos_vrf:
aggregate:
- name: test2
- name: test3
- name: test4
- name: test5
state: absent
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device. **Sample:** ['vrf definition test', 'rd 1:100', 'interface Ethernet1', 'vrf forwarding test'] |
### Authors
* Ricardo Carrillo Cruz (@rcarrillocruz)
arista.eos.eos\_lldp\_global – LLDP resource module
===================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lldp_global`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages Global Link Layer Discovery Protocol (LLDP) settings on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | The provided global LLDP configuration. |
| | **holdtime** integer | | Specifies the holdtime (in sec) to be sent in packets. |
| | **reinit** integer | | Specifies the delay (in sec) for LLDP initialization on any interface. |
| | **timer** integer | | Specifies the rate at which LLDP packets are sent (in sec). |
| | **tlv\_select** dictionary | | Specifies the LLDP TLVs to enable or disable. |
| | | **link\_aggregation** boolean | **Choices:*** no
* yes
| Enable or disable link aggregation TLV. |
| | | **management\_address** boolean | **Choices:*** no
* yes
| Enable or disable management address TLV. |
| | | **max\_frame\_size** boolean | **Choices:*** no
* yes
| Enable or disable maximum frame size TLV. |
| | | **port\_description** boolean | **Choices:*** no
* yes
| Enable or disable port description TLV. |
| | | **system\_capabilities** boolean | **Choices:*** no
* yes
| Enable or disable system capabilities TLV. |
| | | **system\_description** boolean | **Choices:*** no
* yes
| Enable or disable system description TLV. |
| | | **system\_name** boolean | **Choices:*** no
* yes
| Enable or disable system name TLV. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section lldp**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* deleted
* rendered
* gathered
* parsed
| The state of the configuration after module completion. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
#
# ------------
# Before State
# ------------
#
# veos# show run | section lldp
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select system-description
- name: Merge provided LLDP configuration with the existing configuration
arista.eos.eos_lldp_global:
config:
holdtime: 100
tlv_select:
management_address: false
port_description: false
system_description: true
state: merged
# -----------
# After state
# -----------
#
# veos# show run | section lldp
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select port-description
# Using replaced
#
# ------------
# Before State
# ------------
#
# veos# show run | section lldp
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select system-description
- name: Replace existing LLDP device configuration with provided configuration
arista.eos.eos_lldp_global:
config:
holdtime: 100
tlv_select:
management_address: false
port_description: false
system_description: true
state: replaced
# -----------
# After state
# -----------
#
# veos# show run | section lldp
# lldp holdtime 100
# no lldp tlv-select management-address
# no lldp tlv-select port-description
# Using deleted
#
# ------------
# Before State
# ------------
#
# veos# show run | section lldp
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select system-description
- name: Delete existing LLDP configurations from the device
arista.eos.eos_lldp_global:
state: deleted
# -----------
# After state
# -----------
#
# veos# show run | section ^lldp
# Using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_lldp_global:
config:
holdtime: 100
tlv_select:
management_address: false
port_description: false
system_description: true
state: rendered
# -----------
# Output
# -----------
#
# rendered:
# - "lldp holdtime 100"
# - "no lldp tlv-select management-address"
# - "no lldp tlv-select port-description"
# Using parsed
# parsed.cfg
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select system-description
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_lldp_global:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# -----------
# Output
# -----------
# parsed:
# holdtime: 100
#   timer: 3000
#   reinit: 5
# tlv_select:
# management_address: False
# port_description: False
# system_description: True
# Using gathered:
# native config:
# lldp timer 3000
# lldp holdtime 100
# lldp reinit 5
# no lldp tlv-select management-address
# no lldp tlv-select system-description
- name: Gather lldp_global facts from the device
  arista.eos.eos_lldp_global:
state: gathered
# -----------
# Output
# -----------
# gathered:
# holdtime: 100
#   timer: 3000
#   reinit: 5
# tlv_select:
# management_address: False
# port_description: False
# system_description: True
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** dictionary | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** dictionary | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['lldp holdtime 100', 'no lldp timer', 'lldp tlv-select system-description'] |
### Authors
* Nathaniel Case (@Qalthos)
arista.eos.eos\_static\_route – (deprecated, removed after 2022-06-01) Manage static IP routes on Arista EOS network devices
============================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_static_route`.
New in version 1.0.0 of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2022-06-01
Why
Updated modules with more functionality
Alternative
eos\_static\_routes
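Migrating to the alternative is mostly a matter of nesting the same values into its `config` structure. A minimal sketch of the first example below rewritten for `eos_static_routes`; the key names are taken from memory of that module's argspec and should be confirmed against its documentation page:
```
- name: Configure the same static route with eos_static_routes (sketch)
  arista.eos.eos_static_routes:
    config:
      - address_families:
          - afi: ipv4
            routes:
              - dest: 10.0.2.0/24
                next_hops:
                  # forward_router_address is assumed from the eos_static_routes argspec
                  - forward_router_address: 10.8.38.1
                    admin_distance: 2
    state: merged
```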
Synopsis
--------
* This module provides declarative management of static IP routes on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **address** string | | Network address with prefix of the static route.
aliases: prefix |
| **admin\_distance** integer | **Default:**1 | Admin distance of the static route. |
| **aggregate** list / elements=dictionary | | List of static route definitions |
| | **address** string / required | | Network address with prefix of the static route.
aliases: prefix |
| | **admin\_distance** integer | | Admin distance of the static route. |
| | **next\_hop** string | | Next hop IP of the static route. |
| | **state** string | **Choices:*** present
* absent
| State of the static route configuration. |
| | **vrf** string | **Default:**"default" | VRF for static route. |
| **next\_hop** string | | Next hop IP of the static route. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. A minimal `network_cli` inventory sketch follows this table. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
| State of the static route configuration. |
| **vrf** string | **Default:**"default" | VRF for static route. |
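As flagged in the `provider` entry above, connection details belong in inventory rather than in the task. A minimal sketch of the recommended `network_cli` setup; the file name, group name and credential variables are placeholders:
```
# group_vars/eos.yml (illustrative)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_user: admin
ansible_password: "{{ vault_eos_password }}"  # assumed to be supplied via Ansible Vault
ansible_become: true
ansible_become_method: enable
```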
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure static route
arista.eos.eos_static_route:
address: 10.0.2.0/24
next_hop: 10.8.38.1
admin_distance: 2
- name: delete static route
arista.eos.eos_static_route:
address: 10.0.2.0/24
next_hop: 10.8.38.1
state: absent
- name: configure static routes using aggregate
arista.eos.eos_static_route:
aggregate:
- {address: 10.0.1.0/24, next_hop: 10.8.38.1}
- {address: 10.0.3.0/24, next_hop: 10.8.38.1}
- name: Delete static route using aggregate
arista.eos.eos_static_route:
aggregate:
- {address: 10.0.1.0/24, next_hop: 10.8.38.1}
- {address: 10.0.3.0/24, next_hop: 10.8.38.1}
state: absent
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['ip route 10.0.2.0/24 10.8.38.1 3', 'no ip route 10.0.2.0/24 10.8.38.1'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Trishna Guha (@trishnaguha)
arista.eos.eos\_logging – Manage logging on network devices
===========================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_logging`.
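To keep behaviour matching this page, the collection version can be pinned with a standard `requirements.yml`; this is generic `ansible-galaxy` usage, not anything specific to this module:
```
# requirements.yml
collections:
  - name: arista.eos
    version: "2.2.0"
```
Install it with `ansible-galaxy collection install -r requirements.yml`.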
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of logging on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of logging definitions. |
| | **dest** string | **Choices:*** on
* host
* console
* monitor
* buffered
| Destination of the logs. |
| | **facility** string | | Set logging facility. |
| | **level** string | **Choices:*** emergencies
* alerts
* critical
* errors
* warnings
* notifications
* informational
* debugging
| Set logging severity levels. |
| | **name** string | | The hostname or IP address of the destination. Required when *dest=host*. |
| | **size** integer | | Size of buffer. The acceptable value is in the range 10 to 2147483647 bytes. |
| | **state** string | **Choices:*** **present** ←
* absent
| State of the logging configuration. |
| **dest** string | **Choices:*** on
* host
* console
* monitor
* buffered
| Destination of the logs. |
| **facility** string | | Set logging facility. |
| **level** string | **Choices:*** emergencies
* alerts
* critical
* errors
* warnings
* notifications
* informational
* debugging
| Set logging severity levels. |
| **name** string | | The hostname or IP address of the destination. Required when *dest=host*. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **size** integer | | Size of buffer. The acceptable value is in the range 10 to 2147483647 bytes. |
| **state** string | **Choices:*** **present** ←
* absent
| State of the logging configuration. |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure host logging
arista.eos.eos_logging:
dest: host
name: 172.16.0.1
state: present
- name: remove host logging configuration
arista.eos.eos_logging:
dest: host
name: 172.16.0.1
state: absent
- name: configure console logging level and facility
arista.eos.eos_logging:
dest: console
facility: local7
level: debugging
state: present
- name: enable logging to all
arista.eos.eos_logging:
dest: on
- name: configure buffer size
arista.eos.eos_logging:
dest: buffered
size: 5000
- name: Configure logging using aggregate
arista.eos.eos_logging:
aggregate:
- {dest: console, level: warnings}
- {dest: buffered, size: 480000}
state: present
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['logging facility local7', 'logging host 172.16.0.1'] |
### Authors
* Trishna Guha (@trishnaguha)
arista.eos.eos\_facts – Collect facts from remote devices running Arista EOS
============================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_facts`.
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Collects facts from Arista devices running the EOS operating system. This module places the facts gathered in the fact tree keyed by the respective resource name. The facts module will always collect a base set of facts from the device and can enable or disable collection of additional facts.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **available\_network\_resources** boolean | **Choices:*** **no** ←
* yes
| When `yes`, a list of network resources for which resource modules are available is returned (a short sketch follows the examples below). |
| **gather\_network\_resources** list / elements=string | | When supplied, this argument restricts the facts collected to a given subset. Possible values for this argument include all and resource names such as interfaces, vlans etc. A list of values can be specified to include a larger subset. Values can also be prefixed with `!` to specify that a specific subset should not be collected. Valid subsets are 'all', 'interfaces', 'l2\_interfaces', 'l3\_interfaces', 'lacp', 'lacp\_interfaces', 'lag\_interfaces', 'lldp\_global', 'lldp\_interfaces', 'vlans', 'acls'. |
| **gather\_subset** list / elements=string | **Default:**"!config" | When supplied, this argument restricts the facts collected to a given subset. Possible values for this argument include all, hardware, config, and interfaces. A list of values can be specified to include a larger subset. Values can also be prefixed with `!` to specify that a specific subset should not be collected. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
Notes
-----
Note
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Gather all legacy facts
  arista.eos.eos_facts:
gather_subset: all
- name: Gather only the config and default facts
arista.eos.eos_facts:
gather_subset:
- config
- name: Do not gather hardware facts
arista.eos.eos_facts:
gather_subset:
- '!hardware'
- name: Gather legacy and resource facts
arista.eos.eos_facts:
gather_subset: all
gather_network_resources: all
- name: Gather only the interfaces resource facts and no legacy facts
  arista.eos.eos_facts:
gather_subset:
- '!all'
- '!min'
gather_network_resources:
- interfaces
- name: Gather all resource facts and minimal legacy facts
arista.eos.eos_facts:
gather_subset: min
gather_network_resources: all
```
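The examples above do not exercise `available_network_resources`; a short sketch that surfaces the supported resource subsets (the registered variable name is arbitrary):
```
- name: Discover which resource subsets this device supports
  arista.eos.eos_facts:
    available_network_resources: true
  register: facts_result

- name: Print the full result, including the available resource list
  ansible.builtin.debug:
    var: facts_result
```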
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **ansible\_net\_all\_ipv4\_addresses** list / elements=string | when interfaces is configured | All IPv4 addresses configured on the device |
| **ansible\_net\_all\_ipv6\_addresses** list / elements=string | when interfaces is configured | All IPv6 addresses configured on the device |
| **ansible\_net\_api** string | always | The name of the transport |
| **ansible\_net\_config** string | when config is configured | The current active config from the device |
| **ansible\_net\_filesystems** list / elements=string | when hardware is configured | All file system names available on the device |
| **ansible\_net\_fqdn** string | always | The fully qualified domain name of the device |
| **ansible\_net\_gather\_network\_resources** list / elements=string | when the resource is configured | The list of facts for network resource subsets collected from the device |
| **ansible\_net\_gather\_subset** list / elements=string | always | The list of fact subsets collected from the device |
| **ansible\_net\_hostname** string | always | The configured hostname of the device |
| **ansible\_net\_image** string | always | The image file the device is running |
| **ansible\_net\_interfaces** dictionary | when interfaces is configured | A hash of all interfaces running on the system |
| **ansible\_net\_memfree\_mb** integer | when hardware is configured | The available free memory on the remote device in Mb |
| **ansible\_net\_memtotal\_mb** integer | when hardware is configured | The total memory on the remote device in Mb |
| **ansible\_net\_model** string | always | The model name returned from the device |
| **ansible\_net\_neighbors** dictionary | when interfaces is configured | The list of LLDP neighbors from the remote device |
| **ansible\_net\_python\_version** string | always | The Python version Ansible controller is using |
| **ansible\_net\_serialnum** string | always | The serial number of the remote device |
| **ansible\_net\_version** string | always | The operating system version running on the remote device |
### Authors
* Peter Sprygada (@privateip)
* Nathaniel Case (@Qalthos)
arista.eos.eos\_l2\_interfaces – L2 interfaces resource module
==============================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_l2_interfaces`.
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of Layer-2 interfaces on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of Layer-2 interface options |
| | **access** dictionary | | Switchport mode access command to configure the interface as a layer 2 access. |
| | | **vlan** integer | | Configure given VLAN in access port. It's used as the access VLAN ID. |
| | **mode** string | **Choices:*** access
* trunk
| Mode in which the interface needs to be configured. Access mode is not shown in interface facts, so idempotency is not maintained for `switchport mode access`; such tasks report `changed=True` on every run. |
| | **name** string / required | | Full name of interface, e.g. Ethernet1. |
| | **trunk** dictionary | | Switchport mode trunk command to configure the interface as a Layer 2 trunk. |
| | | **native\_vlan** integer | | Native VLAN to be configured in trunk port. It is used as the trunk native VLAN ID. |
| | | **trunk\_allowed\_vlans** list / elements=string | | List of allowed VLANs in a given trunk port. These are the only VLANs that will be configured on the trunk. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^interface**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* parsed
* rendered
* gathered
| The state of the configuration after module completion |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport access vlan 20
# !
# interface Ethernet2
# switchport trunk native vlan 20
# switchport mode trunk
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
- name: Merge provided configuration with device configuration.
arista.eos.eos_l2_interfaces:
config:
- name: Ethernet1
mode: trunk
trunk:
native_vlan: 10
- name: Ethernet2
mode: access
access:
vlan: 30
state: merged
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport trunk native vlan 10
# switchport mode trunk
# !
# interface Ethernet2
# switchport access vlan 30
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
# Using replaced
# Before state:
# -------------
#
# veos2#show running-config | s int
# interface Ethernet1
# switchport access vlan 20
# !
# interface Ethernet2
# switchport trunk native vlan 20
# switchport mode trunk
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
- name: Replace device configuration of specified L2 interfaces with provided configuration.
arista.eos.eos_l2_interfaces:
config:
- name: Ethernet1
mode: trunk
trunk:
native_vlan: 20
        trunk_allowed_vlans:
        - 5-10
        - 15
state: replaced
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport trunk native vlan 20
# switchport trunk allowed vlan 5-10,15
# switchport mode trunk
# !
# interface Ethernet2
# switchport trunk native vlan 20
# switchport mode trunk
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
# Using overridden
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport access vlan 20
# !
# interface Ethernet2
# switchport trunk native vlan 20
# switchport mode trunk
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
- name: Override device configuration of all L2 interfaces on device with provided
configuration.
arista.eos.eos_l2_interfaces:
config:
- name: Ethernet2
mode: access
access:
vlan: 30
state: overridden
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# switchport access vlan 30
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
# Using deleted
# Before state:
# -------------
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport access vlan 20
# !
# interface Ethernet2
# switchport trunk native vlan 20
# switchport mode trunk
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# !
- name: Delete EOS L2 interfaces as in given arguments.
arista.eos.eos_l2_interfaces:
config:
- name: Ethernet1
- name: Ethernet2
state: deleted
# After state:
# ------------
#
# veos#show running-config | section interface
# interface Ethernet1
# !
# interface Ethernet2
# !
# interface Management1
# ip address dhcp
# ipv6 address auto-config
# using rendered
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_l2_interfaces:
config:
- name: Ethernet1
mode: trunk
trunk:
native_vlan: 10
- name: Ethernet2
mode: access
access:
vlan: 30
    state: rendered
# Output :
# ------------
#
# - "interface Ethernet1"
# - "switchport trunk native vlan 10"
# - "switchport mode trunk"
# - "interface Ethernet2"
# - "switchport access vlan 30"
# - "interface Management1"
# - "ip address dhcp"
# - "ipv6 address auto-config"
# using parsed
# parsed.cfg
# interface Ethernet1
# switchport trunk native vlan 10
# switchport mode trunk
# !
# interface Ethernet2
# switchport access vlan 30
# !
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_l2_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# parsed:
# - name: Ethernet1
# mode: trunk
# trunk:
# native_vlan: 10
# - name: Ethernet2
# mode: access
# access:
# vlan: 30
# Using gathered:
# Existing config on the device:
#
# veos#show running-config | section interface
# interface Ethernet1
# switchport trunk native vlan 10
# switchport mode trunk
# !
# interface Ethernet2
# switchport access vlan 30
# !
- name: Gather interfaces facts from the device
  arista.eos.eos_l2_interfaces:
state: gathered
# output:
# gathered:
# - name: Ethernet1
# mode: trunk
# trunk:
# native_vlan: 10
# - name: Ethernet2
# mode: access
# access:
# vlan: 30
```
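Spelled out against the parameter table, the trunk options from the `replaced` example look like this (a minimal sketch):
```
- name: Restrict a trunk to an explicit allowed-VLAN list
  arista.eos.eos_l2_interfaces:
    config:
    - name: Ethernet1
      mode: trunk
      trunk:
        native_vlan: 20
        trunk_allowed_vlans:
        - 5-10
        - 15
    state: replaced
```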
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet2', 'switchport access vlan 20'] |
### Authors
* Nathaniel Case (@qalthos)
arista.eos.eos\_lldp\_interfaces – LLDP interfaces resource module
==================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lldp_interfaces`.
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages Link Layer Discovery Protocol (LLDP) attributes of interfaces on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of LLDP interfaces options. |
| | **name** string | | Full name of the interface (e.g. Ethernet1). |
| | **receive** boolean | **Choices:*** no
* yes
| Enable/disable LLDP RX on an interface. |
| | **transmit** boolean | **Choices:*** no
* yes
| Enable/disable LLDP TX on an interface. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^interface**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* parsed
* gathered
* rendered
| The state of the configuration after module completion. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp receive
# interface Ethernet2
# no lldp transmit
- name: Merge provided configuration with running configuration
arista.eos.eos_lldp_interfaces:
config:
- name: Ethernet1
transmit: false
- name: Ethernet2
transmit: false
state: merged
#
# ------------
# After state
# ------------
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp transmit
# no lldp receive
# interface Ethernet2
# no lldp transmit
# Using replaced
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp receive
# interface Ethernet2
# no lldp transmit
- name: Replace existing LLDP configuration of specified interfaces with provided
configuration
arista.eos.eos_lldp_interfaces:
config:
- name: Ethernet1
transmit: false
state: replaced
#
# ------------
# After state
# ------------
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp transmit
# interface Ethernet2
# no lldp transmit
# Using overridden
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp receive
# interface Ethernet2
# no lldp transmit
- name: Override the LLDP configuration of all the interfaces with provided configuration
arista.eos.eos_lldp_interfaces:
config:
- name: Ethernet1
transmit: false
state: overridden
#
# ------------
# After state
# ------------
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp transmit
# interface Ethernet2
# Using deleted
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# no lldp receive
# interface Ethernet2
# no lldp transmit
- name: Delete LLDP configuration of specified interfaces (or all interfaces if none
are specified)
arista.eos.eos_lldp_interfaces:
state: deleted
#
# ------------
# After state
# ------------
#
# veos#show run | section ^interface
# interface Ethernet1
# interface Ethernet2
# using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_lldp_interfaces:
config:
- name: Ethernet1
transmit: false
- name: Ethernet2
transmit: false
state: rendered
#
# ------------
# Output
# ------------
#
# interface Ethernet1
# no lldp transmit
# interface Ethernet2
# no lldp transmit
# Using parsed
# parsed.cfg
# interface Ethernet1
# no lldp transmit
# interface Ethernet2
# no lldp transmit
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_lldp_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# ------------
# Output
# ------------
# parsed:
# - name: Ethernet1
# transmit: False
# - name: Ethernet2
# transmit: False
# Using gathered:
# native config:
# interface Ethernet1
# no lldp transmit
# interface Ethernet2
# no lldp transmit
- name: Gather lldp interfaces facts from the device
  arista.eos.eos_lldp_interfaces:
state: gathered
# ------------
# Output
# ------------
# gathered:
# - name: Ethernet1
# transmit: False
# - name: Ethernet2
# transmit: False
```
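The examples above only toggle `transmit`; `receive` takes the same values, and combining both silences LLDP on an interface entirely. A minimal sketch:
```
- name: Disable LLDP in both directions on one interface
  arista.eos.eos_lldp_interfaces:
    config:
    - name: Ethernet3
      receive: false
      transmit: false
    state: merged
```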
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet1', 'no lldp transmit'] |
### Authors
* Nathaniel Case (@Qalthos)
arista.eos.eos\_bgp\_address\_family – Manages BGP address family resource module
=================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_bgp_address_family`.
New in version 1.4.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* This module configures and manages the attributes of BGP AF on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | Configurations for BGP address family. |
| | **address\_family** list / elements=dictionary | | Enable address family and enter its config mode |
| | | **afi** string | **Choices:*** ipv4
* ipv6
* evpn
| address family. |
| | | **bgp\_params** dictionary | | BGP parameters. |
| | | | **additional\_paths** string | **Choices:*** install
* send
* receive
| BGP additional-paths commands |
| | | | **next\_hop\_address\_family** string | **Choices:*** ipv6
| Next-hop address-family configuration |
| | | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | | **redistribute\_internal** boolean | **Choices:*** no
* yes
| Redistribute internal BGP routes. |
| | | | **route** string | | Configure route-map for route installation. |
| | | **graceful\_restart** boolean | **Choices:*** no
* yes
| Enable graceful restart mode. |
| | | **neighbor** list / elements=dictionary | | Configure BGP neighbors for the address family. |
| | | | **activate** boolean | **Choices:*** no
* yes
| Activate neighbor in the address family. |
| | | | **additional\_paths** string | **Choices:*** send
* receive
| BGP additional-paths commands. |
| | | | **default\_originate** dictionary | | Originate default route to this neighbor. |
| | | | | **always** boolean | **Choices:*** no
* yes
| Always originate default route to this neighbor. |
| | | | | **route\_map** string | | Route map reference. |
| | | | **encapsulation** dictionary | | Default transport encapsulation for neighbor. Applicable for evpn address-family. |
| | | | | **source\_interface** string | | Source interface to update BGP next hop address. Applicable for mpls transport. |
| | | | | **transport** string | **Choices:*** mpls
* vxlan
| MPLS/VXLAN transport. |
| | | | **graceful\_restart** boolean | **Choices:*** no
* yes
| Enable graceful restart mode. |
| | | | **next\_hop\_address\_family** string | **Choices:*** ipv6
| Next-hop address-family configuration |
| | | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | | **peer** string | | Neighbor address/ peer-group name. |
| | | | **prefix\_list** dictionary | | Prefix list reference. |
| | | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound prefix-list. |
| | | | | **name** string | | prefix list name. |
| | | | **route\_map** dictionary | | Route map reference. |
| | | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound route-map. |
| | | | | **name** string | | Route map name. |
| | | | **weight** integer | | Weight to assign. |
| | | **network** list / elements=dictionary | | Configure routing for a network. |
| | | | **address** string | | network address. |
| | | | **route\_map** string | | Route map reference. |
| | | **redistribute** list / elements=dictionary | | Redistribute routes in to BGP. |
| | | | **isis\_level** string | **Choices:*** level-1
* level-2
* level-1-2
| Applicable for isis routes. Specify isis route level. |
| | | | **ospf\_route** string | **Choices:*** internal
* external
* nssa\_external\_1
* nssa\_external\_2
| ospf route options. |
| | | | **protocol** string | **Choices:*** isis
* ospf3
* dhcp
| Routes to be redistributed. |
| | | | **route\_map** string | | Route map reference. |
| | | **route\_target** dictionary | | Route target |
| | | | **mode** string | **Choices:*** both
* import
* export
| route import or route export. |
| | | | **target** string | | route target |
| | | **safi** string | **Choices:*** labeled-unicast
* multicast
| Address family type for ipv4. |
| | | **vrf** string | | name of the VRF in which BGP will be configured. |
| | **as\_number** string | | Autonomous system number. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section bgp**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec, and the value is then returned in the *parsed* key within the result. A short sketch follows this table. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
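As noted in the `running_config` entry above, offline configuration can be converted to structured data with the *parsed* state; a minimal sketch, with an illustrative file name:
```
- name: Convert an offline BGP configuration to structured data
  arista.eos.eos_bgp_address_family:
    running_config: "{{ lookup('file', 'parsed.cfg') }}"
    state: parsed
```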
Notes
-----
Note
* Tested against Arista EOS 4.23.0F
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state
# veos(config)#show running-config | section bgp
# veos(config)#
- name: Merge provided configuration with device configuration
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv4"
redistribute:
- protocol: "ospf3"
ospf_route: "external"
network:
- address: "1.1.1.0/24"
- address: "1.5.1.0/24"
route_map: "MAP01"
- afi: "ipv6"
bgp_params:
additional_paths: "receive"
neighbor:
- peer: "peer2"
default_originate:
always: True
- afi: "ipv6"
redistribute:
- protocol: "isis"
isis_level: "level-2"
route_target:
mode: "export"
target: "33:11"
vrf: "vrft"
state: merged
# After state:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# bgp additional-paths receive
# neighbor peer2 activate
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv6
# route-target export 33:11
# redistribute isis level-2
# veos(config-router-bgp)#
# Module Execution:
# "after": {
# "address_family": [
# {
# "afi": "ipv4",
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# }
# ],
# "route_target": {
# "mode": "export",
# "target": "33:11"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "before": {},
# "changed": true,
# "commands": [
# "router bgp 10",
# "address-family ipv4",
# "redistribute ospf3 match external",
# "network 1.1.1.0/24",
# "network 1.5.1.0/24 route-map MAP01",
# "exit",
# "address-family ipv6",
# "neighbor peer2 default-originate always",
# "bgp additional-paths receive",
# "exit",
# "vrf vrft",
# "address-family ipv6",
# "redistribute isis level-2",
# "route-target export 33:11",
# "exit",
# "exit"
# ],
# Using replaced:
# Before State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# bgp additional-paths receive
# neighbor peer2 activate
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv6
# route-target export 33:11
# redistribute isis level-2
# veos(config-router-bgp)#
#
- name: Replace
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv6"
vrf: "vrft"
redistribute:
- protocol: "ospf3"
ospf_route: "external"
- afi: "ipv6"
redistribute:
- protocol: "isis"
isis_level: "level-2"
state: replaced
# After State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# neighbor peer2 default-originate always
# redistribute isis level-2
# !
# vrf vrft
# address-family ipv6
# redistribute ospf3 match external
# veos(config-router-bgp)#
#
#
# # Module Execution:
#
# "after": {
# "address_family": [
# {
# "afi": "ipv4",
# "neighbor": [
# {
# "activate": true,
# "peer": "1.1.1.1"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ],
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "before": {
# "address_family": [
# {
# "afi": "ipv4",
# "neighbor": [
# {
# "activate": true,
# "peer": "1.1.1.1"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "activate": true,
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# }
# ],
# "route_target": {
# "mode": "export",
# "target": "33:11"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "changed": true,
# "commands": [
# "router bgp 10",
# "vrf vrft",
# "address-family ipv6",
# "redistribute ospf3 match external",
# "no redistribute isis level-2",
# "no route-target export 33:11",
# "exit",
# "exit",
# "address-family ipv6",
# "redistribute isis level-2",
# "no neighbor peer2 activate",
# "no bgp additional-paths receive",
# "exit"
# ],
# Using overridden (overriding af at global context):
# Before state:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# neighbor peer2 default-originate always
# redistribute isis level-2
# !
# vrf vrft
# address-family ipv6
# redistribute ospf3 match external
# veos(config-router-bgp)#
- name: Overridden
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv4"
bgp_params:
additional_paths: "receive"
neighbor:
- peer: "peer2"
default_originate:
always: True
state: overridden
# After State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv6
# redistribute ospf3 match external
# veos(config-router-bgp)#
#
# Module Execution:
#
# "after": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ],
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "before": {
# "address_family": [
# {
# "afi": "ipv4",
# "neighbor": [
# {
# "activate": true,
# "peer": "1.1.1.1"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ],
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "changed": true,
# "commands": [
# "router bgp 10",
# "address-family ipv4",
# "no redistribute ospf3 match external",
# "no network 1.1.1.0/24",
# "no network 1.5.1.0/24 route-map MAP01",
# "neighbor peer2 default-originate always",
# "no neighbor 1.1.1.1 activate",
# "bgp additional-paths receive",
# "exit",
# "no address-family ipv6"
# ],
# Using overridden (overriding af in vrf context):
# Before State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# no neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# bgp additional-paths receive
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv6
# route-target export 33:11
# redistribute isis level-2
# redistribute ospf3 match external
# veos(config-router-bgp)#
- name: Overridden
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv4"
bgp_params:
additional_paths: "receive"
neighbor:
- peer: "peer2"
default_originate:
always: True
vrf: vrft
state: overridden
# After State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# bgp additional-paths receive
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv4
# bgp additional-paths receive
# veos(config-router-bgp)#
#
# Module Execution:
#
# "after": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "before": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# },
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ],
# "route_target": {
# "mode": "export",
# "target": "33:11"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "changed": true,
# "commands": [
# "router bgp 10",
# "vrf vrft",
# "address-family ipv4",
# "neighbor peer2 default-originate always",
# "bgp additional-paths receive",
# "exit",
# "exit",
# " vrf vrft",
# "no address-family ipv6"
# ],
# Using Deleted:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# no neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# address-family ipv6
# bgp additional-paths receive
# neighbor peer2 default-originate always
# !
# vrf vrft
# address-family ipv4
# bgp additional-paths receive
# veos(config-router-bgp)#
- name: Delete
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv6"
vrf: "vrft"
- afi: "ipv6"
state: deleted
# After State:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# no neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# vrf vrft
# address-family ipv4
# bgp additional-paths receive
# veos(config-router-bgp)#
#
# Module Execution:
#
# "after": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# "before": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ]
# },
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# Using parsed:
# parsed_bgp_address_family.cfg:
# router bgp 10
# neighbor n2 peer-group
# neighbor n2 next-hop-unchanged
# neighbor n2 maximum-routes 12000
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# redistribute ospf3 match external
# !
# address-family ipv6
# no bgp additional-paths receive
# neighbor n2 next-hop-unchanged
# redistribute isis level-2
# !
# vrf bgp_10
# ip access-group acl01
# ucmp fec threshold trigger 33 clear 22 warning-only
# !
# address-family ipv4
# route-target import 20:11
# !
# vrf vrft
# address-family ipv4
# bgp additional-paths receive
# !
# address-family ipv6
# redistribute ospf3 match external
- name: parse configs
arista.eos.eos_bgp_address_family:
running_config: "{{ lookup('file', './parsed_bgp_address_family.cfg') }}"
state: parsed
# Module Execution:
# "parsed": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv6",
# "neighbor": [
# {
# "next_hop_unchanged": true,
# "peer": "n2"
# }
# ],
# "redistribute": [
# {
# "isis_level": "level-2",
# "protocol": "isis"
# }
# ]
# },
# {
# "afi": "ipv4",
# "route_target": {
# "mode": "import",
# "target": "20:11"
# },
# "vrf": "bgp_10"
# },
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "vrf": "vrft"
# },
# {
# "afi": "ipv6",
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ],
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# }
# }
# Using gathered:
# Device config:
# veos(config-router-bgp)#show running-config | section bgp
# router bgp 10
# neighbor peer2 peer-group
# neighbor peer2 maximum-routes 12000
# neighbor 1.1.1.1 maximum-routes 12000
# !
# address-family ipv4
# bgp additional-paths receive
# neighbor peer2 default-originate always
# no neighbor 1.1.1.1 activate
# network 1.1.1.0/24
# network 1.5.1.0/24 route-map MAP01
# redistribute ospf3 match external
# !
# vrf vrft
# address-family ipv4
# bgp additional-paths receive
# veos(config-router-bgp)#
- name: gather configs
arista.eos.eos_bgp_address_family:
state: gathered
# Module Execution:
# "gathered": {
# "address_family": [
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "neighbor": [
# {
# "default_originate": {
# "always": true
# },
# "peer": "peer2"
# }
# ],
# "network": [
# {
# "address": "1.1.1.0/24"
# },
# {
# "address": "1.5.1.0/24",
# "route_map": "MAP01"
# }
# ],
# "redistribute": [
# {
# "ospf_route": "external",
# "protocol": "ospf3"
# }
# ]
# },
# {
# "afi": "ipv4",
# "bgp_params": {
# "additional_paths": "receive"
# },
# "vrf": "vrft"
# }
# ],
# "as_number": "10"
# },
# Using rendered:
- name: Render
arista.eos.eos_bgp_address_family:
config:
as_number: "10"
address_family:
- afi: "ipv4"
redistribute:
- protocol: "ospf3"
ospf_route: "external"
network:
- address: "1.1.1.0/24"
- address: "1.5.1.0/24"
route_map: "MAP01"
- afi: "ipv6"
bgp_params:
additional_paths: "receive"
neighbor:
- peer: "peer2"
default_originate:
always: True
- afi: "ipv6"
redistribute:
- protocol: "isis"
isis_level: "level-2"
route_target:
mode: "export"
target: "33:11"
vrf: "vrft"
state: rendered
# Module Execution:
# "rendered": [
# "router bgp 10",
# "address-family ipv4",
# "redistribute ospf3 match external",
# "network 1.1.1.0/24",
# "network 1.5.1.0/24 route-map MAP01",
# "exit",
# "address-family ipv6",
# "neighbor peer2 default-originate always",
# "bgp additional-paths receive",
# "exit",
# "vrf vrft",
# "address-family ipv6",
# "redistribute isis level-2",
# "route-target export 33:11",
# "exit",
# "exit"
# ]
#
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_lldp – Manage LLDP configuration on Arista EOS network devices
==============================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lldp`.
New in version 1.0.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of LLDP service on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details (see the connection-variables sketch after this table). |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
* enabled
* disabled
| State of the LLDP configuration. If the value is *present*, LLDP will be enabled; if it is *absent*, it will be disabled. |
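As the deprecation note in the `provider` row suggests, connection details are better supplied as inventory or group variables. A minimal sketch, assuming a hypothetical `group_vars/eos.yml` and the standard `ansible.netcommon` connection plugins:

```
# group_vars/eos.yml -- hypothetical file, shown for illustration only
ansible_connection: ansible.netcommon.network_cli   # or ansible.netcommon.httpapi for eAPI
ansible_network_os: arista.eos.eos
ansible_user: admin                                 # assumed credential, replace as appropriate
ansible_password: "{{ vault_eos_password }}"        # assumes a vaulted variable exists
ansible_become: true                                # enter privileged mode, replacing provider.authorize
ansible_become_method: enable
```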
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Enable LLDP service
arista.eos.eos_lldp:
state: present
- name: Disable LLDP service
arista.eos.eos_lldp:
state: absent
```
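The `enabled` and `disabled` choices are also accepted; a minimal sketch, assuming they toggle the global LLDP service the same way `present` and `absent` do:

```
- name: Enable LLDP service
  arista.eos.eos_lldp:
    state: enabled

- name: Disable LLDP service
  arista.eos.eos_lldp:
    state: disabled
```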
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always, except for the platforms that use Netconf transport to manage the device. | The list of configuration mode commands to send to the device **Sample:** ['lldp run'] |
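Because `commands` is returned on each run, it can be registered and inspected, for example to preview changes in check mode. A short sketch (the `lldp_result` variable name is arbitrary):

```
- name: Enable LLDP and capture the pushed commands
  arista.eos.eos_lldp:
    state: present
  register: lldp_result

- name: Show the configuration commands sent to the device
  ansible.builtin.debug:
    var: lldp_result.commands
```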
### Authors
* Ganesh Nalawade (@ganeshrn)
arista.eos.eos\_route\_maps – Manages Route Maps resource module
================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_route_maps`.
New in version 2.1.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* This module configures and manages the attributes of route maps on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of route-map options |
| | **entries** list / elements=dictionary | | Route Map entries. |
| | | **action** string | **Choices:*** deny
* permit
| Action for matching routes |
| | | **continue\_sequence** integer | | Route map entry sequence number. |
| | | **description** string | | Description for the route map. |
| | | **match** dictionary | | Route map match rules. |
| | | | **aggregate\_role** dictionary | | Role in BGP contributor-aggregate relation. |
| | | | | **contributor** boolean | **Choices:*** no
* yes
| BGP aggregate's contributor. |
| | | | | **route\_map** string | | Route map to apply against the aggregate route. |
| | | | **as** integer | | BGP AS number. |
| | | | **as\_path** dictionary | | Set as-path. |
| | | | | **length** string | | Specify as-path length (with comparison operators like <= 60 and >= 40). |
| | | | | **path\_list** string | | AS path list name. |
| | | | **community** dictionary | | BGP community attribute. |
| | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | | **instances** string | | Match number of community instances (with comparison operators like <= 60 and >= 40). |
| | | | **extcommunity** dictionary | | extended community list name. |
| | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | **interface** string | | interface name. |
| | | | **invert\_result** dictionary | | Invert match result. |
| | | | | **aggregate\_role** dictionary | | Role in BGP contributor-aggregate relation. |
| | | | | | **contributor** boolean | **Choices:*** no
* yes
| BGP aggregate's contributor. |
| | | | | | **route\_map** string | | Route map to apply against the aggregate route. |
| | | | | **as\_path** dictionary | | Set as-path. |
| | | | | | **length** string | | Specify as-path length (with comparison operators like <= 60 and >= 40). |
| | | | | | **path\_list** string | | AS path list name. |
| | | | | **community** dictionary | | BGP community attribute. |
| | | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | | | **instances** string | | Match number of community instances (with comparison operators like <= 60 and >= 40). |
| | | | | **extcommunity** dictionary | | extended community list name. |
| | | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | | **large\_community** dictionary | | extended community list name. |
| | | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | **ip** dictionary | | Set IP specific information. |
| | | | | **address** dictionary | | next hop destination. |
| | | | | | **access\_list** string | | ip access-list. |
| | | | | | **dynamic** boolean | **Choices:*** no
* yes
| Configure dynamic prefix-list. |
| | | | | | **prefix\_list** string | | Prefix list. |
| | | | | **next\_hop** string | | next hop prefix list. |
| | | | | **resolved\_next\_hop** string | | Route resolved prefix list. |
| | | | **ipv6** dictionary | | Set IPv6 specific information. |
| | | | | **address** dictionary | | next hop destination. |
| | | | | | **access\_list** string | | ip access-list. |
| | | | | | **dynamic** boolean | **Choices:*** no
* yes
| Configure dynamic prefix-list. |
| | | | | | **prefix\_list** string | | Prefix list. |
| | | | | **next\_hop** string | | next hop prefix list. |
| | | | | **resolved\_next\_hop** string | | Route resolved prefix list. |
| | | | **isis\_level** string | | IS-IS level. |
| | | | **large\_community** dictionary | | extended community list name. |
| | | | | **community\_list** string | | list of community names (in csv format). |
| | | | | **exact\_match** boolean | **Choices:*** no
* yes
| Do exact matching of communities. |
| | | | **local\_preference** integer | | BGP local preference. |
| | | | **metric** integer | | Route metric. |
| | | | **metric\_type** string | **Choices:*** type-1
* type-2
| Route metric type. |
| | | | **route\_type** string | | Route type |
| | | | **router\_id** string | | Router ID. |
| | | | **source\_protocol** string | | Source routing protocol. |
| | | | **tag** integer | | Route tag |
| | | **sequence** integer | | Index in the sequence. |
| | | **set** dictionary | | set route attributes. |
| | | | **as\_path** dictionary | | Set as-path. |
| | | | | **match** dictionary | | Match the entire as-path. |
| | | | | | **as\_number** string | | as number to use (includes auto;in csv format) |
| | | | | | **none** boolean | **Choices:*** no
* yes
| Remove matching AS numbers |
| | | | | **prepend** dictionary | | Prepend to the as-path. |
| | | | | | **as\_number** string | | as number to prepend (includes auto;in csv format) |
| | | | | | **last\_as** integer | | The number of times to prepend the last AS number. |
| | | | **bgp** integer | | BGP AS path multipath weight. |
| | | | **community\_attributes** dictionary | | BGP community attribute. |
| | | | | **community** dictionary | | community attributes. |
| | | | | | **additive** boolean | **Choices:*** no
* yes
| Add to existing community. |
| | | | | | **delete** boolean | **Choices:*** no
* yes
| Delete matching communities. |
| | | | | | **graceful\_shutdown** boolean | **Choices:*** no
* yes
| Gracefully shutdown. |
| | | | | | **internet** boolean | **Choices:*** no
* yes
| Internet community |
| | | | | | **list** string | | community list name. |
| | | | | | **local\_as** boolean | **Choices:*** no
* yes
| Do not send outside local AS. |
| | | | | | **no\_advertise** boolean | **Choices:*** no
* yes
| Do not advertise to any peer. |
| | | | | | **no\_export** boolean | **Choices:*** no
* yes
| Do not export to next AS. |
| | | | | | **number** string | | community number (in csv format). |
| | | | | **graceful\_shutdown** boolean | **Choices:*** no
* yes
| Graceful shutdown |
| | | | | **none** boolean | **Choices:*** no
* yes
| No community attribute. |
| | | | **distance** integer | | Set protocol independent distance. |
| | | | **evpn** boolean | **Choices:*** no
* yes
| Keep the next hop when advertising to eBGP peers. |
| | | | **extcommunity** dictionary | | BGP extended community attribute. |
| | | | | **lbw** dictionary | | Link bandwidth values. |
| | | | | | **aggregate** boolean | **Choices:*** no
* yes
| Aggregate Link Bandwidth. |
| | | | | | **divide** string | **Choices:*** equal
* ration
| Divide Link Bandwidth. |
| | | | | | **value** string | | Link Bandwidth extended community value. |
| | | | | **none** boolean | **Choices:*** no
* yes
| No attribute. |
| | | | | **rt** dictionary | | Route target extended community |
| | | | | | **additive** boolean | **Choices:*** no
* yes
| Add to the existing community. |
| | | | | | **delete** boolean | **Choices:*** no
* yes
| Delete matching communities. |
| | | | | | **vpn** string | | VPN extended community. |
| | | | | **soo** dictionary | | Site-of-Origin extended community. |
| | | | | | **additive** boolean | **Choices:*** no
* yes
| Add to the existing community. |
| | | | | | **delete** boolean | **Choices:*** no
* yes
| Delete matching communities. |
| | | | | | **vpn** string | | VPN extended community. |
| | | | **ip** dictionary | | Set IP specific information. |
| | | | | **address** string | | next hop address. |
| | | | | **peer\_address** boolean | **Choices:*** no
* yes
| Use BGP peering addr as next-hop. |
| | | | | **unchanged** boolean | **Choices:*** no
* yes
| Keep the next hop when advertising to eBGP peer |
| | | | **ipv6** dictionary | | Set IPv6 specific information. |
| | | | | **address** string | | next hop address. |
| | | | | **peer\_address** boolean | **Choices:*** no
* yes
| Use BGP peering addr as next-hop. |
| | | | | **unchanged** boolean | **Choices:*** no
* yes
| Keep the next hop when advertising to eBGP peer |
| | | | **isis\_level** string | | IS-IS level. |
| | | | **local\_preference** integer | | BGP local preference. |
| | | | **metric** dictionary | | Route metric. |
| | | | | **add** string | **Choices:*** igp-metric
* igp-nexthop-cost
| Add igp-metric / igp-nexthop-cost |
| | | | | **igp\_param** string | **Choices:*** igp-metric
* igp-nexthop-cost
| IGP parameter |
| | | | | **value** string | | Metric value to add or subtract (with +/- sign). |
| | | | **metric\_type** string | **Choices:*** type-1
* type-2
| Route metric type. |
| | | | **nexthop** dictionary | | Route next hop. |
| | | | | **max\_metric** boolean | **Choices:*** no
* yes
| Set IGP max metric value. |
| | | | | **value** integer | | IGP metric value. |
| | | | **origin** string | **Choices:*** egp
* igp
* incomplete
| Set bgp origin. |
| | | | **segment\_index** integer | | MPLS Segment-routing Segment Index. |
| | | | **tag** integer | | Route tag |
| | | | **weight** integer | | BGP weight. |
| | | **source** dictionary | | Rename/Copy configuration |
| | | | **action** string | **Choices:*** rename
* copy
| rename or copy configuration |
| | | | **overwrite** boolean | **Choices:*** no
* yes
| if True, overwrite existing config. |
| | | | **source\_map\_name** string | | Source route map name. |
| | | **statement** string | | statement name |
| | | **sub\_route\_map** dictionary | | Sub route map |
| | | | **invert\_result** boolean | **Choices:*** no
* yes
| Invert sub route map result |
| | | | **name** string | | sub route map name |
| | **route\_map** string | | Route map name. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section route-map**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
Notes
-----
Note
* Tested against Arista EOS 4.23.0F
* This module works with connection `network_cli`. See the [EOS Platform Options](eos_platform_options).
Examples
--------
```
# Using merged
# Before state
# veos#show running-config | section route-map
# veos#
- name: Merge provided configuration with device configuration
arista.eos.eos_route_maps:
config:
- route_map: "mapmerge"
entries:
- description: "merged_map"
action: "permit"
sequence: 10
match:
router_id: 22
- description: "newmap"
action: "deny"
sequence: 25
continue_sequence: 45
match:
interface: "Ethernet1"
- route_map: "mapmerge2"
entries:
- sub_route_map:
name: "mapmerge"
action: "deny"
sequence: 45
set:
metric:
value: 25
add: "igp-metric"
as_path:
prepend:
last_as: 2
match:
ipv6:
resolved_next_hop: "list1"
state: merged
# After State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# !
# route-map test permit 10
# veos#
# Module Execution:
# "after": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "router_id": "22"
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# "before": {},
# "changed": true,
# "commands": [
# "route-map mapmerge permit 10",
# "match router-id prefix-list 22",
# "description merged_map",
# "route-map mapmerge deny 25",
# "match interface Ethernet1",
# "description newmap",
# "continue 45",
# "route-map mapmerge2 deny 45",
# "match ipv6 resolved-next-hop prefix-list list1",
# "set metric 25 +igp-metric",
# "set as-path prepend last-as 2",
# "sub-route-map mapmerge"
# ],
#
# Using replaced:
# Before State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# !
# veos#
- name: Replace
arista.eos.eos_route_maps:
config:
- route_map: "mapmerge"
entries:
- action: "permit"
sequence: 10
match:
ipv6:
resolved_next_hop: "listr"
- action: "deny"
sequence: 90
set:
extcommunity:
rt:
vpn: "22:11"
delete: True
ip:
unchanged: True
state: replaced
# After State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# match ipv6 resolved-next-hop prefix-list listr
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge deny 90
# set ip next-hop unchanged
# set extcommunity rt 22:11 delete
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# !
#
# Module Execution:
#
# "after": [
# {
# "entries": [
# {
# "action": "permit",
# "match": {
# "ipv6": {
# "resolved_next_hop": "listr"
# }
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# },
# {
# "action": "deny",
# "sequence": 90,
# "set": {
# "extcommunity": {
# "rt": {
# "delete": true,
# "vpn": "22:11"
# }
# },
# "ip": {
# "unchanged": true
# }
# }
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# },
# {
# "entries": [
# {
# "action": "permit",
# "sequence": 10
# }
# ],
# "route_map": "test"
# }
# ],
# "before": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "router_id": "22"
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# "changed": true,
# "commands": [
# "route-map mapmerge permit 10",
# "match ipv6 resolved-next-hop prefix-list listr",
# "no match router-id prefix-list 22",
# "no description",
# "route-map mapmerge deny 90",
# "set extcommunity rt 22:11 delete",
# "set ip next-hop unchanged"
# ],
#
#
# Using Overridden:
# Before state:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# match ipv6 resolved-next-hop prefix-list listr
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge deny 90
# set ip next-hop unchanged
# set extcommunity rt 22:11 delete
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# !
# route-map test permit 10
# veos#
- name: Override
arista.eos.eos_route_maps:
config:
- route_map: "mapmerge"
entries:
- action: "permit"
sequence: 10
match:
ipv6:
resolved_next_hop: "listr"
- action: "deny"
sequence: 90
set:
metric:
igp_param: "igp-nexthop-cost"
state: overridden
# After State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# match ipv6 resolved-next-hop prefix-list listr
# !
# route-map mapmerge deny 90
# set metric igp-nexthop-cost
# veos#
#
#
# "after": [
# {
# "entries": [
# {
# "action": "permit",
# "match": {
# "ipv6": {
# "resolved_next_hop": "listr"
# }
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "sequence": 90,
# "set": {
# "metric": {
# "igp_param": "igp-nexthop-cost"
# }
# }
# }
# ],
# "route_map": "mapmerge"
# }
# ],
# "before": [
# {
# "entries": [
# {
# "action": "permit",
# "match": {
# "ipv6": {
# "resolved_next_hop": "listr"
# }
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# },
# {
# "action": "deny",
# "sequence": 90,
# "set": {
# "extcommunity": {
# "rt": {
# "delete": true,
# "vpn": "22:11"
# }
# },
# "ip": {
# "unchanged": true
# }
# }
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# },
# {
# "entries": [
# {
# "action": "permit",
# "sequence": 10
# }
# ],
# "route_map": "test"
# }
# ],
# "changed": true,
# "commands": [
# "no route-map mapmerge deny 25",
# "no route-map mapmerge2 deny 45",
# "no route-map test permit 10",
# "route-map mapmerge deny 90",
# "set metric igp-nexthop-cost",
# "no set ip next-hop unchanged",
# "no set extcommunity rt 22:11 delete"
# ],
#
# Using deleted:
# Before State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# match ipv6 resolved-next-hop prefix-list listr
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge deny 90
# set metric igp-nexthop-cost
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# veos#
- name: Delete route-map
arista.eos.eos_route_maps:
config:
- route_map: "mapmerge"
state: deleted
become: yes
tags:
- deleted1
# After State:
# veos#show running-config | section route-map
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# veos#
#
# Module Execution:
#
# "after": [
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# "before": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "ipv6": {
# "resolved_next_hop": "listr"
# },
# "router_id": "22"
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# },
# {
# "action": "deny",
# "sequence": 90,
# "set": {
# "metric": {
# "igp_param": "igp-nexthop-cost"
# }
# }
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# "changed": true,
# "commands": [
# "no route-map mapmerge"
# ],
# Using deleted to delete all route-maps:
# Before State:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# veos#
- name: Delete all route-maps
arista.eos.eos_route_maps:
state: deleted
# After State:
# veos#show running-config | section route-map
# veos#
#
# Module Execution:
#
# "after": {},
# "before": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "router_id": "22"
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# "changed": true,
# "commands": [
# "no route-map mapmerge",
# "no route-map mapmerge2"
# ],
# Using gathered:
# Device configs:
# veos#show running-config | section route-map
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
# veos#
- name: gather configs
arista.eos.eos_route_maps:
state: gathered
# Module Execution:
# "gathered": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "router_id": "22"
# },
# "sequence": 10
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ],
# Using rendered:
- name: Render provided configuration
arista.eos.eos_route_maps:
config:
- route_map: "mapmerge"
entries:
- description: "merged_map"
action: "permit"
sequence: 10
match:
router_id: 22
set:
bgp: 20
- description: "newmap"
action: "deny"
sequence: 25
continue_sequence: 45
match:
interface: "Ethernet1"
- route_map: "mapmerge2"
entries:
- sub_route_map:
name: "mapmerge"
action: "deny"
sequence: 45
set:
metric:
value: 25
add: "igp-metric"
as_path:
prepend:
last_as: 2
match:
ipv6:
resolved_next_hop: "list1"
state: rendered
# Module Execution:
# "rendered": [
# "route-map mapmerge permit 10",
# "match router-id prefix-list 22",
# "set bgp bestpath as-path weight 20",
# "description merged_map",
# "route-map mapmerge deny 25",
# "match interface Ethernet1",
# "description newmap",
# "continue 45",
# "route-map mapmerge2 deny 45",
# "match ipv6 resolved-next-hop prefix-list list1",
# "set metric 25 +igp-metric",
# "set as-path prepend last-as 2",
# "sub-route-map mapmerge"
# ]
# Using parsed:
# parsed.cfg
# route-map mapmerge permit 10
# description merged_map
# match router-id prefix-list 22
# set bgp bestpath as-path weight 20
# !
# route-map mapmerge deny 25
# description newmap
# match interface Ethernet1
# continue 45
# !
# route-map mapmerge2 deny 45
# match ipv6 resolved-next-hop prefix-list list1
# sub-route-map mapmerge
# set metric 25 +igp-metric
# set as-path prepend last-as 2
- name: parse configs
arista.eos.eos_route_maps:
running_config: "{{ lookup('file', './parsed.cfg') }}"
state: parsed
# Module Execution:
# "parsed": [
# {
# "entries": [
# {
# "action": "permit",
# "description": "merged_map",
# "match": {
# "router_id": "22"
# },
# "sequence": 10,
# "set": {
# "bgp": 20
# }
# },
# {
# "action": "deny",
# "continue_sequence": 45,
# "description": "newmap",
# "match": {
# "interface": "Ethernet1"
# },
# "sequence": 25
# }
# ],
# "route_map": "mapmerge"
# },
# {
# "entries": [
# {
# "action": "deny",
# "match": {
# "ipv6": {
# "resolved_next_hop": "list1"
# }
# },
# "sequence": 45,
# "set": {
# "as_path": {
# "prepend": {
# "last_as": 2
# }
# },
# "metric": {
# "add": "igp-metric",
# "value": "25"
# }
# },
# "sub_route_map": {
# "name": "mapmerge"
# }
# }
# ],
# "route_map": "mapmerge2"
# }
# ]
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_ospfv3 – OSPFv3 resource module
===============================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_ospfv3`.
New in version 1.1.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* This module configures and manages the attributes of OSPFv3 on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
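Before the full parameter reference, a minimal merged-state sketch built only from the argspec documented below (the process values are illustrative, not taken from a real device):

```
- name: Merge a basic OSPFv3 process
  arista.eos.eos_ospfv3:
    config:
      processes:
        - vrf: "default"
          router_id: "1.1.1.1"
          address_family:
            - afi: "ipv6"
              redistribute:
                - routes: "connected"
    state: merged
```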
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | A list of configurations for ospfv3. |
| | **processes** list / elements=dictionary | | A list of dictionary specifying the ospfv3 processes. |
| | | **address\_family** list / elements=dictionary | | Enable address family and enter its config mode |
| | | | **adjacency** dictionary | | Configure adjacency options for OSPF instance. |
| | | | | **exchange\_start** dictionary | | Configure exchange-start options for OSPF instance. |
| | | | | | **threshold** integer | | Number of peers to bring up simultaneously. |
| | | | **afi** string | **Choices:*** ipv4
* ipv6
| Address family. |
| | | | **areas** list / elements=dictionary | | Specifies the configuration for OSPF areas |
| | | | | **area\_id** string | | Specifies a 32 bit number expressed in decimal or dotted-decimal notation. |
| | | | | **authentication** dictionary | | Configure authentication for the area in case of OSPFv3. |
| | | | | | **algorithm** string | **Choices:*** md5
* sha1
| Name of algorithm to be used. |
| | | | | | **encrypt\_key** boolean | **Choices:*** no
* yes
| If False, key string is not encrypted |
| | | | | | **hidden\_key** boolean | **Choices:*** no
* yes
| If True, Specifies that a HIDDEN key will follow. |
| | | | | | **key** string | | 128 bit MD5 key or 140 bit SHA1 key. |
| | | | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | | | **spi** integer | | Specify the SPI value |
| | | | | **default\_cost** integer | | Specify the cost for default summary route in stub/NSSA area. |
| | | | | **encryption** dictionary | | Configure encryption for the area |
| | | | | | **algorithm** string | **Choices:*** sha1
* md5
| name of the algorithm to be used. |
| | | | | | **encrypt\_key** boolean | **Choices:*** no
* yes
| If False, key string is not encrypted |
| | | | | | **encryption** string | **Choices:*** 3des-cbc
* aes-128-cbc
* aes-192-cbc
* aes-256-cbc
* null
| name of encryption to be used. |
| | | | | | **hidden\_key** boolean | **Choices:*** no
* yes
| If True, Specifies that a HIDDEN key will follow. |
| | | | | | **key** string | | 128 bit MD5 key or 140 bit SHA1 key. |
| | | | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | | | **spi** integer | | Specify the SPI value |
| | | | | **nssa** dictionary | | Configures NSSA parameters. |
| | | | | | **default\_information\_originate** dictionary | | Originate default Type 7 LSA. |
| | | | | | | **metric** integer | | Metric for default route. |
| | | | | | | **metric\_type** integer | | Metric type for default route. |
| | | | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Limit default advertisement to this NSSA area. |
| | | | | | | **set** boolean | **Choices:*** no
* yes
| True if only default information originate is set |
| | | | | | **no\_summary** boolean | **Choices:*** no
* yes
| Filter all type-3 LSAs in the nssa area. |
| | | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Disable Type-7 LSA p-bit setting |
| | | | | | **set** boolean | **Choices:*** no
* yes
| True if only nssa is set |
| | | | | | **translate** boolean | **Choices:*** no
* yes
| Enable LSA translation. |
| | | | | **ranges** list / elements=dictionary | | Configure route summarization. |
| | | | | | **address** string | | IP address. |
| | | | | | **advertise** boolean | **Choices:*** no
* yes
| Enable Advertisement of the range. |
| | | | | | **cost** integer | | Configures the metric. |
| | | | | | **subnet\_address** string | | IP address with mask length |
| | | | | | **subnet\_mask** string | | IP subnet mask |
| | | | | **stub** dictionary | | Stub area. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| True if only stub is set |
| | | | | | **summary\_lsa** boolean | **Choices:*** no
* yes
| If False , Filter all type-3 LSAs in the stub area. |
| | | | **auto\_cost** dictionary | | Set auto-cost. |
| | | | | **reference\_bandwidth** integer | | reference bandwidth in megabits per sec. |
| | | | **bfd** dictionary | | Enable BFD. |
| | | | | **all\_interfaces** boolean | **Choices:*** no
* yes
| Enable BFD on all interfaces. |
| | | | **default\_information** dictionary | | Control distribution of default information. |
| | | | | **always** boolean | **Choices:*** no
* yes
| Always advertise default route. |
| | | | | **metric** integer | | Metric for default route. |
| | | | | **metric\_type** integer | | Metric type for default route. |
| | | | | **originate** boolean | **Choices:*** no
* yes
| Distribute a default route. |
| | | | | **route\_map** string | | Specify which route-map to use. |
| | | | **default\_metric** integer | | Configure the default metric for redistributed routes. |
| | | | **distance** integer | | Specifies the administrative distance for routes. |
| | | | **fips\_restrictions** boolean | **Choices:*** no
* yes
| Use FIPS compliant algorithms |
| | | | **graceful\_restart** dictionary | | Enable graceful restart mode. |
| | | | | **grace\_period** integer | | Specify maximum time to wait for graceful-restart to complete. |
| | | | | **set** boolean | **Choices:*** no
* yes
| When true, sets the graceful\_restart config alone. |
| | | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| If True, Enable graceful restart helper. |
| | | | **log\_adjacency\_changes** dictionary | | To configure link-state changes and transitions of OSPFv3 neighbors. |
| | | | | **detail** boolean | **Choices:*** no
* yes
| If true , configures the switch to log all link-state changes. |
| | | | | **set** boolean | **Choices:*** no
* yes
| When true sets the log\_adjacency\_changes config alone. |
| | | | **max\_metric** dictionary | | Set maximum metric. |
| | | | | **router\_lsa** dictionary | | Maximum metric in self-originated router-LSAs. |
| | | | | | **external\_lsa** dictionary | | Override external-lsa metric with max-metric value. |
| | | | | | | **max\_metric\_value** integer | | Set max metric value for external LSAs. |
| | | | | | | **set** boolean | **Choices:*** no
* yes
| Set external-lsa attribute. |
| | | | | | **include\_stub** boolean | **Choices:*** no
* yes
| Set maximum metric for stub links in router-LSAs. |
| | | | | | **on\_startup** dictionary | | Set maximum metric temporarily after reboot. |
| | | | | | | **wait\_for\_bgp** boolean | **Choices:*** no
* yes
| Let BGP decide when to originate router-LSA with normal metric |
| | | | | | | **wait\_period** integer | | Wait period in seconds after startup. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| Set router-lsa attribute. |
| | | | | | **summary\_lsa** dictionary | | Override summary-lsa metric with max-metric value. |
| | | | | | | **max\_metric\_value** integer | | Set max metric value for external LSAs. |
| | | | | | | **set** boolean | **Choices:*** no
* yes
| Set external-lsa attribute. |
| | | | **maximum\_paths** integer | | Maximum number of next-hops in an ECMP route. |
| | | | **passive\_interface** boolean | **Choices:*** no
* yes
| Include interface but without actively running OSPF. |
| | | | **redistribute** list / elements=dictionary | | Specifies the routes to be redistributed. |
| | | | | **route\_map** string | | Specify which route map to use. |
| | | | | **routes** string | **Choices:*** bgp
* connected
* static
| Route types (BGP,static,connected) |
| | | | **router\_id** string | | 32-bit number assigned to a router running OSPFv3. |
| | | | **shutdown** boolean | **Choices:*** no
* yes
| Disable the OSPF instance. |
| | | | **timers** dictionary | | Configure OSPF timers. |
| | | | | **lsa** integer | | Configure OSPF LSA timers. |
| | | | | **out\_delay** integer | | Configure out-delay timer. |
| | | | | **pacing** integer | | Configure OSPF packet pacing. |
| | | | | **throttle** dictionary | | Configure SPF timers |
| | | | | | **initial** integer | | Initial SPF schedule delay in msecs. |
| | | | | | **lsa** boolean | **Choices:*** no
* yes
| Configure threshold for retransmission of lsa |
| | | | | | **max** integer | | Max wait time between two SPFs in msecs. |
| | | | | | **min** integer | | Min Hold time between two SPFs in msecs |
| | | | | | **spf** boolean | **Choices:*** no
* yes
| Configure time between SPF calculations |
| | | **adjacency** dictionary | | Configure adjacency options for OSPF instance. |
| | | | **exchange\_start** dictionary | | Configure exchange-start options for OSPF instance. |
| | | | | **threshold** integer | | Number of peers to bring up simultaneously. |
| | | **areas** list / elements=dictionary | | Specifies the configuration for OSPF areas |
| | | | **area\_id** string | | Specifies a 32 bit number expressed in decimal or dotted-decimal notation. |
| | | | **authentication** dictionary | | Configure authentication for the area in case of OSPFv3. |
| | | | | **algorithm** string | **Choices:*** md5
* sha1
| Name of algorithm to be used. |
| | | | | **encrypt\_key** boolean | **Choices:*** no
* yes
| If False, key string is not encrypted |
| | | | | **hidden\_key** boolean | **Choices:*** no
* yes
| If True, Specifies that a HIDDEN key will follow. |
| | | | | **key** string | | 128 bit MD5 key or 140 bit SHA1 key. |
| | | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | | **spi** integer | | Specify the SPI value |
| | | | **default\_cost** integer | | Specify the cost for default summary route in stub/NSSA area. |
| | | | **encryption** dictionary | | Configure encryption for the area |
| | | | | **algorithm** string | **Choices:*** sha1
* md5
| name of the algorithm to be used. |
| | | | | **encrypt\_key** boolean | **Choices:*** no
* yes
| If False, key string is not encrypted |
| | | | | **encryption** string | **Choices:*** 3des-cbc
* aes-128-cbc
* aes-192-cbc
* aes-256-cbc
* null
| name of encryption to be used. |
| | | | | **hidden\_key** boolean | **Choices:*** no
* yes
| If True, Specifies that a HIDDEN key will follow. |
| | | | | **key** string | | 128 bit MD5 key or 140 bit SHA1 key. |
| | | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | | **spi** integer | | Specify the SPI value |
| | | | **nssa** dictionary | | Configures NSSA parameters. |
| | | | | **default\_information\_originate** dictionary | | Originate default Type 7 LSA. |
| | | | | | **metric** integer | | Metric for default route. |
| | | | | | **metric\_type** integer | | Metric type for default route. |
| | | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Limit default advertisement to this NSSA area. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| True if only default information originate is set |
| | | | | **no\_summary** boolean | **Choices:*** no
* yes
| Filter all type-3 LSAs in the nssa area. |
| | | | | **nssa\_only** boolean | **Choices:*** no
* yes
| Disable Type-7 LSA p-bit setting |
| | | | | **set** boolean | **Choices:*** no
* yes
| True if only nssa is set |
| | | | | **translate** boolean | **Choices:*** no
* yes
| Enable LSA translation. |
| | | | **stub** dictionary | | Stub area. |
| | | | | **set** boolean | **Choices:*** no
* yes
| True if only stub is set. |
| | | | | **summary\_lsa** boolean | **Choices:*** no
* yes
| If False , Filter all type-3 LSAs in the stub area. |
| | | **auto\_cost** dictionary | | Set auto-cost. |
| | | | **reference\_bandwidth** integer | | reference bandwidth in megabits per sec. |
| | | **bfd** dictionary | | Enable BFD. |
| | | | **all\_interfaces** boolean | **Choices:*** no
* yes
| Enable BFD on all interfaces. |
| | | **fips\_restrictions** boolean | **Choices:*** no
* yes
| Use FIPS-compliant algorithms. |
| | | **graceful\_restart** dictionary | | Enable graceful restart mode. |
| | | | **grace\_period** integer | | Specify maximum time to wait for graceful-restart to complete. |
| | | | **set** boolean | **Choices:*** no
* yes
| When True, sets the graceful\_restart config alone. |
| | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| If True, enable graceful restart helper. |
| | | **log\_adjacency\_changes** dictionary | | To configure link-state changes and transitions of OSPFv3 neighbors. |
| | | | **detail** boolean | **Choices:*** no
* yes
| If True, configures the switch to log all link-state changes. |
| | | | **set** boolean | **Choices:*** no
* yes
| When True, sets the log\_adjacency\_changes config alone. |
| | | **max\_metric** dictionary | | Set maximum metric. |
| | | | **router\_lsa** dictionary | | Maximum metric in self-originated router-LSAs. |
| | | | | **external\_lsa** dictionary | | Override external-lsa metric with max-metric value. |
| | | | | | **max\_metric\_value** integer | | Set max metric value for external LSAs. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| Set external-lsa attribute. |
| | | | | **include\_stub** boolean | **Choices:*** no
* yes
| Set maximum metric for stub links in router-LSAs. |
| | | | | **on\_startup** dictionary | | Set maximum metric temporarily after reboot. |
| | | | | | **wait\_for\_bgp** boolean | **Choices:*** no
* yes
| Let BGP decide when to originate router-LSA with normal metric |
| | | | | | **wait\_period** integer | | Wait period in seconds after startup. |
| | | | | **set** boolean | **Choices:*** no
* yes
| Set router-lsa attribute. |
| | | | | **summary\_lsa** dictionary | | Override summary-lsa metric with max-metric value. |
| | | | | | **max\_metric\_value** integer | | Set max metric value for summary LSAs. |
| | | | | | **set** boolean | **Choices:*** no
* yes
| Set summary-lsa attribute. |
| | | **passive\_interface** boolean | **Choices:*** no
* yes
| Include the interface, but without actively running OSPF on it. |
| | | **router\_id** string | | 32-bit number assigned to a router running OSPFv3. |
| | | **shutdown** boolean | **Choices:*** no
* yes
| Disable the OSPF instance. |
| | | **timers** dictionary | | Configure OSPF timers. |
| | | | **lsa** integer | | Configure OSPF LSA timers. |
| | | | **out\_delay** integer | | Configure out-delay timer. |
| | | | **pacing** integer | | Configure OSPF packet pacing. |
| | | | **throttle** dictionary | | Configure SPF timers |
| | | | | **initial** integer | | Initial SPF schedule delay in msecs. |
| | | | | **lsa** boolean | **Choices:*** no
* yes
| Configure threshold for retransmission of LSAs. |
| | | | | **max** integer | | Max wait time between two SPFs in msecs. |
| | | | | **min** integer | | Min hold time between two SPFs in msecs. |
| | | | | **spf** boolean | **Choices:*** no
* yes
| Configure time between SPF calculations. See the throttle sketch at the end of the Examples section. |
| | | **vrf** string | | VRF name. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ospfv3**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
Notes
-----
Note
* Tested against Arista EOS 4.23.0F
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state
# veos#show running-config | section ospfv3
# veos#
- arista.eos.eos_ospfv3:
config:
processes:
- address_family:
- timers:
lsa: 22
graceful_restart:
grace_period: 35
afi: "ipv6"
timers:
pacing: 55
fips_restrictions: True
router_id: "2.2.2.2"
vrf: "vrfmerge"
# After state
# veos#show running-config | section ospfv3
# router ospfv3 vrf vrfmerge
# router-id 2.2.2.2
# fips restrictions
# timers pacing flood 55
# !
# address-family ipv6
# fips restrictions
# timers lsa arrival 22
# graceful-restart grace-period 35
# veos#
# Module Execution
# "after": {
# "processes": [
# {
# "address_family": [
# {
# "afi": "ipv6",
# "fips_restrictions": true,
# "graceful_restart": {
# "grace_period": 35
# },
# "timers": {
# "lsa": 22
# }
# }
# ],
# "fips_restrictions": true,
# "router_id": "2.2.2.2",
# "timers": {
# "pacing": 55
# },
# "vrf": "vrfmerge"
# }
# ]
# },
# "before": {},
# "changed": true,
# "commands": [
# "router ospfv3 vrf vrfmerge",
# "address-family ipv6",
# "graceful-restart grace-period 35",
# "timers lsa arrival 22",
# "exit",
# "timers pacing flood 55",
# "fips restrictions",
# "router-id 2.2.2.2",
# "exit"
# ],
# using replaced
# before state
# veos#show running-config | section ospfv3
# router ospfv3
# fips restrictions
# area 0.0.0.0 encryption ipsec spi 43 esp null md5 passphrase 7 h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4=
# !
# router ospfv3 vrf vrfmerge
# router-id 2.2.2.2
# fips restrictions
# timers pacing flood 55
# !
# address-family ipv6
# fips restrictions
# timers lsa arrival 22
# graceful-restart grace-period 35
# veos#
- arista.eos.eos_ospfv3:
config:
processes:
- areas:
- area_id: "0.0.0.0"
encryption:
spi: 43
encryption: "null"
algorithm: "md5"
encrypt_key: False
passphrase: "7hl8FV3lZ6H1mAKpjL47hQ=="
vrf: "default"
address_family:
- afi: "ipv4"
router_id: "7.1.1.1"
state: replaced
# After state
# veos#show running-config | section ospfv3
# router ospfv3
# area 0.0.0.0 encryption ipsec spi 43 esp null md5 passphrase 7 h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4=
# !
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
# Module execution
# "after": {
# "processes": [
# {
# "areas": [
# {
# "area_id": "0.0.0.0",
# "encryption": {
# "algorithm": "md5",
# "encryption": "null",
# "hidden_key": true,
# "passphrase": "h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4="
# }
# }
# ],
# "vrf": "default"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# },
# "before": {
# "processes": [
# {
# "areas": [
# {
# "area_id": "0.0.0.0",
# "encryption": {
# "algorithm": "md5",
# "encryption": "null",
# "hidden_key": true,
# "passphrase": "h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4="
# }
# }
# ],
# "fips_restrictions": true,
# "vrf": "default"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "fips_restrictions": true,
# "graceful_restart": {
# "grace_period": 35
# },
# "timers": {
# "lsa": 22
# }
# }
# ],
# "fips_restrictions": true,
# "router_id": "2.2.2.2",
# "timers": {
# "pacing": 55
# },
# "vrf": "vrfmerge"
# }
# ]
# },
# "changed": true,
# "commands": [
# "router ospfv3 vrf vrfmerge",
# "address-family ipv6",
# "no fips restrictions",
# "no graceful-restart",
# "no timers lsa arrival 22",
# "area 0.0.0.3 range 10.1.2.2/24 advertise",
# "area 0.0.0.3 range 60.1.1.1 255.255.0.0 cost 30",
# "exit",
# "passive-interface default",
# "no router-id",
# "no fips restrictions",
# "no timers pacing flood 55",
# "exit"
# ],
# using overridden
# before state
# veos#show running-config | section ospfv3
# router ospfv3
# area 0.0.0.0 encryption ipsec spi 43 esp null md5 passphrase 7 h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4=
# !
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
- arista.eos.eos_ospfv3:
config:
processes:
- address_family:
- areas:
- area_id: "0.0.0.3"
ranges:
- address: 10.1.2.2/24
advertise: True
- address: 60.1.1.1
subnet_mask: 255.255.0.0
cost: 30
afi: "ipv6"
passive_interface: True
vrf: "vrfmerge"
state: overridden
# After state
# veos#show running-config | section ospfv3
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
# Module execution
# "after": {
# "processes": [
# {
# "address_family": [
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# },
# "before": {
# "processes": [
# {
# "areas": [
# {
# "area_id": "0.0.0.0",
# "encryption": {
# "algorithm": "md5",
# "encryption": "null",
# "hidden_key": true,
# "passphrase": "h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4="
# }
# }
# ],
# "vrf": "default"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# },
# "changed": true,
# "commands": [
# "no router ospfv3",
# "router ospfv3 vrf vrfmerge",
# "address-family ipv6",
# "no area 0.0.0.3 range 10.1.2.0/24",
# "no area 0.0.0.3 range 60.1.0.0/16 cost 30",
# "area 0.0.0.3 range 10.1.2.2/24 advertise",
# "area 0.0.0.3 range 60.1.1.1 255.255.0.0 cost 30",
# "exit",
# "exit"
# ],
# using deleted
# Before state
# veos#show running-config | section ospfv3
# router ospfv3
# area 0.0.0.0 encryption ipsec spi 43 esp null md5 passphrase 7 h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4=
# !
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv4
# redistribute connected
# redistribute static route-map MAP01
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
- arista.eos.eos_ospfv3:
config:
processes:
- vrf: "default"
state: deleted
# After state
# veos#show running-config | section ospfv3
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv4
# redistribute connected
# redistribute static route-map MAP01
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
# Module execution
# "after": {
# "processes": [
# {
# "address_family": [
# {
# "afi": "ipv4",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ],
# "redistribute": [
# {
# "routes": "connected"
# },
# {
# "route_map": "MAP01",
# "routes": "static"
# }
# ]
# },
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# },
# "before": {
# "processes": [
# {
# "areas": [
# {
# "area_id": "0.0.0.0",
# "encryption": {
# "algorithm": "md5",
# "encryption": "null",
# "hidden_key": true,
# "passphrase": "h8pZp9eprTYjjoY/NKFFe0Ei7x03Y7dyLotRhI0a5t4="
# }
# }
# ],
# "vrf": "default"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ],
# "redistribute": [
# {
# "routes": "connected"
# },
# {
# "route_map": "MAP01",
# "routes": "static"
# }
# ]
# },
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# },
# "changed": true,
# "commands": [
# "no router ospfv3"
# ],
# using parsed
# parsed_ospfv3.cfg
# router ospfv3
# fips restrictions
# area 0.0.0.20 stub
# area 0.0.0.20 authentication ipsec spi 33 sha1 passphrase 7 4O8T3zo4xBdRWXBnsnK934o9SEb+jEhHUN6+xzZgCo2j9EnQBUvtwNxxLEmYmm6w
# area 0.0.0.40 default-cost 45
# area 0.0.0.40 stub
# timers pacing flood 7
# adjacency exchange-start threshold 11
# !
# address-family ipv4
# fips restrictions
# redistribute connected
# !
# address-family ipv6
# router-id 10.1.1.1
# fips restrictions
# !
# router ospfv3 vrf vrf01
# bfd all-interfaces
# fips restrictions
# area 0.0.0.0 encryption ipsec spi 256 esp null sha1 passphrase 7 7hl8FV3lZ6H1mAKpjL47hQ==
# log-adjacency-changes detail
# !
# address-family ipv4
# passive-interface default
# fips restrictions
# redistribute connected route-map MAP01
# maximum-paths 100
# !
# address-family ipv6
# fips restrictions
# area 0.0.0.10 nssa no-summary
# default-information originate route-map DefaultRouteFilter
# max-metric router-lsa external-lsa 25 summary-lsa
# !
# router ospfv3 vrf vrf02
# fips restrictions
# !
# address-family ipv6
# router-id 10.17.0.3
# distance ospf intra-area 200
# fips restrictions
# area 0.0.0.1 stub
# timers throttle spf 56 56 56
# timers out-delay 10
- arista.eos.eos_ospfv3:
running_config: "{{ lookup('file', './parsed_ospfv3.cfg') }}"
state: parsed
# Module execution
# "parsed": {
# "processes": [
# {
# "address_family": [
# {
# "afi": "ipv4",
# "fips_restrictions": true,
# "redistribute": [
# {
# "routes": "connected"
# }
# ]
# },
# {
# "afi": "ipv6",
# "fips_restrictions": true,
# "router_id": "10.1.1.1"
# }
# ],
# "adjacency": {
# "exchange_start": {
# "threshold": 11
# }
# },
# "areas": [
# {
# "area_id": "0.0.0.20",
# "authentication": {
# "algorithm": "sha1",
# "hidden_key": true,
# "passphrase": "4O8T3zo4xBdRWXBnsnK934o9SEb+jEhHUN6+xzZgCo2j9EnQBUvtwNxxLEmYmm6w",
# "spi": 33
# },
# "stub": {
# "set": true
# }
# },
# {
# "area_id": "0.0.0.40",
# "default_cost": 45,
# "stub": {
# "set": true
# }
# }
# ],
# "fips_restrictions": true,
# "timers": {
# "pacing": 7
# },
# "vrf": "default"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "fips_restrictions": true,
# "maximum_paths": 100,
# "passive_interface": true,
# "redistribute": [
# {
# "route_map": "MAP01",
# "routes": "connected"
# }
# ]
# },
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.10",
# "nssa": {
# "no_summary": true
# }
# }
# ],
# "default_information": {
# "originate": true,
# "route_map": "DefaultRouteFilter"
# },
# "fips_restrictions": true,
# "max_metric": {
# "router_lsa": {
# "external_lsa": {
# "max_metric_value": 25
# },
# "summary_lsa": {
# "set": true
# }
# }
# }
# }
# ],
# "areas": [
# {
# "area_id": "0.0.0.0",
# "encryption": {
# "algorithm": "sha1",
# "encryption": "null",
# "hidden_key": true,
# "passphrase": "7hl8FV3lZ6H1mAKpjL47hQ=="
# }
# }
# ],
# "bfd": {
# "all_interfaces": true
# },
# "fips_restrictions": true,
# "log_adjacency_changes": {
# "detail": true
# },
# "vrf": "vrf01"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.1",
# "stub": {
# "set": true
# }
# }
# ],
# "distance": 200,
# "fips_restrictions": true,
# "router_id": "10.17.0.3",
# "timers": {
# "out_delay": 10,
# "throttle": {
# "initial": 56,
# "max": 56,
# "min": 56,
# "spf": true
# }
# }
# }
# ],
# "fips_restrictions": true,
# "vrf": "vrf02"
# }
# ]
# using gathered
# native config
# veos#show running-config | section ospfv3
# router ospfv3 vrf vrfmerge
# passive-interface default
# !
# address-family ipv4
# redistribute connected
# redistribute static route-map MAP01
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# !
# address-family ipv6
# area 0.0.0.3 range 10.1.2.0/24
# area 0.0.0.3 range 60.1.0.0/16 cost 30
# veos#
- arista.eos.eos_ospfv3:
state: gathered
# module execution
# "gathered": {
# "processes": [
# {
# "address_family": [
# {
# "afi": "ipv4",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ],
# "redistribute": [
# {
# "routes": "connected"
# },
# {
# "route_map": "MAP01",
# "routes": "static"
# }
# ]
# },
# {
# "afi": "ipv6",
# "areas": [
# {
# "area_id": "0.0.0.3",
# "ranges": [
# {
# "address": "10.1.2.0/24"
# },
# {
# "address": "60.1.0.0/16",
# "cost": 30
# }
# ]
# }
# ]
# }
# ],
# "passive_interface": true,
# "vrf": "vrfmerge"
# }
# ]
# using rendered
- arista.eos.eos_ospfv3:
config:
processes:
- address_family:
- timers:
lsa: 22
graceful_restart:
grace_period: 35
afi: "ipv6"
timers:
pacing: 55
fips_restrictions: True
router_id: "2.2.2.2"
vrf: "vrfmerge"
state: rendered
# module execution
# "rendered": [
# "router ospfv3 vrf vrfmerge",
# "address-family ipv6",
# "graceful-restart grace-period 35",
# "timers lsa arrival 22",
# "exit",
# "timers pacing flood 55",
# "fips restrictions",
# "router-id 2.2.2.2",
# "exit"
# ]
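# A minimal sketch (values are illustrative, not device-verified): rendering
# the timers.throttle dictionary described in the parameter table. initial,
# min and max are delays in msecs; with spf set, this is expected to render
# as "timers throttle spf <initial> <min> <max>", by analogy with the parsed
# example above.
- arista.eos.eos_ospfv3:
    config:
      processes:
        - timers:
            throttle:
              spf: True
              initial: 50
              min: 100
              max: 300
          vrf: "default"
    state: rendered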
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_command – Run arbitrary commands on an Arista EOS device
========================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_command`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Sends an arbitrary set of commands to an EOS node and returns the results read from the device. This module includes an argument that will cause the module to wait for a specific condition before returning or timing out if the condition is not met.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **commands** list / elements=raw / required | | The commands to send to the remote EOS device over the configured provider. The resulting output from the command is returned. If the *wait\_for* argument is provided, the module does not return until the condition is satisfied or the number of *retries* has been exceeded. If a command sent to the device requires answering a prompt, it is possible to pass a dict containing command, answer and prompt. Common answers are 'y' or "\r" (carriage return, must be double quotes). Refer to the examples below. |
| **interval** integer | **Default:**1 | Configures the interval in seconds to wait between retries of the command. If the command does not pass the specified conditional, the interval indicates how long to wait before trying the command again. |
| **match** string | **Choices:*** any
* **all** ←
| The *match* argument is used in conjunction with the *wait\_for* argument to specify the match policy. Valid values are `all` or `any`. If the value is set to `all` then all conditionals in the *wait\_for* must be satisfied. If the value is set to `any` then only one of the conditionals must be satisfied. See the sketch at the end of the Examples section. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **retries** integer | **Default:**10 | Specifies the number of retries a command should be tried before it is considered failed. The command is run on the target device every retry and evaluated against the *wait\_for* conditionals. |
| **wait\_for** list / elements=string | | Specifies what to evaluate from the output of the command and what conditionals to apply. This argument will cause the task to wait for a particular conditional to be true before moving forward. If the conditional is not true by the configured retries, the task fails. Note - With *wait\_for*, the value in `result['stdout']` can be accessed using `result`; that is, to access `result['stdout'][0]`, use `result[0]`. See the examples.
aliases: waitfor |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: run show version on remote devices
arista.eos.eos_command:
commands: show version
- name: run show version and check to see if output contains Arista
arista.eos.eos_command:
commands: show version
wait_for: result[0] contains Arista
- name: run multiple commands on remote nodes
arista.eos.eos_command:
commands:
- show version
- show interfaces
- name: run multiple commands and evaluate the output
arista.eos.eos_command:
commands:
- show version
- show interfaces
wait_for:
- result[0] contains Arista
- result[1] contains Loopback0
- name: run commands and specify the output format
arista.eos.eos_command:
commands:
- command: show version
output: json
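# A minimal sketch (the command and prompt pattern are illustrative): answer
# a device prompt by passing a dict with command, prompt and answer, as
# described under the commands parameter.
- name: run a command that requires answering a prompt
  arista.eos.eos_command:
    commands:
    - command: clear counters
      prompt: '[confirm]'
      answer: y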
- name: using cli transport, check whether the switch is in maintenance mode
arista.eos.eos_command:
commands: show maintenance
wait_for: result[0] contains 'Under Maintenance'
- name: using cli transport, check whether the switch is in maintenance mode using
json output
arista.eos.eos_command:
commands: show maintenance | json
wait_for: result[0].units.System.state eq 'underMaintenance'
- name: using eapi transport check whether the switch is in maintenance, with 8 retries
and 2 second interval between retries
arista.eos.eos_command:
commands: show maintenance
wait_for: result[0]['units']['System']['state'] eq 'underMaintenance'
interval: 2
retries: 8
provider:
transport: eapi
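# A minimal sketch (the second conditional is illustrative): with match: any
# the task succeeds as soon as either conditional is met, rather than
# requiring all of them (the default).
- name: check whether either conditional holds in the show version output
  arista.eos.eos_command:
    commands: show version
    wait_for:
    - result[0] contains Arista
    - result[0] contains vEOS
    match: any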
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **failed\_conditions** list / elements=string | failed | The list of conditionals that have failed **Sample:** ['...', '...'] |
| **stdout** list / elements=string | always apart from low level errors (such as action plugin) | The set of responses from the commands **Sample:** ['...', '...'] |
| **stdout\_lines** list / elements=string | always apart from low level errors (such as action plugin) | The value of stdout split into a list **Sample:** [['...', '...'], ['...'], ['...']] |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos\_lacp\_interfaces – LACP interfaces resource module
==================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lacp_interfaces`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages Link Aggregation Control Protocol (LACP) attributes of interfaces on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of LACP interfaces options. |
| | **name** string | | Full name of the interface (i.e. Ethernet1). |
| | **port\_priority** integer | | LACP port priority for the interface. Range 1-65535. See the sketch at the end of the Examples section. |
| | **rate** string | **Choices:*** fast
* normal
| Rate at which PDUs are sent by LACP. At fast rate, LACP is transmitted once every second. At normal rate, LACP is transmitted every 30 seconds after the link is bundled. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^interfaces**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* parsed
* rendered
* gathered
| The state of the configuration after module completion. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# interface Ethernet2
# lacp rate fast
- name: Merge provided configuration with device configuration
arista.eos.eos_lacp_interfaces:
config:
- name: Ethernet1
rate: fast
- name: Ethernet2
rate: normal
state: merged
#
# -----------
# After state
# -----------
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# lacp rate fast
# interface Ethernet2
# Using replaced
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# interface Ethernet2
# lacp rate fast
- name: Replace existing LACP configuration of specified interfaces with provided
configuration
arista.eos.eos_lacp_interfaces:
config:
- name: Ethernet1
rate: fast
state: replaced
#
# -----------
# After state
# -----------
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp rate fast
# interface Ethernet2
# lacp rate fast
# Using overridden
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# interface Ethernet2
# lacp rate fast
- name: Override the LACP configuration of all the interfaces with provided configuration
arista.eos.eos_lacp_interfaces:
config:
- name: Ethernet1
rate: fast
state: overridden
#
# -----------
# After state
#
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp rate fast
# interface Ethernet2
# Using deleted
#
#
# ------------
# Before state
# ------------
#
#
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# interface Ethernet2
# lacp rate fast
- name: Delete LACP attributes of given interfaces (or all interfaces if none specified).
arista.eos.eos_lacp_interfaces:
state: deleted
#
# -----------
# After state
# -----------
#
# veos#show run | section ^interface
# interface Ethernet1
# interface Ethernet2
# using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_lacp_interfaces:
config:
- name: Ethernet1
rate: fast
- name: Ethernet2
rate: normal
state: rendered
#
# -----------
# Output
# -----------
# rendered:
# - "interface Ethernet1"
# - "lacp rate fast"
# Using parsed:
# parsed.cfg:
# "interface Ethernet1"
# "lacp rate fast"
# "interface Ethernet2"
- name: Use parsed to convert native configs to structured data
arista.eos.eos_lacp_interfaces:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# parsed:
# - name: Ethernet1
# rate: fast
# - name: Ethernet2
# rate: normal
# Using gathered:
# native config:
# veos#show run | section ^interface
# interface Ethernet1
# lacp port-priority 30
# interface Ethernet2
# lacp rate fast
- name: Gather LACP facts from the device
arista.eos.eos_lacp_interfaces:
state: gathered
# Output:
# gathered:
# - name: Ethernet1
# - name: Ethernet2
# rate: fast
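# Using merged to set port priority (a minimal sketch; the priority value is
# illustrative):
- name: Merge a LACP port priority onto Ethernet1
  arista.eos.eos_lacp_interfaces:
    config:
    - name: Ethernet1
      port_priority: 10
    state: merged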
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet1', 'lacp rate fast'] |
### Authors
* Nathaniel Case (@Qalthos)
arista.eos.eos\_config – Manage Arista EOS configuration sections
=================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_config`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Arista EOS configurations use a simple block indent file syntax for segmenting configuration into sections. This module provides an implementation for working with EOS configuration sections in a deterministic way. This module works with either CLI or eAPI transports.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **after** list / elements=string | | The ordered set of commands to append to the end of the command stack if a change needs to be made. Just like with *before* this allows the playbook designer to append a set of commands to be executed after the command set. |
| **backup** boolean | **Choices:*** **no** ←
* yes
| This argument will cause the module to create a full backup of the current `running-config` from the remote device before any changes are made. If the `backup_options` value is not given, the backup file is written to the `backup` folder in the playbook root directory or role root directory, if playbook is part of an ansible role. If the directory does not exist, it is created. |
| **backup\_options** dictionary | | This is a dict object containing configurable options related to the backup file path. The value of this option is read only when `backup` is set to *yes*; if `backup` is set to *no*, this option will be silently ignored. |
| | **dir\_path** path | | This option provides the path, ending with the directory name, in which the backup configuration file will be stored. If the directory does not exist it will be created first, and the filename is either the value of `filename` or the default filename as described in the `filename` option's description. If the path value is not given, a *backup* directory will be created in the current working directory and the backup configuration will be copied to `filename` within that *backup* directory. |
| | **filename** string | | The filename to be used to store the backup configuration. If the filename is not given it will be generated based on the hostname, current time and date in format defined by <hostname>\_config.<current-date>@<current-time> |
| **before** list / elements=string | | The ordered set of commands to push on to the command stack if a change needs to be made. This allows the playbook designer the opportunity to perform configuration commands prior to pushing any changes without affecting how the set of commands are matched against the system. |
| **defaults** boolean | **Choices:*** **no** ←
* yes
| The *defaults* argument will influence how the running-config is collected from the device. When the value is set to true, the command used to collect the running-config is appended with the `all` keyword. When the value is set to false, the command is issued without the `all` keyword. |
| **diff\_against** string | **Choices:*** startup
* running
* intended
* **session** ←
| When using the `ansible-playbook --diff` command line argument the module can generate diffs against different sources. When this option is configured as *startup*, the module will return the diff of the running-config against the startup-config. When this option is configured as *intended*, the module will return the diff of the running-config against the configuration provided in the `intended_config` argument. When this option is configured as *running*, the module will return the before and after diff of the running-config with respect to any changes made to the device configuration. When this option is configured as `session`, the diff returned will be based on the configuration session. |
| **diff\_ignore\_lines** list / elements=string | | Use this argument to specify one or more lines that should be ignored during the diff. This is used for lines in the configuration that are automatically updated by the system. This argument takes a list of regular expressions or exact line matches. |
| **intended\_config** string | | The `intended_config` provides the master configuration that the node should conform to and is used to check the final running-config against. This argument will not modify any settings on the remote device and is used strictly to check the compliance of the current device's configuration. When specifying this argument, the task should also modify the `diff_against` value and set it to *intended*. The configuration lines for this value should be similar to how they will appear if present in the running-configuration of the device, including the indentation, to ensure a correct diff. |
| **lines** list / elements=string | | The ordered set of commands that should be configured in the section. The commands must be the exact same commands as found in the device running-config to ensure idempotency and a correct diff. Be sure to note the configuration command syntax as some commands are automatically modified by the device config parser.
aliases: commands |
| **match** string | **Choices:*** **line** ←
* strict
* exact
* none
| Instructs the module on the way to perform the matching of the set of commands against the current device config. If match is set to *line*, commands are matched line by line. If match is set to *strict*, command lines are matched with respect to position. If match is set to *exact*, command lines must be an equal match. Finally, if match is set to *none*, the module will not attempt to compare the source configuration with the running configuration on the remote device. |
| **parents** list / elements=string | | The ordered set of parents that uniquely identify the section or hierarchy the commands should be checked against. If the parents argument is omitted, the commands are checked against the set of top level or global commands. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **replace** string | **Choices:*** **line** ←
* block
* config
| Instructs the module on the way to perform the configuration on the device. If the replace argument is set to *line* then the modified lines are pushed to the device in configuration mode. If the replace argument is set to *block* then the entire command block is pushed to the device in configuration mode if any line is not correct. |
| **running\_config** string | | The module, by default, will connect to the remote device and retrieve the current running-config to use as a base for comparing against the contents of source. There are times when it is not desirable to have the task get the current running-config for every task in a playbook. The *running\_config* argument allows the implementer to pass in the configuration to use as the base config for this module. The configuration lines for this option should be similar to how they will appear if present in the running-configuration of the device, including the indentation, to ensure idempotency and a correct diff.
aliases: config |
| **save\_when** string | **Choices:*** always
* **never** ←
* modified
* changed
| When changes are made to the device running-configuration, the changes are not copied to non-volatile storage by default. Using this argument will change that behavior. If the argument is set to *always*, then the running-config will always be copied to the startup-config and the *modified* flag will always be set to True. If the argument is set to *modified*, then the running-config will only be copied to the startup-config if it has changed since the last save to startup-config. If the argument is set to *never*, the running-config will never be copied to the startup-config. If the argument is set to *changed*, then the running-config will only be copied to the startup-config if the task has made a change. *changed* was added in Ansible 2.5. See the sketch at the end of the Examples section. |
| **src** path | | The *src* argument provides a path to the configuration file to load into the remote system. The path can either be a full system path to the configuration file if the value starts with / or relative to the root of the implemented role or playbook. This argument is mutually exclusive with the *lines* and *parents* arguments. It can be a Jinja2 template as well. The configuration lines in the source file should be similar to how they will appear if present in the running-configuration (live switch config) of the device, including the indentation, to ensure idempotency and a correct diff. Arista EOS device config uses 3-space indentation. |
Notes
-----
Note
* Tested against EOS 4.15
* Abbreviated commands are NOT idempotent, see [Network FAQ](../network/user_guide/faq#why-do-the-config-modules-always-return-changed-true-with-abbreviated-commands).
* To ensure idempotency and correct diff the configuration lines in the relevant module options should be similar to how they appear if present in the running configuration on device including the indentation.
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure top level settings
arista.eos.eos_config:
lines: hostname {{ inventory_hostname }}
- name: load an acl into the device
arista.eos.eos_config:
lines:
- 10 permit ip host 192.0.2.1 any log
- 20 permit ip host 192.0.2.2 any log
- 30 permit ip host 192.0.2.3 any log
- 40 permit ip host 192.0.2.4 any log
parents: ip access-list test
before: no ip access-list test
replace: block
- name: load configuration from file
arista.eos.eos_config:
src: eos.cfg
- name: render a Jinja2 template onto an Arista switch
arista.eos.eos_config:
backup: yes
src: eos_template.j2
- name: diff the running config against a master config
arista.eos.eos_config:
diff_against: intended
intended_config: "{{ lookup('file', 'master.cfg') }}"
- name: for idempotency, use full-form commands
arista.eos.eos_config:
lines:
# - shut
- shutdown
# parents: int eth1
parents: interface Ethernet1
- name: configurable backup path
arista.eos.eos_config:
src: eos_template.j2
backup: yes
backup_options:
filename: backup.cfg
dir_path: /home/user
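# A minimal sketch (the name-server address and ignore pattern are
# illustrative): copy the running-config to the startup-config only when this
# task changed the device, and skip volatile lines when computing the diff.
- name: save to startup-config only when the task made a change
  arista.eos.eos_config:
    lines:
    - ip name-server 192.0.2.53
    save_when: changed
    diff_ignore_lines:
    - ^clock .*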
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **backup\_path** string | when backup is yes | The full path to the backup file **Sample:** /playbooks/ansible/backup/eos\_config.2016-07-16@22:28:34 |
| **commands** list / elements=string | always | The set of commands that will be pushed to the remote device **Sample:** ['hostname switch01', 'interface Ethernet1', 'no shutdown'] |
| **date** string | when backup is yes | The date extracted from the backup file name **Sample:** 2016-07-16 |
| **filename** string | when backup is yes and filename is not specified in backup options | The name of the backup file **Sample:** eos\_config.2016-07-16@22:28:34 |
| **shortname** string | when backup is yes and filename is not specified in backup options | The full path to the backup file excluding the timestamp **Sample:** /playbooks/ansible/backup/eos\_config |
| **time** string | when backup is yes | The time extracted from the backup file name **Sample:** 22:28:34 |
| **updates** list / elements=string | always | The set of commands that will be pushed to the remote device **Sample:** ['hostname switch01', 'interface Ethernet1', 'no shutdown'] |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos – Use eAPI to run command on eos platform
========================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
Synopsis
--------
* This eos plugin provides low-level abstraction APIs for sending CLI commands to, and receiving responses from, eos network devices.
Parameters
----------
| Parameter | Choices/Defaults | Configuration | Comments |
| --- | --- | --- | --- |
| **eos\_use\_sessions** integer | **Default:**1 | env:ANSIBLE\_EOS\_USE\_SESSIONS var: ansible\_eos\_use\_sessions | Specifies whether sessions should be used on the remote host. See the sketch below. |
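A minimal sketch, assuming standard Ansible inventory variable usage (the hostname is hypothetical): disabling sessions for a single host via the variable documented above.
```
# host_vars/veos01.yml (hypothetical host)
ansible_eos_use_sessions: 0
```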
### Authors
* Ansible Networking Team
arista.eos.eos\_l2\_interface – (deprecated, removed after 2022-06-01) Manage L2 interfaces on Arista EOS network devices.
==========================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_l2_interface`.
New in version 1.0.0: of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2022-06-01
Why
Updated modules released with more functionality
Alternative
eos\_l2\_interfaces
Synopsis
--------
* This module provides declarative management of L2 interfaces on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **access\_vlan** string | | Configure given VLAN in access port. If `mode=access`, used as the access VLAN ID. |
| **aggregate** list / elements=dictionary | | List of Layer-2 interface definitions. |
| | **access\_vlan** string | | Configure given VLAN in access port. If `mode=access`, used as the access VLAN ID. |
| | **mode** string | **Choices:*** access
* trunk
| Mode in which interface needs to be configured. |
| | **name** string / required | | Name of the interface |
| | **native\_vlan** string | | Native VLAN to be configured in trunk port. If `mode=trunk`, used as the trunk native VLAN ID. |
| | **state** string | **Choices:*** present
* absent
| Manage the state of the Layer-2 Interface configuration. |
| | **trunk\_allowed\_vlans** string | | List of allowed VLANs in a given trunk port. If `mode=trunk`, these are the ONLY VLANs that will be configured on the trunk, e.g. `2-10,15`.
aliases: trunk\_vlans |
| **mode** string | **Choices:*** access
* trunk
| Mode in which interface needs to be configured. |
| **name** string | | Name of the interface
aliases: interface |
| **native\_vlan** string | | Native VLAN to be configured in trunk port. If `mode=trunk`, used as the trunk native VLAN ID. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
| Manage the state of the Layer-2 Interface configuration. |
| **trunk\_allowed\_vlans** string | | List of allowed VLANs in a given trunk port. If `mode=trunk`, these are the ONLY VLANs that will be configured on the trunk, e.g. `2-10,15`.
aliases: trunk\_vlans |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Ensure Ethernet1 does not have any switchport
arista.eos.eos_l2_interface:
name: Ethernet1
state: absent
- name: Ensure Ethernet1 is configured for access vlan 20
arista.eos.eos_l2_interface:
name: Ethernet1
mode: access
access_vlan: 20
- name: Ensure Ethernet1 is a trunk port and ensure 2-50 are being tagged (doesn't
mean others aren't also being tagged)
arista.eos.eos_l2_interface:
name: Ethernet1
mode: trunk
native_vlan: 10
trunk_allowed_vlans: 2-50
- name: Set switchports on aggregate
arista.eos.eos_l2_interface:
aggregate:
- {name: ethernet1, mode: access, access_vlan: 20}
- {name: ethernet2, mode: trunk, native_vlan: 10}
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values); the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['interface ethernet1', 'switchport access vlan 20'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Ricardo Carrillo Cruz (@rcarrillocruz)
arista.eos.eos\_acls – ACLs resource module
===========================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_acls`.
New in version 1.0.0: of arista.eos
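For orientation, a minimal play skeleton wiring this module up over `network_cli` might look as follows; the inventory group name `eos_switches` is an illustrative assumption, not something this page defines:

```
- hosts: eos_switches            # hypothetical inventory group
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: arista.eos.eos
  tasks:
    - name: Gather the current ACL configuration
      arista.eos.eos_acls:
        state: gathered
      register: acl_facts
```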
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages the IP access-list attributes of Arista EOS interfaces.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of IP access-list options |
| | **acls** list / elements=dictionary | | A list of Access Control Lists (ACL). |
| | | **aces** list / elements=dictionary | | Filtering data |
| | | | **destination** dictionary | | The packet's destination address |
| | | | | **address** string | | dotted decimal notation of IP address |
| | | | | **any** boolean | **Choices:*** no
* yes
| Rule matches all destination addresses |
| | | | | **host** string | | Host IP address |
| | | | | **port\_protocol** dictionary | | Specify the destination port/protocol, along with an operator (comes with tcp/udp). |
| | | | | **subnet\_address** string | | A subnet address |
| | | | | **wildcard\_bits** string | | Destination wildcard bits |
| | | | **fragment\_rules** boolean | **Choices:*** no
* yes
| Add fragment rules |
| | | | **fragments** boolean | **Choices:*** no
* yes
| Match non-head fragment packets |
| | | | **grant** string | **Choices:*** permit
* deny
| Action to be applied on the rule |
| | | | **hop\_limit** dictionary | | Hop limit value. |
| | | | **line** string | | For fact gathering, any ACE that is not fully parsed will show up as the value of this attribute.
aliases: ace |
| | | | **log** boolean | **Choices:*** no
* yes
| Log matches against this rule |
| | | | **protocol** string | | Specify the protocol to match. Refer to vendor documentation for valid values. |
| | | | **protocol\_options** dictionary | | All the possible sub options for the protocol chosen. |
| | | | | **icmp** dictionary | | Internet Control Message Protocol settings. |
| | | | | | **administratively\_prohibited** boolean | **Choices:*** no
* yes
| Administratively prohibited |
| | | | | | **alternate\_address** boolean | **Choices:*** no
* yes
| Alternate address |
| | | | | | **conversion\_error** boolean | **Choices:*** no
* yes
| Datagram conversion |
| | | | | | **dod\_host\_prohibited** boolean | **Choices:*** no
* yes
| Host prohibited |
| | | | | | **dod\_net\_prohibited** boolean | **Choices:*** no
* yes
| Net prohibited |
| | | | | | **echo** boolean | **Choices:*** no
* yes
| Echo (ping) |
| | | | | | **echo\_reply** boolean | **Choices:*** no
* yes
| Echo reply |
| | | | | | **general\_parameter\_problem** boolean | **Choices:*** no
* yes
| Parameter problem |
| | | | | | **host\_isolated** boolean | **Choices:*** no
* yes
| Host isolated |
| | | | | | **host\_precedence\_unreachable** boolean | **Choices:*** no
* yes
| Host unreachable for precedence |
| | | | | | **host\_redirect** boolean | **Choices:*** no
* yes
| Host redirect |
| | | | | | **host\_tos\_redirect** boolean | **Choices:*** no
* yes
| Host redirect for TOS |
| | | | | | **host\_tos\_unreachable** boolean | **Choices:*** no
* yes
| Host unreachable for TOS |
| | | | | | **host\_unknown** boolean | **Choices:*** no
* yes
| Host unknown |
| | | | | | **host\_unreachable** boolean | **Choices:*** no
* yes
| Host unreachable |
| | | | | | **information\_reply** boolean | **Choices:*** no
* yes
| Information replies |
| | | | | | **information\_request** boolean | **Choices:*** no
* yes
| Information requests |
| | | | | | **mask\_reply** boolean | **Choices:*** no
* yes
| Mask replies |
| | | | | | **mask\_request** boolean | **Choices:*** no
* yes
| Mask requests |
| | | | | | **message\_code** integer | | ICMP message code |
| | | | | | **message\_num** integer | | ICMP message type number. |
| | | | | | **message\_type** integer | | ICMP message type |
| | | | | | **mobile\_redirect** boolean | **Choices:*** no
* yes
| Mobile host redirect |
| | | | | | **net\_redirect** boolean | **Choices:*** no
* yes
| Network redirect |
| | | | | | **net\_tos\_redirect** boolean | **Choices:*** no
* yes
| Net redirect for TOS |
| | | | | | **net\_tos\_unreachable** boolean | **Choices:*** no
* yes
| Network unreachable for TOS |
| | | | | | **net\_unreachable** boolean | **Choices:*** no
* yes
| Net unreachable |
| | | | | | **network\_unknown** boolean | **Choices:*** no
* yes
| Network unknown |
| | | | | | **no\_room\_for\_option** boolean | **Choices:*** no
* yes
| Parameter required but no room |
| | | | | | **option\_missing** boolean | **Choices:*** no
* yes
| Parameter required but not present |
| | | | | | **packet\_too\_big** boolean | **Choices:*** no
* yes
| Fragmentation needed and DF set |
| | | | | | **parameter\_problem** boolean | **Choices:*** no
* yes
| All parameter problems |
| | | | | | **port\_unreachable** boolean | **Choices:*** no
* yes
| Port unreachable |
| | | | | | **precedence\_unreachable** boolean | **Choices:*** no
* yes
| Precedence cutoff |
| | | | | | **protocol\_unreachable** boolean | **Choices:*** no
* yes
| Protocol unreachable |
| | | | | | **reassembly\_timeout** boolean | **Choices:*** no
* yes
| Reassembly timeout |
| | | | | | **redirect** boolean | **Choices:*** no
* yes
| All redirects |
| | | | | | **router\_advertisement** boolean | **Choices:*** no
* yes
| Router discovery advertisements |
| | | | | | **router\_solicitation** boolean | **Choices:*** no
* yes
| Router discovery solicitations |
| | | | | | **source\_quench** boolean | **Choices:*** no
* yes
| Source quenches |
| | | | | | **source\_route\_failed** boolean | **Choices:*** no
* yes
| Source route failed |
| | | | | | **time\_exceeded** boolean | **Choices:*** no
* yes
| All time exceededs |
| | | | | | **timestamp\_reply** boolean | **Choices:*** no
* yes
| Timestamp replies |
| | | | | | **timestamp\_request** boolean | **Choices:*** no
* yes
| Timestamp requests |
| | | | | | **traceroute** boolean | **Choices:*** no
* yes
| Traceroute |
| | | | | | **ttl\_exceeded** boolean | **Choices:*** no
* yes
| TTL exceeded |
| | | | | | **unreachable** boolean | **Choices:*** no
* yes
| All unreachables |
| | | | | **icmpv6** dictionary | | Options for icmpv6. |
| | | | | | **address\_unreachable** boolean | **Choices:*** no
* yes
| address unreachable |
| | | | | | **beyond\_scope** boolean | **Choices:*** no
* yes
| beyond\_scope |
| | | | | | **echo\_reply** boolean | **Choices:*** no
* yes
| echo\_reply |
| | | | | | **echo\_request** boolean | **Choices:*** no
* yes
| echo request |
| | | | | | **erroneous\_header** boolean | **Choices:*** no
* yes
| erroneous header |
| | | | | | **fragment\_reassembly\_exceeded** boolean | **Choices:*** no
* yes
| fragment\_reassembly\_exceeded |
| | | | | | **hop\_limit\_exceeded** boolean | **Choices:*** no
* yes
| hop limit exceeded |
| | | | | | **neighbor\_advertisement** boolean | **Choices:*** no
* yes
| neighbor advertisement |
| | | | | | **neighbor\_solicitation** boolean | **Choices:*** no
* yes
| neighbor\_solicitation |
| | | | | | **no\_admin** boolean | **Choices:*** no
* yes
| no admin |
| | | | | | **no\_route** boolean | **Choices:*** no
* yes
| no route |
| | | | | | **packet\_too\_big** boolean | **Choices:*** no
* yes
| packet too big |
| | | | | | **parameter\_problem** boolean | **Choices:*** no
* yes
| parameter problem |
| | | | | | **port\_unreachable** boolean | **Choices:*** no
* yes
| port unreachable |
| | | | | | **redirect\_message** boolean | **Choices:*** no
* yes
| redirect message |
| | | | | | **reject\_route** boolean | **Choices:*** no
* yes
| reject route |
| | | | | | **router\_advertisement** boolean | **Choices:*** no
* yes
| router\_advertisement |
| | | | | | **router\_solicitation** boolean | **Choices:*** no
* yes
| router\_solicitation |
| | | | | | **source\_address\_failed** boolean | **Choices:*** no
* yes
| source\_address\_failed |
| | | | | | **source\_routing\_error** boolean | **Choices:*** no
* yes
| source\_routing\_error |
| | | | | | **time\_exceeded** boolean | **Choices:*** no
* yes
| time\_exceeded |
| | | | | | **unreachable** boolean | **Choices:*** no
* yes
| unreachable |
| | | | | | **unrecognized\_ipv6\_option** boolean | **Choices:*** no
* yes
| unrecognized\_ipv6\_option |
| | | | | | **unrecognized\_next\_header** boolean | **Choices:*** no
* yes
| unrecognized\_next\_header |
| | | | | **ip** dictionary | | Internet Protocol. |
| | | | | | **nexthop\_group** string | | Nexthop-group name. |
| | | | | **ipv6** dictionary | | Internet V6 Protocol. |
| | | | | | **nexthop\_group** string | | Nexthop-group name. |
| | | | | **tcp** dictionary | | Options for tcp protocol. |
| | | | | | **flags** dictionary | | Match TCP packet flags |
| | | | | | | **ack** boolean | **Choices:*** no
* yes
| Match on the ACK bit |
| | | | | | | **established** boolean | **Choices:*** no
* yes
| Match established connections |
| | | | | | | **fin** boolean | **Choices:*** no
* yes
| Match on the FIN bit |
| | | | | | | **psh** boolean | **Choices:*** no
* yes
| Match on the PSH bit |
| | | | | | | **rst** boolean | **Choices:*** no
* yes
| Match on the RST bit |
| | | | | | | **syn** boolean | **Choices:*** no
* yes
| Match on the SYN bit |
| | | | | | | **urg** boolean | **Choices:*** no
* yes
| Match on the URG bit |
| | | | **remark** string | | Specify a comment |
| | | | **sequence** integer | | sequence number for the ordered list of rules |
| | | | **source** dictionary | | The packet's source address |
| | | | | **address** string | | dotted decimal notation of IP address |
| | | | | **any** boolean | **Choices:*** no
* yes
| Rule matches all source addresses |
| | | | | **host** string | | Host IP address |
| | | | | **port\_protocol** dictionary | | Specify the source port/protocol, along with an operator (comes with tcp/udp). |
| | | | | **subnet\_address** string | | A subnet address |
| | | | | **wildcard\_bits** string | | Source wildcard bits |
| | | | **tracked** boolean | **Choices:*** no
* yes
| Match packets in existing ICMP/UDP/TCP connections |
| | | | **ttl** dictionary | | Compares the TTL (time-to-live) value in the packet to a specified value |
| | | | | **eq** integer | | Match a single TTL value |
| | | | | **gt** integer | | Match TTL greater than this number |
| | | | | **lt** integer | | Match TTL less than this number |
| | | | | **neq** integer | | Match TTL not equal to this value |
| | | | **vlan** string | | VLAN options |
| | | **name** string / required | | Name of the access-list |
| | | **standard** boolean | **Choices:*** no
* yes
| Whether this is a standard access-list |
| | **afi** string / required | **Choices:*** ipv4
* ipv6
| The Address Family Indicator (AFI) for the Access Control Lists (ACL). |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section access-list**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
Notes
-----
Note
* Tested against Arista vEOS v4.20.10M
Examples
--------
```
# Using merged
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
- name: Merge provided configuration with device configuration
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
aces:
- sequence: 35
grant: deny
protocol: ospf
source:
subnet_address: 20.0.0.0/8
          destination:
any: true
state: merged
# After state:
# ------------
#
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 35 deny ospf 20.0.0.0/8 any
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# Using merged
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
- name: Merge to update the given configuration with an existing ace
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
aces:
- sequence: 35
log: true
ttl:
eq: 33
state: merged
# After state:
# ------------
#
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 35 deny ospf 20.0.0.0/8 any ttl eq 33 log
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# Using replaced
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# !
# ip access-list test3
# 10 permit ip 35.33.0.0/16 any log
# !
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
- name: Replace device configuration with provided configuration
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
aces:
- sequence: 35
grant: permit
protocol: ospf
source:
subnet_address: 20.0.0.0/8
destination:
any: true
state: replaced
# After state:
# ------------
#
# show running-config | section access-list
# ip access-list test1
# 35 permit ospf 20.0.0.0/8 any
# !
# ip access-list test3
# 10 permit ip 35.33.0.0/16 any log
# !
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# Using overridden
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# !
# ip access-list test3
# 10 permit ip 35.33.0.0/16 any log
# !
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
- name: override device configuration with provided configuration
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
aces:
- sequence: 35
          grant: permit
protocol: ospf
source:
subnet_address: 20.0.0.0/8
destination:
any: true
state: overridden
# After state:
# ------------
#
# show running-config | section access-list
# ip access-list test1
# 35 permit ospf 20.0.0.0/8 any
# !
# Using deleted:
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# !
- name: Delete provided configuration
arista.eos.eos_acls:
config:
- afi: ipv4
state: deleted
# After state:
# ------------
#
# show running-config | section access-list
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# Before state:
# -------------
# show running-config | section access-list
# ip access-list test1
# 10 permit ip 10.10.10.0/24 any ttl eq 200
# 20 permit ip 10.30.10.0/24 host 10.20.10.1
# 30 deny tcp host 10.10.20.1 eq finger www any syn log
# 40 permit ip any any
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# !
- name: Delete provided configuration
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
state: deleted
# After state:
# ------------
#
# show running-config | section access-list
# ipv6 access-list test2
# 10 deny icmpv6 any any reject-route hop-limit eq 20
# using gathered
# ip access-list test1
# 35 deny ospf 20.0.0.0/8 any
# ip access-list test2
# 40 permit vlan 55 0xE2 icmpv6 any any log
- name: Gather the existing configuration
arista.eos.eos_acls:
state: gathered
# returns:
# arista.eos.eos_acls:
# config:
# - afi: "ipv4"
# acls:
# - name: test1
# aces:
# - sequence: 35
# grant: "deny"
# protocol: "ospf"
# source:
# subnet_address: 20.0.0.0/8
# destination:
# any: true
# - afi: "ipv6"
# acls:
# - name: test2
# aces:
# - sequence: 40
# grant: "permit"
# vlan: "55 0xE2"
# protocol: "icmpv6"
# log: true
# source:
# any: true
# destination:
# any: true
# using rendered
- name: Render provided configuration
arista.eos.eos_acls:
config:
- afi: ipv4
acls:
- name: test1
aces:
- sequence: 35
grant: deny
protocol: ospf
source:
subnet_address: 20.0.0.0/8
destination:
any: true
- afi: ipv6
acls:
- name: test2
aces:
- sequence: 40
grant: permit
vlan: 55 0xE2
protocol: icmpv6
log: true
source:
any: true
destination:
any: true
state: rendered
# returns:
# ip access-list test1
# 35 deny ospf 20.0.0.0/8 any
# ip access-list test2
# 40 permit vlan 55 0xE2 icmpv6 any any log
# Using Parsed
# parsed_acls.cfg
# ipv6 access-list standard test2
# 10 permit any log
# !
# ip access-list test1
# 35 deny ospf 20.0.0.0/8 any
# 45 remark Run by ansible
# 55 permit tcp any any
# !
- name: parse configs
arista.eos.eos_acls:
running_config: "{{ lookup('file', './parsed_acls.cfg') }}"
state: parsed
# returns
# "parsed": [
# {
# "acls": [
# {
# "aces": [
# {
# "destination": {
# "any": true
# },
# "grant": "deny",
# "protocol": "ospf",
# "sequence": 35,
# "source": {
# "subnet_address": "20.0.0.0/8"
# }
# },
# {
# "remark": "Run by ansible",
# "sequence": 45
# },
# {
# "destination": {
# "any": true
# },
# "grant": "permit",
# "protocol": "tcp",
# "sequence": 55,
# "source": {
# "any": true
# }
# }
# ],
# "name": "test1"
# }
# ],
# "afi": "ipv4"
# },
# {
# "acls": [
# {
# "aces": [
# {
# "grant": "permit",
# "log": true,
# "sequence": 10,
# "source": {
# "any": true
# }
# }
# ],
# "name": "test2",
# "standard": true
# }
# ],
# "afi": "ipv6"
# }
# ]
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values); the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The resulting configuration after module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration prior to the module invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['ipv6 access-list standard test2', '10 permit any log', 'ip access-list test1', '35 deny ospf 20.0.0.0/8 any', '45 remark Run by ansible', '55 permit tcp any any'] |
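As a usage sketch (the `register` variable name and the debug task are illustrative, not part of the module), the fields above can be captured and inspected in later tasks:

```
- name: Merge ACL configuration and capture the result
  arista.eos.eos_acls:
    config:
      - afi: ipv4
        acls:
          - name: test1
            aces:
              - sequence: 35
                grant: deny
                protocol: ospf
                source:
                  subnet_address: 20.0.0.0/8
                destination:
                  any: true
    state: merged
  register: acl_result

- name: Show the commands that were pushed to the device
  ansible.builtin.debug:
    var: acl_result.commands
  when: acl_result.changed
```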
### Authors
* Gomathiselvi S (@GomathiselviS)
arista.eos.eos\_user – Manage the collection of local users on EOS devices
==========================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_user`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of the local usernames configured on Arista EOS devices. It allows playbooks to manage either individual usernames or the collection of usernames in the current running config. It also supports purging usernames from the configuration that are not explicitly defined.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | The set of username objects to be configured on the remote Arista EOS device. The list entries can either be the username or a hash of username and properties. This argument is mutually exclusive with the `username` argument.
aliases: users, collection |
| | **configured\_password** string | | The password to be configured on the remote Arista EOS device. The password needs to be provided in clear text and it will be encrypted on the device. Please note that this option is not the same as `provider password`. |
| | **name** string | | The username to be configured on the remote Arista EOS device. This argument accepts a string value and is mutually exclusive with the `aggregate` argument. Please note that this option is not the same as `provider username`. |
| | **nopassword** boolean | **Choices:*** no
* yes
| Defines the username without assigning a password. This will allow the user to log in to the system without being authenticated by a password. |
| | **privilege** integer | | The `privilege` argument configures the privilege level of the user when logged into the system. This argument accepts integer values in the range of 1 to 15. |
| | **role** string | | Configures the role for the username in the device running configuration. The argument accepts a string value defining the role name. This argument does not check if the role has been configured on the device. |
| | **sshkey** string | | Specifies the SSH public key to configure for the given username. This argument accepts a valid SSH key value. |
| | **state** string | **Choices:*** present
* absent
| Configures the state of the username definition as it relates to the device operational configuration. When set to *present*, the username(s) should be configured in the device active configuration and when set to *absent* the username(s) should not be in the device active configuration |
| | **update\_password** string | **Choices:*** on\_create
* always
| Since passwords are encrypted in the device running config, this argument will instruct the module when to change the password. When set to `always`, the password will always be updated in the device and when set to `on_create` the password will be updated only if the username is created. |
| **configured\_password** string | | The password to be configured on the remote Arista EOS device. The password needs to be provided in clear text and it will be encrypted on the device. Please note that this option is not the same as `provider password`. |
| **name** string | | The username to be configured on the remote Arista EOS device. This argument accepts a string value and is mutually exclusive with the `aggregate` argument. Please note that this option is not the same as `provider username`. |
| **nopassword** boolean | **Choices:*** no
* yes
| Defines the username without assigning a password. This will allow the user to log in to the system without being authenticated by a password. |
| **privilege** integer | | The `privilege` argument configures the privilege level of the user when logged into the system. This argument accepts integer values in the range of 1 to 15. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes`, but only when `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **purge** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to consider the resource definition absolute. It will remove any previously configured usernames on the device with the exception of the `admin` user which cannot be deleted per EOS constraints. |
| **role** string | | Configures the role for the username in the device running configuration. The argument accepts a string value defining the role name. This argument does not check if the role has been configured on the device. |
| **sshkey** string | | Specifies the SSH public key to configure for the given username. This argument accepts a valid SSH key value. |
| **state** string | **Choices:*** **present** ←
* absent
| Configures the state of the username definition as it relates to the device operational configuration. When set to *present*, the username(s) should be configured in the device active configuration and when set to *absent* the username(s) should not be in the device active configuration |
| **update\_password** string | **Choices:*** on\_create
* **always** ←
| Since passwords are encrypted in the device running config, this argument will instruct the module when to change the password. When set to `always`, the password will always be updated in the device and when set to `on_create` the password will be updated only if the username is created. |
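Because the device stores only encrypted passwords, a task that always sends `configured_password` would report a change on every run unless `update_password` is tuned. A minimal sketch, assuming a vaulted variable named `vault_initial_password`:

```
- name: Create user netadmin; set the password only on first creation
  arista.eos.eos_user:
    name: netadmin
    configured_password: "{{ vault_initial_password }}"   # assumed vaulted variable
    update_password: on_create
    privilege: 15
    state: present
```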
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: create a new user
arista.eos.eos_user:
name: ansible
sshkey: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
state: present
- name: remove all users except admin
arista.eos.eos_user:
purge: yes
- name: set multiple users to privilege level 15
arista.eos.eos_user:
aggregate:
- name: netop
- name: netend
privilege: 15
state: present
- name: Change Password for User netop
arista.eos.eos_user:
    name: netop
configured_password: '{{ new_password }}'
update_password: always
state: present
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values); the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['name ansible secret password', 'name admin secret admin'] |
| **session\_name** string | when changed is True | The EOS config session name used to load the configuration **Sample:** ansible\_1479315771 |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos\_ospf\_interfaces – OSPF Interfaces Resource Module.
===================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_ospf_interfaces`.
New in version 1.1.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Examples](#examples)
Synopsis
--------
* This module manages OSPF configuration of interfaces on devices running Arista EOS.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of OSPF configuration for interfaces. |
| | **address\_family** list / elements=dictionary | | OSPF settings on the interfaces in address-family context. |
| | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Address Family Identifier (AFI) for OSPF settings on the interfaces. |
| | | **area** dictionary | | Area associated with interface. Valid only when afi = ipv4. |
| | | | **area\_id** string / required | | Area ID as a decimal or IP address format. |
| | | **authentication\_key** dictionary | | Configure the authentication key for the interface. Valid only when afi = ipv4. |
| | | | **encryption** string | | 0 Specifies an UNENCRYPTED authentication key will follow. 7 Specifies a proprietary encryption type. |
| | | | **key** string | | Password (up to 8 chars). |
| | | **authentication\_v2** dictionary | | Authentication settings on the interface. Valid only when afi = ipv4. |
| | | | **message\_digest** boolean | **Choices:*** no
* yes
| Use message-digest authentication. |
| | | | **set** boolean | **Choices:*** no
* yes
| Enable authentication on the interface. |
| | | **authentication\_v3** dictionary | | Authentication settings on the interface. Valid only when afi = ipv6. |
| | | | **algorithm** string | **Choices:*** md5
* sha1
| Encryption algorithm. |
| | | | **key** string | | 128 bit MD5 key or 140 bit SHA1 key. |
| | | | **keytype** string | | Specifies if an unencrypted/hidden key follows. 0 denotes an unencrypted key. 7 denotes a hidden key. |
| | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | **spi** integer | | IPsec Security Parameter Index. |
| | | **bfd** boolean | **Choices:*** no
* yes
| Enable BFD. |
| | | **cost** integer | | Metric associated with the interface. |
| | | **dead\_interval** integer | | Time interval to detect a dead router. |
| | | **encryption\_v3** dictionary | | Authentication settings on the interface. Valid only when afi = ipv6. |
| | | | **algorithm** string | **Choices:*** md5
* sha1
| algorithm. |
| | | | **encryption** string | **Choices:*** 3des-cbc
* aes-128-cbc
* aes-192-cbc
* aes-256-cbc
* null
| encryption type. |
| | | | **key** string | | key |
| | | | **keytype** string | | Specifies if an unencrypted/hidden key follows. 0 denotes an unencrypted key. 7 denotes a hidden key. |
| | | | **passphrase** string | | Passphrase String for deriving keys for authentication and encryption. |
| | | | **spi** integer | | IPsec Security Parameter Index. |
| | | **hello\_interval** integer | | Timer interval between transmission of hello packets. |
| | | **ip\_params** list / elements=dictionary | | Specify parameters for IPv4/IPv6. Valid only when afi = ipv6. |
| | | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Address Family Identifier (AFI) for OSPF settings on the interfaces. |
| | | | **area** dictionary | | Area associated with interface. Valid only when afi = ipv4. |
| | | | | **area\_id** string / required | | Area ID as a decimal or IP address format. |
| | | | **bfd** boolean | **Choices:*** no
* yes
| Enable BFD. |
| | | | **cost** integer | | Metric associated with the interface. |
| | | | **dead\_interval** integer | | Time interval to detect a dead router. |
| | | | **hello\_interval** integer | | Timer interval between transmission of hello packets. |
| | | | **mtu\_ignore** boolean | **Choices:*** no
* yes
| If true, disable MTU check for Database Description packets. |
| | | | **network** string | | Interface type. |
| | | | **passive\_interface** boolean | **Choices:*** no
* yes
| Suppress routing updates in an interface. |
| | | | **priority** integer | | Interface priority. |
| | | | **retransmit\_interval** integer | | LSA retransmission interval. |
| | | | **transmit\_delay** integer | | LSA transmission delay. |
| | | **message\_digest\_key** dictionary | | Message digest authentication password (key) settings. |
| | | | **encryption** string | | 0 Specifies an UNENCRYPTED ospf password (key) will follow. 7 Specifies a proprietary encryption type. |
| | | | **key** string | | Authentication key (up to 16 chars). |
| | | | **key\_id** integer | | Key ID. |
| | | **mtu\_ignore** boolean | **Choices:*** no
* yes
| If true, disable MTU check for Database Description packets. |
| | | **network** string | | Interface type. |
| | | **passive\_interface** boolean | **Choices:*** no
* yes
| Suppress routing updates in an interface. Valid only when afi = ipv6. |
| | | **priority** integer | | Interface priority. |
| | | **retransmit\_interval** integer | | LSA retransmission interval. |
| | | **shutdown** boolean | **Choices:*** no
* yes
| Shutdown OSPF on this interface. |
| | | **transmit\_delay** integer | | LSA transmission delay. |
| | **name** string | | Name/Identifier of the interface. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section interface**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* gathered
* parsed
* rendered
| The state the configuration should be left in. |
Examples
--------
```
# Using merged
# Before state
# veos(config)#show running-config | section interface | ospf
# veos(config)#
- name: Merge provided configuration with device configuration
arista.eos.eos_ospf_interfaces:
config:
- name: "Vlan1"
address_family:
- afi: "ipv4"
area:
area_id: "0.0.0.50"
cost: 500
mtu_ignore: True
- afi: "ipv6"
dead_interval: 44
ip_params:
- afi: "ipv6"
mtu_ignore: True
network: "point-to-point"
state: merged
# After State
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ip ospf cost 500
# ip ospf mtu-ignore
# ip ospf area 0.0.0.50
# ospfv3 dead-interval 44
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# veos(config)#
#
#
# Module Execution:
#
# "after": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.50"
# },
# "cost": 500,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "dead_interval": 44,
# "ip_params": [
# {
# "afi": "ipv6",
# "mtu_ignore": True,
# "network": "point-to-point"
# }
# ]
# }
# ],
# "name": "Vlan1"
# }
# ],
# "before": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# }
# ],
# "changed": True,
# "commands": [
# "interface Vlan1",
# "ip ospf area 0.0.0.50",
# "ip ospf cost 500",
# "ip ospf mtu-ignore",
# "ospfv3 dead-interval 44",
# "ospfv3 ipv6 mtu-ignore",
# "ospfv3 ipv6 network point-to-point"
# ],
#
# Using replaced
#---------------
# Before State:
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ip ospf cost 500
# ip ospf dead-interval 29
# ip ospf hello-interval 66
# ip ospf mtu-ignore
# ip ospf area 0.0.0.50
# ospfv3 cost 106
# ospfv3 hello-interval 77
# ospfv3 dead-interval 44
# ospfv3 transmit-delay 100
# ospfv3 ipv4 priority 45
# ospfv3 ipv4 area 0.0.0.5
# ospfv3 ipv6 passive-interface
# ospfv3 ipv6 retransmit-interval 115
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
# veos(config)#
- name: Replace device configuration with provided configuration
arista.eos.eos_ospf_interfaces:
config:
- name: "Vlan1"
address_family:
- afi: "ipv6"
cost: 44
bfd: True
ip_params:
- afi: "ipv6"
mtu_ignore: True
network: "point-to-point"
dead_interval: 56
state: replaced
# After State:
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ospfv3 bfd
# ospfv3 cost 44
# no ospfv3 ipv6 passive-interface
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
# veos(config)#
#
# Module Execution:
#
# "after": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "bfd": True,
# "cost": 44,
# "ip_params": [
# {
# "afi": "ipv6",
# "mtu_ignore": True,
# "network": "point-to-point"
# }
# ]
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
# "before": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.50"
# },
# "cost": 500,
# "dead_interval": 29,
# "hello_interval": 66,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "cost": 106,
# "dead_interval": 44,
# "hello_interval": 77,
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.5"
# },
# "priority": 45
# },
# {
# "afi": "ipv6",
# "mtu_ignore": True,
# "network": "point-to-point",
# "passive_interface": True,
# "retransmit_interval": 115
# }
# ],
# "transmit_delay": 100
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
# "changed": True,
# "commands": [
# "interface Vlan1",
# "no ip ospf cost 500",
# "no ip ospf dead-interval 29",
# "no ip ospf hello-interval 66",
# "no ip ospf mtu-ignore",
# "no ip ospf area 0.0.0.50",
# "ospfv3 cost 44",
# "ospfv3 bfd",
# "ospfv3 authentication ipsec spi 30 md5 passphrase 7 7hl8FV3lZ6H1mAKpjL47hQ==",
# "no ospfv3 ipv4 priority 45",
# "no ospfv3 ipv4 area 0.0.0.5",
# "ospfv3 ipv6 dead-interval 56",
# "no ospfv3 ipv6 passive-interface",
# "no ospfv3 ipv6 retransmit-interval 115",
# "no ospfv3 hello-interval 77",
# "no ospfv3 dead-interval 44",
# "no ospfv3 transmit-delay 100"
# ],
#
# Using overridden:
# ----------------
# Before State:
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ip ospf dead-interval 29
# ip ospf hello-interval 66
# ip ospf mtu-ignore
# ospfv3 bfd
# ospfv3 cost 106
# ospfv3 hello-interval 77
# ospfv3 transmit-delay 100
# ospfv3 ipv4 priority 45
# ospfv3 ipv4 area 0.0.0.5
# ospfv3 ipv6 passive-interface
# ospfv3 ipv6 dead-interval 56
# ospfv3 ipv6 retransmit-interval 115
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
# veos(config)#
- name: Override device configuration with provided configuration
arista.eos.eos_ospf_interfaces:
config:
- name: "Vlan1"
address_family:
- afi: "ipv6"
cost: 44
bfd: True
ip_params:
- afi: "ipv6"
mtu_ignore: True
network: "point-to-point"
dead_interval: 56
state: overridden
# After State:
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ospfv3 bfd
# ospfv3 cost 44
# no ospfv3 ipv6 passive-interface
# ospfv3 ipv6 dead-interval 56
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# veos(config)#
#
#
# Module Execution:
#
# "after": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "bfd": True,
# "cost": 44,
# "ip_params": [
# {
# "afi": "ipv6",
# "dead_interval": 56,
# "mtu_ignore": True,
# "network": "point-to-point"
# }
# ]
# }
# ],
# "name": "Vlan1"
# },
# {
# "name": "Vlan2"
# }
# ],
# "before": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "dead_interval": 29,
# "hello_interval": 66,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "bfd": True,
# "cost": 106,
# "hello_interval": 77,
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.5"
# },
# "priority": 45
# },
# {
# "afi": "ipv6",
# "dead_interval": 56,
# "mtu_ignore": True,
# "network": "point-to-point",
# "passive_interface": True,
# "retransmit_interval": 115
# }
# ],
# "transmit_delay": 100
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
# "changed": True,
# "commands": [
# "interface Vlan2",
# "no ospfv3 ipv4 hello-interval 45",
# "no ospfv3 ipv4 retransmit-interval 100",
# "no ospfv3 ipv4 area 0.0.0.6",
# "interface Vlan1",
# "no ip ospf dead-interval 29",
# "no ip ospf hello-interval 66",
# "no ip ospf mtu-ignore",
# "ospfv3 cost 44",
# "ospfv3 authentication ipsec spi 30 md5 passphrase 7 7hl8FV3lZ6H1mAKpjL47hQ==",
# "no ospfv3 ipv4 priority 45",
# "no ospfv3 ipv4 area 0.0.0.5",
# "no ospfv3 ipv6 passive-interface",
# "no ospfv3 ipv6 retransmit-interval 115",
# "no ospfv3 hello-interval 77",
# "no ospfv3 transmit-delay 100"
# ],
#
# Using deleted:
#--------------
# before State:
# veos(config)#show running-config | section interface | ospf
# interface Vlan1
# ip ospf dead-interval 29
# ip ospf hello-interval 66
# ip ospf mtu-ignore
# ospfv3 bfd
# ospfv3 cost 106
# ospfv3 hello-interval 77
# ospfv3 transmit-delay 100
# ospfv3 ipv4 priority 45
# ospfv3 ipv4 area 0.0.0.5
# ospfv3 ipv6 passive-interface
# ospfv3 ipv6 dead-interval 56
# ospfv3 ipv6 retransmit-interval 115
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
# veos(config)#
- name: Delete device configuration
arista.eos.eos_ospf_interfaces:
config:
- name: "Vlan1"
state: deleted
# After State:
# veos#show running-config | section interface | ospf
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
#
# Module Execution:
#
# "after": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
# "before": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "dead_interval": 29,
# "hello_interval": 66,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "bfd": True,
# "cost": 106,
# "hello_interval": 77,
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.5"
# },
# "priority": 45
# },
# {
# "afi": "ipv6",
# "dead_interval": 56,
# "mtu_ignore": True,
# "network": "point-to-point",
# "passive_interface": True,
# "retransmit_interval": 115
# }
# ],
# "transmit_delay": 100
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
# "changed": True,
# "commands": [
# "interface Vlan1",
# "no ip ospf dead-interval 29",
# "no ip ospf hello-interval 66",
# "no ip ospf mtu-ignore",
# "no ospfv3 bfd",
# "no ospfv3 cost 106",
# "no ospfv3 hello-interval 77",
# "no ospfv3 transmit-delay 100",
# "no ospfv3 ipv4 priority 45",
# "no ospfv3 ipv4 area 0.0.0.5",
# "no ospfv3 ipv6 passive-interface",
# "no ospfv3 ipv6 dead-interval 56",
# "no ospfv3 ipv6 retransmit-interval 115",
# "no ospfv3 ipv6 network point-to-point",
# "no ospfv3 ipv6 mtu-ignore"
# ],
#
# Using parsed:
# ------------
# parsed.cfg:
# ----------
# interface Vlan1
# ip ospf dead-interval 29
# ip ospf hello-interval 66
# ip ospf mtu-ignore
# ip ospf cost 500
# ospfv3 bfd
# ospfv3 cost 106
# ospfv3 hello-interval 77
# ospfv3 transmit-delay 100
# ospfv3 ipv4 priority 45
# ospfv3 ipv4 area 0.0.0.5
# ospfv3 ipv6 passive-interface
# ospfv3 ipv6 dead-interval 56
# ospfv3 ipv6 retransmit-interval 115
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
#
- name: parse configs
arista.eos.eos_ospf_interfaces:
running_config: "{{ lookup('file', './parsed.cfg') }}"
state: parsed
# Module Execution:
# "parsed": [
# {
# "address_family": [
# {
# "afi": "ipv4",
# "cost": 500,
# "dead_interval": 29,
# "hello_interval": 66,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "bfd": True,
# "cost": 106,
# "hello_interval": 77,
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.5"
# },
# "priority": 45
# },
# {
# "afi": "ipv6",
# "dead_interval": 56,
# "mtu_ignore": True,
# "network": "point-to-point",
# "passive_interface": True,
# "retransmit_interval": 115
# }
# ],
# "transmit_delay": 100
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ]
# Using gathered:
# Device Config:
# veos#show running-config | section interface | ospf
# interface Vlan1
# ip ospf cost 500
# ip ospf dead-interval 29
# ip ospf hello-interval 66
# ip ospf mtu-ignore
# ip ospf area 0.0.0.50
# ospfv3 cost 106
# ospfv3 hello-interval 77
# ospfv3 transmit-delay 100
# ospfv3 ipv4 priority 45
# ospfv3 ipv4 area 0.0.0.5
# ospfv3 ipv6 passive-interface
# ospfv3 ipv6 dead-interval 56
# ospfv3 ipv6 retransmit-interval 115
# ospfv3 ipv6 network point-to-point
# ospfv3 ipv6 mtu-ignore
# !
# interface Vlan2
# ospfv3 ipv4 hello-interval 45
# ospfv3 ipv4 retransmit-interval 100
# ospfv3 ipv4 area 0.0.0.6
# veos#
- name: gather configs
arista.eos.eos_ospf_interfaces:
state: gathered
# Module Execution:
#
# "gathered": [
# {
# "name": "Ethernet1"
# },
# {
# "name": "Ethernet2"
# },
# {
# "name": "Management1"
# },
# {
# "address_family": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.50"
# },
# "cost": 500,
# "dead_interval": 29,
# "hello_interval": 66,
# "mtu_ignore": True
# },
# {
# "afi": "ipv6",
# "cost": 106,
# "hello_interval": 77,
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.5"
# },
# "priority": 45
# },
# {
# "afi": "ipv6",
# "dead_interval": 56,
# "mtu_ignore": True,
# "network": "point-to-point",
# "passive_interface": True,
# "retransmit_interval": 115
# }
# ],
# "transmit_delay": 100
# }
# ],
# "name": "Vlan1"
# },
# {
# "address_family": [
# {
# "afi": "ipv6",
# "ip_params": [
# {
# "afi": "ipv4",
# "area": {
# "area_id": "0.0.0.6"
# },
# "hello_interval": 45,
# "retransmit_interval": 100
# }
# ]
# }
# ],
# "name": "Vlan2"
# }
# ],
#
# Using rendered:
# --------------
- name: Render provided configuration
arista.eos.eos_ospf_interfaces:
config:
- name: "Vlan1"
address_family:
- afi: "ipv4"
dead_interval: 29
mtu_ignore: True
hello_interval: 66
- afi: "ipv6"
hello_interval: 77
      cost: 106
transmit_delay: 100
ip_params:
- afi: "ipv6"
retransmit_interval: 115
dead_interval: 56
passive_interface: True
- afi: "ipv4"
area:
area_id: "0.0.0.5"
priority: 45
- name: "Vlan2"
address_family:
- afi: "ipv6"
ip_params:
- afi: "ipv4"
area:
area_id: "0.0.0.6"
hello_interval: 45
retransmit_interval: 100
- afi: "ipv4"
message_digest_key:
key_id: 200
encryption: 7
key: "hkdfhtu=="
state: rendered
# Module Execution:
#
# "rendered": [
# "interface Vlan1",
# "ip ospf dead-interval 29",
# "ip ospf mtu-ignore",
# "ip ospf hello-interval 66",
# "ospfv3 hello-interval 77",
# "ospfv3 cost 106",
# "ospfv3 transmit-delay 100",
# "ospfv3 ipv4 area 0.0.0.5",
# "ospfv3 ipv4 priority 45",
# "ospfv3 ipv6 retransmit-interval 115",
# "ospfv3 ipv6 dead-interval 56",
# "ospfv3 ipv6 passive-interface",
# "interface Vlan2",
# "ip ospf message-digest-key 200 md5 7 hkdfhtu==",
# "ospfv3 ipv4 area 0.0.0.6",
# "ospfv3 ipv4 hello-interval 45",
# "ospfv3 ipv4 retransmit-interval 100"
# ]
#
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_linkagg – (deprecated, removed after 2022-06-01) Manage link aggregation groups on Arista EOS network devices
=============================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_linkagg`.
New in version 1.0.0: of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2022-06-01
Why
Updated modules released with more functionality
Alternative
eos\_lag\_interfaces
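For instance, a simple group definition migrates to the replacement module roughly as follows (a sketch based on the eos\_lag\_interfaces argspec; verify the option names against that module's documentation):

```
- name: Create link aggregation group with the replacement resource module
  arista.eos.eos_lag_interfaces:
    config:
      - name: Port-Channel10       # assumed group number for illustration
        members:
          - member: Ethernet1
            mode: active
    state: merged
```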
Synopsis
--------
* This module provides declarative management of link aggregation groups on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **aggregate** list / elements=dictionary | | List of link aggregation definitions. |
| | **group** integer / required | | Channel-group number for the port-channel Link aggregation group. Range 1-2000. |
| | **members** list / elements=string | | List of members of the link aggregation group. |
| | **min\_links** integer | | Minimum number of ports required up before bringing up the link aggregation group. |
| | **mode** string | **Choices:*** active
* on
* passive
| Mode of the link aggregation group. |
| | **state** string | **Choices:*** present
* absent
| State of the link aggregation group. |
| **group** integer | | Channel-group number for the port-channel Link aggregation group. Range 1-2000. |
| **members** list / elements=string | | List of members of the link aggregation group. |
| **min\_links** integer | | Minimum number of ports required up before bringing up the link aggregation group. |
| **mode** string | **Choices:*** active
* on
* passive
| Mode of the link aggregation group. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **purge** boolean | **Choices:*** **no** ←
* yes
| Purge links not defined in the *aggregate* parameter. |
| **state** string | **Choices:*** **present** ←
* absent
| State of the link aggregation group. |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: create link aggregation group
arista.eos.eos_linkagg:
group: 10
state: present
- name: delete link aggregation group
arista.eos.eos_linkagg:
group: 10
state: absent
- name: set link aggregation group to members
arista.eos.eos_linkagg:
group: 200
min_links: 3
mode: active
members:
- Ethernet0
- Ethernet1
- name: remove link aggregation group from Ethernet0
arista.eos.eos_linkagg:
group: 200
min_links: 3
mode: active
members:
- Ethernet1
- name: Create aggregate of linkagg definitions
arista.eos.eos_linkagg:
aggregate:
    - {group: 3, mode: 'on', members: [Ethernet1]}
- {group: 100, mode: passive, min_links: 3, members: [Ethernet2]}
- name: Remove aggregate of linkagg definitions
arista.eos.eos_linkagg:
aggregate:
    - {group: 3, mode: 'on', members: [Ethernet1]}
- {group: 100, mode: passive, min_links: 3, members: [Ethernet2]}
state: absent
```
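The examples above do not exercise the *purge* option. Below is a minimal sketch (group numbers and members are illustrative) that removes any link aggregation groups on the device that are not listed in *aggregate*:

```
- name: Purge link aggregation groups not defined in the aggregate
  arista.eos.eos_linkagg:
    aggregate:
    - {group: 3, mode: 'on', members: [Ethernet1]}
    - {group: 100, mode: passive, min_links: 3, members: [Ethernet2]}
    purge: yes
```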
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always, except for the platforms that use Netconf transport to manage the device. | The list of configuration mode commands to send to the device **Sample:** ['interface port-channel 30', 'port-channel min-links 5', 'interface Ethernet3', 'channel-group 30 mode on', 'no interface port-channel 30'] |
Status
------
* This module will be removed in a major release after 2022-06-01. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Trishna Guha (@trishnaguha)
arista.eos.eos\_bgp – (deprecated, removed after 2023-01-29) Configure global BGP protocol settings on Arista EOS.
==================================================================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_bgp`.
New in version 1.0.0: of arista.eos
* [DEPRECATED](#deprecated)
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
* [Status](#status)
DEPRECATED
----------
Removed in
major release after 2023-01-29
Why
Updated module released with more functionality.
Alternative
eos\_bgp\_global
Synopsis
--------
* This module provides configuration management of global BGP parameters on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | Specifies the BGP related configuration. |
| | **address\_family** list / elements=dictionary | | Specifies BGP address family related configurations. |
| | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Type of address family to configure. |
| | | **neighbors** list / elements=dictionary | | Specifies BGP neighbor related configurations in Address Family configuration mode. |
| | | | **activate** boolean | **Choices:*** no
* yes
| Enable the Address Family for this Neighbor. |
| | | | **default\_originate** boolean | **Choices:*** no
* yes
| Originate default route to this neighbor. |
| | | | **graceful\_restart** boolean | **Choices:*** no
* yes
| Enable/disable graceful restart mode for this neighbor. |
| | | | **neighbor** string / required | | Neighbor router address. |
| | | | **weight** integer | | Assign weight for routes learnt from this neighbor. The range is from 0 to 65535. |
| | | **networks** list / elements=dictionary | | Specify networks to announce via BGP. For operation replace, this option is mutually exclusive with the root-level *networks* option. |
| | | | **masklen** integer | | Subnet mask length for the network to announce (e.g., 8, 16, 24). |
| | | | **prefix** string / required | | Network ID to announce via BGP. |
| | | | **route\_map** string | | Route map to modify the attributes. |
| | | **redistribute** list / elements=dictionary | | Specifies the redistribute information from another routing protocol. |
| | | | **protocol** string / required | **Choices:*** ospf3
* ospf
* isis
* static
* connected
* rip
| Specifies the protocol for configuring redistribute information. |
| | | | **route\_map** string | | Specifies the route map reference. |
| | **bgp\_as** integer / required | | Specifies the BGP Autonomous System (AS) number to configure on the device. |
| | **log\_neighbor\_changes** boolean | **Choices:*** no
* yes
| Enable/disable logging neighbor up/down and reset reason. |
| | **neighbors** list / elements=dictionary | | Specifies BGP neighbor related configurations. |
| | | **description** string | | Neighbor specific description. |
| | | **ebgp\_multihop** integer | | Specifies the maximum hop count for EBGP neighbors not on directly connected networks. The range is from 1 to 255. |
| | | **enabled** boolean | **Choices:*** no
* yes
| Administratively shutdown or enable a neighbor. |
| | | **maximum\_prefix** integer | | Maximum number of prefixes to accept from this peer. The range is from 0 to 4294967294. |
| | | **neighbor** string / required | | Neighbor router address. |
| | | **password** string | | Password to authenticate the BGP peer connection. |
| | | **peer\_group** string | | Name of the peer group that the neighbor is a member of. |
| | | **remote\_as** integer / required | | Remote AS of the BGP neighbor to configure. |
| | | **remove\_private\_as** boolean | **Choices:*** no
* yes
| Remove the private AS number from outbound updates. |
| | | **route\_reflector\_client** integer | | Specify a neighbor as a route reflector client. |
| | | **timers** dictionary | | Specifies BGP neighbor timer related configurations. |
| | | | **holdtime** integer / required | | Interval (in seconds) after not receiving a keepalive message that the device declares a peer dead. The range is from 3 to 7200. Setting this value to 0 will not send keep-alives (hold forever). |
| | | | **keepalive** integer / required | | Frequency (in seconds) with which the device sends keepalive messages to its peer. The range is from 0 to 3600. |
| | | **update\_source** string | | Source of the routing updates. |
| | **networks** list / elements=dictionary | | Specify networks to announce via BGP. For operation replace, this option is mutually exclusive with the *networks* option under address\_family. For operation replace, if the device already has an address family activated, this option is not allowed. |
| | | **masklen** integer | | Subnet mask length for the network to announce (e.g., 8, 16, 24). |
| | | **prefix** string / required | | Network ID to announce via BGP. |
| | | **route\_map** string | | Route map to modify the attributes. |
| | **redistribute** list / elements=dictionary | | Specifies the redistribute information from another routing protocol. |
| | | **protocol** string / required | **Choices:*** ospf
* ospf3
* static
* connected
* rip
* isis
| Specifies the protocol for configuring redistribute information. |
| | | **route\_map** string | | Specifies the route map reference. |
| | **router\_id** string | | Configures the BGP routing process router-id value. |
| **operation** string | **Choices:*** **merge** ←
* replace
* override
* delete
| Specifies the operation to be performed on the BGP process configured on the device. In case of merge, the input configuration will be merged with the existing BGP configuration on the device. In case of replace, if there is a diff between the existing configuration and the input configuration, the existing configuration will be replaced by the input configuration for every option that has the diff. In case of override, all the existing BGP configuration will be removed from the device and replaced with the input configuration. In case of delete the existing BGP configuration will be removed from the device. |
Notes
-----
Note
* Tested against Arista vEOS Version 4.15.9M.
Examples
--------
```
- name: configure global bgp as 64496
arista.eos.eos_bgp:
config:
bgp_as: 64496
router_id: 192.0.2.1
log_neighbor_changes: true
neighbors:
- neighbor: 203.0.113.5
remote_as: 64511
timers:
keepalive: 300
holdtime: 360
- neighbor: 198.51.100.2
remote_as: 64498
networks:
- prefix: 198.51.100.0
route_map: RMAP_1
- prefix: 192.0.2.0
masklen: 23
address_family:
- afi: ipv4
safi: unicast
redistribute:
- protocol: isis
route_map: RMAP_1
operation: merge
- name: Configure BGP neighbors
arista.eos.eos_bgp:
config:
bgp_as: 64496
neighbors:
- neighbor: 192.0.2.10
remote_as: 64496
description: IBGP_NBR_1
ebgp_multihop: 100
timers:
keepalive: 300
holdtime: 360
- neighbor: 192.0.2.15
remote_as: 64496
description: IBGP_NBR_2
ebgp_multihop: 150
operation: merge
- name: Configure root-level networks for BGP
arista.eos.eos_bgp:
config:
bgp_as: 64496
networks:
- prefix: 203.0.113.0
masklen: 27
route_map: RMAP_1
- prefix: 203.0.113.32
masklen: 27
route_map: RMAP_2
operation: merge
- name: Configure BGP neighbors under address family mode
arista.eos.eos_bgp:
config:
bgp_as: 64496
address_family:
- afi: ipv4
neighbors:
- neighbor: 203.0.113.10
activate: yes
default_originate: true
- neighbor: 192.0.2.15
activate: yes
graceful_restart: true
operation: merge
- name: remove bgp as 64496 from config
arista.eos.eos_bgp:
config:
bgp_as: 64496
operation: delete
```
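The *merge* and *delete* operations are shown above; *replace* and *override* take the same structure. A minimal sketch of a replace (the AS number and router-id are illustrative), which rewrites only the options that differ from the device configuration:

```
- name: Replace global BGP configuration for every option that differs
  arista.eos.eos_bgp:
    config:
      bgp_as: 64496
      router_id: 192.0.2.2
    operation: replace
```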
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['router bgp 64496', 'bgp router-id 192.0.2.1', 'bgp log-neighbor-changes', 'neighbor 203.0.113.5 remote-as 64511', 'neighbor 203.0.113.5 timers 300 360', 'neighbor 198.51.100.2 remote-as 64498', 'network 198.51.100.0 route-map RMAP\_1', 'network 192.0.2.0 mask 255.255.254.0', 'address-family ipv4', 'redistribute isis route-map RMAP\_1', 'exit-address-family'] |
Status
------
* This module will be removed in a major release after 2023-01-29. *[deprecated]*
* For more information see [DEPRECATED](#deprecated).
### Authors
* Nilashish Chakraborty (@NilashishC)
arista.eos.eos\_eapi – Manage and configure Arista EOS eAPI.
============================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_eapi`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Use this module to enable or disable eAPI access, and set the port and state of http, https, local\_http and unix-socket servers.
* When enabling eAPI access the default is to enable HTTP on port 80, enable HTTPS on port 443, disable local HTTP, and disable Unix socket server. Use the options listed below to override the default configuration.
* Requires EOS v4.12 or greater.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Requirements
------------
The below requirements are needed on the host that executes this module.
* EOS v4.12 or greater
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** string | | The module, by default, will connect to the remote device and retrieve the current running-config to use as a base for comparing against the contents of source. There are times when it is not desirable to have the task get the current running-config for every task in a playbook. The *config* argument allows the implementer to pass in the configuration to use as the base config for comparison. |
| **http** boolean | **Choices:*** no
* yes
| The `http` argument controls the operating state of the HTTP transport protocol when eAPI is present in the running-config. When the value is set to True, the HTTP protocol is enabled and when the value is set to False, the HTTP protocol is disabled. By default, when eAPI is first configured, the HTTP protocol is disabled.
aliases: enable\_http |
| **http\_port** integer | | Configures the HTTP port that will listen for connections when the HTTP transport protocol is enabled. This argument accepts integer values in the valid range of 1 to 65535. |
| **https** boolean | **Choices:*** no
* yes
| The `https` argument controls the operating state of the HTTPS transport protocol when eAPI is present in the running-config. When the value is set to True, the HTTPS protocol is enabled and when the value is set to False, the HTTPS protocol is disabled. By default, when eAPI is first configured, the HTTPS protocol is enabled.
aliases: enable\_https |
| **https\_port** integer | | Configures the HTTPS port that will listen for connections when the HTTPS transport protocol is enabled. This argument accepts integer values in the valid range of 1 to 65535. |
| **local\_http** boolean | **Choices:*** no
* yes
| The `local_http` argument controls the operating state of the local HTTP transport protocol when eAPI is present in the running-config. When the value is set to True, the HTTP protocol is enabled and restricted to connections from localhost only. When the value is set to False, the HTTP local protocol is disabled. Note this value is independent of the `http` argument.
aliases: enable\_local\_http |
| **local\_http\_port** integer | | Configures the local HTTP port that will listen for connections when the local HTTP transport protocol is enabled. This argument accepts integer values in the valid range of 1 to 65535. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **socket** boolean | **Choices:*** no
* yes
| The `socket` argument controls the operating state of the UNIX Domain Socket used to receive eAPI requests. When the value of this argument is set to True, the UDS will listen for eAPI requests. When the value is set to False, the UDS will not be available to handle requests. By default when eAPI is first configured, the UDS is disabled.
aliases: enable\_socket |
| **state** string | **Choices:*** **started** ←
* stopped
| The `state` argument controls the operational state of eAPI on the remote device. When this argument is set to `started`, eAPI is enabled to receive requests and when this argument is `stopped`, eAPI is disabled and will not receive requests. |
| **timeout** integer | **Default:**30 | The time (in seconds) to wait for the eAPI configuration to be reflected in the running-config. |
| **vrf** string | **Default:**"default" | The `vrf` argument will configure eAPI to listen for connections in the specified VRF. By default, eAPI transports will listen for connections in the global table. This value requires the VRF to already be created otherwise the task will fail. |
Notes
-----
Note
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: Enable eAPI access with default configuration
arista.eos.eos_eapi:
state: started
- name: Enable eAPI with no HTTP, HTTPS at port 9443, local HTTP at port 80, and socket
enabled
arista.eos.eos_eapi:
state: started
http: false
https_port: 9443
local_http: yes
local_http_port: 80
socket: yes
- name: Shutdown eAPI access
arista.eos.eos_eapi:
state: stopped
```
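The *vrf* option described above requires the VRF to already exist on the device before the task runs. A minimal sketch (the VRF name `mgmt` is illustrative) that makes eAPI listen for connections in a non-default VRF:

```
- name: Enable eAPI access in the mgmt VRF
  arista.eos.eos_eapi:
    state: started
    vrf: mgmt
```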
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['management api http-commands', 'protocol http port 81', 'no protocol https'] |
| **session\_name** string | when changed is True | The EOS config session name used to load the configuration **Sample:** ansible\_1479315771 |
| **urls** dictionary | when eAPI is started | Hash of URL endpoints eAPI is listening on per interface **Sample:** {'Management1': ['http://172.26.10.1:80']} |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos\_vlans – VLANs resource module
=============================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_vlans`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of VLANs on Arista EOS network devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of VLANs options |
| | **name** string | | Name of the VLAN. |
| | **state** string | **Choices:*** active
* suspend
| Operational state of the VLAN |
| | **vlan\_id** integer / required | | ID of the VLAN. Range 1-4094 |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section vlan**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* rendered
* gathered
* parsed
| The state of the configuration after module completion |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using deleted
# Before state:
# -------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# name twenty
- name: Delete attributes of the given VLANs.
arista.eos.eos_vlans:
config:
- vlan_id: 20
state: deleted
# After state:
# ------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# Using merged
# Before state:
# -------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# name twenty
- name: Merge given VLAN attributes with device configuration
arista.eos.eos_vlans:
config:
- vlan_id: 20
state: suspend
state: merged
# After state:
# ------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# name twenty
# state suspend
# Using overridden
# Before state:
# -------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# name twenty
- name: Override device configuration of all VLANs with provided configuration
arista.eos.eos_vlans:
config:
- vlan_id: 20
state: suspend
state: overridden
# After state:
# ------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 20
# state suspend
# Using replaced
# Before state:
# -------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# name twenty
- name: Replace all attributes of specified VLANs with provided configuration
arista.eos.eos_vlans:
config:
- vlan_id: 20
state: suspend
state: replaced
# After state:
# ------------
#
# veos(config-vlan-20)#show running-config | section vlan
# vlan 10
# name ten
# !
# vlan 20
# state suspend
# using parsed
# parsed.cfg
# vlan 10
# name ten
# !
# vlan 20
# name twenty
# state suspend
- name: Use parsed to convert native configs to structured data
arista.eos.eos_vlans:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# -------
# parsed:
# - vlan_id: 10
# name: ten
# - vlan_id: 20
# state: suspend
# Using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_vlans:
config:
- vlan_id: 10
name: ten
- vlan_id: 20
state: suspend
state: rendered
# Output:
# ------
# rendered:
# - "vlan 10"
# - "name ten"
# - "vlan 20"
# - "state suspend"
# Using gathered:
# native_config:
# vlan 10
# name ten
# !
# vlan 20
# name twenty
# state suspend
- name: Gather vlans facts from the device
arista.eos.eos_vlans:
state: gathered
# Output:
# ------
# gathered:
# - vlan_id: 10
# name: ten
# - vlan_id: 20
# state: suspend
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **before** list / elements=string | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['vlan 10', 'no name', 'vlan 11', 'name Eleven'] |
### Authors
* Nathaniel Case (@qalthos)
arista.eos.eos\_lacp – LACP resource module
===========================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_lacp`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages Global Link Aggregation Control Protocol (LACP) on Arista EOS devices.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | LACP global options. |
| | **system** dictionary | | LACP system options. |
| | | **priority** integer | | The system priority to use in LACP negotiations. Lower value is higher priority. Refer to vendor documentation for valid values. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section ^lacp**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** **merged** ←
* replaced
* deleted
* parsed
* rendered
* gathered
| The state of the configuration after module completion. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using merged
# Before state:
# -------------
# veos# show running-config | include lacp
# lacp system-priority 10
- name: Merge provided global LACP attributes with device attributes
arista.eos.eos_lacp:
config:
system:
priority: 20
state: merged
# After state:
# ------------
# veos# show running-config | include lacp
# lacp system-priority 20
#
# Using replaced
# Before state:
# -------------
# veos# show running-config | include lacp
# lacp system-priority 10
- name: Replace device global LACP attributes with provided attributes
arista.eos.eos_lacp:
config:
system:
priority: 20
state: replaced
# After state:
# ------------
# veos# show running-config | include lacp
# lacp system-priority 20
#
# Using deleted
# Before state:
# -------------
# veos# show running-config | include lacp
# lacp system-priority 10
- name: Delete global LACP attributes
arista.eos.eos_lacp:
state: deleted
# After state:
# ------------
# veos# show running-config | include lacp
#
#Using rendered:
- name: Use Rendered to convert the structured data to native config
arista.eos.eos_lacp:
config:
system:
priority: 20
state: rendered
# Output:
# ------------
# rendered:
# - "lacp system-priority 20"
#
# Using parsed:
# parsed.cfg
# lacp system-priority 20
- name: Use parsed to convert native configs to structured data
arista.eos.eos_lacp:
running_config: "{{ lookup('file', 'parsed.cfg') }}"
state: parsed
# Output:
# parsed:
# system:
# priority: 20
# Using gathered:
# native config:
# -------------
# lacp system-priority 10
- name: Gather lacp facts from the device
arista.eos.eos_lacp:
state: gathered
# Output:
# gathered:
# system:
# priority: 10
#
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** dictionary | when changed | The configuration as structured data after module completion. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **before** dictionary | always | The configuration as structured data prior to module invocation. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['lacp system-priority 10'] |
### Authors
* Nathaniel Case (@Qalthos)
arista.eos.eos – Use eos cliconf to run command on Arista EOS platform
======================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
Synopsis
--------
* This eos plugin provides low-level abstraction APIs for sending and receiving CLI commands to and from Arista EOS network devices.
Parameters
----------
| Parameter | Choices/Defaults | Configuration | Comments |
| --- | --- | --- | --- |
| **config\_commands** list / elements=string added in 2.0.0 of arista.eos | **Default:**[] | var: ansible\_eos\_config\_commands | Specifies a list of commands that can make configuration changes to the target device. When `ansible\_network\_single\_user\_mode` is enabled, if a command sent to the device is present in this list, the existing cache is invalidated. |
| **eos\_use\_sessions** boolean | **Choices:*** no
* **yes** ←
| env:ANSIBLE\_EOS\_USE\_SESSIONS var: ansible\_eos\_use\_sessions | Specifies whether sessions should be used on the remote host. |
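As a hedged sketch of how these options are typically supplied, the `var:` entries above map to inventory or play variables; the command list below is illustrative:

```
# group_vars/eos.yml (illustrative)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_eos_use_sessions: yes
ansible_eos_config_commands:
  - vlan 100
```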
### Authors
* Ansible Networking Team
arista.eos.eos\_acl\_interfaces – ACL interfaces resource module
================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_acl_interfaces`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module manages adding and removing Access Control Lists (ACLs) from interfaces on devices running EOS software.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A dictionary of ACL options for interfaces. |
| | **access\_groups** list / elements=dictionary | | Specifies ACLs attached to the interfaces. |
| | | **acls** list / elements=dictionary | | Specifies the ACLs for the provided AFI. |
| | | | **direction** string / required | **Choices:*** in
* out
| Specifies the direction of packets that the ACL will be applied on. |
| | | | **name** string / required | | Specifies the name of the IPv4/IPv6 ACL for the interface. |
| | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Specifies the AFI for the ACL(s) to be configured on this interface. |
| | **name** string / required | | Name/Identifier for the interface. |
| **running\_config** string | | The module, by default, will connect to the remote device and retrieve the current running-config to use as a base for comparing against the contents of source. There are times when it is not desirable to have the task get the current running-config for every task in a playbook. The *running\_config* argument allows the implementer to pass in the configuration to use as the base config for comparison. The value of this option should be the output received from the device by executing the relevant **show running-config** command. |
| **state** string | **Choices:*** **merged** ←
* replaced
* overridden
* deleted
* gathered
* parsed
* rendered
| The state the configuration should be left in. |
Examples
--------
```
# Using Merged
# Before state:
# -------------
#
# eos#sh running-config | include interface|access-group
# interface Ethernet1
# interface Ethernet2
# interface Ethernet3
- name: Merge module attributes of given access-groups
arista.eos.eos_acl_interfaces:
config:
- name: Ethernet2
access_groups:
- afi: ipv4
        acls:
          - name: acl01
            direction: in
      - afi: ipv6
        acls:
          - name: acl03
            direction: out
state: merged
# Commands Fired:
# ---------------
#
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# Using Replaced
# Before state:
# -------------
#
# eos#sh running-config | include interface|access-group
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# ip access-group acl01 in
- name: Replace module attributes of given access-groups
arista.eos.eos_acl_interfaces:
config:
- name: Ethernet2
access_groups:
- afi: ipv4
        acls:
          - name: acl01
            direction: out
state: replaced
# Commands Fired:
# ---------------
#
# interface Ethernet2
# no ip access-group acl01 in
# no ipv6 access-group acl03 out
# ip access-group acl01 out
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 out
# interface Ethernet3
# ip access-group acl01 in
# Using Overridden
# Before state:
# -------------
#
# eos#sh running-config | include interface|access-group
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# ip access-group acl01 in
- name: Override module attributes of given access-groups
arista.eos.eos_acl_interfaces:
config:
- name: Ethernet2
access_groups:
- afi: ipv4
        acls:
          - name: acl01
            direction: out
state: overridden
# Commands Fired:
# ---------------
#
# interface Ethernet2
# no ip access-group acl01 in
# no ipv6 access-group acl03 out
# ip access-group acl01 out
# interface Ethernet3
# no ip access-group acl01 in
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 out
# interface Ethernet3
# Using Deleted
# Before state:
# -------------
#
# eos#sh running-config | include interface|access-group
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# ip access-group acl01 out
- name: Delete module attributes of given access-groups
arista.eos.eos_acl_interfaces:
config:
- name: Ethernet2
access_groups:
- afi: ipv4
        acls:
          - name: acl01
            direction: in
      - afi: ipv6
        acls:
          - name: acl03
            direction: out
state: deleted
# Commands Fired:
# ---------------
#
# interface Ethernet2
# no ip access-group acl01 in
# no ipv6 access-group acl03 out
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# interface Ethernet3
# ip access-group acl01 out
# Before state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# ip access-group acl01 out
- name: Delete module attributes of given access-groups from ALL Interfaces
arista.eos.eos_acl_interfaces:
config:
state: deleted
# Commands Fired:
# ---------------
#
# interface Ethernet2
# no ip access-group acl01 in
# no ipv6 access-group acl03 out
# interface Ethernet3
# no ip access-group acl01 out
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# interface Ethernet3
# Before state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# ipv6 access-group acl03 out
# interface Ethernet3
# ip access-group acl01 out
- name: Delete acls under afi
arista.eos.eos_acl_interfaces:
config:
- name: Ethernet3
access_groups:
- afi: ipv4
- name: Ethernet2
access_groups:
- afi: ipv6
state: deleted
# Commands Fired:
# ---------------
#
# interface Ethernet2
# no ipv6 access-group acl03 out
# interface Ethernet3
# no ip access-group acl01 out
# After state:
# -------------
#
# eos#sh running-config | include interface| access-group
# interface Loopback888
# interface Ethernet1
# interface Ethernet2
# ip access-group acl01 in
# interface Ethernet3
```
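The *parsed*, *rendered* and *gathered* states are not shown above. A minimal sketch of *parsed* (the file name is illustrative), following the same pattern as the other resource modules in this collection:

```
- name: Use parsed to convert native configs to structured data
  arista.eos.eos_acl_interfaces:
    running_config: "{{ lookup('file', 'parsed.cfg') }}"
    state: parsed
```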
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The resulting configuration model invocation. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **before** list / elements=string | always | The configuration prior to the model invocation. **Sample:** The configuration returned will always be in the same format of the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['interface Ethernet2', 'ip access-group acl01 in', 'ipv6 access-group acl03 out', 'interface Ethernet3', 'ip access-group acl01 out'] |
### Authors
* GomathiSelvi S (@GomathiselviS)
arista.eos.eos\_system – Manage the system attributes on Arista EOS devices
===========================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_system`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module provides declarative management of node system attributes on Arista EOS devices. It provides an option to configure host system parameters or remove those parameters from the device active configuration.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **domain\_list** list / elements=string | | Provides the list of domain suffixes to append to the hostname for the purpose of doing name resolution. This argument accepts a list of names and will be reconciled with the current active configuration on the running node.
aliases: domain\_search |
| **domain\_name** string | | Configure the IP domain name on the remote device to the provided value. Value should be in the dotted name form and will be appended to the `hostname` to create a fully-qualified domain name. |
| **hostname** string | | Configure the device hostname parameter. This option takes an ASCII string value. |
| **lookup\_source** list / elements=raw | | Provides one or more source interfaces to use for performing DNS lookups. The interface provided in `lookup_source` can only exist in a single VRF. This argument accepts either a list of interface names or a list of hashes that configure the interface name and VRF name. See examples. |
| **name\_servers** list / elements=string | | List of DNS name servers by IP address to use to perform name resolution lookups. This argument accepts either a list of DNS servers or a list of hashes that configure the name server and VRF name. See examples. |
| **provider** dictionary | | **Deprecated** Starting with Ansible 2.5 we recommend using `connection: network_cli`. Starting with Ansible 2.6 we recommend using `connection: httpapi` for eAPI. This option will be removed in a release after 2022-06-01. For more information please see the [EOS Platform Options guide](../network/user_guide/platform_eos). A dict object containing connection details. |
| | **auth\_pass** string | | Specifies the password to use if required to enter privileged mode on the remote device. If *authorize* is false, then this argument does nothing. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTH_PASS` will be used instead. |
| | **authorize** boolean | **Choices:*** **no** ←
* yes
| Instructs the module to enter privileged mode on the remote device before sending any commands. If not specified, the device will attempt to execute all commands in non-privileged mode. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_AUTHORIZE` will be used instead. |
| | **host** string | | Specifies the DNS host name or address for connecting to the remote device over the specified transport. The value of host is used as the destination address for the transport. |
| | **password** string | | Specifies the password to use to authenticate the connection to the remote device. This is a common argument used for either *cli* or *eapi* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_PASSWORD` will be used instead. |
| | **port** integer | **Default:**0 | Specifies the port to use when building the connection to the remote device. This value applies to either *cli* or *eapi*. The port value will default to the appropriate transport common port if none is provided in the task (cli=22, http=80, https=443). |
| | **ssh\_keyfile** path | | Specifies the SSH keyfile to use to authenticate the connection to the remote device. This argument is only used for *cli* transports. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_SSH_KEYFILE` will be used instead. |
| | **timeout** integer | | Specifies the timeout in seconds for communicating with the network device for either connecting or sending commands. If the timeout is exceeded before the operation is completed, the module will error. |
| | **transport** string | **Choices:*** **cli** ←
* eapi
| Configures the transport connection to use when connecting to the remote device. |
| | **use\_proxy** boolean | **Choices:*** no
* **yes** ←
| If `no`, the environment variables `http_proxy` and `https_proxy` will be ignored. |
| | **use\_ssl** boolean | **Choices:*** no
* **yes** ←
| Configures the *transport* to use SSL if set to `yes` only when the `transport=eapi`. If the transport argument is not eapi, this value is ignored. |
| | **username** string | | Configures the username to use to authenticate the connection to the remote device. This value is used to authenticate either the CLI login or the eAPI authentication depending on which transport is used. If the value is not specified in the task, the value of environment variable `ANSIBLE_NET_USERNAME` will be used instead. |
| | **validate\_certs** boolean | **Choices:*** no
* **yes** ←
| If `no`, SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. If the transport argument is not eapi, this value is ignored. |
| **state** string | **Choices:*** **present** ←
* absent
| State of the configuration values in the device's current active configuration. When set to *present*, the values should be configured in the device active configuration and when set to *absent* the values should not be in the device active configuration |
Notes
-----
Note
* Tested against EOS 4.15
* For information on using CLI, eAPI and privileged mode see the [EOS Platform Options guide](../../../network/user_guide/platform_eos#eos-platform-options)
* For more information on using Ansible to manage network devices see the [Ansible Network Guide](../../../network/index#network-guide)
* For more information on using Ansible to manage Arista EOS devices see the [Arista integration page](https://www.ansible.com/ansible-arista-networks).
Examples
--------
```
- name: configure hostname and domain-name
arista.eos.eos_system:
hostname: eos01
domain_name: test.example.com
- name: remove configuration
arista.eos.eos_system:
state: absent
- name: configure DNS lookup sources
arista.eos.eos_system:
lookup_source: Management1
- name: configure DNS lookup sources with VRF support
arista.eos.eos_system:
lookup_source:
- interface: Management1
vrf: mgmt
- interface: Ethernet1
vrf: myvrf
- name: configure name servers
arista.eos.eos_system:
name_servers:
- 8.8.8.8
- 8.8.4.4
- name: configure name servers with VRF support
arista.eos.eos_system:
name_servers:
- {server: 8.8.8.8, vrf: mgmt}
- {server: 8.8.4.4, vrf: mgmt}
```
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **commands** list / elements=string | always | The list of configuration mode commands to send to the device **Sample:** ['hostname eos01', 'ip domain-name test.example.com'] |
| **session\_name** string | changed | The EOS config session name used to load the configuration **Sample:** ansible\_1479315771 |
### Authors
* Peter Sprygada (@privateip)
arista.eos.eos\_static\_routes – Static routes resource module
==============================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_static_routes`.
New in version 1.0.0: of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* This module configures and manages the attributes of static routes on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** list / elements=dictionary | | A list of configurations for static routes. |
| | **address\_families** list / elements=dictionary | | A dictionary specifying the address family to which the static route(s) belong. |
| | | **afi** string / required | **Choices:*** ipv4
* ipv6
| Specifies the top level address family indicator. |
| | | **routes** list / elements=dictionary | | A dictionary that specifies the static route configurations. |
| | | | **dest** string / required | | Destination IPv4/IPv6 subnet (CIDR or address-mask notation). The address format is <v4/v6 address>/<mask> or <v4/v6 address> <mask>. The mask is a number in the range 0-32 for IPv4 and 0-128 for IPv6. |
| | | | **next\_hops** list / elements=dictionary | | Details of route to be taken. |
| | | | | **admin\_distance** integer | | Preference or administrative distance of route (range 1-255). |
| | | | | **description** string | | Name of the static route. |
| | | | | **forward\_router\_address** string | | Forwarding router's address on destination interface. |
| | | | | **interface** string | | Outgoing interface to take. For anything except `null0`, the next hop IP address should also be configured. Accepted values are the IP address of the next hop router, `Null0` (Null0 interface), `ethernet e_num` (Ethernet interface), `loopback l_num` (Loopback interface), `management m_num` (Management interface), `port-channel p_num`, `vlan v_num`, `vxlan vx_num`, `Nexthop-Group` (specify nexthop group name), `Tunnel` (tunnel interface), or `vtep` (configure VXLAN tunnel end points). |
| | | | | **mpls\_label** integer | | MPLS label |
| | | | | **nexthop\_grp** string | | Nexthop group |
| | | | | **tag** integer | | Route tag value (ranges from 0 to 4294967295). |
| | | | | **track** string | | Track value (range 1 - 512). Track must already be configured on the device before adding the route. |
| | | | | **vrf** string | | VRF of the destination. |
| | **vrf** string | | The VRF to which the static route(s) belong. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | grep routes**. The state *parsed* reads the configuration from `running_config` option and transforms it into Ansible structured data as per the resource module's argspec and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* overridden
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. |
Notes
-----
Note
* Tested against Arista EOS 4.20.10M
* This module works with connection `network_cli`. See the [EOS Platform Options](../network/user_guide/platform_eos).
Examples
--------
```
# Using deleted
# Before State:
# ------------
# veos(config)#show running-config | grep route
# ip route vrf testvrf 22.65.1.0/24 Null0 90 name testroute
# ipv6 route 5222:5::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Ethernet1 55
# ipv6 route vrf testvrf 2222:6::/64 Null0 90 name testroute1
# veos(config)#
- name: Delete afi
arista.eos.eos_static_routes:
config:
- vrf: testvrf
address_families:
- afi: ipv4
state: deleted
# "after": [
# {
# "address_families": [
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5222:5::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "2222:6::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# },
# {
# "admin_distance": 55,
# "interface": "Ethernet1"
# },
# {
# "admin_distance": 90,
# "description": "testroute1",
# "interface": "Null0"
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ],
# "before": [
# {
# "address_families": [
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5222:5::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "22.65.1.0/24",
# "next_hops": [
# {
# "admin_distance": 90,
# "description": "testroute",
# "interface": "Null0"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "2222:6::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# },
# {
# "admin_distance": 55,
# "interface": "Ethernet1"
# },
# {
# "admin_distance": 90,
# "description": "testroute1",
# "interface": "Null0"
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ],
# "changed": true,
# "commands": [
# "no ip route vrf testvrf 22.65.1.0/24 Null0 90 name testroute"
# ],
# After State
# -----------
# veos(config)#show running-config | grep route
# ipv6 route 5222:5::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Ethernet1 55
# ipv6 route vrf testvrf 2222:6::/64 Null0 90 name testroute1
#
# Using merged
# Before: [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "165.10.1.0/24",
# "next_hops": [
# {
# "admin_distance": 100,
# "interface": "Ethernet1"
# }
# ]
# },
# {
# "dest": "172.17.252.0/24",
# "next_hops": [
# {
# "nexthop_grp": "testgroup"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5001::/64",
# "next_hops": [
# {
# "admin_distance": 50,
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "130.1.122.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1",
# "tag": 50
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ]
#
# Before State
# -------------
# veos(config)#show running-config | grep "route"
# ip route 165.10.1.0/24 Ethernet1 100
# ip route 172.17.252.0/24 Nexthop-Group testgroup
# ip route vrf testvrf 130.1.122.0/24 Ethernet1 tag 50
# ipv6 route 5001::/64 Ethernet1 50
# veos(config)#
- name: Merge new static route configuration
arista.eos.eos_static_routes:
config:
- vrf: testvrf
address_families:
- afi: ipv6
routes:
- dest: 2211::0/64
next_hops:
- forward_router_address: 100:1::2
interface: Ethernet1
state: merged
# After State
# -----------
# After: [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "165.10.1.0/24",
# "next_hops": [
# {
# "admin_distance": 100,
# "interface": "Ethernet1"
# }
# ]
# },
# {
# "dest": "172.17.252.0/24",
# "next_hops": [
# {
# "nexthop_grp": "testgroup"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5001::/64",
# "next_hops": [
# {
# "admin_distance": 50,
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "130.1.122.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1",
# "tag": 50
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "2211::0/64",
# "next_hops": [
# {
# "aforward_router_address": 100:1::2
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ]
#
# veos(config)#show running-config | grep "route"
# ip route 165.10.1.0/24 Ethernet1 100
# ip route 172.17.252.0/24 Nexthop-Group testgroup
# ip route vrf testvrf 130.1.122.0/24 Ethernet1 tag 50
# ipv6 route 2211::/64 Ethernet1 100:1::2
# ipv6 route 5001::/64 Ethernet1 50
# veos(config)#
# Using overridden
# Before State
# -------------
# "before": [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "165.10.1.0/24",
# "next_hops": [
# {
# "admin_distance": 100,
# "interface": "Ethernet1"
# }
# ]
# },
# {
# "dest": "172.17.252.0/24",
# "next_hops": [
# {
# "nexthop_grp": "testgroup"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5001::/64",
# "next_hops": [
# {
# "admin_distance": 50,
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "130.1.122.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1",
# "tag": 50
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ]
# veos(config)#show running-config | grep "route"
# ip route 165.10.1.0/24 Ethernet1 100
# ip route 172.17.252.0/24 Nexthop-Group testgroup
# ip route vrf testvrf 130.1.122.0/24 Ethernet1 tag 50
# ipv6 route 5001::/64 Ethernet1 50
# veos(config)#
- name: Overridden static route configuration
arista.eos.eos_static_routes:
config:
- address_families:
- afi: ipv4
routes:
- dest: 10.2.2.0/24
next_hops:
- interface: Ethernet1
state: overridden
# After State
# -----------
# "after": [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "10.2.2.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ]
# }
# ]
# veos(config)#show running-config | grep "route"
# ip route 10.2.2.0/24 Ethernet1
# veos(config)#
# Using replaced
# Before State
# -------------
# ip route 10.2.2.0/24 Ethernet1
# ip route 10.2.2.0/24 64.1.1.1 label 17 33
# ip route 33.33.33.0/24 Nexthop-Group testgrp
# ip route vrf testvrf 22.65.1.0/24 Null0 90 name testroute
# ipv6 route 5222:5::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Ethernet1 55
# ipv6 route vrf testvrf 2222:6::/64 Null0 90 name testroute1
# [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "10.2.2.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1"
# },
# {
# "admin_distance": 33,
# "interface": "64.1.1.1",
# "mpls_label": 17
# }
# ]
# },
# {
# "dest": "33.33.33.0/24",
# "next_hops": [
# {
# "nexthop_grp": "testgrp"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5222:5::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "22.65.1.0/24",
# "next_hops": [
# {
# "admin_distance": 90,
# "description": "testroute",
# "interface": "Null0"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "2222:6::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# },
# {
# "admin_distance": 90,
# "description": "testroute1",
# "interface": "Null0"
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ]
- name: Replace nexthop
arista.eos.eos_static_routes:
config:
- vrf: testvrf
address_families:
- afi: ipv6
routes:
- dest: 2222:6::/64
next_hops:
- admin_distance: 55
interface: Ethernet1
state: replaced
# After State
# -----------
# veos(config)#show running-config | grep route
# ip route 10.2.2.0/24 Ethernet1
# ip route 10.2.2.0/24 64.1.1.1 label 17 33
# ip route 33.33.33.0/24 Nexthop-Group testgrp
# ip route vrf testvrf 22.65.1.0/24 Null0 90 name testroute
# ipv6 route 5222:5::/64 Management1 4312:100::1
# ipv6 route vrf testvrf 2222:6::/64 Ethernet1 55
# "after": [
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "10.2.2.0/24",
# "next_hops": [
# {
# "interface": "Ethernet1"
# },
# {
# "admin_distance": 33,
# "interface": "64.1.1.1",
# "mpls_label": 17
# }
# ]
# },
# {
# "dest": "33.33.33.0/24",
# "next_hops": [
# {
# "nexthop_grp": "testgrp"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "5222:5::/64",
# "next_hops": [
# {
# "forward_router_address": "4312:100::1",
# "interface": "Management1"
# }
# ]
# }
# ]
# }
# ]
# },
# {
# "address_families": [
# {
# "afi": "ipv4",
# "routes": [
# {
# "dest": "22.65.1.0/24",
# "next_hops": [
# {
# "admin_distance": 90,
# "description": "testroute",
# "interface": "Null0"
# }
# ]
# }
# ]
# },
# {
# "afi": "ipv6",
# "routes": [
# {
# "dest": "2222:6::/64",
# "next_hops": [
# {
# "admin_distance": 55,
# "interface": "Ethernet1"
# }
# ]
# }
# ]
# }
# ],
# "vrf": "testvrf"
# }
# ]
# Before State
# -------------
# veos(config)#show running-config | grep "route"
# ip route 165.10.1.0/24 Ethernet1 10.1.1.2 100
# ipv6 route 5001::/64 Ethernet1
# veos(config)#
- name: Gather the existing configuration
arista.eos.eos_static_routes:
state: gathered
# returns:
# arista.eos.eos_static_routes:
# config:
# - address_families:
# - afi: ipv4
# routes:
# - dest: 165.10.1.0/24
# next_hops:
# - forward_router_address: 10.1.1.2
# interface: "Ethernet1"
# admin_distance: 100
# - afi: ipv6
# routes:
# - dest: 5001::/64
# next_hops:
# - interface: "Ethernet1"
# Using rendered
# arista.eos.eos_static_routes:
# config:
# - address_families:
# - afi: ipv4
# routes:
# - dest: 165.10.1.0/24
# next_hops:
# - forward_router_address: 10.1.1.2
# interface: "Ethernet1"
# admin_distance: 100
# - afi: ipv6
# routes:
# - dest: 5001::/64
# next_hops:
# - interface: "Ethernet1"
# returns:
# ip route 165.10.1.0/24 Ethernet1 10.1.1.2 100
# ipv6 route 5001::/64 Ethernet1
```
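The *rendered* block at the end of the examples above is shown only as comments; expressed as an actual task it might look like the following (a sketch assembled from that commented example, not validated against a device):

```
- name: Render CLI commands from the given configuration (sketch)
  arista.eos.eos_static_routes:
    config:
      - address_families:
          - afi: ipv4
            routes:
              - dest: 165.10.1.0/24
                next_hops:
                  - forward_router_address: 10.1.1.2
                    interface: Ethernet1
                    admin_distance: 100
          - afi: ipv6
            routes:
              - dest: 5001::/64
                next_hops:
                  - interface: Ethernet1
    state: rendered
  register: result

# result.rendered would contain CLI lines such as:
#   ip route 165.10.1.0/24 Ethernet1 10.1.1.2 100
#   ipv6 route 5001::/64 Ethernet1
```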
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values); the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **after** list / elements=string | when changed | The resulting configuration model invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **before** list / elements=string | always | The configuration prior to the model invocation. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **commands** list / elements=string | always | The set of commands pushed to the remote device. **Sample:** ['ip route vrf vrf1 192.2.2.0/24 125.2.3.1 93'] |
| **gathered** list / elements=string | When `state` is *gathered* | The configuration as structured data, transformed from the running configuration fetched from the remote host. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **parsed** list / elements=string | When `state` is *parsed* | The configuration as structured data, transformed from the value of the `running_config` option. **Sample:** The configuration returned will always be in the same format as the parameters above. |
| **rendered** list / elements=string | When `state` is *rendered* | The set of CLI commands generated from the value in `config` option **Sample:** "address\_families": [ { "afi": "ipv4", "routes": [ { "dest": "192.2.2.0/24", "next\_hops": [ { "admin\_distance": 93, "description": null, "forward\_router\_address": null, "interface": "125.2.3.1", "mpls\_label": null, "nexthop\_grp": null, "tag": null, "track": null, "vrf": null } ] } ] } ], "vrf": "vrf1" } ], "running\_config": null, "state": "rendered" } |
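Because the `commands` key is always returned, a dry run can be used to preview the CLI that would be pushed before changing anything. The following is a sketch of that pattern (the route shown is taken from the examples above):

```
- name: Preview commands without changing the device (sketch)
  arista.eos.eos_static_routes:
    config:
      - address_families:
          - afi: ipv4
            routes:
              - dest: 10.2.2.0/24
                next_hops:
                  - interface: Ethernet1
    state: merged
  check_mode: true
  register: preview

- name: Show the commands that would have been pushed
  ansible.builtin.debug:
    var: preview.commands
```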
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
arista.eos.eos\_bgp\_global – Manages BGP global resource module
================================================================
Note
This plugin is part of the [arista.eos collection](https://galaxy.ansible.com/arista/eos) (version 2.2.0).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install arista.eos`.
To use it in a playbook, specify: `arista.eos.eos_bgp_global`.
New in version 1.4.0 of arista.eos
* [Synopsis](#synopsis)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* This module configures and manages the attributes of BGP global on Arista EOS platforms.
Note
This module has a corresponding [action plugin](../../../plugins/action#action-plugins).
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **config** dictionary | | BGP global configuration. |
| | **access\_group** dictionary | | ip/ipv6 access list configuration. |
| | | **acl\_name** string | | access list name. |
| | | **afi** string | **Choices:*** ipv4
* ipv6
| Specify ip/ipv6. |
| | | **direction** string | | direction of packets. |
| | **aggregate\_address** list / elements=dictionary | | Configure aggregate address. |
| | | **address** string | | ipv4/ipv6 address prefix. |
| | | **advertise\_only** boolean | **Choices:*** no
* yes
| Advertise without installing the generated blackhole route in FIB. |
| | | **as\_set** boolean | **Choices:*** no
* yes
| Generate autonomous system set path information. |
| | | **attribute\_map** string | | Name of the route map used to set the attribute of the aggregate route. |
| | | **match\_map** string | | Name of the route map used to filter the contributors of the aggregate route. |
| | | **summary\_only** boolean | **Choices:*** no
* yes
| Filters all more-specific routes from updates. |
| | **as\_number** string | | Autonomous system number. |
| | **bgp\_params** dictionary | | BGP parameters. |
| | | **additional\_paths** string | **Choices:*** install
* send
* receive
| BGP additional-paths commands |
| | | **advertise\_inactive** boolean | **Choices:*** no
* yes
| Advertise BGP routes even if they are inactive in RIB. |
| | | **allowas\_in** dictionary | | Allow local-as in updates. |
| | | | **count** integer | | Number of local ASNs allowed in a BGP update. |
| | | | **set** boolean | **Choices:*** no
* yes
| When True, it is set. |
| | | **always\_compare\_med** boolean | **Choices:*** no
* yes
| BGP Always Compare MED |
| | | **asn** string | **Choices:*** asdot
* asplain
| AS Number notation. |
| | | **auto\_local\_addr** boolean | **Choices:*** no
* yes
| Automatically determine the local address to be used for the non-transport AF. |
| | | **bestpath** dictionary | | Select the bestpath selection algorithm for BGP routes. |
| | | | **as\_path** string | **Choices:*** ignore
* multipath\_relax
| Select the bestpath selection based on as-path. |
| | | | **ecmp\_fast** boolean | **Choices:*** no
* yes
| Tie-break BGP paths in an ECMP group based on the order of arrival. |
| | | | **med** dictionary | | MED attribute |
| | | | | **confed** boolean | **Choices:*** no
* yes
| MED Confed. |
| | | | | **missing\_as\_worst** boolean | **Choices:*** no
* yes
| MED missing-as-worst. |
| | | | **skip** boolean | **Choices:*** no
* yes
| skip one of the tie breaking rules in the bestpath selection. |
| | | | **tie\_break** string | **Choices:*** cluster\_list\_length
* router\_id
| Configure the tie-break option for BGP bestpath selection. |
| | | **client\_to\_client** boolean | **Choices:*** no
* yes
| client to client configuration. |
| | | **cluster\_id** string | | Cluster ID of this router acting as a route reflector. |
| | | **confederation** dictionary | | confederation. |
| | | | **identifier** string | | Confederation identifier. |
| | | | **peers** string | | Confederation peers. |
| | | **control\_plane\_filter** boolean | **Choices:*** no
* yes
| Control plane filter for BGP. |
| | | **convergence** dictionary | | BGP convergence parameters. |
| | | | **slow\_peer** boolean | **Choices:*** no
* yes
| Maximum amount of time to wait for slow peers to establish session. |
| | | | **time** integer | | time in secs |
| | | **default** string | **Choices:*** ipv4\_unicast
* ipv6\_unicast
| Default neighbor configuration commands. |
| | | **enforce\_first\_as** boolean | **Choices:*** no
* yes
| Enforce the First AS for EBGP routes(default). |
| | | **host\_routes** boolean | **Choices:*** no
* yes
| BGP host routes configuration. |
| | | **labeled\_unicast** string | **Choices:*** ip
* tunnel
| Labeled Unicast. |
| | | **listen** dictionary | | BGP listen. |
| | | | **limit** integer | | Set limit on the number of dynamic BGP peers allowed. |
| | | | **range** dictionary | | Subnet Range to be associated with the peer-group. |
| | | | | **address** string | | Address prefix |
| | | | | **peer\_group** dictionary | | Name of peer group. |
| | | | | | **name** string | | name. |
| | | | | | **peer\_filter** string | | Name of peer filter. |
| | | | | | **remote\_as** string | | Neighbor AS number |
| | | **log\_neighbor\_changes** boolean | **Choices:*** no
* yes
| Log neighbor up/down events. |
| | | **missing\_policy** dictionary | | Missing policy override configuration commands. |
| | | | **action** string | **Choices:*** deny
* permit
* deny-in-out
| Missing policy action options. |
| | | | **direction** string | **Choices:*** in
* out
| Missing policy direction options. |
| | | **monitoring** boolean | **Choices:*** no
* yes
| Enable Bgp monitoring for all/specified stations. |
| | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | **redistribute\_internal** boolean | **Choices:*** no
* yes
| Redistribute internal BGP routes. |
| | | **route** string | | Configure route-map for route installation. |
| | | **route\_reflector** dictionary | | Configure route reflector options |
| | | | **preserve** boolean | **Choices:*** no
* yes
| preserve route attributes, overwriting route-map changes |
| | | | **set** boolean | **Choices:*** no
* yes
| When True route\_reflector is set. |
| | | **transport** integer | | Configure transport port for TCP session |
| | **default\_metric** integer | | Default metric. |
| | **distance** dictionary | | Define an administrative distance. |
| | | **external** integer | | distance for external routes. |
| | | **internal** integer | | distance for internal routes. |
| | | **local** integer | | distance for local routes. |
| | **graceful\_restart** dictionary | | Enable graceful restart mode. |
| | | **restart\_time** integer | | Set the max time needed to restart and come back up. |
| | | **set** boolean | **Choices:*** no
* yes
| When True, graceful restart is set. |
| | | **stalepath\_time** integer | | Set the max time to hold onto restarting peer stale paths. |
| | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| Enable graceful restart helper mode. |
| | **maximum\_paths** dictionary | | Maximum number of equal cost paths. |
| | | **max\_equal\_cost\_paths** integer | | Value for maximum number of equal cost paths. |
| | | **max\_installed\_ecmp\_paths** integer | | Value for maximum number of installed ECMP routes. |
| | **monitoring** dictionary | | BGP monitoring protocol configuration. |
| | | **port** integer | | Configure the BGP monitoring protocol port number <1024-65535>. |
| | | **received** string | **Choices:*** post\_policy
* pre\_policy
| BGP monitoring protocol received route selection. |
| | | **station** string | | BGP monitoring station configuration. |
| | | **timestamp** string | **Choices:*** none
* send\_time
| BGP monitoring protocol Per-Peer Header timestamp behavior. |
| | **neighbor** list / elements=dictionary | | Configure BGP neighbors.
aliases: neighbors |
| | | **additional\_paths** string | **Choices:*** send
* receive
| BGP additional-paths commands. |
| | | **allowas\_in** dictionary | | Allow local-as in updates. |
| | | | **count** integer | | Number of local ASNs allowed in a BGP update. |
| | | | **set** boolean | **Choices:*** no
* yes
| When True, it is set. |
| | | **auto\_local\_addr** boolean | **Choices:*** no
* yes
| Automatically determine the local address to be used for the non-transport AF. |
| | | **default\_originate** dictionary | | Originate default route to this neighbor. |
| | | | **always** boolean | **Choices:*** no
* yes
| Always originate default route to this neighbor. |
| | | | **route\_map** string | | Route map reference. |
| | | **description** string | | Text describing the neighbor. |
| | | **dont\_capability\_negotiate** boolean | **Choices:*** no
* yes
| Do not perform Capability Negotiation with this neighbor. |
| | | **ebgp\_multihop** dictionary | | Allow BGP connections to indirectly connected external peers. |
| | | | **set** boolean | **Choices:*** no
* yes
| If True, ebgp multihop is enabled without specifying a TTL. |
| | | | **ttl** integer | | Time-to-live in the range 1-255 hops. |
| | | **encryption\_password** dictionary | | Password to use in computation of MD5 hash. |
| | | | **password** string | | password (up to 80 chars). |
| | | | **type** integer | **Choices:*** 0
* 7
| Encryption type. |
| | | **enforce\_first\_as** boolean | **Choices:*** no
* yes
| Enforce the First AS for EBGP routes(default). |
| | | **export\_localpref** integer | | Override localpref when exporting to an internal peer. |
| | | **fall\_over** boolean | **Choices:*** no
* yes
| Configure BFD protocol options for this peer. |
| | | **graceful\_restart** boolean | **Choices:*** no
* yes
| Enable graceful restart mode. |
| | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| Enable graceful restart helper mode. |
| | | **idle\_restart\_timer** integer | | Neighbor idle restart timer. |
| | | **import\_localpref** integer | | Override localpref when importing from an external peer. |
| | | **link\_bandwidth** dictionary | | Enable link bandwidth community for routes to this peer. |
| | | | **auto** boolean | **Choices:*** no
* yes
| Enable link bandwidth auto generation for routes from this peer. |
| | | | **default** string | | Enable link bandwidth default generation for routes from this peer. |
| | | | **set** boolean | **Choices:*** no
* yes
| If True, set link bandwidth |
| | | | **update\_delay** integer | | Delay outbound route updates. |
| | | **local\_as** dictionary | | Configure local AS number advertised to peer. |
| | | | **as\_number** string | | AS number. |
| | | | **fallback** boolean | **Choices:*** no
* yes
| Prefer router AS Number over local AS Number. |
| | | **local\_v6\_addr** string | | The local IPv6 address of the neighbor in A:B:C:D:E:F:G:H format. |
| | | **maximum\_accepted\_routes** dictionary | | Maximum number of routes accepted from this peer. |
| | | | **count** integer | | Maximum number of accepted routes (0 means unlimited). |
| | | | **warning\_limit** integer | | Maximum number of accepted routes after which a warning is issued. (0 means never warn) |
| | | **maximum\_received\_routes** dictionary | | Maximum number of routes received from this peer. |
| | | | **count** integer | | Maximum number of routes (0 means unlimited). |
| | | | **warning\_limit** dictionary | | Percentage of maximum-routes at which warning is to be issued. |
| | | | | **limit\_count** integer | | Number of routes at which to warn. |
| | | | | **limit\_percent** integer | | Percentage of maximum number of routes at which to warn (1-100). |
| | | | **warning\_only** boolean | **Choices:*** no
* yes
| Only warn, no restart, if max route limit exceeded. |
| | | **metric\_out** integer | | MED value to advertise to peer. |
| | | **monitoring** boolean | **Choices:*** no
* yes
| Enable BGP Monitoring Protocol for this peer. |
| | | **next\_hop\_self** boolean | **Choices:*** no
* yes
| Always advertise this router address as the BGP next hop |
| | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | **next\_hop\_v6\_address** string | | IPv6 next-hop address for the neighbor |
| | | **out\_delay** integer | | Delay outbound route updates. |
| | | **peer** string | | Neighbor address or peer-group. |
| | | **peer\_group** string | | Name of the peer-group. |
| | | **prefix\_list** dictionary | | Prefix list reference. |
| | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound prefix-list. |
| | | | **name** string | | prefix list name. |
| | | **remote\_as** string | | Neighbor Autonomous System. |
| | | **remove\_private\_as** dictionary | | Remove private AS number from updates to this peer. |
| | | | **all** boolean | **Choices:*** no
* yes
| Remove private AS number. |
| | | | **replace\_as** boolean | **Choices:*** no
* yes
| Replace private AS number with local AS number. |
| | | | **set** boolean | **Choices:*** no
* yes
| If True, set remove\_private\_as. |
| | | **route\_map** dictionary | | Route map reference. |
| | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound route-map. |
| | | | **name** string | | Route map name. |
| | | **route\_reflector\_client** boolean | **Choices:*** no
* yes
| Configure peer as a route reflector client. |
| | | **route\_to\_peer** boolean | **Choices:*** no
* yes
| Use routing table information to reach the peer. |
| | | **send\_community** dictionary | | Send community attribute to this neighbor. |
| | | | **community\_attribute** string | | Type of community attributes to send to this neighbor. |
| | | | **divide** string | **Choices:*** equal
* ratio
| link-bandwidth divide attribute. |
| | | | **link\_bandwidth\_attribute** string | **Choices:*** aggregate
* divide
| cumulative/aggregate attribute to be sent. |
| | | | **speed** string | | Reference link speed in bits/second |
| | | | **sub\_attribute** string | **Choices:*** extended
* link-bandwidth
* standard
| Attribute to be sent to the neighbor. |
| | | **shutdown** boolean | **Choices:*** no
* yes
| Administratively shut down this neighbor. |
| | | **soft\_recognition** string | **Choices:*** all
* None
| Configure how to handle routes that fail import. |
| | | **timers** dictionary | | Timers. |
| | | | **holdtime** integer | | Hold time in secs. |
| | | | **keepalive** integer | | Keep Alive Interval in secs. |
| | | **transport** dictionary | | Configure transport options for TCP session. |
| | | | **connection\_mode** string | | Configure connection-mode for TCP session. |
| | | | **remote\_port** integer | | Configure BGP peer TCP port to connect to. |
| | | **ttl** integer | | BGP ttl security check |
| | | **update\_source** string | | Specify the local source interface for peer BGP sessions. |
| | | **weight** integer | | Weight to assign. |
| | **network** list / elements=dictionary | | Configure routing for a network.
aliases: networks |
| | | **address** string | | address prefix. |
| | | **route\_map** string | | Name of route map. |
| | **redistribute** list / elements=dictionary | | Redistribute routes in to BGP. |
| | | **isis\_level** string | **Choices:*** level-1
* level-2
* level-1-2
| Applicable for isis routes. Specify isis route level. |
| | | **ospf\_route** string | **Choices:*** internal
* external
* nssa\_external\_1
* nssa\_external\_2
| ospf route options. |
| | | **protocol** string | **Choices:*** isis
* ospf3
* ospf
* attached-host
* connected
* rip
* static
| Routes to be redistributed. |
| | | **route\_map** string | | Route map reference. |
| | **route\_target** dictionary | | Route target. |
| | | **action** string | **Choices:*** both
* import
* export
| Route action. |
| | | **target** string | | Route Target. |
| | **router\_id** string | | Router id. |
| | **shutdown** boolean | **Choices:*** no
* yes
| When True, shut down BGP. |
| | **timers** dictionary | | Timers. |
| | | **holdtime** integer | | Hold time in secs. |
| | | **keepalive** integer | | Keep Alive Interval in secs. |
| | **ucmp** dictionary | | Configure unequal cost multipathing. |
| | | **fec** dictionary | | Configure UCMP fec utilization threshold. |
| | | | **clear** integer | | UCMP FEC utilization Clear thresholds. |
| | | | **trigger** integer | | UCMP fec utilization too high threshold. |
| | | **link\_bandwidth** dictionary | | Configure link-bandwidth propagation delay. |
| | | | **mode** string | **Choices:*** encoding\_weighted
* recursive
| UCMP link bandwidth mode |
| | | | **update\_delay** integer | | Link Bandwidth Advertisement delay. |
| | | **mode** dictionary | | UCMP mode. |
| | | | **nexthops** integer | | Value for total number UCMP nexthops. |
| | | | **set** boolean | **Choices:*** no
* yes
| If True, ucmp mode is set to 1. |
| | **update** dictionary | | Configure BGP update generation. |
| | | **batch\_size** integer | | batch size for FIB route acknowledgements. |
| | | **wait\_for** string | **Choices:*** wait\_for\_convergence
* wait\_install
| wait for options before converge or synchronize. |
| | **vlan** integer | | Configure MAC VRF BGP for single VLAN support. |
| | **vlan\_aware\_bundle** string | | Configure MAC VRF BGP for multiple VLAN support. |
| | **vrfs** list / elements=dictionary | | Configure BGP in a VRF. |
| | | **access\_group** dictionary | | ip/ipv6 access list configuration. |
| | | | **acl\_name** string | | access list name. |
| | | | **afi** string | **Choices:*** ip
* ipv6
| Specify ip/ipv6. |
| | | | **direction** string | | direction of packets. |
| | | **aggregate\_address** list / elements=dictionary | | Configure aggregate address. |
| | | | **address** string | | ipv4/ipv6 address prefix. |
| | | | **advertise\_only** boolean | **Choices:*** no
* yes
| Advertise without installing the generated blackhole route in FIB. |
| | | | **as\_set** boolean | **Choices:*** no
* yes
| Generate autonomous system set path information. |
| | | | **attribute\_map** string | | Name of the route map used to set the attribute of the aggregate route. |
| | | | **match\_map** string | | Name of the route map used to filter the contributors of the aggregate route. |
| | | | **summary\_only** boolean | **Choices:*** no
* yes
| Filters all more-specific routes from updates. |
| | | **bgp\_params** dictionary | | BGP parameters. |
| | | | **additional\_paths** string | **Choices:*** install
* send
* receive
| BGP additional-paths commands |
| | | | **advertise\_inactive** boolean | **Choices:*** no
* yes
| Advertise BGP routes even if they are inactive in RIB. |
| | | | **allowas\_in** dictionary | | Allow local-as in updates. |
| | | | | **count** integer | | Number of local ASNs allowed in a BGP update. |
| | | | | **set** boolean | **Choices:*** no
* yes
| When True, it is set. |
| | | | **always\_compare\_med** boolean | **Choices:*** no
* yes
| BGP Always Compare MED |
| | | | **asn** string | **Choices:*** asdot
* asplain
| AS Number notation. |
| | | | **auto\_local\_addr** boolean | **Choices:*** no
* yes
| Automatically determine the local address to be used for the non-transport AF. |
| | | | **bestpath** dictionary | | Select the bestpath selection algorithm for BGP routes. |
| | | | | **as\_path** string | **Choices:*** ignore
* multipath\_relax
| Select the bestpath selection based on as-path. |
| | | | | **ecmp\_fast** boolean | **Choices:*** no
* yes
| Tie-break BGP paths in an ECMP group based on the order of arrival. |
| | | | | **med** dictionary | | MED attribute |
| | | | | | **confed** boolean | **Choices:*** no
* yes
| MED Confed. |
| | | | | | **missing\_as\_worst** boolean | **Choices:*** no
* yes
| MED missing-as-worst. |
| | | | | **skip** boolean | **Choices:*** no
* yes
| skip one of the tie breaking rules in the bestpath selection. |
| | | | | **tie\_break** string | **Choices:*** cluster\_list\_length
* router\_id
| Configure the tie-break option for BGP bestpath selection. |
| | | | **client\_to\_client** boolean | **Choices:*** no
* yes
| client to client configuration. |
| | | | **cluster\_id** string | | Cluster ID of this router acting as a route reflector. |
| | | | **confederation** dictionary | | confederation. |
| | | | | **identifier** string | | Confederation identifier. |
| | | | | **peers** string | | Confederation peers. |
| | | | **control\_plane\_filter** boolean | **Choices:*** no
* yes
| Control plane filter for BGP. |
| | | | **convergence** dictionary | | BGP convergence parameters. |
| | | | | **slow\_peer** boolean | **Choices:*** no
* yes
| Maximum amount of time to wait for slow peers to establish session. |
| | | | | **time** integer | | time in secs |
| | | | **default** string | **Choices:*** ipv4\_unicast
* ipv6\_unicast
| Default neighbor configuration commands. |
| | | | **enforce\_first\_as** boolean | **Choices:*** no
* yes
| Enforce the First AS for EBGP routes(default). |
| | | | **host\_routes** boolean | **Choices:*** no
* yes
| BGP host routes configuration. |
| | | | **labeled\_unicast** string | **Choices:*** ip
* tunnel
| Labeled Unicast. |
| | | | **listen** dictionary | | BGP listen. |
| | | | | **limit** integer | | Set limit on the number of dynamic BGP peers allowed. |
| | | | | **range** dictionary | | Subnet Range to be associated with the peer-group. |
| | | | | | **address** string | | Address prefix |
| | | | | | **peer\_group** dictionary | | Name of peer group. |
| | | | | | | **name** string | | name. |
| | | | | | | **peer\_filter** string | | Name of peer filter. |
| | | | | | | **remote\_as** string | | Neighbor AS number |
| | | | **log\_neighbor\_changes** boolean | **Choices:*** no
* yes
| Log neighbor up/down events. |
| | | | **missing\_policy** dictionary | | Missing policy override configuration commands. |
| | | | | **action** string | **Choices:*** deny
* permit
* deny-in-out
| Missing policy action options. |
| | | | | **direction** string | **Choices:*** in
* out
| Missing policy direction options. |
| | | | **monitoring** boolean | **Choices:*** no
* yes
| Enable Bgp monitoring for all/specified stations. |
| | | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | | **redistribute\_internal** boolean | **Choices:*** no
* yes
| Redistribute internal BGP routes. |
| | | | **route** string | | Configure route-map for route installation. |
| | | | **route\_reflector** dictionary | | Configure route reflector options |
| | | | | **preserve** boolean | **Choices:*** no
* yes
| preserve route attributes, overwriting route-map changes |
| | | | | **set** boolean | **Choices:*** no
* yes
| When True route\_reflector is set. |
| | | | **transport** integer | | Configure transport port for TCP session |
| | | **default\_metric** integer | | Default metric. |
| | | **distance** dictionary | | Define an administrative distance. |
| | | | **external** integer | | distance for external routes. |
| | | | **internal** integer | | distance for internal routes. |
| | | | **local** integer | | distance for local routes. |
| | | **graceful\_restart** dictionary | | Enable graceful restart mode. |
| | | | **restart\_time** integer | | Set the max time needed to restart and come back up. |
| | | | **set** boolean | **Choices:*** no
* yes
| When True, graceful restart is set. |
| | | | **stalepath\_time** integer | | Set the max time to hold onto restarting peer stale paths. |
| | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| Enable graceful restart helper mode. |
| | | **maximum\_paths** dictionary | | Maximum number of equal cost paths. |
| | | | **max\_equal\_cost\_paths** integer | | Value for maximum number of equal cost paths. |
| | | | **max\_installed\_ecmp\_paths** integer | | Value for maximum number of installed ECMP routes. |
| | | **neighbor** list / elements=dictionary | | Configure BGP neighbors.
aliases: neighbors |
| | | | **additional\_paths** string | **Choices:*** send
* receive
| BGP additional-paths commands. |
| | | | **allowas\_in** dictionary | | Allow local-as in updates. |
| | | | | **count** integer | | Number of local ASNs allowed in a BGP update. |
| | | | | **set** boolean | **Choices:*** no
* yes
| When True, it is set. |
| | | | **auto\_local\_addr** boolean | **Choices:*** no
* yes
| Automatically determine the local address to be used for the non-transport AF. |
| | | | **default\_originate** dictionary | | Originate default route to this neighbor. |
| | | | | **always** boolean | **Choices:*** no
* yes
| Always originate default route to this neighbor. |
| | | | | **route\_map** string | | Route map reference. |
| | | | **description** string | | Text describing the neighbor. |
| | | | **dont\_capability\_negotiate** boolean | **Choices:*** no
* yes
| Do not perform Capability Negotiation with this neighbor. |
| | | | **ebgp\_multihop** dictionary | | Allow BGP connections to indirectly connected external peers. |
| | | | | **set** boolean | **Choices:*** no
* yes
| If True, ebgp multihop is enabled without specifying a TTL. |
| | | | | **ttl** integer | | Time-to-live in the range 1-255 hops. |
| | | | **encryption\_password** dictionary | | Password to use in computation of MD5 hash. |
| | | | | **password** string | | password (up to 80 chars). |
| | | | | **type** integer | **Choices:*** 0
* 7
| Encryption type. |
| | | | **enforce\_first\_as** boolean | **Choices:*** no
* yes
| Enforce the First AS for EBGP routes(default). |
| | | | **export\_localpref** integer | | Override localpref when exporting to an internal peer. |
| | | | **fall\_over** boolean | **Choices:*** no
* yes
| Configure BFD protocol options for this peer. |
| | | | **graceful\_restart** boolean | **Choices:*** no
* yes
| Enable graceful restart mode. |
| | | | **graceful\_restart\_helper** boolean | **Choices:*** no
* yes
| Enable graceful restart helper mode. |
| | | | **idle\_restart\_timer** integer | | Neighbor idle restart timer. |
| | | | **import\_localpref** integer | | Override localpref when importing from an external peer. |
| | | | **link\_bandwidth** dictionary | | Enable link bandwidth community for routes to this peer. |
| | | | | **auto** boolean | **Choices:*** no
* yes
| Enable link bandwidth auto generation for routes from this peer. |
| | | | | **default** string | | Enable link bandwidth default generation for routes from this peer. |
| | | | | **set** boolean | **Choices:*** no
* yes
| If True, set link bandwidth |
| | | | | **update\_delay** integer | | Delay outbound route updates. |
| | | | **local\_as** dictionary | | Configure local AS number advertised to peer. |
| | | | | **as\_number** string | | AS number. |
| | | | | **fallback** boolean | **Choices:*** no
* yes
| Prefer router AS Number over local AS Number. |
| | | | **local\_v6\_addr** string | | The local IPv6 address of the neighbor in A:B:C:D:E:F:G:H format. |
| | | | **maximum\_accepted\_routes** dictionary | | Maximum number of routes accepted from this peer. |
| | | | | **count** integer | | Maximum number of accepted routes (0 means unlimited). |
| | | | | **warning\_limit** integer | | Maximum number of accepted routes after which a warning is issued. (0 means never warn) |
| | | | **maximum\_received\_routes** dictionary | | Maximum number of routes received from this peer. |
| | | | | **count** integer | | Maximum number of routes (0 means unlimited). |
| | | | | **warning\_limit** dictionary | | Percentage of maximum-routes at which warning is to be issued. |
| | | | | | **limit\_count** integer | | Number of routes at which to warn. |
| | | | | | **limit\_percent** integer | | Percentage of maximum number of routes at which to warn (1-100). |
| | | | | **warning\_only** boolean | **Choices:*** no
* yes
| Only warn, no restart, if max route limit exceeded. |
| | | | **metric\_out** integer | | MED value to advertise to peer. |
| | | | **monitoring** boolean | **Choices:*** no
* yes
| Enable BGP Monitoring Protocol for this peer. |
| | | | **next\_hop\_self** boolean | **Choices:*** no
* yes
| Always advertise this router address as the BGP next hop |
| | | | **next\_hop\_unchanged** boolean | **Choices:*** no
* yes
| Preserve original nexthop while advertising routes to eBGP peers. |
| | | | **next\_hop\_v6\_address** string | | IPv6 next-hop address for the neighbor |
| | | | **out\_delay** integer | | Delay outbound route updates. |
| | | | **peer** string | | Neighbor address or peer group. |
| | | | **peer\_group** string | | Name of the peer-group. |
| | | | **prefix\_list** dictionary | | Prefix list reference. |
| | | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound prefix-list. |
| | | | | **name** string | | prefix list name. |
| | | | **remote\_as** string | | Neighbor Autonomous System. |
| | | | **remove\_private\_as** dictionary | | Remove private AS number from updates to this peer. |
| | | | | **all** boolean | **Choices:*** no
* yes
| Remove private AS number. |
| | | | | **replace\_as** boolean | **Choices:*** no
* yes
| Replace private AS number with local AS number. |
| | | | | **set** boolean | **Choices:*** no
* yes
| If True, set remove\_private\_as. |
| | | | **route\_map** dictionary | | Route map reference. |
| | | | | **direction** string | **Choices:*** in
* out
| Configure an inbound/outbound route-map. |
| | | | | **name** string | | Route map name. |
| | | | **route\_reflector\_client** boolean | **Choices:*** no
* yes
| Configure peer as a route reflector client. |
| | | | **route\_to\_peer** boolean | **Choices:*** no
* yes
| Use routing table information to reach the peer. |
| | | | **send\_community** dictionary | | Send community attribute to this neighbor. |
| | | | | **community\_attribute** string | | Type of community attributes to send to this neighbor. |
| | | | | **divide** string | **Choices:*** equal
* ratio
| link-bandwidth divide attribute. |
| | | | | **link\_bandwidth\_attribute** string | **Choices:*** aggregate
* divide
| cumulative/aggregate attribute to be sent. |
| | | | | **speed** string | | Reference link speed in bits/second |
| | | | | **sub\_attribute** string | **Choices:*** extended
* link-bandwidth
* standard
| Attribute to be sent to the neighbor. |
| | | | **shutdown** boolean | **Choices:*** no
* yes
| Administratively shut down this neighbor. |
| | | | **soft\_recognition** string | **Choices:*** all
* None
| Configure how to handle routes that fail import. |
| | | | **timers** dictionary | | Timers. |
| | | | | **holdtime** integer | | Hold time in secs. |
| | | | | **keepalive** integer | | Keep Alive Interval in secs. |
| | | | **transport** dictionary | | Configure transport options for TCP session. |
| | | | | **connection\_mode** string | | Configure connection-mode for TCP session. |
| | | | | **remote\_port** integer | | Configure BGP peer TCP port to connect to. |
| | | | **ttl** integer | | BGP ttl security check |
| | | | **update\_source** string | | Specify the local source interface for peer BGP sessions. |
| | | | **weight** integer | | Weight to assign. |
| | | **network** list / elements=dictionary | | Configure routing for a network.
aliases: networks |
| | | | **address** string | | address prefix. |
| | | | **route\_map** string | | Name of route map. |
| | | **redistribute** list / elements=dictionary | | Redistribute routes in to BGP. |
| | | | **isis\_level** string | **Choices:*** level-1
* level-2
* level-1-2
| Applicable for isis routes. Specify isis route level. |
| | | | **ospf\_route** string | **Choices:*** internal
* external
* nssa\_external\_1
* nssa\_external\_2
| ospf route options. |
| | | | **protocol** string | **Choices:*** isis
* ospf3
* ospf
* attached-host
* connected
* rip
* static
| Routes to be redistributed. |
| | | | **route\_map** string | | Route map reference. |
| | | **route\_target** dictionary | | Route target. |
| | | | **action** string | **Choices:*** both
* import
* export
| Route action. |
| | | | **target** string | | Route Target. |
| | | **router\_id** string | | Router id. |
| | | **shutdown** boolean | **Choices:*** no
* yes
| When True, shut down BGP. |
| | | **timers** dictionary | | Timers. |
| | | | **holdtime** integer | | Hold time in secs. |
| | | | **keepalive** integer | | Keep Alive Interval in secs. |
| | | **ucmp** dictionary | | Configure unequal cost multipathing. |
| | | | **fec** dictionary | | Configure UCMP fec utilization threshold. |
| | | | | **clear** integer | | UCMP FEC utilization Clear thresholds. |
| | | | | **trigger** integer | | UCMP fec utilization too high threshold. |
| | | | **link\_bandwidth** dictionary | | Configure link-bandwidth propagation delay. |
| | | | | **mode** string | **Choices:*** encoding\_weighted
* recursive
* update\_delay
| UCMP link bandwidth mode |
| | | | | **update\_delay** integer | | Link Bandwidth Advertisement delay. |
| | | | **mode** dictionary | | UCMP mode. |
| | | | | **nexthops** integer | | Value for total number UCMP nexthops. |
| | | | | **set** boolean | **Choices:*** no
* yes
| If True, ucmp mode is set to 1. |
| | | **update** dictionary | | Configure BGP update generation. |
| | | | **batch\_size** integer | | batch size for FIB route acknowledgements. |
| | | | **wait\_for** string | **Choices:*** wait\_for\_convergence
* wait\_install
| wait for options before converge or synchronize. |
| | | **vrf** string | | VRF name. |
| **running\_config** string | | This option is used only with state *parsed*. The value of this option should be the output received from the EOS device by executing the command **show running-config | section bgp**. The state *parsed* reads the configuration from the `running_config` option and transforms it into Ansible structured data as per the resource module's argspec, and the value is then returned in the *parsed* key within the result. |
| **state** string | **Choices:*** deleted
* **merged** ←
* purged
* replaced
* gathered
* rendered
* parsed
| The state the configuration should be left in. State *purged* removes all the BGP configurations from the target device ('no router bgp <x>'); use caution with this state. State *deleted* only removes BGP attributes that this module manages and does not negate the BGP process completely, thereby preserving address-family related configurations under the BGP context. Running states *deleted* and *replaced* will result in an error if there are address-family configuration lines present under a vrf context that is to be removed; please use the [arista.eos.eos\_bgp\_address\_family](eos_bgp_address_family_module) module for prior cleanup. Refer to the examples, and the sketch following this table, for more details. |
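As a hedged illustration of the *purged* state described above (the AS number shown is hypothetical), a task such as the following would issue 'no router bgp 100' and remove the entire BGP configuration from the device, so it should be used with caution:

```
- name: Remove all BGP configuration from the device (sketch; use with caution)
  arista.eos.eos_bgp_global:
    config:
      as_number: "100"
    state: purged
```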
Notes
-----
Note
* Tested against Arista EOS 4.23.0F
* This module works with connection `network_cli`. See the [EOS Platform Options](eos_platform_options).
Examples
--------
```
# Using merged
# Before state
# veos(config)#show running-config | section bgp
# veos(config)#
- name: Merge provided configuration with device configuration
arista.eos.eos_bgp_global:
config:
as_number: "100"
bgp_params:
host_routes: True
convergence:
slow_peer: True
time: 6
additional_paths: "send"
log_neighbor_changes: True
maximum_paths:
max_equal_cost_paths: 55
aggregate_address:
- address: "1.2.1.0/24"
as_set: true
match_map: "match01"
- address: "5.2.1.0/24"
attribute_map: "attrmatch01"
advertise_only: true
redistribute:
- protocol: "static"
route_map: "map_static"
- protocol: "attached-host"
distance:
internal: 50
neighbor:
- peer: "10.1.3.2"
allowas_in:
set: true
default_originate:
always: true
dont_capability_negotiate: true
export_localpref: 4000
maximum_received_routes:
count: 500
warning_limit:
limit_percent: 5
next_hop_unchanged: true
prefix_list:
name: "prefix01"
direction: "out"
- peer: "peer1"
fall_over: true
link_bandwidth:
update_delay: 5
monitoring: True
send_community:
community_attribute: "extended"
sub_attribute: "link-bandwidth"
link_bandwidth_attribute: "aggregate"
speed: "600"
vlan: 5
state: merged
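# Note: only distance.internal is provided above; the rendered command is
# "distance bgp 50" (see the Module Execution commands below), which EOS
# expands to "distance bgp 50 50 50" in the running config shown next.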
# After State:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
# neighbor peer1 peer-group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
# neighbor peer1 send-community extended link-bandwidth aggregate 600
# neighbor peer1 maximum-routes 12000
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
# vlan 5
# !
# address-family ipv4
# neighbor 10.1.3.2 prefix-list prefix01 out
# veos(config)#
#
# Module Execution:
#
# "after": {
# "aggregate_address": [
# {
# "address": "1.2.1.0/24",
# "as_set": true,
# "match_map": "match01"
# },
# {
# "address": "5.2.1.0/24",
# "advertise_only": true,
# "attribute_map": "attrmatch01"
# }
# ],
# "as_number": "100",
# "bgp_params": {
# "additional_paths": "send",
# "convergence": {
# "slow_peer": true,
# "time": 6
# }
# },
# "distance": {
# "external": 50,
# "internal": 50,
# "local": 50
# },
# "maximum_paths": {
# "max_equal_cost_paths": 55
# },
# "neighbor": [
# {
# "fall_over": true,
# "link_bandwidth": {
# "set": true,
# "update_delay": 5
# },
# "maximum_received_routes": {
# "count": 12000
# },
# "monitoring": true,
# "peer": "peer1",
# "peer_group": "peer1",
# "send_community": {
# "community_attribute": "extended",
# "link_bandwidth_attribute": "aggregate",
# "speed": "600",
# "sub_attribute": "link-bandwidth"
# }
# },
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "export_localpref": 4000,
# "maximum_received_routes": {
# "count": 500,
# "warning_limit": {
# "limit_percent": 5
# }
# },
# "next_hop_unchanged": true,
# "peer": "10.1.3.2"
# }
# ],
# "redistribute": [
# {
# "protocol": "static",
# "route_map": "map_static"
# },
# {
# "protocol": "attached-host"
# }
# ],
# "vlan": 5
# },
# "before": {},
# "changed": true,
# "commands": [
# "router bgp 100",
# "neighbor 10.1.3.2 allowas-in",
# "neighbor 10.1.3.2 default-originate always",
# "neighbor 10.1.3.2 dont-capability-negotiate",
# "neighbor 10.1.3.2 export-localpref 4000",
# "neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent",
# "neighbor 10.1.3.2 next-hop-unchanged",
# "neighbor 10.1.3.2 prefix-list prefix01 out",
# "neighbor peer1 fall-over bfd",
# "neighbor peer1 link-bandwidth update-delay 5",
# "neighbor peer1 monitoring",
# "neighbor peer1 send-community extended link-bandwidth aggregate 600",
# "redistribute static route-map map_static",
# "redistribute attached-host",
# "aggregate-address 1.2.1.0/24 as-set match-map match01",
# "aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only",
# "bgp host-routes fib direct-install",
# "bgp convergence slow-peer time 6",
# "bgp additional-paths send any",
# "bgp log-neighbor-changes",
# "maximum-paths 55",
# "distance bgp 50",
# "vlan 5"
# ],
# Using replaced:
# Before state:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
# neighbor peer1 peer-group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
# neighbor peer1 send-community extended link-bandwidth aggregate 600
# neighbor peer1 maximum-routes 12000
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
# vlan 5
# !
# address-family ipv4
# neighbor 10.1.3.2 prefix-list prefix01 out
# !
# vrf vrf01
# route-target import 54:11
# neighbor 12.1.3.2 dont-capability-negotiate
# neighbor 12.1.3.2 allowas-in 3
# neighbor 12.1.3.2 default-originate always
# neighbor 12.1.3.2 maximum-routes 12000
# veos(config)#
- name: Replace provided configuration with device configuration
arista.eos.eos_bgp_global:
config:
as_number: "100"
bgp_params:
host_routes: True
convergence:
slow_peer: True
time: 6
additional_paths: "send"
log_neighbor_changes: True
vrfs:
- vrf: "vrf01"
maximum_paths:
max_equal_cost_paths: 55
aggregate_address:
- address: "1.2.1.0/24"
as_set: true
match_map: "match01"
- address: "5.2.1.0/24"
attribute_map: "attrmatch01"
advertise_only: true
redistribute:
- protocol: "static"
route_map: "map_static"
- protocol: "attached-host"
distance:
internal: 50
neighbor:
- peer: "10.1.3.2"
allowas_in:
set: true
default_originate:
always: true
dont_capability_negotiate: true
export_localpref: 4000
maximum_received_routes:
count: 500
warning_limit:
limit_percent: 5
next_hop_unchanged: true
prefix_list:
name: "prefix01"
direction: "out"
- peer: "peer1"
fall_over: true
link_bandwidth:
update_delay: 5
monitoring: True
send_community:
community_attribute: "extended"
sub_attribute: "link-bandwidth"
link_bandwidth_attribute: "aggregate"
speed: "600"
state: replaced
# After State:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# bgp additional-paths send any
# !
# vrf vrf01
# distance bgp 50 50 50
# maximum-paths 55
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
# address-family ipv4
# neighbor 10.1.3.2 prefix-list prefix01 out
# veos(config)#
#
#
# Module Execution:
#
# "after": {
# "as_number": "100",
# "bgp_params": {
# "additional_paths": "send",
# "convergence": {
# "slow_peer": true,
# "time": 6
# }
# },
# "vrfs": [
# {
# "aggregate_address": [
# {
# "address": "1.2.1.0/24",
# "as_set": true,
# "match_map": "match01"
# },
# {
# "address": "5.2.1.0/24",
# "advertise_only": true,
# "attribute_map": "attrmatch01"
# }
# ],
# "distance": {
# "external": 50,
# "internal": 50,
# "local": 50
# },
# "maximum_paths": {
# "max_equal_cost_paths": 55
# },
# "neighbor": [
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "export_localpref": 4000,
# "maximum_received_routes": {
# "count": 500,
# "warning_limit": {
# "limit_percent": 5
# }
# },
# "next_hop_unchanged": true,
# "peer": "10.1.3.2"
# }
# ],
# "redistribute": [
# {
# "protocol": "static",
# "route_map": "map_static"
# },
# {
# "protocol": "attached-host"
# }
# ],
# "vrf": "vrf01"
# }
# ]
# },
# "before": {
# "aggregate_address": [
# {
# "address": "1.2.1.0/24",
# "as_set": true,
# "match_map": "match01"
# },
# {
# "address": "5.2.1.0/24",
# "advertise_only": true,
# "attribute_map": "attrmatch01"
# }
# ],
# "as_number": "100",
# "bgp_params": {
# "additional_paths": "send",
# "convergence": {
# "slow_peer": true,
# "time": 6
# }
# },
# "distance": {
# "external": 50,
# "internal": 50,
# "local": 50
# },
# "maximum_paths": {
# "max_equal_cost_paths": 55
# },
# "neighbor": [
# {
# "fall_over": true,
# "link_bandwidth": {
# "set": true,
# "update_delay": 5
# },
# "maximum_received_routes": {
# "count": 12000
# },
# "monitoring": true,
# "peer": "peer1",
# "peer_group": "peer1",
# "send_community": {
# "community_attribute": "extended",
# "link_bandwidth_attribute": "aggregate",
# "speed": "600",
# "sub_attribute": "link-bandwidth"
# }
# },
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "export_localpref": 4000,
# "maximum_received_routes": {
# "count": 500,
# "warning_limit": {
# "limit_percent": 5
# }
# },
# "next_hop_unchanged": true,
# "peer": "10.1.3.2"
# }
# ],
# "redistribute": [
# {
# "protocol": "static",
# "route_map": "map_static"
# },
# {
# "protocol": "attached-host"
# }
# ],
# "vlan": 5,
# "vrfs": [
# {
# "neighbor": [
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "maximum_received_routes": {
# "count": 12000
# },
# "peer": "12.1.3.2"
# }
# ],
# "route_target": {
# "action": "import",
# "target": "54:11"
# },
# "vrf": "vrf01"
# }
# ]
# },
# "changed": true,
# "commands": [
# "router bgp 100",
# "vrf vrf01",
# "no route-target import 54:11",
# "neighbor 10.1.3.2 allowas-in",
# "neighbor 10.1.3.2 default-originate always",
# "neighbor 10.1.3.2 dont-capability-negotiate",
# "neighbor 10.1.3.2 export-localpref 4000",
# "neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent",
# "neighbor 10.1.3.2 next-hop-unchanged",
# "neighbor 10.1.3.2 prefix-list prefix01 out",
# "neighbor peer1 fall-over bfd",
# "neighbor peer1 link-bandwidth update-delay 5",
# "neighbor peer1 monitoring",
# "neighbor peer1 send-community extended link-bandwidth aggregate 600",
# "no neighbor 12.1.3.2",
# "redistribute static route-map map_static",
# "redistribute attached-host",
# "aggregate-address 1.2.1.0/24 as-set match-map match01",
# "aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only",
# "maximum-paths 55",
# "distance bgp 50",
# "exit",
# "no neighbor peer1 peer-group",
# "no neighbor peer1 link-bandwidth update-delay 5",
# "no neighbor peer1 fall-over bfd",
# "no neighbor peer1 monitoring",
# "no neighbor peer1 send-community extended link-bandwidth aggregate 600",
# "no neighbor peer1 maximum-routes 12000",
# "no neighbor 10.1.3.2",
# "no redistribute static route-map map_static",
# "no redistribute attached-host",
# "no aggregate-address 1.2.1.0/24 as-set match-map match01",
# "no aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only",
# "bgp host-routes fib direct-install",
# "bgp log-neighbor-changes",
# "no distance bgp 50 50 50",
# "no maximum-paths 55",
# "no vlan 5"
# ],
#
# Using replaced (in presence of address_family under vrf):
# Before State:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# bgp additional-paths send any
# !
# vrf vrf01
# distance bgp 50 50 50
# maximum-paths 55
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
# address-family ipv4
# neighbor 10.1.3.2 prefix-list prefix01 out
# !
# address-family ipv6
# redistribute dhcp
# veos(config)#
- name: Replace
arista.eos.eos_bgp_global:
config:
as_number: "100"
graceful_restart:
set: True
router_id: "1.1.1.1"
timers:
keepalive: 2
holdtime: 5
ucmp:
mode:
set: True
vlan_aware_bundle: "bundle1 bundle2 bundle3"
state: replaced
# Module Execution:
# fatal: [192.168.122.113]: FAILED! => {
# "changed": false,
# "invocation": {
# "module_args": {
# "config": {
# "access_group": null,
# "aggregate_address": null,
# "as_number": "100",
# "bgp_params": null,
# "default_metric": null,
# "distance": null,
# "graceful_restart": {
# "restart_time": null,
# "set": true,
# "stalepath_time": null
# },
# "graceful_restart_helper": null,
# "maximum_paths": null,
# "monitoring": null,
# "neighbor": null,
# "network": null,
# "redistribute": null,
# "route_target": null,
# "router_id": "1.1.1.1",
# "shutdown": null,
# "timers": {
# "holdtime": 5,
# "keepalive": 2
# },
# "ucmp": {
# "fec": null,
# "link_bandwidth": null,
# "mode": {
# "nexthops": null,
# "set": true
# }
# },
# "update": null,
# "vlan": null,
# "vlan_aware_bundle": "bundle1 bundle2 bundle3",
# "vrfs": null
# },
# "running_config": null,
# "state": "replaced"
# }
# },
# "msg": "Use the _bgp_af module to delete the address_family under vrf, before replacing/deleting the vrf."
# }
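#
# Workaround sketch (an assumption, not verbatim module output): address-family
# stanzas under a vrf are managed by the companion
# arista.eos.eos_bgp_address_family module, so removing them there first lets
# the replace above succeed. The exact config shape below is illustrative.
- name: Remove address-family config under the vrf first (illustrative)
  arista.eos.eos_bgp_address_family:
    config:
      as_number: "100"
    state: deleted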
# Using deleted:
# Before state:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# bgp additional-paths send any
# !
# vrf vrf01
# distance bgp 50 50 50
# maximum-paths 55
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
- name: Delete configuration
arista.eos.eos_bgp_global:
config:
as_number: "100"
state: deleted
# After State:
# veos(config)#show running-config | section bgp
# router bgp 100
#
#
# Module Execution:
#
# "after": {
# "as_number": "100"
# },
# "before": {
# "as_number": "100",
# "bgp_params": {
# "additional_paths": "send",
# "convergence": {
# "slow_peer": true,
# "time": 6
# }
# },
# "vrfs": [
# {
# "aggregate_address": [
# {
# "address": "1.2.1.0/24",
# "as_set": true,
# "match_map": "match01"
# },
# {
# "address": "5.2.1.0/24",
# "advertise_only": true,
# "attribute_map": "attrmatch01"
# }
# ],
# "distance": {
# "external": 50,
# "internal": 50,
# "local": 50
# },
# "maximum_paths": {
# "max_equal_cost_paths": 55
# },
# "neighbor": [
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "export_localpref": 4000,
# "maximum_received_routes": {
# "count": 500,
# "warning_limit": {
# "limit_percent": 5
# }
# },
# "next_hop_unchanged": true,
# "peer": "10.1.3.2"
# }
# ],
# "redistribute": [
# {
# "protocol": "static",
# "route_map": "map_static"
# },
# {
# "protocol": "attached-host"
# }
# ],
# "vrf": "vrf01"
# }
# ]
# },
# "changed": true,
# "commands": [
# "router bgp 100",
# "no vrf vrf01",
# "no bgp convergence slow-peer time 6",
# "no bgp additional-paths send any"
# ],
#
# Using purged:
# Before state:
# veos(config)#show running-config | section bgp
# router bgp 100
# bgp convergence slow-peer time 6
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
# neighbor peer1 peer-group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
# neighbor peer1 send-community extended link-bandwidth aggregate 600
# neighbor peer1 maximum-routes 12000
# neighbor 10.1.3.2 export-localpref 4000
# neighbor 10.1.3.2 next-hop-unchanged
# neighbor 10.1.3.2 dont-capability-negotiate
# neighbor 10.1.3.2 allowas-in 3
# neighbor 10.1.3.2 default-originate always
# neighbor 10.1.3.2 maximum-routes 500 warning-limit 5 percent
# aggregate-address 1.2.1.0/24 as-set match-map match01
# aggregate-address 5.2.1.0/24 attribute-map attrmatch01 advertise-only
# redistribute static route-map map_static
# redistribute attached-host
# !
# vlan 5
# !
# address-family ipv4
# neighbor 10.1.3.2 prefix-list prefix01 out
# !
# vrf vrf01
# route-target import 54:11
# neighbor 12.1.3.2 dont-capability-negotiate
# neighbor 12.1.3.2 allowas-in 3
# neighbor 12.1.3.2 default-originate always
# neighbor 12.1.3.2 maximum-routes 12000
# veos(config)#
- name: Purge configuration
arista.eos.eos_bgp_global:
config:
as_number: "100"
state: purged
# After State:
# veos(config)#show running-config | section bgp
# veos(config)#
# Module Execution:
# "after": {},
# "before": {
# "aggregate_address": [
# {
# "address": "1.2.1.0/24",
# "as_set": true,
# "match_map": "match01"
# },
# {
# "address": "5.2.1.0/24",
# "advertise_only": true,
# "attribute_map": "attrmatch01"
# }
# ],
# "as_number": "100",
# "bgp_params": {
# "additional_paths": "send",
# "convergence": {
# "slow_peer": true,
# "time": 6
# }
# },
# "distance": {
# "external": 50,
# "internal": 50,
# "local": 50
# },
# "maximum_paths": {
# "max_equal_cost_paths": 55
# },
# "neighbor": [
# {
# "fall_over": true,
# "link_bandwidth": {
# "set": true,
# "update_delay": 5
# },
# "maximum_received_routes": {
# "count": 12000
# },
# "monitoring": true,
# "peer": "peer1",
# "peer_group": "peer1",
# "send_community": {
# "community_attribute": "extended",
# "link_bandwidth_attribute": "aggregate",
# "speed": "600",
# "sub_attribute": "link-bandwidth"
# }
# },
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "export_localpref": 4000,
# "maximum_received_routes": {
# "count": 500,
# "warning_limit": {
# "limit_percent": 5
# }
# },
# "next_hop_unchanged": true,
# "peer": "10.1.3.2"
# }
# ],
# "redistribute": [
# {
# "protocol": "static",
# "route_map": "map_static"
# },
# {
# "protocol": "attached-host"
# }
# ],
# "vlan": 5,
# "vrfs": [
# {
# "neighbor": [
# {
# "allowas_in": {
# "count": 3
# },
# "default_originate": {
# "always": true
# },
# "dont_capability_negotiate": true,
# "maximum_received_routes": {
# "count": 12000
# },
# "peer": "12.1.3.2"
# }
# ],
# "route_target": {
# "action": "import",
# "target": "54:11"
# },
# "vrf": "vrf01"
# }
# ]
# },
# "changed": true,
# "commands": [
# "no router bgp 100"
# ],
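#
# Note the contrast with "deleted" above: state deleted empties the BGP
# configuration but keeps the bare "router bgp 100" stanza, while state
# purged removes the stanza itself ("no router bgp 100").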
```
### Authors
* Gomathi Selvi Srinivasan (@GomathiselviS)
Collections in the Openstack Namespace
======================================
These are the collections with docs hosted on [docs.ansible.com](https://docs.ansible.com/) in the **openstack** namespace.
* [openstack.cloud](cloud/index#plugins-in-openstack-cloud)
openstack.cloud.port – Add/Update/Delete ports from an OpenStack cloud.
=======================================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.port`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Add, Update or Remove ports from an OpenStack cloud. A *state* of ‘present’ will ensure the port is created or updated if required.
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **admin\_state\_up** boolean | **Choices:*** no
* yes
| Sets admin state. |
| **allowed\_address\_pairs** list / elements=dictionary | | Allowed address pairs list. Allowed address pairs are supported with dictionary structure. e.g. allowed\_address\_pairs: - ip\_address: 10.1.0.12 mac\_address: ab:cd:ef:12:34:56 - ip\_address: ... |
| | **ip\_address** string | | The IP address. |
| | **mac\_address** string | | The MAC address. |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **binding\_profile** dictionary | | Binding profile dict that the port should be created with. |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **device\_id** string | | Device ID of device using this port. |
| **device\_owner** string | | The ID of the entity that uses this port. |
| **extra\_dhcp\_opts** list / elements=dictionary | | Extra dhcp options to be assigned to this port. Extra options are supported with dictionary structure. Note that options cannot be removed only updated. e.g. extra\_dhcp\_opts: - opt\_name: opt name1 opt\_value: value1 ip\_version: 4 - opt\_name: ... |
| | **ip\_version** integer / required | | The IP version this DHCP option is for. |
| | **opt\_name** string / required | | The name of the DHCP option to set. |
| | **opt\_value** string / required | | The value of the DHCP option to set. |
| **fixed\_ips** list / elements=dictionary | | Desired IP and/or subnet for this port. Subnet is referenced by subnet\_id and IP is referenced by ip\_address. |
| | **ip\_address** string / required | | The fixed IP address to attempt to allocate. |
| | **subnet\_id** string | | The subnet to attach the IP address to. |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **mac\_address** string | | MAC address of this port. |
| **name** string | | Name that has to be given to the port. |
| **network** string | | Network ID or name this port belongs to. Required when creating a new port. |
| **no\_security\_groups** boolean | **Choices:*** **no** ←
* yes
| Do not associate a security group with this port. |
| **port\_security\_enabled** boolean | **Choices:*** no
* yes
| Whether to enable or disable the port security on the network. |
| **region\_name** string | | Name of the region. |
| **security\_groups** list / elements=string | | Security group(s) ID(s) or name(s) associated with the port (comma separated string or YAML list) |
| **state** string | **Choices:*** **present** ←
* absent
| Should the resource be present or absent. |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **vnic\_type** string | **Choices:*** normal
* direct
* direct-physical
* macvtap
* baremetal
* virtio-forwarder
| The type of the port that should be created |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
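For example, the named-cloud approach mentioned above keeps credentials out of playbooks entirely. A minimal clouds.yaml sketch (file location `~/.config/openstack/clouds.yaml` and all values assumed):
```
clouds:
  mycloud:
    auth:
      auth_url: https://identity.example.com
      username: admin
      password: passme
      project_name: admin
```
Tasks can then pass just `cloud: mycloud` instead of a full *auth* dictionary.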
Examples
--------
```
# Create a port
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
network: foo
# Create a port with a static IP
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
network: foo
fixed_ips:
- ip_address: 10.1.0.21
# Create a port with No security groups
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
network: foo
no_security_groups: True
# Update the existing 'port1' port with multiple security groups (version 1)
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
security_groups: 1496e8c7-4918-482a-9172-f4f00fc4a3a5,057d4bdf-6d4d-472...
# Update the existing 'port1' port with multiple security groups (version 2)
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
security_groups:
- 1496e8c7-4918-482a-9172-f4f00fc4a3a5
- 057d4bdf-6d4d-472...
# Create port of type 'direct'
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
network: foo
vnic_type: direct
# Create a port with binding profile
- openstack.cloud.port:
state: present
auth:
auth_url: https://identity.example.com
username: admin
password: admin
project_name: admin
name: port1
network: foo
binding_profile:
"pci_slot": "0000:03:11.1"
"physical_network": "provider"
```
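The `allowed_address_pairs` and `extra_dhcp_opts` structures are flattened in the parameter table above. Spelled out in YAML, and with a deletion task for completeness, a hedged sketch (example values assumed; auth omitted on the assumption that a named cloud or OS\_\* environment variables are in use):
```
# Create a port with an allowed address pair and an extra DHCP option
- openstack.cloud.port:
    state: present
    name: port1
    network: foo
    allowed_address_pairs:
      - ip_address: 10.1.0.12
        mac_address: ab:cd:ef:12:34:56
    extra_dhcp_opts:
      - opt_name: bootfile-name
        opt_value: pxelinux.0
        ip_version: 4
# Remove the port again
- openstack.cloud.port:
    state: absent
    name: port1
```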
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **admin\_state\_up** boolean | success | Admin state up flag for this port. |
| **allowed\_address\_pairs** list / elements=string | success | Allowed address pairs with this port. |
| **binding:profile** dictionary | success | Port binding profile |
| **fixed\_ips** list / elements=string | success | Fixed ip(s) associated with this port. |
| **id** string | success | Unique UUID. |
| **name** string | success | Name given to the port. |
| **network\_id** string | success | Network ID this port belongs in. |
| **port\_security\_enabled** boolean | success | Port security state on the network. |
| **security\_groups** list / elements=string | success | Security group(s) associated with this port. |
| **status** string | success | Port's status. |
| **tenant\_id** string | success | Tenant id associated with this port. |
| **vnic\_type** string | success | Type of the created port |
### Authors
* OpenStack Ansible SIG
openstack.cloud.volume\_backup – Add/Delete Volume backup
=========================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.volume_backup`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Add or Remove Volume Backup in OTC.
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **display\_description** string | | String describing the backup
aliases: description |
| **display\_name** string / required | | Name that has to be given to the backup
aliases: name |
| **force** boolean | **Choices:*** **no** ←
* yes
| Indicates whether to backup, even if the volume is attached. |
| **incremental** boolean | **Choices:*** **no** ←
* yes
| The backup mode |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **metadata** dictionary | | Metadata for the backup |
| **region\_name** string | | Name of the region. |
| **snapshot** string | | Name or ID of the Snapshot to take backup of |
| **state** string | **Choices:*** **present** ←
* absent
| Should the resource be present or absent. |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **volume** string | | Name or ID of the volume. Required when *state* is `present`. |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
Examples
--------
```
- name: Create backup
openstack.cloud.volume_backup:
display_name: test_volume_backup
volume: "test_volume"
- name: Create backup from snapshot
openstack.cloud.volume_backup:
display_name: test_volume_backup
volume: "test_volume"
snapshot: "test_snapshot"
- name: Delete volume backup
openstack.cloud.volume_backup:
display_name: test_volume_backup
state: absent
```
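The *force* and *incremental* flags documented above combine in the expected way; a hedged sketch reusing the example volume:
```
- name: Create an incremental backup of an attached volume
  openstack.cloud.volume_backup:
    display_name: test_volume_backup_incr
    volume: "test_volume"
    incremental: yes
    force: yes
```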
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **backup** complex | On success when `state=present` | Dictionary describing the volume backup. |
| | **id** string | success | Unique UUID. **Sample:** 39007a7e-ee4f-4d13-8283-b4da2e037c69 |
| | **name** string | success | Name given to the backup. **Sample:** elb\_test |
| **id** string | On success when `state=present` | The Volume backup ID. **Sample:** 39007a7e-ee4f-4d13-8283-b4da2e037c69 |
### Authors
* OpenStack Ansible SIG
openstack.cloud.image – Add/Delete images from OpenStack Cloud
==============================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.image`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
Synopsis
--------
* Add or Remove images from the OpenStack Image Repository
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **checksum** string | | The checksum of the image |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **container\_format** string | **Choices:*** ami
* aki
* ari
* **bare** ←
* ovf
* ova
* docker
| The format of the container |
| **disk\_format** string | **Choices:*** ami
* ari
* aki
* vhd
* vmdk
* raw
* **qcow2** ←
* vdi
* iso
* vhdx
* ploop
| The format of the disk that is getting uploaded |
| **filename** string | | The path to the file which has to be uploaded |
| **id** string | | The ID of the image when uploading an image |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **is\_public** boolean | **Choices:*** **no** ←
* yes
| Whether the image can be accessed publicly. Note that publicizing an image requires admin role by default. |
| **kernel** string | | The name of an existing kernel image that will be associated with this image |
| **min\_disk** integer | | The minimum disk space (in GB) required to boot this image |
| **min\_ram** integer | | The minimum ram (in MB) required to boot this image |
| **name** string / required | | The name of the image when uploading - or the name/ID of the image if deleting |
| **owner** string | | The owner of the image |
| **properties** dictionary | **Default:**{} | Additional properties to be associated with this image |
| **protected** boolean | **Choices:*** **no** ←
* yes
| Prevent image from being deleted |
| **ramdisk** string | | The name of an existing ramdisk image that will be associated with this image |
| **region\_name** string | | Name of the region. |
| **state** string | **Choices:*** **present** ←
* absent
| Should the resource be present or absent. |
| **tags** list / elements=string | **Default:**[] | List of tags to be applied to the image |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **volume** string | | ID of a volume to create an image from. The volume must be in AVAILABLE state. |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
Examples
--------
```
# Upload an image from a local file named cirros-0.3.0-x86_64-disk.img
- openstack.cloud.image:
auth:
auth_url: https://identity.example.com
username: admin
password: passme
project_name: admin
      user_domain_name: Default
      project_domain_name: Default
name: cirros
container_format: bare
disk_format: qcow2
state: present
filename: cirros-0.3.0-x86_64-disk.img
kernel: cirros-vmlinuz
ramdisk: cirros-initrd
tags:
- custom
properties:
cpu_arch: x86_64
distro: ubuntu
# Create image from volume attached to an instance
- name: create volume snapshot
openstack.cloud.volume_snapshot:
auth:
"{{ auth }}"
display_name: myvol_snapshot
volume: myvol
force: yes
register: myvol_snapshot
- name: create volume from snapshot
openstack.cloud.volume:
auth:
"{{ auth }}"
size: "{{ myvol_snapshot.snapshot.size }}"
snapshot_id: "{{ myvol_snapshot.snapshot.id }}"
display_name: myvol_snapshot_volume
wait: yes
register: myvol_snapshot_volume
- name: create image from volume snapshot
openstack.cloud.image:
auth:
"{{ auth }}"
volume: "{{ myvol_snapshot_volume.volume.id }}"
name: myvol_image
```
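Deletion follows the usual *state* pattern; a minimal sketch (the image name `cirros` is taken from the upload example above):
```
- name: Delete the cirros image
  openstack.cloud.image:
    name: cirros
    state: absent
```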
### Authors
* OpenStack Ansible SIG
openstack.cloud.keypair\_info – Get information about keypairs from OpenStack
=============================================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.keypair_info`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Get information about keypairs that are associated with the account
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **limit** integer | | Requests a page size of items. Returns a number of items up to a limit value. |
| **marker** string | | The last-seen item. |
| **name** string | | Name or ID of the keypair |
| **region\_name** string | | Name of the region. |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **user\_id** string | | It allows admin users to operate key-pairs of specified user ID. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
Examples
--------
```
- name: Get information about keypairs
openstack.cloud.keypair_info:
register: result
- name: Get information about keypairs using optional parameters
openstack.cloud.keypair_info:
name: "test"
user_id: "fed75b36fd7a4078a769178d2b1bd844"
limit: 10
marker: "jdksl"
register: result
```
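The registered result exposes the keypairs under `openstack_keypairs` (see Return Values below). Assuming it is a list of dictionaries, a follow-up task can iterate over it:
```
- name: Show the fingerprint of each keypair
  debug:
    msg: "{{ item.name }}: {{ item.fingerprint }}"
  loop: "{{ result.openstack_keypairs }}"
```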
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **openstack\_keypairs** complex | always | Lists keypairs that are associated with the account. |
| | **created\_at** string | success | The date and time when the resource was created. **Sample:** 2021-01-19T14:52:07.261634 |
| | **fingerprint** string | success | The fingerprint for the keypair. **Sample:** 7e:eb:ab:24:ba:d1:e1:88:ae:9a:fb:66:53:df:d3:bd |
| | **id** string | success | The id identifying the keypair **Sample:** keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3 |
| | **is\_deleted** boolean | success | A boolean indicates whether this keypair is deleted or not. |
| | **name** string | success | A keypair name which will be used to reference it later. **Sample:** keypair-5d935425-31d5-48a7-a0f1-e76e9813f2c3 |
| | **private\_key** string | success | The private key for the keypair. **Sample:** MIICXAIBAAKBgQCqGKukO ... hZj6+H0qtjTkVxwTCpvKe4eCZ0FPq |
| | **public\_key** string | success | The keypair public key. **Sample:** ssh-rsa AAAAB3NzaC1yc ... 8rPsBUHNLQp Generated-by-Nova |
| | **type** string | success | The type of the keypair. Allowed values are ssh or x509. **Sample:** ssh |
| | **user\_id** string | success | It allows admin users to operate key-pairs of specified user ID. **Sample:** 59b10f2a2138428ea9358e10c7e44444 |
### Authors
* OpenStack Ansible SIG
openstack.cloud.port\_info – Retrieve information about ports within OpenStack.
===============================================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.port_info`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Retrieve information about ports from OpenStack.
* This module was called `openstack.cloud.port_facts` before Ansible 2.9, returning `ansible_facts`. Note that the [openstack.cloud.port\_info](#ansible-collections-openstack-cloud-port-info-module) module no longer returns `ansible_facts`!
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **filters** dictionary | | A dictionary of meta data to use for further filtering. Elements of this dictionary will be matched against the returned port dictionaries. Matching is currently limited to strings within the port dictionary, or strings within nested dictionaries. |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **port** string | | Unique name or ID of a port. |
| **region\_name** string | | Name of the region. |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
Examples
--------
```
# Gather information about all ports
- openstack.cloud.port_info:
cloud: mycloud
register: result
- debug:
msg: "{{ result.openstack_ports }}"
# Gather information about a single port
- openstack.cloud.port_info:
cloud: mycloud
port: 6140317d-e676-31e1-8a4a-b1913814a471
# Gather information about all ports that have device_id set to a specific value
# and with a status of ACTIVE.
- openstack.cloud.port_info:
cloud: mycloud
filters:
device_id: 1038a010-3a37-4a9d-82ea-652f1da36597
status: ACTIVE
```
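Because `openstack_ports` is a plain list, standard Jinja2 filters apply to a registered result; a small sketch (assuming `register: result` is added to the filtered query above):
```
- name: Collect the MAC addresses of the matching ports
  debug:
    msg: "{{ result.openstack_ports | map(attribute='mac_address') | list }}"
```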
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **openstack\_ports** complex | always, but can be null | List of port dictionaries. A subset of the dictionary keys listed below may be returned, depending on your cloud provider. |
| | **admin\_state\_up** boolean | success | The administrative state of the router, which is up (true) or down (false). **Sample:** True |
| | **allowed\_address\_pairs** list / elements=string | success | A set of zero or more allowed address pairs. An address pair consists of an IP address and MAC address. |
| | **binding:host\_id** string | success | The UUID of the host where the port is allocated. **Sample:** b4bd682d-234a-4091-aa5b-4b025a6a7759 |
| | **binding:profile** dictionary | success | A dictionary that enables the application running on the host to pass and receive VIF port-specific information to the plug-in. |
| | **binding:vif\_details** dictionary | success | A dictionary that enables the application to pass information about functions that the Networking API provides. **Sample:** {'port\_filter': True} |
| | **binding:vif\_type** dictionary | success | The VIF type for the port. **Sample:** ovs |
| | **binding:vnic\_type** string | success | The virtual network interface card (vNIC) type that is bound to the neutron port. **Sample:** normal |
| | **device\_id** string | success | The UUID of the device that uses this port. **Sample:** b4bd682d-234a-4091-aa5b-4b025a6a7759 |
| | **device\_owner** string | success | The UUID of the entity that uses this port. **Sample:** network:router\_interface |
| | **dns\_assignment** list / elements=string | success | DNS assignment information. |
| | **dns\_name** string | success | DNS name |
| | **extra\_dhcp\_opts** list / elements=string | success | A set of zero or more extra DHCP option pairs. An option pair consists of an option value and name. |
| | **fixed\_ips** list / elements=string | success | The IP addresses for the port. Includes the IP address and UUID of the subnet. |
| | **id** string | success | The UUID of the port. **Sample:** 3ec25c97-7052-4ab8-a8ba-92faf84148de |
| | **ip\_address** string | success | The IP address. **Sample:** 127.0.0.1 |
| | **mac\_address** string | success | The MAC address. **Sample:** 00:00:5E:00:53:42 |
| | **name** string | success | The port name. **Sample:** port\_name |
| | **network\_id** string | success | The UUID of the attached network. **Sample:** dd1ede4f-3952-4131-aab6-3b8902268c7d |
| | **port\_security\_enabled** boolean | success | The port security status. The status is enabled (true) or disabled (false). |
| | **security\_groups** list / elements=string | success | The UUIDs of any attached security groups. |
| | **status** string | success | The port status. **Sample:** ACTIVE |
| | **tenant\_id** string | success | The UUID of the tenant who owns the network. **Sample:** 51fce036d7984ba6af4f6c849f65ef00 |
### Authors
* OpenStack Ansible SIG
openstack.cloud.image\_info – Retrieve information about an image within OpenStack.
===================================================================================
Note
This plugin is part of the [openstack.cloud collection](https://galaxy.ansible.com/openstack/cloud) (version 1.5.1).
You might already have this collection installed if you are using the `ansible` package. It is not included in `ansible-core`. To check whether it is installed, run `ansible-galaxy collection list`.
To install it, use: `ansible-galaxy collection install openstack.cloud`.
To use it in a playbook, specify: `openstack.cloud.image_info`.
* [Synopsis](#synopsis)
* [Requirements](#requirements)
* [Parameters](#parameters)
* [Notes](#notes)
* [Examples](#examples)
* [Return Values](#return-values)
Synopsis
--------
* Retrieve information about an image from OpenStack.
* This module was called `openstack.cloud.image_facts` before Ansible 2.9, returning `ansible_facts`. Note that the [openstack.cloud.image\_info](#ansible-collections-openstack-cloud-image-info-module) module no longer returns `ansible_facts`!
Requirements
------------
The below requirements are needed on the host that executes this module.
* openstacksdk
* openstacksdk >= 0.12.0
* python >= 3.6
Parameters
----------
| Parameter | Choices/Defaults | Comments |
| --- | --- | --- |
| **api\_timeout** integer | | How long should the socket layer wait before timing out for API calls. If this is omitted, nothing will be passed to the requests library. |
| **auth** dictionary | | Dictionary containing auth information as needed by the cloud's auth plugin strategy. For the default *password* plugin, this would contain *auth\_url*, *username*, *password*, *project\_name* and any information about domains (for example, *user\_domain\_name* or *project\_domain\_name*) if the cloud supports them. For other plugins, this param will need to contain whatever parameters that auth plugin requires. This parameter is not needed if a named cloud is provided or OpenStack OS\_\* environment variables are present. |
| **auth\_type** string | | Name of the auth plugin to use. If the cloud uses something other than password authentication, the name of the plugin should be indicated here and the contents of the *auth* parameter should be updated accordingly. |
| **availability\_zone** string | | Ignored. Present for backwards compatibility |
| **ca\_cert** string | | A path to a CA Cert bundle that can be used as part of verifying SSL API requests.
aliases: cacert |
| **client\_cert** string | | A path to a client certificate to use as part of the SSL transaction.
aliases: cert |
| **client\_key** string | | A path to a client key to use as part of the SSL transaction.
aliases: key |
| **cloud** raw | | Named cloud or cloud config to operate against. If *cloud* is a string, it references a named cloud config as defined in an OpenStack clouds.yaml file. Provides default values for *auth* and *auth\_type*. This parameter is not needed if *auth* is provided or if OpenStack OS\_\* environment variables are present. If *cloud* is a dict, it contains a complete cloud configuration like would be in a section of clouds.yaml. |
| **image** string | | Name or ID of the image |
| **interface** string | **Choices:*** admin
* internal
* **public** ←
| Endpoint URL type to fetch from the service catalog.
aliases: endpoint\_type |
| **properties** dictionary | | Dict of properties of the images used for query |
| **region\_name** string | | Name of the region. |
| **timeout** integer | **Default:**180 | How long should ansible wait for the requested resource. |
| **validate\_certs** boolean | **Choices:*** no
* yes
| Whether or not SSL API requests should be verified. Before Ansible 2.3 this defaulted to `yes`.
aliases: verify |
| **wait** boolean | **Choices:*** no
* **yes** ←
| Should ansible wait until the requested resource is complete. |
Notes
-----
Note
* The standard OpenStack environment variables, such as `OS_USERNAME` may be used instead of providing explicit values.
* Auth information is driven by openstacksdk, which means that values can come from a yaml config file in /etc/ansible/openstack.yaml, /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from standard environment variables, then finally by explicit parameters in plays. More information can be found at <https://docs.openstack.org/openstacksdk/>
Examples
--------
```
- name: Gather information about a previously created image named image1
openstack.cloud.image_info:
auth:
auth_url: https://identity.example.com
username: user
password: password
project_name: someproject
image: image1
register: result
- name: Show openstack information
debug:
msg: "{{ result.openstack_image }}"
# Show all available Openstack images
- name: Retrieve all available Openstack images
openstack.cloud.image_info:
register: result
- name: Show images
debug:
msg: "{{ result.openstack_image }}"
# Show images matching requested properties
- name: Retrieve images having properties with desired values
  openstack.cloud.image_info:
    properties:
      some_property: some_value
      OtherProp: OtherVal
  register: result
- name: Show images
debug:
msg: "{{ result.openstack_image }}"
```
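The returned data can feed later tasks directly; a sketch (assuming the single-image query above, where `openstack_image` is one dictionary, and an existing `m1.small` flavor):
```
- name: Boot a server from the discovered image
  openstack.cloud.server:
    name: test-server
    image: "{{ result.openstack_image.id }}"
    flavor: m1.small
```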
Return Values
-------------
Common return values are documented [here](../../../reference_appendices/common_return_values#common-return-values), the following are the fields unique to this module:
| Key | Returned | Description |
| --- | --- | --- |
| **openstack\_image** complex | always, but can be null | has all the openstack information about the image |
| | **checksum** string | success | Checksum for the image. |
| | **container\_format** string | success | Container format of the image. |
| | **created\_at** string | success | Image created at timestamp. |
| | **deleted** boolean | success | Image deleted flag. |
| | **deleted\_at** string | success | Image deleted at timestamp. |
| | **disk\_format** string | success | Disk format of the image. |
| | **id** string | success | Unique UUID. |
| | **is\_public** boolean | success | Is public flag of the image. |
| | **min\_disk** integer | success | Min amount of disk space required for this image. |
| | **min\_ram** integer | success | Min amount of RAM required for this image. |
| | **name** string | success | Name given to the image. |
| | **owner** string | success | Owner for the image. |
| | **properties** dictionary | success | Additional properties associated with the image. |
| | **protected** boolean | success | Image protected flag. |
| | **size** integer | success | Size of the image. |
| | **status** string | success | Image status. |
| | **tags** list / elements=string | success | List of tags assigned to the image |
| | **updated\_at** string | success | Image updated at timestamp. |
### Authors
* OpenStack Ansible SIG