| question (string, 11-28.2k chars) | answer (string, 26-27.7k chars) | tag (130 classes) | question_id (int64, 935-78.4M) | score (int64, 10-5.49k) |
---|---|---|---|---|
I'm trying to create an AWS ECS task with Terraform which will put logs in a specific log group on CloudWatch. The problem is that the container definition is in a JSON file and there is no way for me to map the CloudWatch group name from the .tf file to that .json file.
container_definition.json:
[
{
"name": "supreme-task",
"image": "xxxx50690yyyy.dkr.ecr.eu-central-1.amazonaws.com/supreme-task",
"essential": true,
"portMappings": [
{
"containerPort": 5000,
"hostPort": 5000
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "supreme-task-group", <- This needs to be taken from variable.tf file.
"awslogs-region": "eu-central-1",
"awslogs-stream-prefix": "streaming"
}
}
}
]
variable.tf:
variable "ecs_task_definition_name" {
description = "Task definition name."
type = string
default = "supreme-task-def"
}
variable "task_role" {
description = "Name of the task role."
type = string
default = "supreme-task-role"
}
variable "task_execution_role" {
description = "Name of the task execution role."
type = string
default = "supreme-task-exec-role"
}
variable "cloudwatch_group" {
description = "CloudWatch group name."
type = string
default = "supreme-task-group"
}
task definition:
resource "aws_ecs_task_definition" "task_definition" {
family = var.ecs_task_definition_name
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = 1024
memory = 4096
container_definitions = file("modules/ecs-supreme-task/task-definition.json")
execution_role_arn = aws_iam_role.task_execution_role.name
task_role_arn = aws_iam_role.task_role.name
}
Is there a way to do that? Or maybe this should be done differently?
| Solved by following @ydaetskcorR's comment.
Made the container definition an inline parameter.
container_definitions = <<DEFINITION
[
{
"name": "${var.repository_name}",
"image": "${var.repository_uri}",
"essential": true,
"portMappings": [
{
"containerPort": 5000,
"hostPort": 5000
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${var.cloudwatch_group}",
"awslogs-region": "eu-central-1",
"awslogs-stream-prefix": "ecs"
}
}
}
]
DEFINITION
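If you'd rather keep the container definition in a separate file, newer Terraform versions (0.12+) also provide the templatefile() function, which renders a template file with a map of variables. A rough sketch (the template file name and the exact variable set here are assumptions):
container_definitions = templatefile("${path.module}/task-definition.json.tpl", {
  cloudwatch_group = var.cloudwatch_group
  repository_uri   = var.repository_uri
})
Inside task-definition.json.tpl you would then write "awslogs-group": "${cloudwatch_group}" just as in the heredoc above.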
| Terraform | 59,684,900 | 19 |
I want to create a secret in several k8s clusters in Google Kubernetes Engine using Terraform.
I know that I can use "host", "token" and some other parameters in the "kubernetes" provider, but I can describe these parameters only once, and I don't know how to connect to another cluster within the same Terraform file.
My question is how to create a secret (or do other operations) in multiple k8s clusters via Terraform. Maybe you know some tools on GitHub or other tips for doing this via a single Terraform file?
| You can use an alias for a provider in Terraform, as described in the documentation.
So you can define multiple providers for multiple k8s clusters and then refer to them by alias.
e.g.
provider "kubernetes" {
config_context_auth_info = "ops1"
config_context_cluster = "mycluster1"
alias = "cluster1"
}
provider "kubernetes" {
config_context_auth_info = "ops2"
config_context_cluster = "mycluster2"
alias = "cluster2"
}
resource "kubernetes_secret" "example" {
...
provider = kubernetes.cluster1
}
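Since the question is about GKE, each aliased provider can also be wired to cluster credentials pulled from data sources instead of kubeconfig contexts. A sketch, assuming hypothetical cluster names and locations:
data "google_client_config" "default" {}

data "google_container_cluster" "cluster1" {
  name     = "mycluster1"
  location = "europe-west4"
}

provider "kubernetes" {
  alias                  = "cluster1"
  host                   = "https://${data.google_container_cluster.cluster1.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster1.master_auth[0].cluster_ca_certificate)
}
Repeat the same pattern with a second data source and alias = "cluster2" for the other cluster.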
| Terraform | 57,861,264 | 19 |
As a follow-up to Terraform 0.12 nested for loops, I am trying to produce an object out of a nested loop but failing miserably :(
How would you go about producing:
Outputs:
association-list = {
"policy1" = "user1"
"policy2" = "user1"
"policy2" = "user2"
}
From:
iam-policy-users-map = {
"policy1" = [ "user1" ]
"policy2" = [ "user1", "user2" ]
}
I have tried many variations of:
variable iam-policy-users-map {
default = {
"policy1" = [ "user1" ]
"policy2" = [ "user1", "user2" ]
}
}
locals {
association-map = merge({
for policy, users in var.iam-policy-users-map : {
for user in users : {
policy => user
}
}
})
output association-map {
value = local.association-map
}
with zero success so far. Only managed to get the following depending on the variation:
Error: Invalid 'for' expression.
Extra characters after the end of the 'for' expression.
Error: Missing attribute value.
Expected an attribute value, introduced by an equals sign ("=").
Error: Invalid 'for' expression.
Key expression is required when building an object.
Error: Missing key/value separator. Expected an equals sign ("=") to
mark the beginning of the attribute value.
For reference, the following code is however capable of producing a list of maps:
variable iam-policy-users-map {
default = {
"policy1" = [ "user1" ]
"policy2" = [ "user1", "user2" ]
}
}
locals {
association-list = flatten([
for policy, users in var.iam-policy-users-map : [
for user in users : {
user = user
policy = policy
}
]
])
}
output association-list {
value = local.association-list
}
Outputs:
association-list = [
{
"policy" = "policy1"
"user" = "user1"
}, {
"policy" = "policy2"
"user" = "user1"
}, {
"policy" = "policy2"
"user" = "user2"
},
]
| A partial answer can be found at https://github.com/hashicorp/terraform/issues/22263.
Long story short: this was a foolish attempt to begin with; a map cannot contain duplicate keys.
I am however still interested in understanding how a map of maps could be produced from a nested for loop. See second code example above, producing a list of maps.
EDIT: a full answer was given on the github issue linked above.
"This is (obviously) a useless structure, but I wanted to illustrate that it is possible:
locals {
association-list = {
for policy, users in var.iam-policy-users-map:
policy => { // can't have the nested for expression before the key!
for u in users:
policy => u...
}
}
}
Outputs:
association-list = {
"policy1" = {
"policy1" = [
"user1",
]
}
"policy2" = {
"policy2" = [
"user1",
"user2",
]
}
}
"
| Terraform | 57,280,623 | 19 |
The timestamp() function in the interpolation syntax will return an ISO 8601 formatted string, which looks like this 2019-02-06T23:22:28Z. However, I want to have a string which looks like this 20190206232240706500000001. A string with only numbers (integers) and no hyphens, white spaces, colon, Z or T. What is a simple and elegant way to achieve this?
It works if I replace a single character class at a time (hyphens, white spaces, colons, Z and T):
locals {
timestamp = "${timestamp()}"
timestamp_no_hyphens = "${replace("${local.timestamp}", "-", "")}"
timestamp_no_spaces = "${replace("${local.timestamp_no_hyphens}", " ", "")}"
timestamp_no_t = "${replace("${local.timestamp_no_spaces}", "T", "")}"
timestamp_no_z = "${replace("${local.timestamp_no_t}", "Z", "")}"
timestamp_no_colons = "${replace("${local.timestamp_no_z}", ":", "")}"
timestamp_sanitized = "${local.timestamp_no_colons}"
}
output "timestamp" {
value = "${local.timestamp_sanitized}"
}
The resulting output is in the desired format, except the string is significantly shorter:
Outputs:
timestamp = 20190207000744
However, this solution is very ugly. Is there another way of doing the same thing in a more elegant way as well as producing a string with the same length as the example string 20190206232240706500000001?
| Terraform 0.12.0 introduced a new function formatdate which can make this more readable:
output "timestamp" {
value = formatdate("YYYYMMDDhhmmss", timestamp())
}
At the time of writing, formatdate's smallest supported unit is whole seconds, so this won't give exactly the same result as the replace-based approach in the question, but it can work if the nearest second is accurate enough for the use-case.
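For completeness, the chain of replace() calls from the question could likely also be collapsed into a single call, since replace() treats a pattern wrapped in forward slashes as a regular expression (this still only gives whole-second precision):
output "timestamp" {
  value = replace(timestamp(), "/[-: TZ]/", "")
}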
| Terraform | 54,564,512 | 19 |
I have the following simple setup:
~$ tree
.
├── main.tf
└── modules
└── world
└── main.tf
~$ cat main.tf
output "root_module_says" {
value = "hello from root module"
}
module "world" {
source = "modules/world"
}
~$ cat modules/world/main.tf
output "world_module_says" {
value = "hello from world module"
}
I then run:
~$ terraform get
~$ terraform apply
I expect to see world_module_says in the outputs, but I do not, I only see root_module_says.
I find this really confusing. Why is that?
If it helps:
~$ terraform --version
v0.10.8
| Terraform only shows the output from root (by default pre v0.12)
https://www.terraform.io/docs/commands/output.html
Prior to Terraform 0.12 you can get the output from the world module with:
terraform output -module=world
I think the logic here is that the output from the module would be consumed by root, and if you actually needed the output then you'd output it in root too, so main.tf might contain this:
output "root_module_says" {
value = "hello from root module"
}
output "world_module_says" {
value = "${module.world.world_module_says}"
}
module "world" {
source = "modules/world"
}
Beginning with Terraform 0.12 this is the only way to get the output from within a module.
| Terraform | 52,503,528 | 19 |
I want to create an S3 bucket and enable encryption at rest with AES256, but Terraform complains: * aws_s3_bucket.s3: invalid or unknown key: server_side_encryption_configuration (see my code, which Terraform complains about, below)
What is wrong with server_side_encryption_configuration? Isn't it supported? https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
Anyway, how do I get "encryption at rest with AES256" for S3 using Terraform?
resource "aws_s3_bucket" "s3" {
bucket = "s3_bucket_name"
acl = "private"
force_destroy = true
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
| You probably have an older version of the AWS provider plugin. To update it, run terraform init with the -upgrade flag set to true
terraform init -upgrade=true
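Note that in much newer versions of the AWS provider (4.x and later), the inline server_side_encryption_configuration block was deprecated in favour of a separate resource. A sketch of the equivalent configuration:
resource "aws_s3_bucket_server_side_encryption_configuration" "s3" {
  bucket = aws_s3_bucket.s3.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}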
| Terraform | 47,957,225 | 19 |
I'm using Terraform to get the official CentOS AMI:
data "aws_ami" "centos" {
most_recent = true
owners = ["679593333241"] # Centos.org
filter {
name = "name"
values = ["CentOS Linux 7 x86_64 HVM EBS 1708_01-*"]
}
}
I need the owner ID and the way I found it in this case was to aimlessly google until some article had it (they didn't say how they got it).
I can't find the owner ID on the AMI's Marketplace page.
If I launch the AMI through its Marketplace page, the EC2 instance it creates actually has my personal ID set for the instance's "owner" field, so I can't launch an EC2 instance, check the owner and get it that way either.
Why is this so impossible to get? I must be missing something obvious.
| You don't necessarily need the owner ID, but it is obviously a good idea to include it to ensure you are getting the AMI you expect.
Given that you know the trusted owner up-front, the simple way to find the owner ID is to look in the AMIs table in the AWS console.
From the AWS console, navigate to EC2 > Images > AMIs. Select the "Public Images" filter from the dropdown on the left of the search box. You can then search for any AMI you want, and the Owner value is shown in the table.
| Terraform | 47,467,593 | 19 |
I'm using the HTTP data source to retrieve data from an internal service. The service returns JSON data.
I can't interpolate the returned JSON data and look up data in it.
For example:
module A
data "http" "json_data" {
url = "http://myservice/jsondata"
# Optional request headers
request_headers {
"Accept" = "application/json"
}
}
output "json_data_key" {
value = "${lookup(data.http.json_data.body, "mykey")}"
}
main.tf
provider "aws" {
region = "${var.region}"
version = "~> 0.1"
}
module "moduleA" {
source = "../../../terraform-modules/moduleA"
}
resource "aws_instance" "example" {
ami = "ami-2757f631"
instance_type = "${module.moduleA.json_data_key}"
}
The lookup function will fail to extract the key within the JSON data.
Is there any way to decode the JSON data into a terraform map ?
| variable "json" {
default = "{\"foo\": \"bar\"}"
}
data "external" "json" {
program = ["echo", "${var.json}"]
}
output "map" {
value = "${data.external.json.result}"
}
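On Terraform 0.12 and later, the built-in jsondecode() function is a simpler alternative to the external data source workaround. A sketch:
output "json_data_key" {
  value = jsondecode(data.http.json_data.body)["mykey"]
}
(In recent versions of the http provider the attribute is named response_body rather than body.)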
| Terraform | 46,371,424 | 19 |
I have an AWS Auto-Scaling Group, a Launch Configuration, and an Auto-Scaling Group Policy defined in Terraform like this:
resource "aws_autoscaling_group" "default" {
name = "..."
health_check_type = "EC2"
vpc_zone_identifier = ["${...}"]
min_size = "${var.asg_capacity}"
max_size = "${var.asg_capacity * 2}"
desired_capacity = "${var.asg_capacity}"
launch_configuration = "${aws_launch_configuration.default.id}"
termination_policies = ["OldestInstance"]
}
resource "aws_autoscaling_policy" "default" {
name = "..."
autoscaling_group_name = "${aws_autoscaling_group.default.name}"
scaling_adjustment = "${var.asg_capacity}"
adjustment_type = "ChangeInCapacity"
cooldown = 300
}
resource "aws_launch_configuration" "default" {
name_prefix = "..._"
image_id = "${var.coreos_ami_id}"
instance_type = "${var.ec2_instance_type}"
iam_instance_profile = "${aws_iam_instance_profile.default.arn}"
key_name = "..."
security_groups = ["${aws_security_group.default.id}"]
user_data = "${data.template_file.cloud_init.rendered}"
lifecycle {
create_before_destroy = true
}
}
When I change my user data, a new launch configuration is created and then attached to the auto-scaling group. I would assume that this would cause the auto-scaling group to scale up by var.asg_capacity instances, wait 300 seconds, and then tear down the old ones as per OldestInstance.
When I have done similar things in CloudFormation, I have used the following configuration options:
ASG:
Type: AWS::AutoScaling::AutoScalingGroup
UpdatePolicy:
AutoScaleRollingUpdate:
# during a scale, 6 instances in service
MaxBatchSize: 3
MinInstancesInService: 3
PauseTime: PT5M
Properties:
...
Is there an analog for this in Terraform? I would really like my auto-scaling groups to change when I change the launch configuration.
|
I would assume that this would cause the auto-scaling group to scale up by var.asg_capacity instances, wait 300 seconds, and then tear down the old ones as per OldestInstance.
This assumption, unfortunately, is incorrect. When you change the launch configuration, the only thing that happens is a new launch configuration is created in your AWS account and associated with the Auto Scaling Group (ASG). That means all future Instances in that ASG will launch with the new launch configuration. However, merely changing the launch configuration does not trigger the launch of any instances, so you won't see your changes.
To force new instances to launch, you have two options:
Updated answer from 2022
In 2020, AWS introduced AWS instance refresh, which is a native way to roll out changes to Auto Scaling Groups. If you configure the instance_refresh block in your aws_autoscaling_group resource, then when you make a change to the launch configuration and run apply, AWS will kick off the instance refresh process automatically (note: this process runs in the background, so apply will complete very quickly, but the refresh will happen after, often taking 5-30 min).
resource "aws_launch_configuration" "example" {
image_id = var.ami
instance_type = var.instance_type
user_data = data.template_file.user_data.rendered
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "example" {
name = "example-asg"
launch_configuration = aws_launch_configuration.example.id
availability_zones = data.aws_availability_zones.all.names
min_size = var.min_size
max_size = var.max_size
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 50
}
}
}
Original answer from 2016
You can use some native Terraform functionality to do a rolling deploy:
Configure the name parameter of the ASG to depend directly on the name of the launch configuration. That way, each time the launch configuration changes (which it will when you update the AMI or User Data), Terraform will try to replace the ASG.
Set the create_before_destroy parameter of the ASG to true, so each time Terraform tries to replace it, it will create the replacement before destroying the original.
Set the min_elb_capacity parameter of the ASG to the min_size of the cluster so that Terraform will wait for at least that many servers from the new ASG to register in the ELB before it'll start destroying the original ASG.
Here's a rough idea of what the Terraform code would look like:
resource "aws_launch_configuration" "example" {
image_id = "${var.ami}"
instance_type = "${var.instance_type}"
user_data = "${data.template_file.user_data.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "example" {
name = "${var.cluster_name}-${aws_launch_configuration.example.name}"
launch_configuration = "${aws_launch_configuration.example.id}"
availability_zones = ["${data.aws_availability_zones.all.names}"]
min_size = "${var.min_size}"
max_size = "${var.max_size}"
min_elb_capacity = "${var.min_size}"
lifecycle {
create_before_destroy = true
}
}
For a fully working example, check out the zero-downtime deployment example code from the book Terraform: Up & Running.
| Terraform | 40,985,151 | 19 |
I have used the following code:
module "instance" {
for_each = var.worker_private_ip
source = "../../modules/ec2"
env = var.env
project_name = var.project_name
ami = var.ami
instance_type = var.instance_type
subnet_id = module.vpc.subnet_id
vpc_security_group_ids = [module.security_group.security_group_id]
private_ip = each.value
key_name = var.public_key_name
user_data = file("${path.module}/startup.sh.tpl")
}
When I run the terraform plan/apply command, I receive an error:
path.module is "."
│ Invalid value for "path" parameter: no file exists at "./startup.sh.tpl";
So Terraform is searching for the file in the current directory, but I expected it to search in ../../modules/ec2/.
I have startup.sh.tpl file in module folder and if I set user_data to:
user_data = file("../../modules/ec2/startup.sh.tpl")
everything is fine, also if I move sciprt to current directory everything ok too, plan and apply commands are correct
What am I doing wrong? Why does ${path.module} point to the current directory ".", and not to ../../modules/ec2/?
| From the docs:
path.module is the filesystem path of the module where the expression is placed.
This means that it will return the relative path between your project's root folder and the location where path.module is used.
For example:
if you are using it in a .tf file which is inside your ../../modules/ec2/ folder, it will return ../../modules/ec2;
if you are using it inside your main.tf (or any .tf file) in your project's root folder, it will return .
In your case, since the user data file lives inside the module, there is probably no need to pass it in as an input; reference it from a .tf file inside the module instead.
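For example, a minimal sketch placed inside the module itself (the resource name is hypothetical):
# modules/ec2/main.tf
resource "aws_instance" "this" {
  # ... other arguments ...

  # path.module is evaluated here, inside the module, so it resolves to the module's own directory
  user_data = file("${path.module}/startup.sh.tpl")
}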
| Terraform | 72,480,751 | 18 |
After upgrading the Terraform AWS provider to 3.64.2, even though I haven't changed any code, terraform plan reminds me that it will replace tags with tags_all. What's the difference between tags and tags_all?
~ resource "aws_lb_listener" "frontend_http_tcp" {
id = "xxxxx"
~ tags = {
- "environment" = "production" -> null
- "purpose" = "onboarding-integration" -> null
- "terraform" = "true" -> null
}
~ tags_all = {
- "environment" = "production"
- "purpose" = "onboarding-integration"
- "terraform" = "true"
} -> (known after apply)
# (4 unchanged attributes hidden)
# (1 unchanged block hidden)
}
| In Terraform, you can define default tags at the top (provider) level. tags_all is basically the individual resource tags combined with those top-level default tags.
For example;
# Terraform 0.12 and later syntax
provider "aws" {
# ... other configuration ...
default_tags {
tags = {
Environment = "Production"
Owner = "Ops"
}
}
}
resource "aws_vpc" "example" {
# ... other configuration ...
# This configuration by default will internally combine tags defined
# within the provider configuration block and those defined here
tags = {
Name = "MyVPC"
}
}
In the above example, tags_all will be
tags_all = {
Name = "MyVPC"
Environment = "Production"
Owner = "Ops"
}
while tags is
tags = {
Name = "MyVPC"
}
Reference = https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging
| Terraform | 71,643,844 | 18 |
I'm trying to replicate a SQL instance in GCP via Terraform. The active instance has a public IP; however, subnets from a secondary project are shared with the project hosting the SQL instance, and the SQL instance is associated with the secondary project's network.
I've added the private_network setting properly (I think) in the ip_configuration section, however I'm getting the following error:
Error: Error, failed to create instance xxxx: googleapi: Error 400: Invalid request: Incorrect Service Networking config for instance: xxxx:xxxxx:SERVICE_NETWORKING_NOT_ENABLED., invalid
I can't find much documentation when I google that particular error, and I'm relatively new to Terraform, so I'm hoping someone can point out what I'm missing from either this section of my Terraform config, or another resource altogether.
resource "google_sql_database_instance" "cloudsql-instance-qa" {
depends_on = [google_project_service.project_apis]
database_version = "MYSQL_5_7"
name = "${var.env_shorthand}-${var.resource_name}"
project = var.project_id
region = var.region
settings {
activation_policy = "ALWAYS"
availability_type = "ZONAL"
backup_configuration {
binary_log_enabled = "true"
enabled = "true"
point_in_time_recovery_enabled = "false"
start_time = "15:00"
}
crash_safe_replication = "false"
disk_autoresize = "true"
disk_size = "5003"
disk_type = "PD_SSD"
ip_configuration {
ipv4_enabled = "true"
private_network = "projects/gcp-backend/global/networks/default"
require_ssl = "false"
}
location_preference {
zone = var.zone
}
maintenance_window {
day = "7"
hour = "4"
}
pricing_plan = "PER_USE"
replication_type = "SYNCHRONOUS"
tier = "db-n1-standard-1"
}
}
| If you see the following error:
Error: Error, failed to create instance xxxx: googleapi: Error 400:
Invalid request: Incorrect Service Networking config for instance:
xxxx:xxxxx:SERVICE_NETWORKING_NOT_ENABLED., invalid
Enable the Service Networking API:
gcloud services enable servicenetworking.googleapis.com --project=[PSM_PROJECT_NUMBER]
Getting Started with the Service Networking API
| Terraform | 66,536,427 | 18 |
I need to enable "CloudWatch Lambda Insights" for a lambda using Terraform, but could not find the documentation. How I can do it in Terraform?
Note: This question How to add CloudWatch Lambda Insights to serverless config? may be relevant.
| There is no "boolean switch" in the aws_lambda_function resource of the AWS Terraform provider that you can set to true, that would enable Cloudwatch Lambda Insights.
Fortunately, it is possible to do this yourself. The following Terraform definitions are based on this AWS documentation: Using the AWS CLI to enable Lambda Insights on an existing Lambda function
The process involves two steps:
Add a layer to your Lambda
Attach a AWS policy to your Lambdas role.
The Terraform definitions would look like this:
resource "aws_lambda_function" "insights_example" {
[...]
layers = [
"arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
]
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.insights_example.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}
Important: The arn of the layer is different for each region. The documentation I linked above has a link to a list of them. Furthermore, there is an additional step required if your Lambda is in a VPC, which you can read about in the documentation. The described "VPC step" can be put into Terraform as well.
For future readers: The version of that layer in my example is 14. This will change over time. So please do not just copy & paste that part. Follow the provided links and look for the current version of that layer.
Minimal, Complete, and Verifiable example
Tested with:
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/aws v3.24.0
Create the following two files (handler.py and main.tf) in a folder. Then run the following commands:
terraform init
terraform plan
terraform apply
Besides deploying the required resources, it will also create a zip archive containing the handler.py which is the deployment artifact used by the aws_lambda_function resource. So this is an all-in-one example without the need of further zipping etc.
handler.py
def lambda_handler(event, context):
return {
'message' : 'CloudWatch Lambda Insights Example'
}
main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_lambda_function" "insights_example" {
function_name = "insights-example"
runtime = "python3.8"
handler = "handler.lambda_handler"
role = aws_iam_role.insights_example.arn
filename = "${path.module}/lambda.zip"
layers = [
"arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
]
depends_on = [
data.archive_file.insights_example
]
}
resource "aws_iam_role" "insights_example" {
name = "InsightsExampleLambdaRole"
assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}
resource "aws_iam_role_policy_attachment" "insights_example" {
role = aws_iam_role.insights_example.id
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.insights_example.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}
data "aws_iam_policy_document" "lambda_assume" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
}
}
data "archive_file" "insights_example" {
type = "zip"
source_file = "${path.module}/handler.py"
output_path = "${path.module}/lambda.zip"
}
| Terraform | 65,735,878 | 18 |
I am trying to get the vpc_id of default vpc in my aws account using terraform
This is what I tried but it gives an error
Error: Invalid data source
this is what I tried:
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
cidr_block = "${var.vpc_cidr_block}"
enable_dns_hostnames = true
tags = {
Name = "kubernetes-vpc"
}
}
| The aws_default_vpc is indeed not a valid data source. But the aws_vpc data source does have a boolean default you can use to choose the default vpc:
data "aws_vpc" "default" {
default = true
}
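You can then reference its ID wherever the VPC ID is needed, for example:
output "default_vpc_id" {
  value = data.aws_vpc.default.id
}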
| Terraform | 60,619,873 | 18 |
I can select an event from the event templates when I trigger a lambda function. How can I create a customized event template in Terraform? I want to make it easier for developers to trigger the lambda by selecting this customized event template from the list.
I'd like to add an event to this list in the Lambda console.
| Unfortunately, at the time of this answer (2020-02-21), there is no way to accomplish this via the APIs that AWS provides. Ergo, the terraform provider does not have the ability to accomplish this (it's limited to what's available in the APIs).
I also have wanted to be able to configure test events via terraform.
A couple of options
Propose to AWS that they expose some APIs for managing test events. This would give contributors to the AWS terraform provider the opportunity to add this resource.
Provide the developers with a PostMan collection, set of shell scripts (or other scripts) using the awscli, or some other mechanism to invoke the lambdas. This is essentially the same as pulling the templating functionality out of the console and into your own tooling.
| Terraform | 60,329,800 | 18 |
Following terraform best practice for bootstrapping instances, I'm working on a cloud-init config in order to bootstrap my instance. My only need is to install a specific package.
My terraform config looks like this:
resource "google_compute_instance" "bastion" {
name = "my-first-instance"
machine_type = "n1-standard-1"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
metadata = {
ssh-keys = "eugene:${file("/Users/eugene/.ssh/id_rsa.pub")}"
user-data = file("my_cloud_init.conf")
}
}
Following example for installing packages from cloud-init docs, here's the contents of my_cloud_init.conf:
#cloud-config
packages:
- kubectl
After running terraform plan -out myplan and terraform apply myplan, I ssh onto the node only to find kubectl not available. Moreover, there's no evidence that cloud-init was run or that it exists on the node:
$ which -a cloud-init
$ cat /var/log/cloud-init
cat: /var/log/cloud-init: No such file or directory
Looking for clues about usage of cloud-init with Google Cloud Compute instances wasn't fruitful:
"Google Cloud Engine" page from cloud-init docs suggests settings user-data to a cloud-init config should be enough,
I see a cloud-init tutorial, but it's for Container Optimized OS,
there are some clues about cloud-init on other images, but nothing indicates cloud-init is available on debian-cloud/debian-9,
there's "Running startup scripts", but it has no mention of cloud-init.
I don't mind using another image, as long as it's Debian or Ubuntu and I don't have to make an image template myself.
How to use cloud-init with a debian-based image on Google Cloud? What am I missing?
| cloud-init is installed on the latest (at the moment of writing) Ubuntu 18.04 LTS (ubuntu-1804-bionic-v20191002) image:
<my_user>@instance-1:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
<my_user>@instance-1:~$ which cloud-init
/usr/bin/cloud-init
You should replace debian-cloud/debian-9 with ubuntu-os-cloud/ubuntu-1804-bionic-v20191002.
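In the Terraform configuration from the question, that roughly amounts to changing the boot disk image, for example:
boot_disk {
  initialize_params {
    image = "ubuntu-os-cloud/ubuntu-1804-bionic-v20191002"
  }
}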
| Terraform | 58,248,190 | 18 |
Lots of examples exist online that show how to run a startup script on a VM deployed on GCP/GCE with Terraform, but they all use inline startup scripts, with all the startup script code included in the terraform compute.tf file. This is done either with a single line for the startup script, or with <<SCRIPT[script code]SCRIPT for multiple lines. I haven't found a single example showing a way to assign the startup script parameter to another file on local disk, perhaps in the same directory as compute.tf. It is quite a mess to clutter compute.tf with hundreds of lines of startup script. Isn't there a better way to do this?
I realize I could write a wrapper script that combines a compute.tf and a separate startup file into a single compute.tf and then runs terraform, but I'm seeking a more direct route, assuming one exists.
Thank you.
| To reference a file in your GCE VM declarations just use the file function to read the contents from your selected file. For example:
resource "google_compute_instance" "default" {
...
metadata_startup_script = file("/path/to/your/file")
}
On a similar note, you can also use the template_file data source to perform token replacement on a template file and then reference the resolved file content in your GCE VM declaration. For example:
data "template_file" "default" {
template = file("/path/to/your/file")
vars = {
address = "some value"
}
}
resource "google_compute_instance" "default" {
...
metadata_startup_script = data.template_file.default.rendered
}
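In Terraform 0.12 and later, the built-in templatefile() function can replace the template_file data source shown above. A sketch:
resource "google_compute_instance" "default" {
  ...
  metadata_startup_script = templatefile("/path/to/your/file", {
    address = "some value"
  })
}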
References:
https://www.terraform.io/docs/providers/google/r/compute_instance.html
https://www.terraform.io/docs/configuration-0-11/interpolation.html#file-path-
https://www.terraform.io/docs/providers/template/d/file.html
| Terraform | 57,682,483 | 18 |
What I need is terraform analog for CloudFormation's DeletionPolicy: Retain.
The resource should be left as is during terraform destroy, that's all.
prevent_destroy does not fit because the whole environment going to be deleted during terraform destroy
ignore_changes does not fit because there's no parameter's change.
How can I do it?
| You could break the destroy down into a set of tasks
Use terraform state rm, to remove resources/modules you want to retain, from your state. Now they are no longer tracked by terraform.
Remove these resources/modules, from your .tf files
Run terraform plan. You should see that there are no changes to be applied. This is to ensure that the selected resources have been safely removed from your terraform state files and terraform code.
Run terraform destroy. This should destroy all other resources.
| Terraform | 56,884,305 | 18 |
In an existing Terraform directory:
~ terraform version
Terraform v0.11.11
+ provider.aws v1.51.0
If I setup a new Terraform directory:
~ terraform version
Terraform v0.11.11
+ provider.aws v1.55.0
How do I upgrade my provider.aws? If I set version = "~> 1.55.0" in the provider "aws" in my .tf file, I get an error:
* provider.aws: no suitable version installed
version requirements: "~> 1.55.0"
versions installed: "1.51.0"
I expected to find a terraform update command or something similar. But I can't find that.
Am I not supposed to upgrade the provider? Do I need to delete state, rerun init and then refresh? Or is there a better way?
| terraform init -upgrade
Use the terraform init -upgrade command to upgrade to the latest acceptable version of each provider.
Before Upgrade
ubuntu@staging-docker:~/terraform$ terraform -version
Terraform v0.12.8
+ provider.aws v2.16.0
+ provider.template v2.1.2
Command to upgrade
ubuntu@staging-docker:~/terraform$ terraform init -upgrade
Upgrading modules...
- asg in asg
- ecs in ecs
- lambda in lambda
- lt in lt
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.27.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.27"
* provider.template: version = "~> 2.1"
After Upgrade
ubuntu@staging-docker:~/terraform$ terraform version
Terraform v0.12.8
+ provider.aws v2.27.0
+ provider.template v2.1.2
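In Terraform 0.13 and later, the usual place for these version constraints is a required_providers block rather than the provider block itself, for example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.27"
    }
  }
}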
| Terraform | 54,155,036 | 18 |
I have a terraform module that provisions resources primarily in eu-west-1. I need an ACM certificate to attach to a Cloudfront distribution. The certificate must be provisioned in us-east-1.
I have thus configured two providers:
provider "aws" {
version = "~> 1.0"
region = "eu-west-1"
}
provider "aws" {
version = "~> 1.0"
region = "us-east-1"
alias = "us-east-1"
}
In my module, I provision the certificate like so:
resource "aws_acm_certificate" "cert" {
provider = "aws.us-east-1"
domain_name = "${var.domain_name}"
validation_method = "DNS"
tags = "${var.tags}"
lifecycle {
create_before_destroy = true
}
}
Problem #1: I tried to import my existing ACM certificate using:
terraform import module.mymod.aws_acm_certificate.cert arn:aws:acm:us-east-1:xyz:certificate/uuid
This fails with: "Could not find certificate with id". Is terraform looking in the wrong region? I confirmed with the aws CLI that the certificate does indeed exist (e.g. no typos in the ARN).
Ok, so I figured I could just create new certificate. This does work, and I now have two certificates, but I then run into problem #2:
resource "aws_route53_record" "cert_validation" {
name = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_name}"
type = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_type}"
zone_id = "${data.aws_route53_zone.zone.id}"
records = ["${aws_acm_certificate.cert.domain_validation_options.0.resource_record_value}"]
ttl = 60
}
This attempts to set up DNS validation for ACM. The hosted zone exists in eu-west-1, so I'm expecting problems here. However, this still fails with "Could not find certificate ...", and I'm assuming terraform gets confused about regions. I tried adding provider = "aws.us-east-1" to this resource as well, but it still fails the same way.
So, no matter what I do, Terraform is unable to locate my certificate, even it has created it itself. Am I doing something wrong?
| Turns out my problem was with aws_acm_certificate_validation. By specifying the provider in the same region as the certificate, it was all resolved.
resource "aws_acm_certificate_validation" "cert" {
provider = "aws.us-east-1" # <== Add this
certificate_arn = "${aws_acm_certificate.cert.arn}"
validation_record_fqdns = ["${aws_route53_record.cert_validation.fqdn}"]
}
| Terraform | 51,988,417 | 18 |
When creating an AWS Lambda Function with terraform 0.9.3, I'm failing to make it join my selected VPC.
This is how my function looks like:
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${var.s3_bucket}"
s3_key = "${var.s3_key}"
function_name = "${var.function_name}"
role = "${var.role_arn}"
handler = "${var.handler}"
runtime = "${var.runtime}"
timeout = "30"
memory_size = 256
publish = true
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${var.security_group_ids}"]
}
}
The policy I'm using for the role is
data "aws_iam_policy_document" "lambda-policy_policy_document" {
statement {
effect = "Allow"
actions = [
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface"
]
resources = ["*"]
}
}
The resources are created just fine, if I try to add the VPC and the subnets via the AWS console it all works out.
Update (creation plan):
module.******.aws_lambda_function.lambda_function
arn: "<computed>"
environment.#: "1"
environment.0.variables.%: "1"
environment.0.variables.environment: "******"
function_name: "******"
handler: "******"
last_modified: "<computed>"
memory_size: "256"
publish: "true"
qualified_arn: "<computed>"
role: "******"
runtime: "******"
s3_bucket: "******"
s3_key: "******"
source_code_hash: "<computed>"
timeout: "30"
version: "<computed>"
vpc_config.#: "1"
vpc_config.0.vpc_id: "<computed>"
Though, if I run terraform plan again, the VPC config is always changed.
vpc_config.#: "0" => "1" (forces new resource)
| I think the value of subnet_ids is something like "subnet-xxxxx,subnet-yyyyy,subnet-zzzzz" and it is taken as a single subnet instead of a list. You can fix this problem like this:
vpc_config {
subnet_ids = ["${split(",", var.subnet_ids)}"]
security_group_ids = ["${var.security_group_ids}"]
}
| Terraform | 43,590,164 | 18 |
I am using Terraform v0.12.26 and set up an aws_alb_target_group as:
resource "aws_alb_target_group" "my-group" {
count = "${length(local.target_groups)}"
name = "${var.namespace}-my-group-${
element(local.target_groups, count.index)
}"
port = 8081
protocol = "HTTP"
vpc_id = var.vpc_id
health_check {
healthy_threshold = var.health_check_healthy_threshold
unhealthy_threshold = var.health_check_unhealthy_threshold
timeout = var.health_check_timeout
interval = var.health_check_interval
path = var.path
}
tags = {
Name = var.namespace
}
lifecycle {
create_before_destroy = true
}
}
The locals look like:
locals {
target_groups = [
"green",
"blue",
]
}
When I run terraform apply it returns the following error:
Error: Missing resource instance key
on ../../modules/aws_alb/outputs.tf line 3, in output "target_groups_arn":
3: aws_alb_target_group.http.arn,
Because aws_alb_target_group.http has "count" set, its attributes must be
accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_alb_target_group.http[count.index]
I followed this implementation
Any idea how to fix it?
Output
output "target_groups_arn" {
value = [
aws_alb_target_group.http.arn,
]
}
| Since aws_alb_target_group.http is a counted resource you'll need to reference specific instances by index or all of them as a list with [*] (aka Splat Expressions) as follows:
output "target_groups_arn" {
value = aws_alb_target_group.http[*].arn
}
The target_groups_arn output will be a list of the TG ARNs.
| Terraform | 62,433,708 | 17 |
I have a provisioning pipeline that incorporates Terraform Cloud, and our leadership is asking us to use Terragrunt to improve Terraform code quality.
Terragrunt is a great tool for this, but I haven't see any evidence that anyone has successfully used it on Terraform Cloud.
Can anyone address this? Please only answer if you either
have done this yourself, or
can provide documentation that this is feasible, or
provide documentation that it is not a good idea or
is not possible with Terraform Cloud.
| Terragrunt expects you to run terragrunt commands, and under the hood, it runs terraform commands, passing along TF_VAR_* environment variables. TFC also runs terraform commands directly. Therefore, you cannot run Terragrunt within TFC - it won't execute the terragrunt binary, only the terraform binary.
However, you can use Terragrunt from the CLI to execute Terraform runs on TFC using a remote backend. In this mode, you run Terragrunt commands as usual, and when Terraform is called, it actually executes within TFC. You can review the runs in the TFC UI, the state is stored in TFC, etc. However, due to the limitation described above, you cannot actually run Terragrunt from the UI.
To set this up, first you need to get an API token and configure the CLI with a credentials block in .terraformrc.
Next, you'll need to generate a backend block:
generate "remote_state" {
path = "backend.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
backend "remote" {
hostname = "app.terraform.io" # Change this to your hostname for TFE
organization = "your-tfc-organization"
workspaces {
name = "your-workspace"
}
}
}
EOF
}
This code generates a file called backend.tf alongside your Terraform module. This instructs Terraform to use TFC as a remote backend. It will use workspace called your-workspace. If this workspace doesn't exist, TFC will create it automatically using implict workspace creation. You'll have one workspace for each module you're calling in Terragrunt.
TFC does not support the TF_VAR_* environment variables that "stock" Terraform supports. Therefore, the Terragrunt inputs block, which is the standard way that Terragrunt passes variables to Terraform, does not work.
Instead, you can create a *.auto.tfvars.json file. You can generate this file in Terragrunt as well:
generate "tfvars" {
path = "terragrunt.auto.tfvars.json"
if_exists = "overwrite"
disable_signature = true
contents = jsonencode({name = "your-name"})
}
All the variables required for a module should be passed as JSON to the the contents attribute above. A more flexible pattern is to use a locals block to set up the variables, then just pass those in within the content block. JSON is preferred to avoid type issues.
A final wrinkle is that when the workspace is created automatically, it won't have the API credentials (e.g. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) that are needed to interface with the cloud provider. You can define this in the provider configuration by creating a provider.tf file, but then the credentials are static and in plain text. Not good.
Instead, you can either set the environment variables in each workspace manually, or you can use the tfe_workspace and tfe_variable resources to create them with Terraform in advance. The latter method is recommended since it's programmatic, making it much easier to update if you need to rotate your credentials.
In both cases you'll need to have a workspace for each module called by Terragrunt.
See also: this blog post on the topic and this content on integration with Terragrunt.
| Terraform | 60,062,705 | 17 |
I have a terraform project I am working on. In it, I want a file to contain many variables. I want these variables to be accessible from any module of the project. I have looked in the docs and on a udemy course but still don't see how to do this. How does one do this in terraform? Thanks!
| I don't think this is possible. There are several discussions about this at Github, but this is not something the Hashicorp team wants.
In general we're against the particular solution of Global Variables, since it makes the input -> resources -> output flow of Modules less explicit, and explicitness is a core design goal.
I know, we have to repeat a lot of variables between different modules
| Terraform | 59,584,420 | 17 |
This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform / Terragrunt, and I find the workflow with obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials, and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field shows I (apparently) must generate a service account, and keep a JSON somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively I can also use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using a gcloud command.
So why does the Terraform provider require a JSON file of a service account? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage
failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error:
asn1: syntax error: sequence truncated
|
if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
Read here for more on the topic of service accounts vs user accounts. For what it's worth, Terraform docs explicitly advice against using application-default login:
This approach isn't recommended- some APIs are not compatible with credentials obtained through gcloud
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
| Terraform | 57,453,468 | 17 |
I want to allow roles within an account that have a shared prefix to be able to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc, and may create more of these roles in the future. We want all roles in an account that begin with RolePrefix to be able to access the S3 bucket, without having to change the policy document in the future.
My terraform for bucket policy document is as below:
data "aws_iam_policy_document" "bucket_policy_document" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
principals = {
type = "AWS"
identifiers = ["arn:aws:iam::111122223333:role/RolePrefix*"]
}
resources = ["${aws_s3_bucket.bucket.arn}/*"]
}
}
This gives me the following error:
Error putting S3 policy: MalformedPolicy: Invalid principal in policy.
Is it possible to achieve this functionality in another way?
| You cannot use a wildcard along with an ARN in the IAM principal field. The only wildcard you're allowed to use is a bare "*".
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
When you specify users in a Principal element, you cannot use a wildcard (*) to mean "all users". Principals must always name a specific user or users.
Workaround:
Keep "Principal":{"AWS":"*"} and create a condition based on ARNLike etc as they accept user ARN with wildcard in condition.
Example:
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
| Terraform | 56,678,539 | 17 |
When defining the aws provider in terraform,
provider "aws" {
access_key = "<AWS_ACCESS_KEY>"
secret_key = "<AWS_SECRET_KEY>"
region = "<AWS_REGION>"
}
I'd like to be able to just use the already defined system variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Is there any way to have the tf files read environment variables?
doing something like,
provider "aws" {
access_key = env.AWS_ACCESS_KEY_ID
secret_key = env.AWS_SECRET_KEY_ID
region = env.AWS_REGION
}
| Yes, you can read environment variables in Terraform. There is a very specific way that this has to be done: you will need to make the environment variable a variable in Terraform.
For example, if I want to pass in a super_secret_variable to Terraform, I will need to create a variable for it in my Terraform file.
variable "super_secret_variable" {
type = "string
}
Then based on convention I will have to prefix my environment variable with TF_VAR_ like this:
TF_VAR_super_secret_variable
Then Terraform will automatically detect it and use it. Terraform processes variables in a specific order: the -var option, the -var-file option, environment variables, then default values if defined in your tf file.
Alternative you can pass environment variables in through the CLI to set variables in terraform like so.
> terraform apply -var super_secret_variable=$super_secret_variable
This doesn't require that you prefix it so if they are something you can't change that may be your best course of action.
You can read more here in the docs.
| Terraform | 53,330,060 | 17 |
The VPC I'm working on has 3 logical tiers: Web, App and DB. For each tier there is one subnet in each availability zone. Total of 6 subnets in the region I'm using.
I'm trying to create EC2 instances using a module and the count parameter but I don't know how to tell terraform to use the two subnets of the App tier. An additional constraint I have is to use static IP addresses (or a way to have a deterministic private name)
I'm playing around with the resource
resource "aws_instance" "app_server" {
...
count = "${var.app_servers_count}"
# Not all at the same time, though!
availability_zone = ...
subnet_id = ...
private_ip = ...
}
Things I've tried/thought so far:
Use data "aws_subnet" "all_app_subnets" {...}, filter by name, get all the subnets that match and use them as a list. But aws_subnet cannot return a list;
Use data "aws_availability_zones" {...} to find all the zones. But I still have the problem of assigning the correct subnet;
Use data "aws_subnet_ids" {...} which looks like the best option. But apparently it doesn't have a filter option to match the networks namel
Pass the subnets IDs as list of strings to the module. But I don't want to hard code the IDs, it's not automation;
Hard code the subnets as data "aws_subnet" "app_subnet_1" {...}, data "aws_subnet" "app_subnet_2" {...} but then I have to use separate sets of variables for each subnet which I don't like;
Get information for each subnet like in the point above but then create a map to access it as a list. But it's not possibile to use interpolation in variables definition;
Not using modules and hard-code each instance for each environment. Mmmm... really?
I really ran out of ideas. It seems that nobody has to deploy instances in specific subnetworks and keep a good degree of abstraction. I see only examples where subnetworks are not specified or where people just use default values for everything. Is this really something so unusual?
Thanks in advance to everyone.
| It is possible to evenly distribute instances across multiple zones using modulo.
variable "zone" {
description = "for single zone deployment"
default = "europe-west4-b"
}
variable "zones" {
description = "for multi zone deployment"
default = ["europe-west4-b", "europe-west4-c"]
}
resource "google_compute_instance" "default" {
count = "${var.role.count}"
...
zone = "${var.zone != "" ? var.zone: var.zones[ count.index % length(var.zones) ]}"
...
}
This distribution mechanism allow to distribute nodes evenly across zones.
E.g. zones = [A,B] - instance-1 will be in A, instance-2 will be in B, instance-3 will be in A again.
By adding zone C to zones will shift instance-3 to C.
| Terraform | 46,041,396 | 17 |
In the Terraform docs it shows how to use a template. Is there any way to log this rendered output to the console?
https://www.terraform.io/docs/configuration/interpolation.html#templates
data "template_file" "example" {
template = "${hello} ${world}!"
vars {
hello = "goodnight"
world = "moon"
}
}
output "rendered" {
value = "${template_file.example.rendered}"
}
| You need to run terraform apply then terraform output rendered
$ terraform apply
template_file.example: Creating...
rendered: "" => "<computed>"
template: "" => "${hello} ${world}!"
vars.#: "" => "2"
vars.hello: "" => "goodnight"
vars.world: "" => "moon"
template_file.example: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
Outputs:
rendered = goodnight moon!
$ terraform output rendered
goodnight moon!
| Terraform | 37,887,888 | 17 |
I have multiple tasks in a role as follows. I do not want to create another yml file to handle this task. I already have an include for the web servers, but a couple of our Perl servers require some web packages to be installed.
- name: Install Perl Modules
command: <command>
with_dict: perl_modules
- name: Install PHP Modules
command: <command>
with_dict: php_modules
when: <Install php modules only if hostname contains the word "batch">
Host inventory file
[webs]
web01
web02
web03
[perl]
perl01
perl02
perl03
perl-batch01
perl-batch02
| Below should do the trick:
- name: Install PHP Modules
command: <command>
with_dict: php_modules
when: "'batch' in inventory_hostname"
Note you'll have a couple of skipped hosts during playbook run.
inventory_hostname is one of Ansible's "magic" variables:
Additionally, inventory_hostname is the name of the hostname as
configured in Ansible’s inventory host file. This can be useful for
when you don’t want to rely on the discovered hostname
ansible_hostname or for other mysterious reasons. If you have a long
FQDN, inventory_hostname_short also contains the part up to the first
period, without the rest of the domain.
Source: Ansible Docs - Magic variables
| Ansible | 30,533,372 | 35 |
In my local.yml I'm able to run the playbook and reference variables within group_vars/all; however, I'm not able to access variables within group_vars/phl-stage. Let's assume the following.
ansible-playbook -i phl-stage site.yml
I have a variable, let's call it deploy_path that's different for each environment. I place the variable within group_vars/< environment name >. If I include the file group_vars/phl-stage within vars_files it works but I would've thought the group file would be automatically loaded?
site.yml
- include: local.yml
local.yml
- hosts: 127.0.0.1
connection: local
vars_files:
- "group_vars/perlservers"
- "group_vars/deploy_list"
group_vars/phl-stage
[webservers]
phl-web1
phl-web2
[perlservers]
phl-perl1
phl-perl2
[phl-stage:children]
webservers
perlservers
Directory structure:
group_vars
all
phl-stage
phl-prod
site.yml
local.yml
| You're confusing the structure a bit.
The group_vars directory contains files for each hostgroup defined in your inventory file. The files define variables that member hosts can use.
The inventory file doesn't reside in the group_vars dir, it should be outside.
Only hosts that are members of a group can use its variables, so unless you put 127.0.0.1 in a group, it won't be able to use any group_vars beside those defined in group_vars/all.
What you want is this dir structure:
group_vars/
all
perlservers
phl-stage
hosts
site.yml
local.yml
Your hosts file should look like this, assuming 127.0.0.1 is just a staging server and not perl or web server:
[webservers]
phl-web1
phl-web2
[perlservers]
phl-perl1
phl-perl2
[phl-stage]
127.0.0.1
[phl-stage:children]
webservers
perlservers
So you define which hosts belong to which group in the inventory, and then for each group you define variables in its group_vars file.
| Ansible | 23,767,765 | 35 |
In an Ansible playbook I need to run docker-compose commands. How can I do it? I need to run the command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
| Updated answer 02/2024:
docker_compose_v2 is out since community.docker v3.6.0.
You can copy docker-compose.yml and run Compose such as:
- name: copy Docker Compose files
copy:
src: files/{{ item }}
dest: /somewhere/yourproject/{{ item }}
loop:
- docker-compose.yml
- docker-compose.prod.yml
# use files parameter to use multiple docker-compose.yml files
# mind the _v2 suffix
- name: deploy Docker Compose stack
community.docker.docker_compose_v2:
project_src: /somewhere/yourproject
files:
- docker-compose.yml
- docker-compose.prod.yml
Old answer (06/2020) using docker_compose module, only compatible with docker-compose < 2.0.0:
You should copy your Docker Compose files and use docker_compose module such as:
- name: copy Docker Compose files
copy:
src: files/{{ item }}
dest: /somewhere/yourproject/{{ item }}
loop:
- docker-compose.yml
- docker-compose.prod.yml
# use files parameter to use multiple docker-compose.yml files
- name: deploy Docker Compose stack
community.docker.docker_compose:
project_src: /somewhere/yourproject
files:
- docker-compose.yml
- docker-compose.prod.yml
Edit 2023-08-22: as of today Compose v2 is not supported by Ansible, it only works with v1. There's ongoing work towards docker_compose_v2 module but it's not available yet. In the meantime you can use shell as per @Tatiana's answer
| Ansible | 62,452,039 | 34 |
I am running myserver in ubuntu:
+ sudo cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
I use ansible and when I run it I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on dd63315fad06's Python /usr/bin/python. Please read module documentation and install in the appropriate location, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named docker"}
when I run
python -c "import sys; print(sys.path)"
I see:
['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/dist-packages/pip-19.2.2-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/fasteners-0.15-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/monotonic-1.5-py2.7.egg', '/usr/lib/python2.7/dist-packages']
and python versions are as follows:
+ python --version
Python 2.7.12
+ python3 --version
Python 3.5.2
As far as I can see everything is fine, so I am not sure why I get
"Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on dd63315fad06's Python /usr/bin/python. Please read module documentation and install in the appropriate location, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named docker"
in ansible?
| It appears that you don't have the docker module installed.
You will need to install it via your system package manager (apt install python-docker, for example), or using pip (pip install docker).
If you have multiple Python versions, make sure that you've installed the docker module into the version that Ansible is using.
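If you want Ansible itself to take care of this prerequisite, a minimal sketch using the pip module could look like the following (assuming pip is present on the target; the executable choice is an assumption and should match the interpreter Ansible uses):
- name: Install the Docker SDK for Python required by the docker modules
  pip:
    name: docker
    executable: pip    # assumption: adjust to pip2/pip3 to match the /usr/bin/python Ansible runs with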
| Ansible | 59,384,708 | 34 |
I'm using Ansible to automate some configuration steps for my application VM, but I'm having difficulty inserting a new key-value pair into an existing JSON file on the remote host.
Say I have this json file:
{
"foo": "bar"
}
And I want to insert a new key value pair to make the file become:
{
"foo": "bar",
"hello": "world"
}
Since the JSON format is not line based, I'm excluding the lineinfile module from my options. Also, I would prefer not to use any external modules. Google keeps giving me examples of how to read a JSON file, but nothing about changing JSON values and writing them back to the file. I'd really appreciate your help!
| Since the file is in JSON format, you could import the file into a variable, append the extra key/value pairs you want, and then write it back to the filesystem.
Here is a way to do it:
---
- hosts: localhost
connection: local
gather_facts: false
vars:
tasks:
- name: load var from file
include_vars:
file: /tmp/var.json
name: imported_var
- debug:
var: imported_var
- name: append more key/values
set_fact:
imported_var: "{{ imported_var | default([]) | combine({ 'hello': 'world' }) }}"
- debug:
var: imported_var
- name: write var to file
copy:
content: "{{ imported_var | to_nice_json }}"
dest: /tmp/final.json
UPDATE:
As the OP updated, the code should work against a remote host; in this case we can't use include_vars or lookups. We can use the slurp module instead.
NEW code for remote hosts:
---
- hosts: greenhat
# connection: local
gather_facts: false
vars:
tasks:
- name: load var from file
slurp:
src: /tmp/var.json
register: imported_var
- debug:
msg: "{{ imported_var.content|b64decode|from_json }}"
- name: append more key/values
set_fact:
imported_var: "{{ imported_var.content|b64decode|from_json | default([]) | combine({ 'hello': 'world' }) }}"
- debug:
var: imported_var
- name: write var to file
copy:
content: "{{ imported_var | to_nice_json }}"
dest: /tmp/final.json
hope it helps
| Ansible | 50,796,341 | 34 |
I'm trying to get the number of hosts of a certain group.
Imagine an inventory file like this:
[maingroup]
server-[01:05]
Now in my playbook I would like to get the number of hosts that are part of maingroup which would be 5 in this case and store that in a variable which is supposed to be used in a template in one of the playbook's tasks.
At the moment I'm setting the variable manually which is far from ideal..
vars:
HOST_COUNT: 5
| vars:
HOST_COUNT: "{{ groups['maingroup'] | length }}"
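As an illustration, the value can then be used anywhere a variable is allowed, for example in a debug task or a template:
- debug:
    msg: "maingroup contains {{ HOST_COUNT }} hosts"
In a Jinja2 template you would write {{ HOST_COUNT }}, or use {{ groups['maingroup'] | length }} directly.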
| Ansible | 36,310,633 | 34 |
I have created a script to start/stop my application. Now I want to add it as a centos system service. First I created a task to create a link from my script to /etc/init.d/service_name as below.
---
- name: create startup link
file: src={{ cooltoo_service_script }} dest={{ cooltoo_service_init }} state=link
After creating the link, I want to add it as a system service. The command used to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module to do that instead of hardcoding the command in the playbook. I have looked at this page http://docs.ansible.com/ansible/service_module.html but it only shows how to manage a service, not how to create a new one.
| The below code snippet will create a service in CentOS 7.
Code
Tasks
/tasks/main.yml
- name: TeamCity | Create environment file
template: src=teamcity.env.j2 dest=/etc/sysconfig/teamcity
- name: TeamCity | Create Unit file
template: src=teamcity.service.j2 dest=/lib/systemd/system/teamcity.service mode=644
notify:
- reload systemctl
- name: TeamCity | Start teamcity
service: name=teamcity.service state=started enabled=yes
Templates
/templates/teamcity.service.j2
[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/teamcity
ExecStart={{teamcity.installation_path}}/bin/teamcity-server.sh start
ExecStop={{teamcity.installation_path}}/bin/teamcity-server.sh stop
User=teamcity
PIDFile={{teamcity.installation_path}}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{teamcity.installation_path}}/teamcity.pid"
[Install]
WantedBy=multi-user.target
/templates/teamcity.env.j2
TEAMCITY_DATA_PATH="{{ teamcity.data_path }}"
Handlers
/handlers/main.yml
- name: reload systemctl
command: systemctl daemon-reload
Reference :
Ansible playbook structure: http://docs.ansible.com/ansible/playbooks_intro.html
SystemCtl: https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units
| Ansible | 35,984,151 | 34 |
I have templates for configuration files stored in my project's repositories. What I would like to do is use Ansible's template module to create a configuration file using that template on the remote server, after the project has been cloned from the repository.
Looking at the documentation for the template module it appears that the src attribute only supports local files.
I wanted to avoid storing the configuration template with my Ansible playbook as it makes more sense for me to keep these project specific templates within the project repository.
Is there an alternative to the template module that I could use?
| You've got two options here if your template is going to be on the remote host.
Firstly, you can use the fetch module which works as pretty much the opposite to the copy module to bring the template back after cloning the repo on the remote host.
A playbook for this might look something like:
- name: clone repo on remote hosts
  git:
    repo: "{{ git_repo_src }}"
    dest: "{{ git_repo_dest }}"

- name: fetch template from single remote host
  run_once: true
  fetch:
    src: "{{ template_path }}/{{ template_file }}"
    dest: "/tmp/{{ template_file }}"
    flat: yes
    fail_on_missing: yes

- name: template remote hosts
  template:
    src: "/tmp/{{ template_file }}"
    dest: "{{ templated_file_dest }}"
    owner: "{{ templated_file_owner }}"
    group: "{{ templated_file_group }}"
    mode: "{{ templated_file_mode }}"
The fetch task uses run_once to make sure that it only bothers copying the template from the first host it runs against. Assuming all these hosts in your play are the getting the same repo then this should be fine but if you needed to make sure that it copied from a very specific host then you could combine it with delegate_to.
Alternatively you could just have Ansible clone the repo locally and use it directly with something like:
- name: clone repo on remote hosts
  git:
    repo: "{{ git_repo_src }}"
    dest: "{{ git_repo_dest }}"

- name: clone repo on the Ansible control host
  delegate_to: localhost
  run_once: true
  git:
    repo: "{{ git_repo_src }}"
    dest: "{{ git_repo_local_dest }}"

- name: template remote hosts
  template:
    src: "{{ template_local_src }}"
    dest: "{{ templated_file_dest }}"
    owner: "{{ templated_file_owner }}"
    group: "{{ templated_file_group }}"
    mode: "{{ templated_file_mode }}"
| Ansible | 33,163,204 | 34 |
I'm starting out with ansible and I'm looking for a way to create a boilerplate project on the server and on the local environment with ansible playbooks.
I want to use ansible templates locally to create some generic files.
But how would i take ansible to execute something locally?
I read something with local_action but i guess i did not get this right.
This is for the webbserver...but how do i take this and create some files locally?
- hosts: webservers
remote_user: someuser
- name: create some file
template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
| You can delegate tasks with the parameter delegate_to to any host you like, for example:
- name: create some file
template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
delegate_to: localhost
See Playbook Delegation in the docs.
For localhost you additionally need to ensure local connection in your inventory:
[local]
localhost ansible_connection=local
| Ansible | 31,383,693 | 34 |
I'm currently writing an Ansible play that follows this general format and is run via a cron job:
pre_tasks:
-Configuration / package installation
tasks:
-Work with installed packages
post_tasks:
-Cleanup / uninstall packages
The problem with the above is that sometimes a command in the tasks section fails, and when it does the post_tasks section doesn't run, leaving the system in a messy state. Is it possible to force the commands in post_tasks to run even if a failure or fatal error occurs?
My current approach is to apply ignore_errors: yes to everything under the tasks section, and then apply a when: conditional to each task to individually check if the prior command succeeded.
This solution seems like a hack, but it gets worse because even with ignore_errors: yes set, if a Fatal error is encountered for a task the entire play will still immediately fail, so I have to also run a cron'd bash script to manually check on things after reach play execution.
All I want is a guarantee that even if tasks fails, post_tasks will still run. I'm sure there is a way to do this without resorting to bash script wrappers.
| This feature became available in Ansible 2.0:
This is the documentation for the new stanza markers block, rescue, and always.
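Applied to the structure from the question, a minimal sketch looks like this (package and script names are placeholders); tasks in the always section run whether the tasks in block succeed or fail:
tasks:
  - block:
      - name: configuration / package installation
        apt:
          name: mypackage
          state: present
      - name: work with installed packages
        command: /usr/local/bin/do_work.sh
    always:
      - name: cleanup / uninstall packages
        apt:
          name: mypackage
          state: absent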
| Ansible | 23,875,377 | 34 |
I am trying to wget a file from a web server from within an Ansible playbook.
Here is the Ansible snippet:
---
- hosts: all
sudo: true
tasks:
- name: Prepare Install folder
sudo: true
action: shell sudo mkdir -p /tmp/my_install/mysql/ && cd /tmp/my_install/mysql/
- name: Download MySql
sudo: true
action: shell sudo wget http://{{ repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
Invoking it via:
ansible-playbook my_3rparties.yml -l vsrv644 --extra-vars "repo_host=vsrv656" -K -f 10
It fails with the following:
Cannot write to `MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar' (Permission denied).
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/usr2/ihazan/vufroria_3rparties.retry
vsrv644 : ok=2 changed=1 unreachable=0 failed=1
When I try the failing command via a regular remote SSH session, to mimic what Ansible would do, it doesn't work, as shown below:
-bash-4.1$ ssh ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget http://vsrv656/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar'
Enter passphrase for key '/usr2/ihazan/.ssh/id_rsa':
sudo: sorry, you must have a tty to run sudo
But I can solve it using -t as follows:
-bash-4.1$ ssh -t ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget http://vsrv656/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar'
Then it works.
Is there a way to set the -t (pseudo tty option) on ansible?
P.S: I could solve it by editing the sudoers file as others propose but that is a manual step I am trying to avoid.
| Don't use shell-module when there is specialized modules available. In your case:
Create directories with file-module:
- name: create project directory {{ common.project_dir }}
file: state=directory path={{ common.project_dir }}
Download files with get_url-module:
- name: download sources
get_url: url={{ opencv.url }} dest={{ common.project_dir }}/{{ opencv.file }}
Note the new module call syntax in the examples above.
If you have to use sudo with password remember to give --ask-sudo-pass when needed (see e.g. Remote Connection Information).
| Ansible | 22,939,775 | 34 |
I already know that if you have long conditionals with and between them you can use lists to split them on multiple lines.
Still, I am not aware of any solution for the case where you have OR between them.
Practical example from real life:
when: ansible_user_dir is not defined or ansible_python is not defined or ansible_processor_vcpus is not defined
This line is ugly and hard to read, and clearly would not fit a 79 column.
How can we rewrite it to make it easier to read?
| Use the YAML folding operator >
when: >
ansible_user_dir is not defined or
ansible_python is not defined or
ansible_processor_vcpus is not defined
As the ansible documentation states:
Values can span multiple lines using | or >. Spanning multiple lines using a Literal Block Scalar | will include the newlines and any trailing spaces. Using a Folded Block Scalar > will fold newlines to spaces; it’s used to make what would otherwise be a very long line easier to read and edit. In either case the indentation will be ignored.
Additional info can be found here:
https://yaml-multiline.info/
https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
http://yaml.org/spec/1.2/spec.html#id2760844
| Ansible | 53,098,493 | 33 |
Assuming the below tasks:
- shell: "some_script.sh"
register: "some_script_result"
- debug:
msg: "Output: {{ some_script_result.stdout_lines }}
I receive the below output:
"msg": "Output: [u'some_value',u'some_value2,u'some_value3]"
How do I get the output to print as?
"msg":
Output:
- some_value
- some_value2
- some_value3
Ansible version is 2.4.2.
| Try this option. You’ll love it.
There's a new YAML callback plugin introduced with Ansible 2.5 — meaning any machine running Ansible 2.5.0 or later can automatically start using this format without installing custom plugins.
To use it, edit your ansible.cfg file (either globally, in /etc/ansible/ansible.cfg, or locally in your playbook/project), and add the following lines under the [defaults] section:
# Use the YAML callback plugin.
stdout_callback = yaml
# Use the stdout_callback when running ad-hoc commands.
bin_ansible_callbacks = True
Now I can easily read through your output message
If you get the following error:
ERROR! Invalid callback for stdout specified: yaml
run
ansible-galaxy collection install community.general
| Ansible | 50,009,505 | 33 |
I've set up a box with a user david who has sudo privileges. I can ssh into the box and perform sudo operations like apt-get install. When I try to do the same thing using Ansible's "become privilege escalation", I get a permission denied error. So a simple playbook might look like this:
simple_playbook.yml:
---
- name: Testing...
hosts: all
become: true
become_user: david
become_method: sudo
tasks:
- name: Just want to install sqlite3 for example...
apt: name=sqlite3 state=present
I run this playbook with the following command:
ansible-playbook -i inventory simple_playbook.yml --ask-become-pass
This gives me a prompt for a password, which I give, and I get the following error (abbreviated):
fatal: [123.45.67.89]: FAILED! => {...
failed: E: Could not open lock file /var/lib/dpkg/lock - open (13:
Permission denied)\nE: Unable to lock the administration directory
(/var/lib/dpkg/), are you root?\n", ...}
Why am I getting permission denied?
Additional information
I'm running Ansible 2.1.1.0 and am targeting a Ubuntu 16.04 box. If I use remote_user and sudo options as per Ansible < v1.9, it works fine, like this:
remote_user: david
sudo: yes
Update
The local and remote usernames are the same. To get this working, I just needed to specify become: yes (see @techraf's answer):
|
Why am I getting permission denied?
Because APT requires root permissions (see the error: are you root?) and you are running the tasks as david.
Per these settings:
become: true
become_user: david
become_method: sudo
Ansible becomes david using the sudo method. It basically runs its Python script with sudo -u david in front.
the user 'david' on the remote box has sudo privileges.
It means david can execute commands (some or all) through the sudo executable, which changes the effective user for the child process (the command). If no target user is given, the command runs as the root account.
Compare the results of these two commands:
$ sudo whoami
root
$ sudo -u david whoami
david
Back to the APT problem, you (from CLI) as well as Ansible (connecting with SSH using your account) need to run:
sudo apt-get install sqlite3
not:
sudo -u david apt-get install sqlite3
which would fail with exactly the message Ansible displayed.
The following playbook will escalate by default to the root user:
---
- name: Testing...
hosts: all
become: true
tasks:
- name: Just want to install sqlite3 for example...
apt: name=sqlite3 state=present
| Ansible | 40,983,674 | 33 |
For a role I'm developing I need to verify that the kernel version is greater than a particular version.
I've found the ansible_kernel fact, but is there an easy way to compare this to other versions? I thought I might manually explode the version string on the dots (.) and compare the numbers, but I can't even find a friendly filter to explode the version string out, so I'm at a loss.
| There is a version test for it:
{{ ansible_distribution_version is version('12.04', '>=') }}
{{ sample_version_var is version('1.0', operator='lt', strict=True) }}
Prior to Ansible 2.5, all tests were also provided as filters, so, the same was achievable with a filter, named version_compare, but in current versions of Ansible, the test was renamed and, overall, the tests and filters have been clearly disambiguated
{{ ansible_distribution_version | version_compare('12.04', '>=') }}
{{ sample_version_var | version_compare('1.0', operator='lt', strict=True) }}
| Ansible | 39,779,802 | 33 |
I have Ansible role, for example
---
- name: Deploy app1
include: deploy-app1.yml
when: 'deploy_project == "{{app1}}"'
- name: Deploy app2
include: deploy-app2.yml
when: 'deploy_project == "{{app2}}"'
But I deploy only one app per role call. When I deploy several apps, I call the role several times. Every time there is a lot of skipped-task output (from the tasks which do not pass their condition), which I do not want to see. How can I avoid it?
| I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file.
display_skipped_hosts = false
Note. It will still output the name of the task although it will not display "skipped" anymore.
UPDATE: by the way you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file.
ansible will read ANSIBLE_CONFIG,
ansible.cfg in the current working directory, .ansible.cfg in
the home directory or /etc/ansible/ansible.cfg, whichever it
finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
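For reference, the option belongs under the [defaults] section, so a minimal ansible.cfg would contain:
[defaults]
display_skipped_hosts = false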
Let me know how you go
| Ansible | 39,189,549 | 33 |
Several of my playbooks have sub-plays structure like this:
- hosts: sites
user: root
tags:
- configuration
tasks:
(...)
- hosts: sites
user: root
tags:
- db
tasks:
(...)
- hosts: sites
user: "{{ site_vars.user }}"
tags:
- app
tasks:
(...)
In Ansible 1.x both admins and developers were able to use such playbook. Admins could run it with all the tags (root and user access), while developers had access only to the last tag with tasks at user access level. When developers run this playbook with the app tag, gathering facts was skipped for the first two tags. Now however, in Ansible 2.1, it is not being skipped, which causes failure for users without root access.
Is there a mechanism or an easy modification to fix this behaviour? Is there a new approach which should be applied for such cases now?
| There is an easy mod – turn off facts gathering and call setup explicitly:
- hosts: sites
user: root
tags:
- configuration
gather_facts: no
tasks:
- setup:
(...)
| Ansible | 38,308,871 | 33 |
It seems to me that both tools are used to easily install and automatically configure applications.
However, I've limitedly used Docker and haven't used Ansible at all. So I'm a little confused.
Whenever I search for a comparison between these two technologies, I find details about how to use these technologies in combination.
| There are many reasons most articles talk about using them together.
Think of Ansible as a way of installing and configuring a machine where you can go back and tweak any individual step of that install and configuration in the future. You can then scale that concept out to many machines as you are able to manage.
A key difference where Ansible has an advantage is that it can not just manage the internals of the machine, but also manage the other systems such as networking, DNS, monitoring etc that surround the machine.
Building out many machines via Ansible takes pretty much as much time to do 50 machines as it does to make 1, as all 50 will be created step by step. If you are running a rolling deploy across multiple environments its this build step by step that takes up time.
Now think of Docker as having built one of those individual machines - installed and configured and ready to be deployed wherever a docker system is available (which is pretty much everywhere these days). The drawback here is you don't get to manage all the other aspects needed around making docker containers actually work, and tweaking them long term isn't as much fun as it sounds if you haven't automated the configuration (hence Ansible helps here).
Scaling from 1 to 50 Docker machines once you have already created the initial image is blindingly fast in comparison to the step by step approach Ansible takes, and this is most obvious during a rolling deploy of many machines in smaller groups.
Each has its drawbacks in either ability or speed. Combine them both however and it can be pretty awesome. As no doubt with most of the articles you have already read, I would recommend looking at using Ansible to create (and update) your base Docker container(s) and then using Ansible to manage the rollout of whatever scale of containers you need to satisfy your applications usage.
| Ansible | 30,550,378 | 33 |
I'm working on a project, and we use Ansible to create and deploy a cluster of servers.
One of the tasks that I have to implement is to copy a local file to the remote host, only if that file exists locally.
Right now I'm trying to solve this problem using this:
- hosts: 127.0.0.1
connection: local
tasks:
- name: copy local filetocopy.zip to remote if exists
- shell: if [[ -f "../filetocopy.zip" ]]; then /bin/true; else /bin/false; fi;
register: result
- copy: src=../filetocopy.zip dest=/tmp/filetocopy.zip
when: result|success
But this is failing with the following message:
ERROR: 'action' or 'local_action' attribute missing in task "copy local filetocopy.zip to remote if exists"
I've tried to create this if with command task.
I've already tried to create this task with a local_action, but I couldn't make it work.
All the samples that I've found don't use shell in a local_action; there are only samples with command, and none of them have anything other than a single command.
Is there a way to do this task using ansible?
| A more comprehensive answer:
If you want to check the existence of a local file before performing some task, here is the comprehensive snippet:
- name: get file stat to be able to perform a check in the following task
local_action: stat path=/path/to/file
register: file
- name: copy file if it exists
copy: src=/path/to/file dest=/destination/path
when: file.stat.exists
If you want to check the existence of a remote file before performing some task, this is the way to go:
- name: get file stat to be able to perform check in the following task
stat: path=/path/to/file
register: file
- name: copy file if it exists
copy: src=/path/to/file dest=/destination/path
when: file.stat.exists
| Ansible | 28,855,236 | 33 |
Recently I have been using Ansible for a wide variety of automation. However, during testing of automatic tomcat6 restarts on specific webserver boxes, I came across a new error that I can't seem to fix.
FAILED => failed to transfer file to /command
Looking at the documentation, this is supposedly because sftp-server is not in the sshd_config, however it is there.
Below is the command I am running to my webserver hosts.
ansible all -a "/usr/bin/sudo /etc/init.d/tomcat6 restart" -u user --ask-pass --sudo --ask-sudo-pass
There is a .ansible hidden folder on each of the boxes, so I know it's making it to them, but it's not executing the command.
Running -vvvv gives me this after:
EXEC ['sshpass', '-d10', 'ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'User=user', '-o', 'ConnectTimeout=10', '10.10.10.103', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && echo $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689'"]
then
10.10.10.103 | FAILED => failed to transfer file to /home/user/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689/command
Any help on this issue is much appreciated.
Thanks,
Edit:
To increase Googleability, here is another manifestation of the error that the chosen answer fixes.
Running the command ansible-playbook -i inventory hello_world.yml gives this warning for every host.
[WARNING]: sftp transfer mechanism failed on [host.example.com]. Use ANSIBLE_DEBUG=1 to see detailed information
And when you rerun the command as ANSIBLE_DEBUG=1 ansible-playbook -i inventory hello_world.yml the only extra information you get is:
>>>sftp> put /var/folders/nc/htqkfk6j6h70hlxrr43rm4h00000gn/T/tmpxEWCe5 /home/ubuntu/.ansible/tmp/ansible-tmp-1487430536.22-28138635532013/command.py
| do you have sftp subsystem enabled in sshd on the remote server?
You can check it in /etc/sshd/sshd_config, the config file name depends on your distribution…anyway, look there for:
Subsystem sftp /usr/lib/ssh/sftp-server
If this line is commented-out, the sftp is disabled.
To fix it, you can either enable sftp, or change Ansible configuration.
I prefer the Ansible configuration change, take a look at ansible.cfg and add/change:
[ssh_connection]
scp_if_ssh=True
| Ansible | 23,899,028 | 33 |
For a backup I need to iterate over all hosts in my inventory file to be sure that the backup destination exists. My structure looks like
/var/backups/
example.com/
sub.example.com/
So I need a (built-in) variable/method to list all hosts from inventory file, not only a single group.
For groups its look like this
- name: ensure backup directories are present
action: file path=/var/backups/{{ item }} state=directory
owner=backup group=backup mode=0700
with_items: groups['group_name']
tags:
- backup
| That's the solution:
with_items: groups['all']
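Applied to the backup task from the question, that gives (a sketch keeping the original key=value style):
- name: ensure backup directories are present
  file: path=/var/backups/{{ item }} state=directory owner=backup group=backup mode=0700
  with_items: groups['all']
  tags:
    - backup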
| Ansible | 20,828,703 | 33 |
In Ansible, what is the difference between the service and the systemd modules? The service module seems to include the systemd module so what's the point of having systemd by itself?
| The module service is a generic one. According to the Ansible documentation :
Supported init systems include BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart.
The module systemd is available only from Ansible 2.2 and is dedicated to systemd.
According to the developers of Ansible :
we are moving away from having everything in a monolithic 'service' module and splitting into specific modules, following the same model the 'package' module has.
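As a small illustration (nginx here is only a placeholder service name), both modules are called the same way, but systemd exposes systemd-specific options such as daemon_reload:
- name: restart a service with the generic module
  service:
    name: nginx
    state: restarted

- name: restart a service and reload unit files with the systemd module
  systemd:
    name: nginx
    state: restarted
    daemon_reload: yes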
| Ansible | 43,974,099 | 32 |
OVERVIEW
I'd like to have reliable django deployments and I think I'm not following the best practices here. Till now I've been using fabric as a configuration management tool in order to deploy my django sites but I'm not sure that's the best way to go.
In the high performance django book there is a warning which says:
Fabric is not a configuration management tool. Trying to use it as one
will ultimately cause you heartache and pain. Fabric is an excellent
choice for executing scripts in one or more remote systems, but that's
only a small piece of the puzzle. Don't reinvent the wheel by building
your own configuration management system on top of fabric
So, I've decided I want to learn ansible.
QUESTIONS
Does it make sense using both fabric and ansible tools somehow?
Is it possible to use ansible from my windows development environment to deploy to production centos(6/7) servers?
There is this nice site https://galaxy.ansible.com/ which contains a lot of playbooks, any good recommendation to deploy django on centos servers?
|
Does it make sense using both fabric and ansible tools somehow?
Yes. All your logic should live in Ansible and you can use Fabric as a lightweight wrapper around it.
fab deploy
is easier to remember than, e.g.
ansible-playbook -v --inventory=production --tags=app site.yml
Is it possible to use ansible from my windows development environment to deploy to production centos(6/7) servers?
Sounds like you can't. Alternatively, If you use Fabric, it could copy your ansible playbooks up to a server (or pull directly from git) and run ansible from there.
| Ansible | 39,370,364 | 32 |
I am working on a role where I want one task to be run at the end of the tasks file if and only if any of the previous tasks in that task file have changed.
For example, I have:
- name: install package
apt: name=mypackage state=latest
- name: modify a file
lineinfile: do stuff
- name: modify a second file
lineinfile: other stuff
- name: restart if anything changed
service: name=mypackage state=restarted
... and I want to only restart the service if an update has been installed or any of the config files have been changed.
How can I do this?
| Best practice here is to use handlers.
In your role create a file handlers/main.yml with the content:
- name: restart mypackage
service: name=mypackage state=restarted
Then notify this handler from all tasks. The handler will be notified only if a task reports a changed state (=yellow output)
- name: install package
apt: name=mypackage state=latest
notify: restart mypackage
- name: modify a file
lineinfile: do stuff
notify: restart mypackage
- name: modify a second file
lineinfile: other stuff
notify: restart mypackage
Handlers will be executed at the very end of your play. If you have other roles involved which depend on the restarted mypackage service, you might want to flush all handlers at the end of the role:
- meta: flush_handlers
Additionally have a look at the force_handlers setting. In case an error happens in any other role processed after your mypackge role, the handler would not be triggered. Set force_handlers=True in your ansible.cfg to still force your handlers to be executed after errors. This is a very important topic since when you run your playbook the next time the files will not be changed and therefore the handler not get notified, hence your service never restarted.
You can also do this without handlers but this is very ugly. You need to register the output of every single task so you can later check the state in the condition applied to the restart task.
- name: install package
apt: name=mypackage state=latest
register: mypackage_1
- name: modify a file
lineinfile: do stuff
register: mypackage_2
- name: modify a second file
lineinfile: other stuff
register: mypackage_3
- name: restart if anything changed
service: name=mypackage state=restarted
when: mypackage_1 is changed or mypackage_2 is changed or mypackage_3 is changed
Prior to Ansible 2.9 it was also possible to write mypackage_1 | changed.
See also the answer to Ansible Handler notify vs register.
| Ansible | 38,144,598 | 32 |
Is there a way to evaluate a relative path in Ansible?
tasks:
- name: Run docker containers
include: tasks/dockerup.yml src_code='..'
Essentially I am interested in passing the source code path to my task. It happens that the source code is the parent path of {{ansible_inventory}} but there doesn't seem to be anything to accomplish that out of the box.
---- further info ----
Project structure:
myproj
app
deploy
deploy.yml
So I am trying to access app from deploy.yml.
| You can use the dirname filter:
{{ inventory_dir | dirname }}
For reference, see Managing file names and path names in the docs.
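Applied to the include from the question, a sketch would be (variable and file names taken from the question):
tasks:
  - name: Run docker containers
    include: tasks/dockerup.yml src_code="{{ inventory_dir | dirname }}"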
| Ansible | 35,271,368 | 32 |
Trying to register an ec2 instance in AWS with Ansible's ec2_ami module, and using current date/time as version (we'll end up making a lot of AMIs in the future).
This is what I have:
- name: Create new AMI
hosts: localhost
connection: local
gather_facts: false
vars:
tasks:
- include_vars: ami_vars.yml
- debug: var=ansible_date_time
- name: Register ec2 instance as AMI
ec2_ami: aws_access_key={{ ec2_access_key }}
aws_secret_key={{ ec2_secret_key }}
instance_id={{ temp_instance.instance_ids[0] }}
region={{ region }}
wait=yes
name={{ ami_name }}
with_items: temp_instance
register: new_ami
From ami_vars.yml:
ami_version: "{{ ansible_date_time.iso8601 }}"
ami_name: ami_test_{{ ami_version }}
When I run the full playbook, I get this error message:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! ERROR! ERROR! 'ansible_date_time' is undefined"}
However, when run the debug command separately, from a separate playbook, it works fine:
- name: Test date-time lookup
hosts: localhost
connection: local
tasks:
- include_vars: ami_vars.yml
- debug: msg="ami version is {{ ami_version }}"
- debug: msg="ami name is {{ ami_name }}"
Result:
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "ami version is 2016-02-05T19:32:24Z"
}
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "ami name is ami_test_2016-02-05T19:32:24Z"
}
Any idea what's going on?
| Remove this:
gather_facts: false
ansible_date_time is part of the facts and you are not gathering it.
| Ansible | 35,232,088 | 32 |
I am running a custom command because I haven't found a working module doing what I need, and I want to adjust the changed flag to reflect the actual behaviour:
- name: Remove unused images
shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...'
register: command_result
changed_when: "command_result.stdout == 'Ignoring failure...'"
- debug: var="1 {{ command_result.stdout }}"
when: "command_result.stdout != 'Ignoring failure...'"
- debug: var="2 {{ command_result.stdout }}"
when: "command_result.stdout == 'Ignoring failure...'"
(I know the shell command is ugly and could be improved by a more complex script but I don't want to for now)
Running this task on an host where no Docker image can be removed gives the following output:
TASK: [utils.dockercleaner | Remove unused images] ****************************
changed: [cloud-host] => {"changed": true, "cmd": "[ -n \"$(docker images -q -f dangling=true)\" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...", "delta": "0:00:00.064451", "end": "2015-07-30 18:37:25.620135", "rc": 0, "start": "2015-07-30 18:37:25.555684", "stderr": "", "stdout": "Ignoring failure...", "stdout_lines": ["Ignoring failure..."], "warnings": []}
TASK: [utils.dockercleaner | debug var="DIFFERENT {{ command_result.stdout }}"] ***
skipping: [cloud-host]
TASK: [utils.dockercleaner | debug var="EQUAL {{ command_result.stdout }}"] ***
ok: [cloud-host] => {
"var": {
"EQUAL Ignoring failure...": "EQUAL Ignoring failure..."
}
}
So, I have this stdout return value "stdout": "Ignoring failure...", and the debug task shows the strings are equal, so why is the task still displayed as "changed"?
I am using ansible 1.9.1.
The documentation I am referring to is this one: http://docs.ansible.com/ansible/playbooks_error_handling.html#overriding-the-changed-result
| I think you may have misinterpreted what changed_when does.
changed_when marks the task as changed based on the evaluation of the conditional statement which in your case is:
"command_result.stdout == 'Ignoring failure...'"
So whenever this condition is true, the task will be marked as changed.
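If the intent is the opposite, i.e. to report changed only when images were actually removed, the condition from the question just needs to be inverted:
- name: Remove unused images
  shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...'
  register: command_result
  changed_when: "command_result.stdout != 'Ignoring failure...'"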
| Ansible | 31,731,756 | 32 |
How can I get the current role name in an ansible task yaml file?
I would like to do something like this
---
# role/some-role-name/tasks/main.yml
- name: Create a directory which is called like the current role name
action: file
path=/tmp/"{{ role_name }}"
mode=0755
state=directory
The result of this task should be a directory /tmp/some-role-name on the server
| The simplest way is to just use the following
{{role_path|basename}}
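Applied to the task from the question, a sketch would be:
---
# role/some-role-name/tasks/main.yml
- name: Create a directory which is called like the current role name
  file:
    path: "/tmp/{{ role_path | basename }}"
    state: directory
    mode: '0755'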
| Ansible | 25,324,261 | 32 |
I want to change one line of my code in file /var/www/kibana/config.js during installation from
elasticsearch: "http://"+window.location.hostname+":9200"
to
elasticsearch: "http://192.168.1.200:9200"
Here I tried to use lineinfile to do that as show below
- name: Comment out elasticsearch the config.js to ElasticSearch server
lineinfile:
dest=/var/www/kibana/config.js
backrefs=true
regexp="(elasticsearch.* \"http.*)$"
line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" "
state=present
I have set variables of {{elasticsearch_URL}} and {{elasticsearch_port}} to http://192.168.1.200 and 9200, respectively.
Here is the error message I met:
ERROR: Syntax Error while loading YAML script, /Users/shuoy/devops_workspace/ansible_work/logging-for-openstack/roles/kibana/tasks/Debian.yml
Note: The error may actually appear before this position: line 29, column 25
regexp="(elasticsearch.* \"http.*)$"
line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" "
^
| The solution that will work in any case no matter how many nested quotes you might have and without forcing you to add more quotes around the whole thing (which can get tricky to impossible depending on the line you want to write) is to output the colon through a Jinja2 expression, which simply returns the colon as a string:
{{ ":" }}
Or in your complete line:
line="elasticsearch\: \" {{ elasticsearch_URL }}{{ ":" }}{{ elasticsearch_port }} \" "
Credit to this goes to github user drewp.
Another option is to use a mutiline string. But instead of quoting the whole string, I suggest to make use of YAMLs multiline feature. For this specific case:
- name: Comment out elasticsearch the config.js to ElasticSearch server
lineinfile: >
dest=/var/www/kibana/config.js
backrefs=true
regexp="(elasticsearch.* \"http.*)$"
line="elasticsearch\: \" {{ elasticsearch_URL }}:{{ elasticsearch_port }} \" "
state=present
The key is the > behind lineinfile:
But let's keep in mind that this question is from 2014 and you just don't do this string soup anymore in modern ansible. Instead the ideal solution would look like this, which is much more readable and does not require to escape all the quotes etc:
- name: Comment out elasticsearch the config.js to ElasticSearch server
lineinfile:
dest: /var/www/kibana/config.js
backrefs: true
line: >
elasticsearch: {{ elasticsearch_URL }}:{{ elasticsearch_port }}
state: present
regexp: (elasticsearch.* "http.*)$
Here we have the the > behind line: , so we don't need to escape anything in the value.
Other suggestions: Remove backrefs, as it is not used and state: present is default so can be removed
| Ansible | 24,835,706 | 32 |
I'm trying to restart the Jenkins service using Ansible:
- name: Restart Jenkins to make the plugin data available
service: name=jenkins state=restarted
- name: Wait for Jenkins to restart
wait_for:
host=localhost
port=8080
delay=20
timeout=300
- name: Install Jenkins plugins
command:
java -jar {{ jenkins_cli_jar }} -s {{ jenkins_dashboard_url }} install-plugin {{ item }}
creates=/var/lib/jenkins/plugins/{{ item }}.jpi
with_items: jenkins_plugins
But on the first run, the third task throws lots of Java errors including this: Suppressed: java.io.IOException: Server returned HTTP response code: 503 for URL, which makes me think the web server (handled entirely by Jenkins) wasn't ready. Sometimes when I go to the Jenkins dashboard using my browser it says that Jenkins isn't ready and that it will reload when it is, and it does, it works fine. But I'm not sure if accessing the page is what starts the server, or what.
So I guess what I need is to curl many times until the http code is 200? Is there any other way?
Either way, how do I do that?
How do you normally restart Jenkins?
| Using the URI module http://docs.ansible.com/ansible/uri_module.html
- name: "wait for ABC to come up"
uri:
url: "http://127.0.0.1:8080/ABC"
status_code: 200
register: result
until: result.status == 200
retries: 60
delay: 1
| Ansible | 23,919,744 | 32 |
I'm trying to execute my first remote shell script on Ansible. I've first generated and copied the SSH keys. Here is my yml file:
---
- name: Ansible remote shell
hosts: 192.168.10.1
user: myuser1
become: true
become_user: jboss
tasks:
- name: Hello server
shell: /home/jboss/script.sh
When launching the playbook however, the outcome is "no hosts matched":
ansible-playbook setup.yml
PLAY [Ansible remote shell
********************************************
skipping: no hosts matched
PLAY RECAP ********************************************************************
I've tried also using the host name (instead of the IP address), however nothing changed. Any help ?
| You need to define a host inventory.
The default path for this is /etc/ansible/hosts (as also stated by helloV).
For a minimal example you can also specify an inventory in the command line:
ansible-playbook setup.yml -i 192.168.10.1,
The trailing comma makes it a list, such that ansible parses it directy. Otherwise you can run
ansible-playbook setup.yml -i myinventory
where myinventory is a file listing your hosts.
| Ansible | 38,203,317 | 31 |
---
# file: main.yml
- hosts: fotk
remote_user: fakesudo
tasks:
- name: create a developer user
user: name={{ user }}
password={{ password }}
shell=/bin/bash
generate_ssh_key=yes
state=present
roles:
- { role: create_developer_environment, sudo_user: "{{ user }}" }
- { role: vim, sudo_user: "{{ user }}" }
For some reason the create user task is not running. I have searched every key phrase I can think of on Google to find an answer without success.
The roles are running which is odd.
Is it possible for a playbook to contain both tasks and roles?
| You can also do pre_tasks: and post_tasks: if you need to do things before or after. From the Docs https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
- hosts: localhost
pre_tasks:
- shell: echo 'hello in pre'
roles:
- { role: some_role }
tasks:
- shell: echo 'in tasks'
post_tasks:
- shell: echo 'goodbye in post'
Gives the output:
PLAY [localhost] ***************************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [shell echo 'hello in pre'] *********************************************
changed: [localhost]

TASK: [some_role | shell echo 'hello from the role'] **************************
changed: [localhost]

TASK: [shell echo 'in tasks'] *************************************************
changed: [localhost]

TASK: [shell echo 'goodbye in post'] ******************************************
changed: [localhost]

PLAY RECAP ********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0
This is with ansible 1.9.1
| Ansible | 30,987,865 | 31 |
I'm trying to reboot server running CentOS 7 on VirtualBox. I use this task:
- name: Restart server
command: /sbin/reboot
async: 0
poll: 0
ignore_errors: true
Server is rebooted, but I get this error:
TASK: [common | Restart server] ***********************************************
fatal: [rolcabox] => SSH Error: Shared connection to 127.0.0.1 closed.
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
What am I doing wrong? How can I fix this?
| You're likely not doing anything truly wrong, it's just that /sbin/reboot is shutting down the server so quickly that the server is tearing down the SSH connection used by Ansible before Ansible itself can close it. As a result Ansible is reporting an error because it sees the SSH connection failing for an unexpected reason.
What you might want to do to get around this is to switch from using /sbin/reboot to using /sbin/shutdown instead. The shutdown command lets you pass a time, and when combined with the -r switch it will perform a reboot rather than actually shutting down. So you might want to try a task like this:
- name: Restart server
command: /sbin/shutdown -r +1
async: 0
poll: 0
ignore_errors: true
This will delay the server reboot for 1 minute, but in doing so it should give Ansible enough time to to close the SSH connection itself, thereby avoiding the error that you're currently getting.
| Ansible | 29,955,605 | 31 |
I have a problem installing MySQL with ansible on a vagrant ubuntu,
This is my MySQL part
---
- name: Install MySQL
apt:
name: "{{ item }}"
with_items:
- python-mysqldb
- mysql-server
- name: copy .my.cnf file with root password credentials
template:
src: templates/root/.my.cnf
dest: ~/.my.cnf
owner: root
mode: 0600
- name: Start the MySQL service
service:
name: mysql
state: started
enabled: true
# 'localhost' needs to be the last item for idempotency, see
# http://ansible.cc/docs/modules.html#mysql-user
- name: update mysql root password for all root accounts
mysql_user:
name: root
host: "{{ item }}"
password: "{{ mysql_root_password }}"
priv: "*.*:ALL,GRANT"
with_items:
- "{{ ansible_hostname }}"
- 127.0.0.1
- ::1
- localhost
And I have this error
failed: [default] => (item=vagrant-ubuntu-trusty-64) => {"failed": true, "item": "vagrant-ubuntu-trusty-64"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [default] => (item=127.0.0.1) => {"failed": true, "item": "127.0.0.1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [default] => (item=::1) => {"failed": true, "item": "::1"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
failed: [default] => (item=localhost) => {"failed": true, "item": "localhost"}
msg: unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials
my .my.cnf is
[client]
user=root
password={{ mysql_root_password }}
and when copied on the server
[client]
user=root
password=root
I don't understand why, ~/.my.cnf is created
Project Github
Thanks
| When mysql-server is installed headlessly, there's no password. Therefore to make .my.cnf work, it should have a blank password line. Here's what I tested with for a .my.cnf:
[client]
user=root
password=
It's also slightly strange to put .my.cnf in your vagrant user directory as owned by root and only readable as root.
After ensuring the password was blank in .my.cnf, I was able to properly set the password for root in those four contexts. Note that it fails to run after that, since .my.cnf would need to be updated, so it fails the idempotency test.
There's a note on the ansible mysql_user module page that suggests writing the password and then writing the .my.cnf file. If you do that, you need a where clause to the mysql_user action (probably with a file stat before that).
Even more elegant is to use check_implicit_admin along with login_user and login_password. That's beautifully idempotent.
As a third way, perhaps check_implicit_admin makes it even easier.
Here's my successful playbook showing the above, tested with a few fresh servers. Kinda proud of this. Note .my.cnf is unnecessary for all of this.
---
- hosts: mysql
vars:
mysql_root_password: fart
tasks:
- name: Install MySQL
apt: name={{ item }} update_cache=yes cache_valid_time=3600 state=present
sudo: yes
with_items:
- python-mysqldb
- mysql-server
#- name: copy cnf
# copy: src=.my.cnf dest=~/.my.cnf owner=ubuntu mode=0644
# sudo: yes
- name: Start the MySQL service
sudo: yes
service:
name: mysql
state: started
enabled: true
- name: update mysql root password for all root accounts
sudo: yes
mysql_user:
name: root
host: "{{ item }}"
password: "{{ mysql_root_password }}"
login_user: root
login_password: "{{ mysql_root_password }}"
check_implicit_admin: yes
priv: "*.*:ALL,GRANT"
with_items:
- "{{ ansible_hostname }}"
- 127.0.0.1
- ::1
- localhost
(edit- removed my.cnf)
| Ansible | 26,597,926 | 31 |
Ansible expects python 2. On my system (Arch Linux), "python" is Python 3, so I have to pass -e "ansible_python_interpreter=/usr/bin/python2" with every command.
ansible-playbook my-playbook.yml -e "ansible_python_interpreter=/usr/bin/python2"
Is there a away to set ansible_python_interpreter globally on my system, so I don't have to pass it to every command? I don't want to add it to my playbooks, as not all systems that runs the playbook has a setup similar to mine.
| Well you can set in three ways
http://docs.ansible.com/intro_inventory.html#list-of-behavioral-inventory-parameters ansible_python_interpreter=/usr/bin/python2 this will set it per host
Set it host_vars/ ansible_python_interpreter: "/usr/bin/python2" this will set it per host
set it for all nodes in the file group_vars/all (you may need to create the directory group_vars and the file all) as ansible_python_interpreter: "/usr/bin/python2"
Hope that helps
| Ansible | 22,769,568 | 31 |
I am using ansible to replace the ssh keys for a user on multiple RHEL6 & RHEL7 servers. The task I am running is:
- name: private key
copy:
src: /Users/me/Documents/keys/id_rsa
dest: ~/.ssh/
owner: unpriv
group: unpriv
mode: 0600
backup: yes
Two of the hosts that I'm trying to update are giving the following error:
fatal: [host1]: FAILED! => {"failed": true, "msg": "Failed to set
permissions on the temporary files Ansible needs to create when
becoming an unprivileged user (rc: 1, err: chown: changing ownership
of /tmp/ansible-tmp-19/': Operation not permitted\nchown: changing
ownership of/tmp/ansible-tmp-19/stat.py': Operation not
permitted\n). For information on working around this, see
https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
The thing is that these two that are getting the errors are clones of some that are updating just fine. I've compared the sudoers and sshd settings, as well as permissions and mount options on the /tmp directory. They are all the same between the problem hosts and the working ones. Any ideas on what I could check next?
I am running ansible 2.3.1.0 on Mac OS Sierra, if that helps.
Update:
@techraf
I have no idea why this worked on all hosts except for two. Here is the original playbook:
- name: ssh_keys
hosts: my_hosts
remote_user: my_user
tasks:
- include: ./roles/common/tasks/keys.yml
become: yes
become_method: sudo
and original keys.yml:
- name: public key
copy:
src: /Users/me/Documents/keys/id_rsab
dest: ~/.ssh/
owner: unpriv
group: unpriv
mode: 060
backup: yes
I changed the playbook to:
- name: ssh_keys
hosts: my_hosts
remote_user: my_user
tasks:
- include: ./roles/common/tasks/keys.yml
become: yes
become_method: sudo
become_user: root
And keys.yml to:
- name: public key
copy:
src: /Users/me/Documents/keys/id_rsab
dest: /home/unpriv/.ssh/
owner: unpriv
group: unpriv
mode: 0600
backup: yes
And it worked across all hosts.
| Try to install ACL on the remote host, and after that execute the Ansible script:
sudo apt-get install acl
As explained in the doc
when both the connection user and the become_user are unprivileged, the module file is written as the user that Ansible connects as (the remote_user), but the file needs to be readable by the user Ansible is set to become. On POSIX systems, Ansible solves this problem in the following way:
First, if setfacl is installed and available in the remote PATH, and the temporary directory on the remote host is mounted with POSIX.1e filesystem ACL support, Ansible will use POSIX ACLs to share the module file with the second unprivileged user.
Next, if POSIX ACLs are not available or setfacl could not be run, Ansible will attempt to change ownership of the module file using chown for systems which support doing so as an unprivileged user
| Ansible | 46,352,173 | 30 |
Here's my if/else Ansible logic:
- name: Check certs exist
stat: path=/etc/letsencrypt/live/{{ rootDomain }}/fullchain.pem
register: st
- include: ./_common/check-certs-renewable.yaml
when: st.stat.exists
- include: ./_common/create-certs.yaml
when: not st.stat.exists
This code boils down to:
IF certs exist
renew certs
ELSE
create certs
END IF
Is this the correct approach or is there a better approach to the IF ELSE construct in ansible?
| What you have there should work and is one way of doing it.
Alternatively, you could use a Jinja query to reduce it to 2 tasks, such that:
- name: Check certs exist
stat: path=/etc/letsencrypt/live/{{ rootDomain }}/fullchain.pem
register: st
- include: "{{ './_common/check-certs-renewable.yaml' if st.stat.exists else './_common/create-certs.yaml' }}"
However, it's more a matter of personal preference than anything else, and your way is more readable, so I would just stick with that IMHO.
| Ansible | 42,037,814 | 30 |
I have an error when I launch a playbook, but I can't find out why...
ERROR! the field 'hosts' is required but was not set
There is my main.yml :
---
- hosts: hosts
- vars:
- elasticsearch_java_home: /usr/lib/jmv/jre-1.7.0
- elasticsearch_http_port: 8443
- tasks:
- include: tasks/main.yml
- handlers:
- include: handlers/main.yml
And my /etc/ansible/hosts :
[hosts]
10.23.108.182
10.23.108.183
10.23.108.184
10.23.108.185
When I test a ping, all is good :
[root@poste08-08-00 elasticsearch]# ansible hosts -m ping
10.23.108.183 | SUCCESS => {
"changed": false,
"ping": "pong" }
10.23.108.182 | SUCCESS => {
"changed": false,
"ping": "pong" }
10.23.108.185 | SUCCESS => {
"changed": false,
"ping": "pong" }
10.23.108.184 | SUCCESS => {
"changed": false,
"ping": "pong" }
Please, help me :)
Regards,
| You have a syntax error in your playbook: vars, tasks and handlers must be keys of the play itself, not separate list items, so drop the leading dashes in front of them. Compare with the structure from the documentation:
---
- hosts: webservers
vars:
http_port: 80
max_clients: 200
See: https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html
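Applied to your main.yml, the corrected play would look roughly like this (a sketch keeping your original include paths and values):
---
- hosts: hosts
  vars:
    elasticsearch_java_home: /usr/lib/jmv/jre-1.7.0
    elasticsearch_http_port: 8443
  tasks:
    - include: tasks/main.yml
  handlers:
    - include: handlers/main.yml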
| Ansible | 36,724,870 | 30 |
I have ansible 2.1.0 on my server, where I do deployments via vagrant and on my PC too.
The role "deploy" has:
- name: upload code
become: true
become_user: www-data
git: [email protected]:****.git
dest=/var/www/main
key_file=/var/www/.ssh/id_rsa
accept_hostkey=true
update=yes
force=yes
register: fresh_code
notify: restart php-fpm
tags: fresh_code
In this case with ansible 2.1.0 I get an error:
fatal: [default]: FAILED! => {"failed": true, "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user. For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
With ansible 2.0.1.0, which I use on my PC, everything works normally: the folder /var/www/ gets a main folder with owner and group www-data.
If I use only become_user: www-data, or become_method: sudo together with become_user: www-data, I get the same error.
What do I need to do to resolve this?
| On debian/ubuntu you can resolve this by first installing the acl package on the remote host, like with this ansible task:
- name: install setfacl support
become: yes
apt: pkg=acl
Same thing with redhat/centos -- install the acl package on the remote host:
- name: install setfacl support
become: yes
yum: name=acl
| Ansible | 36,646,880 | 30 |
I want to add keys to a dictionary when using set_fact with with_items. This is a small POC which will help me complete some other work. I have tried to generalize the POC so as to remove all the irrelevant details from it.
When I execute following code it is shows a dictionary with only one key that corresponds to the last item of the with_items. It seems that it is re-creating a new dictionary or may be overriding an existing dictionary for every item in the with_items. I want a single dictionary with all the keys.
Code:
---
- hosts: localhost
connection: local
vars:
some_value: 12345
dict: {}
tasks:
- set_fact: {
dict: "{
{{ item }}: {{ some_value }}
}"
}
with_items:
- 1
- 2
- 3
- debug: msg="{{ dict }}"
| This can also be done without resorting to plugins, tested in Ansible 2.2.
---
- hosts: localhost
connection: local
vars:
some_value: 12345
dict: {}
tasks:
- set_fact:
dict: "{{ dict | combine( { item: some_value } ) }}"
with_items:
- 1
- 2
- 3
- debug: msg="{{ dict }}"
Alternatively, this can be written without the complex one-liner with an include file.
tasks:
- include: append_dict.yml
with_items: [1, 2, 3]
append_dict.yml:
- name: "Append dict: define helper variable"
set_fact:
_append_dict: "{ '{{ item }}': {{ some_value }} }"
- name: "Append dict: execute append"
set_fact:
dict: "{{ dict | combine( _append_dict ) }}"
Output:
TASK [debug]
*******************************************************************
ok: [localhost] => {
"msg": {
"1": "12345",
"2": "12345",
"3": "12345"
}
}
Single quotes ' around {{ some_value }} are needed to store string values explicitly.
This syntax can also be used to append from a dict elementwise using with_dict by referring to item.key and item.value.
Manipulations like adding pre- and postfixes or hashes can be performed in the same step, for example
set_fact:
dict: "{{ dict | combine( { item.key + key_postfix: item.value + '_' + item.value | hash('md5') } ) }}"
| Ansible | 31,772,732 | 30 |
I am downloading the file with wget from ansible.
- name: Download Solr
shell: wget http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
args:
chdir: {{project_root}}/solr
but I only want to do that if the zip file does not exist in that location. Currently the system downloads it every time.
| Note: this answer covers the general question of "How can I check file existence in Ansible", not the specific case of downloading a file.
The problem with the previous answers using "command" or "shell" actions is that they won't work in --check mode. Actually, the first action will be skipped, and the next will error out on the "when: solr_exists.rc != 0" condition (due to the variable not being defined).
Since Ansible 1.3, there's a more direct way to check for file existence: the "stat" module. It of course also works well as a "local_action" to check a local file's existence:
- local_action: stat path={{secrets_dir}}/secrets.yml
register: secrets_exist
- fail: msg="Production credentials not found"
when: secrets_exist.stat.exists == False
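Applied to the download case from the question, a hedged sketch: stat the archive first and only fetch it when it is missing (get_url is used here instead of wget; with a full dest path it also skips the download on later runs):
- name: check for solr archive
  stat: path={{ project_root }}/solr/solr-4.7.0.zip
  register: solr_zip

- name: download solr archive
  get_url:
    url: http://mirror.mel.bkb.net.au/pub/apache/lucene/solr/4.7.0/solr-4.7.0.zip
    dest: "{{ project_root }}/solr/solr-4.7.0.zip"
  when: not solr_zip.stat.exists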
| Ansible | 22,469,880 | 30 |
I am working with vagrant and ansible. I want to automate the deployment role of ansible (You can check my repo here).
For this purpose, I am trying to deploy my local ssh key into my VPS and my vagrant guest machine (I am trying SSH agent forwarding).
GOAL
Automate deployment process with git using ansible. I've already done this:
---
- name: read-write git checkout from github
git: repo={{ repository }} dest=/home/site
Where:
---
# Variables here are applicable to all host groups
repository: [email protected]:dgnest/dgnest.git
PROBLEM
When I do "vagrant provision", the console stops here:
TASK: [deployment | read-write git checkout from github] **********************
That's because I haven't set up the ssh keys.
I TRIED
I would like to use the key_file option that the git module of ansible has. But it fails too.
---
- name: read-write git checkout from github
git: repo={{ repository }} dest=/home/site key_file=/home/oscar/.ssh/id_rsa.pub
Another option is to copy my ~/.ssh/id_rsa.pub into each VPS and the vagrant guest, but my problem in this case is handling all the different users. Vagrant uses the "vagrant" user and my VPS uses other ones, so would I have to put my local ssh key into each of these users' accounts?
Hope you can help me. Thank you.
UPDATE:
I've just automated the @leucos answer (Thanks). Copying the private and public rsa keys. I share this link with the implementation.
| You don't have to copy your local SSH key to remote servers. Instead, you just create a file named ansible.cfg in the directory you are running deployment scripts from, and put in the following settings:
[ssh_connection]
ssh_args = -o ForwardAgent=yes
That's it, now your local identity is forwarded to the remote servers you manage with Ansible.
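Before running the playbook it can be worth confirming that the key is actually loaded in your local agent, since forwarding only passes along keys the agent already holds (assuming the default ~/.ssh/id_rsa key):
ssh-add ~/.ssh/id_rsa   # load the key if it is not in the agent yet
ssh-add -l              # list the keys the agent currently holds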
| Ansible | 21,925,808 | 30 |
Currently I have all of my deployment scripts in shell, which installs about 10 programs and configures them. The way I see it shell is a fantastic tool for this:
Modular: Only one program per script, this way I can spread the programs across different servers.
Simple: Shell scripts are extremely simple and don't need any other software installed.
One-click: I only have to run the shell script once and everything is setup.
Agnostic: Most programmers can figure out shell and don't need to know how to use a specific program.
Versioning: Since my code is on GitHub a simple Git pull and restart all of supervisor will run my latest code.
With all of these advantages, why is it people are constantly telling me to use a tool such as Ansible or Chef, and not to use shell?
| Shell scripts aren't that bad, if you've got them working like you need to.
People recommend other tools (such as CFEngine, Puppet, Chef, Ansible, and whatever else) for various reasons, some of which are:
The same set of reasons why people use tools like make instead of implementing build
systems with scripts.
Idempotency: The quality whereby the tool ensures that it can be safely
re-run any number of times, and at each run it will either come to the desired
state, or remain there, or at least move closer to it in a //convergent// manner.
Sure, you can write scripts so that the end results are idempotent:
# Crude example
grep myhost /etc/hosts || echo '1.2.3.4 myhost' >> /etc/hosts
But it's a lot nicer with idempotent tools.
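For comparison, a hedged sketch of the same /etc/hosts entry expressed declaratively with Ansible's lineinfile module; re-running it leaves the file untouched once the line is present:
- name: ensure myhost is resolvable
  lineinfile:
    path: /etc/hosts
    line: "1.2.3.4 myhost"
    state: present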
Shell scripts are imperative. Tools such as Chef/Ansible/Puppet are declarative.
In general, declarative leads to better productivity given some threshold of scale.
The DSL's take away some power but then they give you order, cleanliness and other
kinds of power. I love shell scripting, but I love Ruby too, and the Puppet people
love their language! If you still think shell is the way to go because you like
it more, hey, you don't have a problem then.
[ADDED] Re-distributable, re-usable packages. Ruby has gems, Perl has CPAN, Node has npm,
Java has maven - and all languages these have their own conventions of how reusable
source code must be packaged and shared with the world.
Shell Scripts don't.
Chef has cookbooks that follow conventions and can be imported much the same way
you import a gem into your ruby application to give your application some new ability.
Puppet has puppetforge and it's modules, Juju has charms (they are pretty close
to shell scripts so you might be interested).
The tools have actually helped them! I was a die-hard shell scripter, and still am,
but using Chef lets me go home earlier, get a good night's sleep, stay in control,
be portable across OS's, avoid confusion - tangible benefits I experienced after
giving up large-scale server shell-scripting.
| Ansible | 19,702,879 | 30 |
My local /etc/ansible/hosts file just has
[example]
172.31.nn.nnn
Why do I get that
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
message ?
If I change it to
[local]
localhost ansible_connection=local
it seems to work ok.
But that is limited to local. I want to ping my aws instance from my local machine.
Full message:
ansible 2.8.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/michael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/michael/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /home/michael/.local/lib/python2.7/site-packages/ansible/plugins/callb
ack/minimal.pyc
META: ran handlers
<172.31.40.133> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<172.31.40.133> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/michael/Dropbox/90_201
9/work/aws/rubymd2.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,p
ublickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/michael/.ansible/cp/7e7a30892
f 172.31.40.133 '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"''
<172.31.40.133> (255, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /et
c/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\nd
ebug1: Control socket "/home/michael/.ansible/cp/7e7a30892f" does not exist\r\ndebug2: resolving "172.31.40.133" port 22\r\ndebu
g2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r\ndebug2: fd 3 setting O_NON
BLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to host 172.31.40.133 port 22: C
onnection timed out\r\n')
172.31.40.133 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Readin
g configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Try
ing existing master\r\ndebug1: Control socket \"/home/michael/.ansible/cp/7e7a30892f\" does not exist\r\ndebug2: resolving \"172
.31.40.133\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 172.31.40.133 [172.31.40.133] port 22.\r
\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 172.31.40.133 port 22: Connection timed out\r\nssh: connect to h
ost 172.31.40.133 port 22: Connection timed out",
"unreachable": true
}
I tried adding [inventory] at the top and also enable_plugins = ini. The first didn't help and the second gave a parse message.
fyi security group info:
| The messages about declined parsing are informational only. There are several different plugins for inventory files, and you can see from the output that the ini plugin is successfully parsing your inventory (Parsed /etc/ansible/hosts inventory source with ini plugin).
This issue is unrelated to Ansible. You need to first establish ssh connectivity to your managed node.
For what it's worth, the security group settings appear fine (assuming they are applied to your host) so there could be an issue with the host itself (i.e. internal firewall or sshd not running).
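One way to confirm that outside of Ansible is to attempt the same connection by hand, reusing the key, user and address from the log above; if this also times out, the problem is at the network or host level rather than in the inventory:
ssh -i "/home/michael/Dropbox/90_2019/work/aws/rubymd2.pem" -o ConnectTimeout=10 [email protected]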
| Ansible | 56,465,268 | 29 |
I am new to ansible.
Is there a simple way to replace the line starting with option domain-name-servers in /etc/dhcp/interface-br0.conf with more IPs?
option domain-name-servers 10.116.184.1,10.116.144.1;
I want to add ,10.116.136.1
| You can use the lineinfile Ansible module to achieve that.
- name: replace line
lineinfile:
path: /etc/dhcp/interface-br0.conf
regexp: '^(.*)option domain-name-servers(.*)$'
line: 'option domain-name-servers 10.116.184.1,10.116.144.1,10.116.136.1;'
backrefs: yes
The regexp option tells the module what will be the content to replace.
The line option replaces the previously found content with the new content of your choice.
The backrefs option guarantees that if the regexp does not match, the file will be left unchanged.
| Ansible | 40,788,575 | 29 |
I have the following tasks in a playbook I'm writing (results listed next to the debug statement in <>):
- debug: var=nrpe_installed.stat.exists <true>
- debug: var=force_install <true>
- debug: var=plugins_installed.stat.exists <true>
- name: Run the prep
include: prep.yml
when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true)
tags: ['prep']
- debug: var=nrpe_installed.stat.exists <true>
- debug: var=force_install <true>
- debug: var=force_nrpe_install <false>
- name: Install NRPE
include: install-nrpe.yml
when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true)
tags: ['install_nrpe']
vars:
nrpe_url: 'http://url.goes.here'
nrpe_md5: 3921ddc598312983f604541784b35a50
nrpe_version: 2.15
nrpe_artifact: nrpe-{{ nrpe_version }}.tar.gz
nagios_ip: {{ nagios_ip }}
config_dir: /home/ansible/config/
And I'm running it with the following command:
ansible-playbook install.yml -i $invFile --extra-vars="hosts=webservers force_install=True"
The first include runs, but the second skips with this output:
skipping: [server1] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
I'm under the impression that the conditional check should pass for all of them as force_install == true evaluates to true which should make the whole when evaluate to true (since it's a series of 'OR's).
How do I get the when to run when the variables are set appropriately?
Edit:
Changing the second when for the Install NRPE include to the following works, but doesn't explain why the other one, Run the prep runs appropriately:
Working:
when: (not nrpe_installed.stat.exists or force_install or force_nrpe_install)
Also working:
when: (nrpe_installed.stat.exists == false or plugins_installed.stat.exists == true or force_install == true)
Not working:
when: (nrpe_installed.stat.exists == false or force_install == true or force_nrpe_install == true)
The truncated (duplicates removed) output of that particular section of the play is:
TASK [debug] *******************************************************************
ok: [server2] => {
"nrpe_installed.stat.exists": true
}
TASK [debug] *******************************************************************
ok: [server2] => {
"plugins_installed.stat.exists": true
}
TASK [debug] *******************************************************************
ok: [server2] => {
"force_install": true
}
TASK [Run the prep] ************************************************************
included: /tasks/nrpe-install/prep.yml for server2, server3, server4, server5, server6, server7
TASK [Prep and configure for installation | Install yum packages] **************
ok: [server6] => (item=[u'gcc', u'glibc', u'glibc-common', u'gd', u'gd-devel', u'make', u'net-snmp', u'openssl-devel', u'unzip', u'tar', u'gzip', u'xinetd']) => {"changed": false, "item": ["gcc", "glibc", "glibc-common", "gd", "gd-devel", "make", "net-snmp", "openssl-devel", "unzip", "tar", "gzip", "xinetd"], "msg": "", "rc": 0, "results": ["gcc-4.1.2-55.el5.x86_64 providing gcc is already installed", "glibc-2.5-123.el5_11.3.i686 providing glibc is already installed", "glibc-common-2.5-123.el5_11.3.x86_64 providing glibc-common is already installed", "gd-2.0.33-9.4.el5_4.2.x86_64 providing gd is already installed", "gd-devel-2.0.33-9.4.el5_4.2.i386 providing gd-devel is already installed", "make-3.81-3.el5.x86_64 providing make is already installed", "net-snmp-5.3.2.2-20.el5.x86_64 providing net-snmp is already installed", "openssl-devel-0.9.8e-40.el5_11.x86_64 providing openssl-devel is already installed", "unzip-5.52-3.el5.x86_64 providing unzip is already installed", "tar-1.15.1-32.el5_8.x86_64 providing tar is already installed", "gzip-1.3.5-13.el5.centos.x86_64 providing gzip is already installed", "xinetd-2.3.14-20.el5_10.x86_64 providing xinetd is already installed"]}
TASK [Prep and configure for installation | Make nagios group] *****************
ok: [server2] => {"changed": false, "gid": 20002, "name": "nagios", "state": "present", "system": false}
TASK [Prep and configure for installation | Make nagios user] ******************
ok: [server6] => {"append": false, "changed": false, "comment": "User for Nagios NRPE", "group": 20002, "home": "/home/nagios", "move_home": false, "name": "nagios", "shell": "/bin/bash", "state": "present", "uid": 20002}
TASK [debug] *******************************************************************
ok: [server2] => {
"nrpe_installed.stat.exists": true
}
TASK [debug] *******************************************************************
ok: [server2] => {
"force_install": true
}
TASK [debug] *******************************************************************
ok: [server2] => {
"force_nrpe_install": false
}
TASK [Install NRPE] ************************************************************
skipping: [server2] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
| You need to convert the variable to a boolean:
force_install|bool == true
I don't claim I understand the logic behind it. In python any non-empty string should be truthy. But when directly used in a condition it evaluates to false.
The bool filter then again interprets the strings 'yes', 'on', '1', 'true' (case-insensitive) and 1 as true (see source). Any other string is false.
You might want to also set a default value in case force_install is not defined, since it would result in an undefined variable error:
force_install|default(false)|bool == true
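Applied to the Install NRPE task from the question, the condition could then look like this (a sketch):
- name: Install NRPE
  include: install-nrpe.yml
  when: not nrpe_installed.stat.exists or
        force_install | default(false) | bool or
        force_nrpe_install | default(false) | bool
  tags: ['install_nrpe']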
| Ansible | 37,888,760 | 29 |
I want to execute the following command using an Ansible playbook:
curl -X POST -d@mesos-consul.json -H "Content-Type: application/json" http://marathon.service.consul:8080/v2/apps
How can I run it?
If I run:
- name: post to consul
uri:
url: http://marathon.service.consul:8080/v2/apps/
method: POST
body: "{{ lookup('file','mesos-consul.json') }}"
body_format: json
HEADER_Content-Type: "application/json"
I have the next fail:
fatal: [172.16.8.231]: FAILED! => {"failed": true, "msg": "ERROR! the file_name '/home/ikerlan/Ik4-Data-Platform/ansible/playbooks/Z_PONER_EN_MARCHA/dns-consul/mesos-consul.j2' does not exist, or is not readable"}
| The best way to do this is to use the URI module:
tasks:
- name: post to consul
uri:
url: http://marathon.service.consul:8080/v2/apps/
method: POST
body: "{{ lookup('file','mesos-consul.json') }}"
body_format: json
headers:
Content-Type: "application/json"
Since your json file is on the remote machine the easiest way to execute is probably with the shell module:
- name: post to consul
shell: 'curl -X POST -d@/full/path/to/mesos-consul.json -H "Content-Type: application/json" http://marathon.service.consul:8080/v2/apps'
| Ansible | 35,798,101 | 29 |
My basic problem is that upon creation of a set of aws servers I want to configure them to know about each other.
Upon creation of each server their details are saved in a registered 'servers' var (shown below). What I really want to be able to do after creation is run a task like so:
- name: Add servers details to all other servers
lineinfile:
dest: /path/to/configfile
line: "servername={{ item.1.private_ip }}"
delegate_to: "{{ item.0.public_dns_name }}"
with_nested:
- list_of_servers
- list_of_servers
Supplying the list twice to 'with_nested' is essential here.
Getting a list of list is easy enough to do with:
"{{ servers.results | map(attribute='tagged_instances') | list }}"
which returns:
[
[ { "private_ip": "ip1", "public_dns_name": "dns1" } , { ... }],
[ { ... }, { ... } ]
]
but how would you turn this into:
[
{ "private_ip": "ip1", "public_dns_name": "dns1" },
{ ... },
{ ... },
{ ... }
]
The 'servers' registered var looks like:
"servers": {
"changed": true,
"msg": "All items completed",
"results": [
{
...
"tagged_instances": [
{
...
"private_ip": "ip1",
"public_dns_name": "dns1",
...
},
{
...
"private_ip": "ip2",
"public_dns_name": "dns2",
...
}
]
},
{
...
"tagged_instances": [
{
...
"private_ip": "ip3",
"public_dns_name": "dn3",
...
},
{
...
"private_ip": "ip4",
"public_dns_name": "dns4",
...
}
]
},
...
]
}
Note: I have a pretty ugly solution by using 'with_flattened' and a debug statement to create a new registered var 'flattened_servers' which I then map over again. But am hoping for a more elegant solution :)
| Jinja2 comes with a built-in filter sum which can be used like:
{{ servers.results | sum(attribute='tagged_instances', start=[]) }}
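A hedged sketch of how that could feed the delegated task from the question: store the flattened list once with set_fact, then nest it over itself:
- set_fact:
    list_of_servers: "{{ servers.results | sum(attribute='tagged_instances', start=[]) }}"

- name: Add servers details to all other servers
  lineinfile:
    dest: /path/to/configfile
    line: "servername={{ item.1.private_ip }}"
  delegate_to: "{{ item.0.public_dns_name }}"
  with_nested:
    - "{{ list_of_servers }}"
    - "{{ list_of_servers }}"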
| Ansible | 31,876,069 | 29 |
Anyone on my team can SSH into our special deploy server, and from there run an Ansible playbook to push new code to machines.
We're worried about what will happen if two people try to do deploys simultaneously. We'd like to make it so that the playbook will fail if anyone else is currently running it.
Any suggestions for how to do this? The standard solution is to use a pid file, but Ansible does not have built-in support for these.
| Personally I use RunDeck ( http://rundeck.org/ ) as a wrapper around my Ansible playbooks for multiple reasons:
You can set a RunDeck 'job' to only be able to be run at one time (or set it to run as many times at the same time as you want)
You can set up users within the system so that auditing of who has run what is listed clearly
You can set additional variables with constraints on what can be used (specify a list of options)
Its a whole lot cheaper than Ansible Tower (RunDeck is free)
It has a full API for running jobs pragmatically from build systems
You don't need to write complicated bash wrappers around the ansible-playbook command
SSH can become a litmus test of 'something needs an ansible script written' - I don't allow SSH access except in full on break/fix situations, and we have happier SA's as a result
Lastly, and definitely way up there in the 'nice to have' category is you can schedule RunDeck jobs to run ansible playbooks in a very easy way for anybody who logs on to the console to see what is running when
There are many more good reasons of course, but my fingers are getting tired of typing ;)
| Ansible | 21,869,912 | 29 |
I am new to ansible and was exploring dependent roles. documentation link
What I did not come across the documentation was- where to place the requirements.yml file.
For instance, if my site.yml looks like this:
---
- name: prepare system
hosts: all
roles:
- role1
And, lets say
role1 depends on role2 and role3
role2 depends on role4 and role5
Typically, ansible-galaxy have the following structure:
└── test-role
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
Dependencies are added to meta/main.yml. Assuming role1 has dependencies declared in this file as follows (and likewise for role2):
dependencies:
- role: role2
- role: role3
And, I also have a requirements.yml file which looks like:
---
- src: some git link1
version: master
name: role2
- src: some git link2
version: master
name: role3
My question:
where do I place this requirements.yml file for role1?
I understand the requirements will need to be installed by the command,
ansible-galaxy install -r requirements.yml -p roles/
And, I can do this for role1, but how does this get automated for role2? Do the successive dependencies need to be resolved and installed manually this way, or is there something better?
| Technically speaking, you could put your requirements.yml file anywhere you like as long as you reflect the correct path in your ansible-galaxy install command.
Meanwhile, if you ever want to run your playbooks from Ansible Tower/Awx, I suggest you stick to the Ansible Tower requirements and put your requirements.yml file in <project-top-level-directory>/roles/requirements.yml
Regarding dependencies between roles, ansible-galaxy is able to follow them by itself when they are encountered during installation. So you don't need to specify all of them in your requirements.yml, only the top-level ones. You just need to specify your dependencies correctly in each external role.
In meta/main.yml for role1
dependencies:
- src: https://my.scm.com/my-ansible-roles/role2.git
scm: git
version: master
name: role2
- src: https://my.scm.com/my-ansible-roles/role3.git
scm: git
version: master
name: role3
In meta/main.yml for role2
dependencies:
- src: https://my.scm.com/my-ansible-roles/role4.git
scm: git
version: master
name: role4
- src: https://my.scm.com/my-ansible-roles/role5.git
scm: git
version: master
name: role5
roles/requirements.yml
---
- src: https://my.scm.com/my-ansible-roles/role1.git
scm: git
version: master
name: role1
To be as exhaustive as possible, this is what I now usually do on my projects to handle dependencies locally as well as local/project only roles
Basic project structure
ansible-project-dir
└─── roles
| └─── locally-versioned-role1
| └─── locally-versioned-role2
| └─── ...
| └─── requirements.yml
| └─── .gitignore
└─── ansible.cfg
└─── playbook1.yml
└─── playbook2.yml
ansible.cfg
I force roles search and downloads in the local roles directory by setting roles_path = roles, so user can use ansible-galaxy install without the -p parameter.
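A sketch of that ansible.cfg, assuming it lives in the project top-level directory next to the playbooks:
[defaults]
roles_path = roles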
roles/requirements.yml
Already discussed above. Just list dependencies to top-level external (i.e. not versioned in the project) as galaxy role name or as git uris. If you need to fully checkout those roles to later make git commits/push on them, you can use ansible-galaxy install -g -f -r roles/requirements
roles/.gitignore
# Ignore everything in roles dir
/*
# Except:
# the .gitignore file
!.gitignore
# the requirements file
!requirements.yml
# Readme if you have one
!README.md
# and any specific role we want to version locally
!locally-versioned-role*/
| Ansible | 55,773,505 | 28 |
I have a very complex Ansible setup with thousands of servers and hundreds of groups that various servers are members of (dynamic inventory file).
Is there any way to easily display all groups that a specific host is a member of?
I know how to list all groups and their members:
ansible localhost -m debug -a 'var=groups'
But I want to do this not for ALL hosts, but only for a single one.
| Create a playbook called 'showgroups' (executable file) containing:
#!/usr/bin/env ansible-playbook
- hosts: all
gather_facts: no
tasks:
- name: show the groups the host(s) are in
debug:
msg: "{{group_names}}"
You can run it like this to show the groups of one particular host (-l) in your inventory (-i):
./showgroups -i develop -l jessie.fritz.box
| Ansible | 46,362,787 | 28 |
I have to parse the output of the following command:
mongo <dbname> --eval "db.isMaster()"
which gives output as follows:
{
"hosts" : [
"xxx:<port>",
"xxx:<port>",
"xxx:<port>"
],
"setName" : "xxx",
"setVersion" : xxx,
"ismaster" : true,
"secondary" : false,
"primary" : "xxx",
"me" : "xxx",
"electionId" : ObjectId("xxxx"),
"maxBsonObjectSize" : xxx,
"maxMessageSizeBytes" : xxxx,
"maxWriteBatchSize" : xxx,
"localTime" : ISODate("xxx"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
I need to parse the above output to check that the value of "ismaster" is true. Please let me know how I can do this in Ansible.
At the moment I am simply checking that the text "ismaster" : true is shown in the output, using the following code:
tasks:
- name: Check if the mongo node is primary
shell: mongo <dbname> --eval "db.isMaster()"
register: output_text
- name: Run command on master
shell: <command to execute>
when: "'\"ismaster\\\" : true,' in output_text.stdout"
However it would be nice to use Ansible's json processing to check the same. Please advise.
| There are quite a few helpful filters in Ansible.
Try: when: (output_text.stdout | from_json).ismaster
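Applied to the play from the question, a hedged sketch (note that if the mongo output contains non-JSON tokens such as ObjectId(...), you may need to make the shell command print strict JSON first):
- name: Check if the mongo node is primary
  shell: mongo <dbname> --eval "db.isMaster()"
  register: output_text

- name: Run command on master
  shell: <command to execute>
  when: (output_text.stdout | from_json).ismaster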
| Ansible | 40,844,720 | 28 |
I have this Docker image -
FROM centos:7
MAINTAINER Me <me.me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible
RUN git clone https://github.com/.../dockerAnsible.git
RUN ansible-playbook dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443 3306
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Basically, I want it so that php-fpm starts when the docker container starts. I have php-fpm working if I manually go into the container and turn it on with /usr/sbin/php-fpm.
I tried it inside my Ansible file with this command (it didn't work); I tried using the service module as well, with no luck:
- name: Start php fpm
command: /usr/sbin/php-fpm
How can I have php-fpm running along with apache?
| You should use supervisor in order to launch several services
In your Dockerfile, install supervisor, then launch it as the container command:
COPY ./docker/supervisord.conf /etc/supervisord.conf
....
CMD ["/usr/bin/supervisord", "-n"]
And your docker/supervisord.conf contains all the services you want to start, so you can have something like that
[program:php-fpm]
command=/opt/remi/php70/root/usr/sbin/php-fpm -c /etc/php-fpm.conf
;command=/usr/sbin/php70-fpm -c /etc/php-fpm.d
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:nginx]
command=/usr/sbin/nginx
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Of course you should adapt with your path and php-fpm versions and your services (nginx in my example, apache for you etc...), but basically supervisor is the best way to manage the start of several services from one start point.
Here you can find the official doc of docker about supervisor
https://docs.docker.com/engine/admin/using_supervisord/
| Ansible | 37,313,780 | 28 |
Ansible 1.9.4.
The script should execute some task only on hosts where some variable is defined. It works fine normally, but it doesn't work with the with_items statement.
- debug: var=symlinks
when: symlinks is defined
- name: Create other symlinks
file: src={{ item.src }} dest={{ item.dest }} state=link
with_items: "{{ symlinks }}"
when: symlinks is defined
But I get:
TASK: [app/symlinks | debug var=symlinks] *********************
skipping: [another-host-yet]
TASK: [app/symlinks | Create other symlinks] ******************
fatal: [another-host-yet] => with_items expects a list or a set
Maybe I am doing something wrong?
| with_items: "{{ symlinks | default([]) }}"
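In context, the task from the question becomes the sketch below; with the empty-list default the loop simply iterates over nothing on hosts where symlinks is undefined, so the when guard is no longer needed:
- name: Create other symlinks
  file: src={{ item.src }} dest={{ item.dest }} state=link
  with_items: "{{ symlinks | default([]) }}"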
| Ansible | 35,470,667 | 28 |
Ansible best practices describe that every role contains a files directory holding all the files needed by that role.
In my case I have different roles that share the same files. But I cannot keep a copy of these files in each role, as there would then be no single source for them, and if one of them is edited it becomes tedious to make that change for every role.
A solution I came up with is to create another folder and reference it using an absolute or relative path. Is this the best way of doing it?
My ansible directory looks like this:
play.yml
roles/
web/
tasks/
files/
common-1
common-2
other-multiple-files
role-2/
tasks/
files/
common-1
common-2
other-multiple-files
role-3/
tasks/
files/
common-2
role-4/
tasks/
files/
common-1
| You've got two reasonably decent approaches you can try here to reduce repetition.
You could have a separate shared-files directory that sits as a sibling to your role folders like this:
play.yml
roles/
web/
tasks/
files/
other-multiple-files
role-2/
tasks/
files/
other-multiple-files
role-3/
tasks/
role-4/
tasks/
shared-files/
common-1
common-2
You would then reference this in the tasks with a relative file location from where the role/files folder would be:
- name: copy common-1
copy:
src: ../../common-1
dest: /path/to/dest/common-1
- name: copy role specific file
src: other-multiple-files
dest: /path/to/dest/other-multiple-files
Or alternatively to use a relative path to the folder, you could symlink things across like this:
play.yml
roles/
web/
tasks/
files/
common-1 -> ../../common-1
common-2 -> ../../common-2
other-multiple-files
role-2/
tasks/
files/
common-1 -> ../../common-1
common-2 -> ../../common-2
other-multiple-files
role-3/
tasks/
files/
common-2 -> ../../common-2
role-4/
tasks/
files/
common-1 -> ../../common-1
shared-files/
common-1
common-2
And you can then reference the file as if it was in the role/files directory directly:
- name: copy common-1
copy:
src: common-1
dest: /path/to/dest/common-1
- name: copy role specific file
src: other-multiple-files
dest: /path/to/dest/other-multiple-files
| Ansible | 34,287,465 | 28 |
I'm trying to create a small webapp infrastructure with Ansible on Amazon AWS, and I want to automate the whole process: launch instances, configure services, etc., but I can't find a proper tool or module to deal with that from Ansible. Mainly the EC2 launch.
Thanks a lot.
| This is the short answer to your question; if you want a detailed, fully automated role, please let me know. Thanks
Prerequisites:
Ansible
Python boto library
Set up the AWS access and secret keys in the environment settings
(best is inside ~/.boto)
To create the EC2 instance(s):
In order to create the EC2 instance(s), modify these parameters, which you can find inside the "ec2_launch.yml" file under "vars":
region # where you want to launch the instance(s): USA, Australia, Ireland, etc.
count # number of instance(s) you want to create
Once you have set these parameters, run the following command:
ansible-playbook -i hosts ec2_launch.yml
Contents of hosts file:
[local]
localhost
[webserver]
Contents of ec2_launch.yml file:
---
- name: Provision an EC2 Instance
hosts: local
connection: local
gather_facts: False
tags: provisioning
# Necessary Variables for creating/provisioning the EC2 Instance
vars:
instance_type: t1.micro
security_group: webserver # Change the security group name here
image: ami-98aa1cf0 # Change the AMI, from which you want to launch the server
region: us-east-1 # Change the Region
keypair: ansible # Change the keypair name
count: 1
# Task that will be used to Launch/Create an EC2 Instance
tasks:
- name: Create a security group
local_action:
module: ec2_group
name: "{{ security_group }}"
description: Security Group for webserver Servers
region: "{{ region }}"
rules:
- proto: tcp
type: ssh
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
rules_egress:
- proto: all
type: all
cidr_ip: 0.0.0.0/0
- name: Launch the new EC2 Instance
local_action: ec2
group={{ security_group }}
instance_type={{ instance_type}}
image={{ image }}
wait=true
region={{ region }}
keypair={{ keypair }}
count={{count}}
register: ec2
- name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)
local_action: lineinfile
dest="./hosts"
regexp={{ item.public_ip }}
insertafter="[webserver]" line={{ item.public_ip }}
with_items: "{{ ec2.instances }}"
- name: Wait for SSH to come up
local_action: wait_for
host={{ item.public_ip }}
port=22
state=started
with_items: "{{ ec2.instances }}"
- name: Add tag to Instance(s)
local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
with_items: "{{ ec2.instances }}"
args:
tags:
Name: webserver
| Ansible | 30,227,140 | 28 |
I execute a shell: docker ps ... task in some of my playbooks. This normally works but sometimes the docker daemon hangs and docker ps does not return for ~2 hours.
How can I configure Ansible to timeout in a reasonable amount of time if docker ps does not return?
| A task timeout (in seconds) was added in the 2.10 release, which is useful in such scenarios.
https://github.com/ansible/ansible/issues/33180 --> https://github.com/ansible/ansible/pull/69284
Playbook Keywords
For example, below playbook fails in 2.10 version:
---
- hosts: localhost
connection: local
gather_facts: false
tasks:
- shell: |
while true; do
sleep 1
done
timeout: 5
...
with an error message like:
TASK [shell] **************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "The shell action failed to execute in the expected time frame (5) and was terminated"}
| Ansible | 41,947,750 | 27 |
I have the following task in my ansible playbook:
- name: Install EPEL repo.
yum:
name: "{{ epel_repo_url }}"
state: present
register: result
until: '"failed" not in result'
retries: 5
delay: 10
Another value I can pass to state is "installed". What is the difference between the two? Some documentation available here: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yum_module.html
| They do the same thing, i.e. they are aliases to each other, see this comment in the source code of the yum module:
# removed==absent, installed==present, these are accepted as aliases
And how they are used in the code:
if state in ['installed', 'present']:
if disable_gpg_check:
yum_basecmd.append('--nogpgcheck')
res = install(module, pkgs, repoq, yum_basecmd, conf_file, en_repos, dis_repos)
elif state in ['removed', 'absent']:
res = remove(module, pkgs, repoq, yum_basecmd, conf_file, en_repos, dis_repos)
elif state == 'latest':
if disable_gpg_check:
yum_basecmd.append('--nogpgcheck')
res = latest(module, pkgs, repoq, yum_basecmd, conf_file, en_repos, dis_repos)
else:
# should be caught by AnsibleModule argument_spec
module.fail_json(msg="we should never get here unless this all"
" failed", changed=False, results='', errors='unexpected state')
return res
https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/yum.py
| Ansible | 40,410,270 | 27 |
Is it possible to apply a list of items to multiple tasks in an Ansible playbook? To give an example:
- name: download and execute
hosts: server1
tasks:
- get_url: url="some-url/{{item}}" dest="/tmp/{{item}}"
with_items:
- "file1.sh"
- "file2.sh"
- shell: /tmp/{{item}} >> somelog.txt
with_items:
- "file1.sh"
- "file2.sh"
Is there some syntax to avoid the repetition of the item-list?
| As of today you can use with_items with include, so you'd need to split your playbook into two files:
- name: download and execute
hosts: server1
tasks:
- include: subtasks.yml file={{item}}
with_items:
- "file1.sh"
- "file2.sh"
and subtasks.yml:
- get_url: url="some-url/{{file}}" dest="/tmp/{{file}}"
- shell: /tmp/{{file}} >> somelog.txt
There was a request to make with_items applicable to block, but the Ansible team has said it will never be supported.
| Ansible | 39,040,521 | 27 |
Here I am trying to test my bash script, which prompts four times.
#!/bin/bash
date >/opt/prompt.txt
read -p "enter one: " one
echo $one
echo $one >>/opt/prompt.txt
read -p "enter two: " two
echo $two
echo $two >>/opt/prompt.txt
read -p "enter three: " three
echo $three
echo $three >>/opt/prompt.txt
read -p "enter password: " password
echo $password
echo $password >>/opt/prompt.txt
for this script I wrote the code below, and it is working fine
- hosts: "{{ hosts }}"
tasks:
- name: Test Script
expect:
command: sc.sh
responses:
enter one: 'one'
enter two: 'two'
enter three: 'three'
enter password: 'pass'
echo: yes
But when I do the same for the mysql_secure_installation command, it does not work:
- hosts: "{{ hosts }}"
tasks:
- name: mysql_secure_installation Command Test
expect:
command: mysql_secure_installation
responses:
'Enter current password for root (enter for none):': "\n"
'Set root password? [Y/n]:': 'y'
'New password:': '123456'
'Re-enter new password:': '123456'
'Remove anonymous users? [Y/n]:': 'y'
'Disallow root login remotely? [Y/n]:': 'y'
'Remove test database and access to it? [Y/n]:': 'y'
'Reload privilege tables now? [Y/n]:': 'y'
echo: yes
and its traceback is here:
PLAY [S1] **********************************************************************
TASK [setup] *******************************************************************
ok: [S1]
TASK [mysql_secure_installation Command Test] **********************************
fatal: [S1]: FAILED! => {"changed": true, "cmd": "mysql_secure_installation", "delta": "0:00:30.139266", "end": "2016-07-15 15:36:32.549415", "failed": true, "rc": 1, "start": "2016-07-15 15:36:02.410149", "stdout": "\r\n\r\n\r\n\r\nNOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL\r\n SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!\r\n\r\n\r\nIn order to log into MySQL to secure it, we'll need the current\r\npassword for the root user. If you've just installed MySQL, and\r\nyou haven't set the root password yet, the password will be blank,\r\nso you should just press enter here.\r\n\r\nEnter current password for root (enter for none): ", "stdout_lines": ["", "", "", "", "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL", " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", "", "", "In order to log into MySQL to secure it, we'll need the current", "password for the root user. If you've just installed MySQL, and", "you haven't set the root password yet, the password will be blank,", "so you should just press enter here.", "", "Enter current password for root (enter for none): "]}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/home/jackson/AnsibleWorkSpace/AnsibleTest/example1.retry
PLAY RECAP *********************************************************************
S1 : ok=1 changed=0 unreachable=0 failed=1
I have also tried a blank '' instead of "\n" for the first answer, but it does not work either. I also visited the Ansible expect docs, but they show only a very simple example and explanation. I have also tried regex matches for the multiple different responses, but that does not work either.
Please do not recommend the mysql modules of Ansible; my purpose here is to learn the expect module for future use.
| The reason is that the prompts (the keys under responses) are interpreted as regexps. Hence you must escape characters that have a special meaning in regular expressions, such as -()[]\?*, et cetera.
Hence:
'Enter current password for root (enter for none):'
should instead be:
'Enter current password for root \(enter for none\):'
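For completeness, a hedged sketch of the full responses block with every prompt escaped the same way (prompt texts taken from your playbook; adjust them to whatever your MySQL version actually prints):
responses:
  'Enter current password for root \(enter for none\):': "\n"
  'Set root password\? \[Y/n\]:': 'y'
  'New password:': '123456'
  'Re-enter new password:': '123456'
  'Remove anonymous users\? \[Y/n\]:': 'y'
  'Disallow root login remotely\? \[Y/n\]:': 'y'
  'Remove test database and access to it\? \[Y/n\]:': 'y'
  'Reload privilege tables now\? \[Y/n\]:': 'y'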
Good luck!
| Ansible | 38,393,343 | 27 |
Is it possible to skip some items in an Ansible with_items loop, based on a conditional, without generating an additional step?
Just for example:
- name: test task
command: touch "{{ item.item }}"
with_items:
- { item: "1" }
- { item: "2", when: "test_var is defined" }
- { item: "3" }
in this task I want to create file 2 only when test_var is defined.
| The other answer is close but will skip all items != 2. I don't think that's what you want. Here's what I would do:
- hosts: localhost
tasks:
- debug: msg="touch {{item.id}}"
with_items:
- { id: 1 }
- { id: 2 , create: "{{ test_var is defined }}" }
- { id: 3 }
when: item.create | default(True) | bool
| Ansible | 37,189,826 | 27 |
I used an ansible playbook to install git:
---
- hosts: "www"
tasks:
- name: Update apt repo
apt: update_cache=yes
- name: Install dependencies
apt: name={{item}} state=installed
with_items:
- git
I checked the installed versions:
$ git --version
git version 1.9.1
But adding these to the ansible playbook: apt: name=git=1.9.1 state=installed
and rerunning results in the following error:
fatal: [46.101.94.110]: FAILED! => {"cache_update_time": 0,
"cache_updated": false, "changed": false, "failed": true, "msg":
"'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o
"Dpkg::Options::=--force-confold" install 'git=1.9.1'' failed:
E: Version '1.9.1' for 'git' was not found\n", "stderr": "E: Version
'1.9.1' for 'git' was not found\n", "stdout": "Reading package
lists...\nBuilding dependency tree...\nReading state
information...\n", "stdout_lines": ["Reading package lists...",
"Building dependency tree...", "Reading state information..."]}
| Git package with that specific version is as follows:
git=1:1.9.1-1ubuntu0.2
Your task should be:
apt: name=git=1:1.9.1-1ubuntu0.2 state=present
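If you are unsure which exact version string your configured repositories provide, you can check it on the target first; the full string, including the 1: epoch prefix, is what the apt module expects:
apt-cache policy git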
Regards
| Ansible | 36,150,362 | 27 |
I am using the host file as below,
[qa-workstations]
10.39.19.190 ansible_user=test ansible_ssh_pass=test
I am using below command to execute "whoami" command in host
root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host
10.39.19.190 | success | rc=0 >>
root
Ansible by default tries to use the user name I am logged in as, i.e. root, instead of the test user I specified in the host file.
It works fine when I try to pass the username in ansible cli command
root@Svr:~/ansible# ansible all -a "whoami" -i /etc/ansible/host -u test
10.39.19.190 | success | rc=0 >>
test
But it's not possible for me to pass the username every time on the CLI, as different hosts use different usernames. Also, I don't have a key pair generated for each host, because the host machines keep changing often.
version used:
ansible 1.5.4
Ubuntu 14.04 LTS
| With recent versions of Ansible, you can use the ansible_user parameter in the host definition.
For example, on the host mysql-host.mydomain the user I need to connect with is mysql :
[docker-hosts]
mysql-host.mydomain ansible_user=mysql
But as you are using an older version of ansible, you might need to use ansible_ssh_user instead
http://docs.ansible.com/ansible/faq.html#how-do-i-handle-different-machines-needing-different-user-accounts-or-ports-to-log-in-with
| Ansible | 34,334,377 | 27 |
I have a playbook with multiple hosts sections. I would like to define a variable in this playbook.yml file that applies only within the file, for example:
vars:
my_global_var: 'hello'
- hosts: db
tasks:
-shell: echo {{my_global_var}}
- hosts: web
tasks:
-shell: echo {{my_global_var}}
The example above does not work. I have to either duplicate the variable for each hosts section (bad) or define it at a higher level, for example in my group_vars/all (not what I want, but works). I am also aware that variable files can be included, but this affects readability. Any suggestion to get it in the right scope (e.g. the playbook file itself)?
| The set_fact module will accomplish this if group_vars don't suit your needs.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/set_fact_module.html
This module allows setting new variables. Variables are set on a host-by-host basis just like facts discovered by the setup module. These variables will survive between plays during an Ansible run, but will not be saved across executions even if you use a fact cache.
- hosts: db:web
tasks:
- set_fact: my_global_var='hello'
- hosts: db
tasks:
-shell: echo {{my_global_var}}
- hosts: web
tasks:
-shell: echo {{my_global_var}}
| Ansible | 33,992,153 | 27 |
I was given a task to verify some routing entries for all Linux servers, and here is how I did it using an Ansible playbook:
---
- hosts: Linux
serial: 1
tasks:
- name: Check first
command: /sbin/ip route list xxx.xxx.xxx.xxx/24
register: result
changed_when: false
- debug: msg="{{result.stdout}}"
- name: Check second
command: /sbin/ip route list xxx.xxx.xxx.xxx/24
register: result
changed_when: false
- debug: msg="{{result.stdout}}"
You can see I have to repeat the same task for each routing entry, and I believe I should be able to avoid this. I tried to use a with_items loop but got the following error message:
One or more undefined variables: 'dict object' has no attribute 'stdout'
Is there a way to register a variable for each command and loop over them one by one?
| Starting in Ansible 1.6.1, the results registered with multiple items are stored in result.results as an array. So you can use result.results[0].stdout and so on.
Testing playbook:
---
- hosts: localhost
gather_facts: no
tasks:
- command: "echo {{item}}"
register: result
with_items: [1, 2]
- debug:
var: result
Result:
$ ansible-playbook -i localhost, test.yml
PLAY [localhost] **************************************************************
TASK: [command echo {{item}}] *************************************************
changed: [localhost] => (item=1)
changed: [localhost] => (item=2)
TASK: [debug ] ****************************************************************
ok: [localhost] => {
"var": {
"result": {
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": [
"echo",
"1"
],
"delta": "0:00:00.002502",
"end": "2015-08-07 16:44:08.901313",
"invocation": {
"module_args": "echo 1",
"module_name": "command"
},
"item": 1,
"rc": 0,
"start": "2015-08-07 16:44:08.898811",
"stderr": "",
"stdout": "1",
"stdout_lines": [
"1"
],
"warnings": []
},
{
"changed": true,
"cmd": [
"echo",
"2"
],
"delta": "0:00:00.002516",
"end": "2015-08-07 16:44:09.038458",
"invocation": {
"module_args": "echo 2",
"module_name": "command"
},
"item": 2,
"rc": 0,
"start": "2015-08-07 16:44:09.035942",
"stderr": "",
"stdout": "2",
"stdout_lines": [
"2"
],
"warnings": []
}
]
}
}
}
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
| Ansible | 31,881,762 | 27 |
I have a very simple line in the template:
ip={{ip|join(', ')}}
And I have a list for ip:
ip:
- 1.1.1.1
- 2.2.2.2
- 3.3.3.3
But the application wants the IPs with quotes (ip='1.1.1.1', '2.2.2.2').
I can do it like this:
ip:
- "'1.1.1.1'"
- "'2.2.2.2'"
- "'3.3.3.3'"
But it is very ugly. Is there any nice way to add quotes around each element of the list in Ansible?
Thanks!
| This will work :
ip={{ '\"' + ip|join('\", \"') + '\"' }}
A custom filter plugin will also work. In ansible.cfg uncomment filter_plugins and give it a path, where we put this
def wrap(list):
return [ '"' + x + '"' for x in list]
class FilterModule(object):
def filters(self):
return {
'wrap': wrap
}
in a file called core.py. Then you can simply use
ip|wrap|join(', ')
And it should produce a comma-separated list with each IP wrapped in quotes.
| Ansible | 29,537,684 | 27 |
I'm struggling with a pattern pulling inventory vars in Ansible templates, please help. :)
I'm setting up a monitoring server, and I want to be able to automatically provision the servers using Ansible. I'm struggling with the loops in the template that would allow me to do this.
My semi-working solution so far: in the playbook that calls the template task I have:
monitoringserver.yml
vars:
servers_to_monitor:
- {cname: web1, ip_address: 192.168.33.111}
- {cname: web2, ip_address: 192.168.33.112}
- {cname: db1, ip_address: 192.168.33.211}
- {cname: db2, ip_address: 192.168.33.212}
template.yml
all_hosts += [
{% for host in servers_to_monitor %}
"{{ host.cname }}{{ host.ip }}|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
{% endfor %}
]
But this isn't ideal, as I can't define different IP addresses for the different servers to be monitored. How have other people done this? I'm sure it must be trivial, but my brain's struggling with the syntax.
Thanks
Alan
edit: To clarify the resulting template looks something like this:
all_hosts += [
"web1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
"web2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
"db1|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
"db2|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
]
What I would like is the values web1/web2/db1/db2 to be different depending on whether I'm using a production inventory file or a development inventory file.
| Ideally you would be using different inventory files for production and staging, which would allow you to keep the same {{ inventory_hostname }} value, but target different machines.
You can also loop through different groups...
hosts:
[web]
web1
web2
[db]
db1
db2
playbook:
- name: play that sets a group to loop over
vars:
servers_to_monitor: "{{ groups['db'] }}"
tasks:
- template:
src: set-vars.j2
dest: set-vars.js
template:
all_hosts += [
{% for host in servers_to_monitor %}
"{{ hostvars[host].inventory_hostname }}{{ hostvars[host].ansible_default_ipv4.address }}|cmk-agent|prod|lan|tcp|wato|/" + FOLDER_PATH + "/",
{% endfor %}
]
| Ansible | 26,989,492 | 27 |
---
- hosts: test
tasks:
- name: print phone details
debug: msg="user {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
with_dict: "{{ users }}"
vars:
users:
alice: "Alice"
telephone: 123
When I run this playbook, I am getting this error:
One or more undefined variables: 'dict object' has no attribute 'name'
This one actually works just fine:
debug: msg="user {{ item.key }} is {{ item.value }}"
What am I missing?
| This is not the exact same code. If you look carefully at the example, you'll see that under users, you have several dicts.
In your case, you have two dicts but with just one key (alice, or telephone) with respective values of "Alice", 123.
You'd rather do :
- hosts: localhost
gather_facts: no
tasks:
- name: print phone details
debug: msg="user {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
with_dict: "{{ users }}"
vars:
users:
alice:
name: "Alice"
telephone: 123
(note that I changed host to localhost so I can run it easily, and added gather_facts: no since it's not necessary here. YMMV.)
| Ansible | 26,639,325 | 27 |
In a playbook I have the following code:
---
- hosts: db
vars:
postgresql_ext_install_contrib: yes
postgresql_pg_hba_passwd_hosts: ['10.129.181.241/32']
...
I would like to replace the value of postgresql_pg_hba_passwd_hosts with all of my webservers' private IPs. I understand I can get the values like this in a template:
{% for host in groups['web'] %}
{{ hostvars[host]['ansible_eth1']['ipv4']['address'] }}
{% endfor %}
What is the simplest/easiest way to assign the result of this loop to a variable in a playbook? Or is there a better way to collect this information in the first place? Should I put this loop in a template?
Additional challenge: I'd have to add /32 to every entry.
| You can assign a list to a variable with set_fact and an Ansible filter plugin.
Put the custom filter plugin into the filter_plugins directory like this:
(ansible top directory)
site.yml
hosts
filter_plugins/
to_group_vars.py
to_group_vars.py converts hostvars into a list of the hosts selected by group.
from ansible import errors, runner
import json
def to_group_vars(host_vars, groups, target = 'all'):
if type(host_vars) != runner.HostVars:
raise errors.AnsibleFilterError("|failed expects a HostVars")
if type(groups) != dict:
raise errors.AnsibleFilterError("|failed expects a Dictionary")
data = []
for host in groups[target]:
data.append(host_vars[host])
return data
class FilterModule (object):
def filters(self):
return {"to_group_vars": to_group_vars}
Use like this:
---
- hosts: all
tasks:
- set_fact:
web_ips: "{{hostvars|to_group_vars(groups, 'web')|map(attribute='ansible_eth0.ipv4.address')|list }}"
- debug:
msg: "web ip is {{item}}/32"
with_items: web_ips
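As a side note, on Ansible 2.1 and later you can usually build the same list without a custom plugin, using the built-in extract filter. A sketch (untested, and using eth1 as in the original question):
- hosts: all
  tasks:
    - set_fact:
        web_ips: "{{ groups['web'] | map('extract', hostvars, ['ansible_eth1', 'ipv4', 'address']) | list }}"
    - debug:
        msg: "web ip is {{ item }}/32"
      with_items: "{{ web_ips }}"
Appending the /32 in the message (or in the template) keeps the list itself as plain addresses.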
| Ansible | 24,798,382 | 27 |
I'm new to ansible (and docker). I would like to test my ansible playbook before using it on any staging/production servers.
Since I don't have access to an empty remote server, I thought the easiest way to test would be to use Docker container and then just run my playbook with the Docker container as the host.
I have a basic DockerFile that creates a standard ubuntu container. How would I configure the ansible hosts in order to run it against the docker container? Also, I suspect I would need to "run" the docker container to allow ansible to connect to it.
| Running the playbook in a Docker container may not actually be the best approach unless your staging and production servers are also Docker containers. The Docker ubuntu image is stripped down and will have some differences from a full installation. A better option might be to run the playbook in an Ubuntu VM that matches your staging and production installations.
That said, in order to run the ansible playbook within the container you should write a Dockerfile that runs your playbook. Here's a sample Dockerfile:
# Start with the ubuntu image
FROM ubuntu
# Update apt cache
RUN apt-get -y update
# Install ansible dependencies
RUN apt-get install -y python-yaml python-jinja2 git
# Clone ansible repo (could also add the ansible PPA and do an apt-get install instead)
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
# Set variables for ansible
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin
ENV ANSIBLE_LIBRARY /tmp/ansible/library
ENV PYTHONPATH /tmp/ansible/lib:$PYTHONPATH
# add playbooks to the image. This might be a git repo instead
ADD playbooks/ /etc/ansible/
ADD inventory /etc/ansible/hosts
WORKDIR /etc/ansible
# Run ansible using the site.yml playbook
RUN ansible-playbook /etc/ansible/site.yml -c local
The ansible inventory file would look like
[local]
localhost
Then you can just docker build . (where . is the root of the directory where your playbooks and Dockerfile live), then docker run on the resulting image.
Michael DeHaan, the CTO of Ansible, has an informative blog post on this topic.
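As an aside, if you would rather keep Ansible on your workstation and only use the container as a disposable target, Ansible 2.0+ also ships a docker connection plugin. Something along these lines should work; the container name and playbook name are assumptions, and the image must have Python available for most modules:
docker run -d --name ansible-test ubuntu sleep infinity
ansible-playbook -i 'ansible-test,' -c docker site.yml
The trailing comma tells Ansible that the -i argument is an inline host list rather than an inventory file.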
| Ansible | 24,738,264 | 27 |
In my playbooks I reference username (exclusively its "ubuntu") a lot.
Is there a built in way to say "get it from the value passed in the command line"?
I know I can do
ansible-playbook <task> -u <user> -K --extra-vars "user=<user>"
and then I can use {{user}} in the playbook, but it feels odd defining the user twice.
| As Woodham stated, the ansible variable that represents the connecting user is
{{ ansible_user }} (Ansible < 2.0 was {{ ansible_ssh_user }} )
But you don't have to define it in the inventory file per se.
You can define it in:
1. Your play, if you use ansible-playbook:
See the manual on Playbooks
- name: Some play
hosts: all
remote_user: ubuntu
2. In the inventory file:
See the manual on inventory
[all]
other1.example.com ansible_user=ubuntu (Ansible < 2.0 was ansible_ssh_user)
3. As you stated, on the commandline:
ansible-playbook -i inventory -u ubuntu playbook.yml
4. In an Ansible config file, as a remote_user directive.
See the manual on a config file
The Ansible config file can be placed in the current folder (ansible.cfg), in your home directory (~/.ansible.cfg) or system-wide (/etc/ansible/ansible.cfg).
[defaults]
remote_user=ubuntu
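Whichever of these you choose, the play itself can then reference the value instead of hard-coding it a second time. A minimal sketch; note that {{ ansible_user }} is only defined when the user is set in inventory, a play or the config, whereas the gathered fact {{ ansible_user_id }} reports the account Ansible actually connected as, even when it only came from -u on the command line:
- hosts: all
  tasks:
    - debug:
        msg: "connected as {{ ansible_user | default(ansible_user_id) }}"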
| Ansible | 24,095,807 | 27 |