Dataset columns: question (string, lengths 11 to 28.2k), answer (string, lengths 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I am using the Terraform archive_file provider to package multiple files into a zip file. It works fine when I define the archive like this: data "archive_file" "archive" { type = "zip" output_path = "./${var.name}.zip" source_dir = "${var.source_dir}" } However, I don't want the archive to contain all of the files in var.source_dir, I only want a subset of them. I notice the archive_file provider has a source_file attribute, so I was hoping I could supply a list of those files and package them into the archive like so: locals { source_files = ["${var.source_dir}/foo.txt", "${var.source_dir}/bar.txt"] } data "archive_file" "archive" { type = "zip" output_path = "./${var.name}.zip" count = "2" source_file = "${local.source_files[count.index]}" } but that doesn't work: the archive gets built for each file defined in local.source_files, hence I have a "last one wins" scenario where the archive file that gets built only contains bar.txt. I tried this: locals { source_files = ["${var.source_dir}/main.py", "${var.source_dir}/requirements.txt"] } data "archive_file" "archive" { type = "zip" output_path = "./${var.name}.zip" source_file = "${local.source_files}" } but unsurprisingly that failed with: data.archive_file.archive: source_file must be a single value, not a list Is there a way to achieve what I'm after here, i.e. pass a list of files to the archive_file provider and have it package all of them into the archive file?
---- Thanks jamiet, I modified as your comment ---- copy files to temp dir and archive them locals { source_files = ["${var.source_dir}/main.py", "${var.source_dir}/requirements.txt"] } data "template_file" "t_file" { count = "${length(local.source_files)}" template = "${file(element(local.source_files, count.index))}" } resource "local_file" "to_temp_dir" { count = "${length(local.source_files)}" filename = "${path.module}/temp/${basename(element(local.source_files, count.index))}" content = "${element(data.template_file.t_file.*.rendered, count.index)}" } data "archive_file" "archive" { type = "zip" output_path = "${path.module}/${var.name}.zip" source_dir = "${path.module}/temp" depends_on = [ "local_file.to_temp_dir", ] } use source of archive_file locals { source_files = ["${var.source_dir}/main.py", "${var.source_dir}/requirements.txt"] } data "template_file" "t_file" { count = "${length(local.source_files)}" template = "${file(element(local.source_files, count.index))}" } data "archive_file" "archive" { type = "zip" output_path = "./${var.name}.zip" source { filename = "${basename(local.source_files[0])}" content = "${data.template_file.t_file.0.rendered}" } source { filename = "${basename(local.source_files[1])}" content = "${data.template_file.t_file.1.rendered}" } } create shell script and call it using external data resource. locals { source_files = ["${var.source_dir}/main.py", "${var.source_dir}/requirements.txt"] } data "template_file" "zip_sh" { template = <<EOF #!/bin/bash zip $* %1>/dev/null %2>/dev/null echo '{"result":"success"}' EOF } resource "local_file" "zip_sh" { filename = "${path.module}/zip.sh" content = "${data.template_file.zip_sh.rendered}" } data "external" "zip_sh" { program = ["${local_file.zip_sh.filename}", "${var.name}", "${join(" ", local.source_files)}"] depends_on = [ "data.template_file.zip_sh", ] }
Terraform
56,916,719
15
I am trying to set up my infrastructure properly with no passwords or keys laying around. AWS RDS has an option to do so, by enabling users(applications) to authenticate with generated tokens. However, in the documentation, one of the steps(this one) requires running a query in the Postgres database to create an user and grant him specific permissions: CREATE USER test_rds WITH LOGIN; GRANT rds_iam TO test_rds; I want to provision entire stack with Terraform. I have looked up some "hacks" to run the query (here) after RDS instantiation by either using: resource "null_resource" "db_setup" { depends_on = ["aws_db_instance.your_database_instance", "aws_security_group.sg_allowing_external_access"] provisioner "local-exec" { // run shell commands to manually psql into the db or: resource "aws_instance" "web" { provisioner "remote-exec" { inline = [ // run shell commands to manually psql into the db but both of them require creating master-password and somehow delivering it inside the "scripts". Is it possible to do that with Terraform cleanly, with no hardcoded passwords getting passed around? I would love to provision the database and enable only specific EC2/ECS instances with correct permissions to access it, without any passwords in my git repository.
Once you enable IAM authentication for an RDS database user/role, you are no longer able to use password-based authentication for that user/role. This means you can either use a less secure password or even just generate a random password (using the random_id resource) that you use to set the master password and authenticate with once, so that you can grant the rds_iam permissions to the master user and any other users you create. While this password will end up in the state file (even if randomly generated), as mentioned, once the rds_iam grant has been applied you won't be able to use this password to log in to your database.
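As a minimal sketch of that approach (the resource names and most arguments here are illustrative, not taken from the question), you could generate the throwaway master password with random_id and enable IAM authentication on the instance:

resource "random_id" "master_password" {
  byte_length = 16
}

resource "aws_db_instance" "your_database_instance" {
  # ... engine, instance_class, storage, networking, etc.
  username                            = "master"
  password                            = "${random_id.master_password.hex}"
  iam_database_authentication_enabled = true
}

The generated value still ends up in the state file, but as noted above, once rds_iam has been granted it can no longer be used to log in.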
Terraform
55,834,290
15
There is a repeatable configuration that I see in many Terraform projects where the provider is AWS: The configuration of an outbound (egress) rule to allow ALL outbound traffic. As far as I understand, this is the default behavior in AWS as mentioned in the AWS user guide: By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed. An example for a common Terraform setup for security group - The focus of my question is the egress block: resource "aws_security_group" "my_sg" { name = "my_sg" description = "Some description" vpc_id = "${aws_vpc.my_vpc.id}" tags { Name = "my_sg_tag" } #Not redundant - Because a new security group has no inbound rules. ingress { from_port = "80" to_port = "80" protocol = "TCP" cidr_blocks = ["0.0.0.0/0"] } #Isn't this redundant? egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } Is this configuration being made for documentation or does it have a technical reason?
The documentation for the aws_security_group resource specifically states that they remove AWS' default egress rule intentionally by default and require users to specify it to limit surprises to users: NOTE on Egress rules: By default, AWS creates an ALLOW ALL egress rule when creating a new Security Group inside of a VPC. When creating a new Security Group inside a VPC, Terraform will remove this default rule, and require you specifically re-create it if you desire that rule. We feel this leads to fewer surprises in terms of controlling your egress rules. If you desire this rule to be in place, you can use this egress block: egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } There's also a technical/UX reason here in that it would be tricky to make Terraform understand whether it should keep the allow all egress rule when making changes to the security group. Should it always provide the allow all egress rule unless another egress rule is specified and then if so remove the default? How would that work with the combination of the aws_security_group_rule resource? AWS have made the decision that a default rule to allow all egress outbound is a nicer user experience than not having it (and confusing people as to why their instance is unable to communicate outbound) without too much of a security impact (compared to the equivalent for inbound). Even if they were to change their mind on the benefit of this now they would be unable to do this without massively breaking a lot of people's setups/workflows which AWS is very reluctant to do. Terraform, on the other hand, has made the decision the other way and that suits the tool better as well as slightly improving the security posture of the tool at the expense of making people define a repeated egress block in a lot of places. If you particularly care about the repetition and you do always want to allow all egress traffic then you might find it useful to use a module instead that automatically includes an allow all egress rule.
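As a rough sketch of that module approach (the module layout and variable names here are my own, not an existing registry module), you could bake the allow-all egress rule into a small local module and reuse it everywhere:

# modules/sg-open-egress/main.tf
variable "name" {}
variable "vpc_id" {}

resource "aws_security_group" "this" {
  name   = "${var.name}"
  vpc_id = "${var.vpc_id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "id" {
  value = "${aws_security_group.this.id}"
}

# caller
module "my_sg" {
  source = "./modules/sg-open-egress"
  name   = "my_sg"
  vpc_id = "${aws_vpc.my_vpc.id}"
}

Ingress rules can then be attached to module.my_sg.id with separate aws_security_group_rule resources, so the repeated egress block lives in only one place.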
Terraform
55,023,605
15
I have a following (simplified) Terraform code: variable "cluster_id" { default = 1 } resource "aws_instance" "instance" { ... some instance properties ... tags { "Name" = "${format("cluster-%02d", var.cluster_id)}" } } And when I run terraform apply the plan shows: tags.Name: "%!d(string=1)" The cluster_id in format() is not handled as a number so formatting fails. I would expect that I get cluster-01 but that's not the case. Am I doing something wrong or is it really not possible to use custom variables as numbers in formatting?
Terraform, pre 0.12, only supports string, list and map types as input variables, so even though you provide an integer (or a float or a boolean) it will be cast to a string. Both Terraform and Go allow you to use the same padding for integers and strings though, so you can just use the following to zero-pad the cluster_id: resource "aws_instance" "instance" { # ... some instance properties ... tags { "Name" = "${format("cluster-%02s", var.cluster_id)}" } } Specifically you use the format function, passing a format string as the first argument, then a list of values for the remaining arguments. The format string should contain a % symbol for each place where you want one of the parameters to be substituted (the first % symbol is replaced by the first parameter after the format string, the second by the second, etc). After the % symbol, to say that you want the parameter to be formatted with leading zeros, include a 0. Next, say how many digits to display; e.g. to pad to 5 characters put a 5. Finally put an s to say that this should be output as a string. So that format string would be %05s, and we'd expect 1 additional parameter to be passed to take the place of this placeholder. Example: output "demo" { value = "${format("%04s %04s %02s", 93, 2, 7)}" } Output: demo = "0093 0002 07"
Terraform
54,771,137
15
Using Terraform, I am trying to add a keyvault access policy to an application (that is also created in Terraform), which requires an object_it (which is GUID) of that application. In ARM template it looks like this: "objectId": "[reference(variables('myAppResourceId'), '2015-08-31-PREVIEW').principalId]" so Terraform needs the principal id there to be assigned to the object_id. If I use the value "object_id = ${azurerm_app_service.myApp.id}" like this: resource "azurerm_key_vault_access_policy" "pol1" { vault_name = "${azurerm_key_vault.kv1.name}" resource_group_name = "${azurerm_key_vault.kv1.resource_group_name}" tenant_id = "${data.azurerm_subscription.current.subscription_id}" object_id = "${azurerm_app_service.myApp.id}" key_permissions = "${var.app_keys_permissions}" secret_permissions = "${var.app_secrets_permissions}" } then when I run apply command, I get the following error: azurerm_key_vault_access_policy.pol1: "object_id" is an invalid UUUID: encoding/hex: invalid byte: U+002F '/' this is probably the id that looks like an url with a slash,so this does not work, since I need the GUID only. I tried also a suggestion from Terraform grant azure function app with msi access to azure keyvault, by using object_id = "${lookup(azurerm_app_service.app1.identity[0],"principal_id")}" for an app service instead of the function and I get an error: azurerm_key_vault_access_policy.appPolicy1: At column 43, line 1: list "azurerm_app_service.app1.identity" does not have any elements so cannot determine type. in: ${lookup(azurerm_app_service.app1.identity[0],"principal_id")} could someone help me with this object_id please? thanks
If you read the description of the azurerm_key_vault_access_policy property object_id, you will see that it can be the principal ID of the web app. The azurerm_app_service.myApp.id that you put in is not the principal ID, it's the App Service resource ID. You should use the principal ID associated with your web app, i.e. azurerm_app_service.myApp.identity.0.principal_id. Take a look at the Attributes of the App Service resource. Hope this will help you. However, something not mentioned in the documentation is the need to specify an identity block in your app_service declaration. identity { type = "SystemAssigned" } If you don't specify it, you might get an empty list as the identity attribute.
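Putting the two pieces together, a minimal sketch might look like this (most access-policy arguments are taken from the question; the identity block and object_id are the point here, and the use of tenant_id rather than subscription_id is an assumption of mine to verify against your setup):

resource "azurerm_app_service" "myApp" {
  # ... app service plan, resource group, etc.

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "pol1" {
  vault_name          = "${azurerm_key_vault.kv1.name}"
  resource_group_name = "${azurerm_key_vault.kv1.resource_group_name}"
  tenant_id           = "${data.azurerm_subscription.current.tenant_id}"
  object_id           = "${azurerm_app_service.myApp.identity.0.principal_id}"
  key_permissions     = "${var.app_keys_permissions}"
  secret_permissions  = "${var.app_secrets_permissions}"
}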
Terraform
54,189,450
15
I'm using azurerm_virtual_machine_extension to bootstrap some virtual machines in azure. All examples i've found show using something similar to: settings = <<SETTINGS { "fileUris": [ "https://my.bootstrapscript.com/script.sh}" ], "commandToExecute": "bash script.sh" } SETTINGS While this works, my issue is i'm having to publicly host script for use with fileUris. Is there an option within settings that will allow me to send local file contents from my terraform folder? Something like: settings = <<SETTINGS { "file": [ ${file("./script.txt")} ], "commandToExecute": "bash script.sh" } SETTINGS Thanks.
Yes We Can! Introduction In protected_settings, use "script". Scripts terraform script provider "azurerm" { } resource "azurerm_virtual_machine_extension" "vmext" { resource_group_name = "${var.resource_group_name}" location = "${var.location}" name = "${var.hostname}-vmext" virtual_machine_name = "${var.hostname}" publisher = "Microsoft.Azure.Extensions" type = "CustomScript" type_handler_version = "2.0" protected_settings = <<PROT { "script": "${base64encode(file(var.scfile))}" } PROT } variables variable resource_group_name { type = string default = "ORA" } variable location { type = string default = "eastus" } variable hostname { type = string default = "ora" } variable scfile{ type = string default = "yum.bash" } bash script #!/bin/bash mkdir -p ~/download cd ~/download wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm rpm -ivh epel-release-latest-7.noarch.rpm yum -y install cowsay cowsay ExaGridDba Output apply [terraform@terra stackoverflow]$ terraform apply An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # azurerm_virtual_machine_extension.vmex0 will be created + resource "azurerm_virtual_machine_extension" "vmex0" { + id = (known after apply) + location = "eastus" + name = "ora-vmext" + protected_settings = (sensitive value) + publisher = "Microsoft.Azure.Extensions" + resource_group_name = "ORA" + tags = (known after apply) + type = "CustomScript" + type_handler_version = "2.0" + virtual_machine_name = "ora" } Plan: 1 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes azurerm_virtual_machine_extension.vmex0: Creating... azurerm_virtual_machine_extension.vmex0: Still creating... [10s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [20s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [30s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [40s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [50s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m0s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m10s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m20s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m30s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m40s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [1m50s elapsed] azurerm_virtual_machine_extension.vmex0: Still creating... [2m0s elapsed] azurerm_virtual_machine_extension.vmex0: Creation complete after 2m1s [id=/subscriptions/7fe8a9c3-0812-42e2-9733-3f567308a0d0/resourceGroups/ORA/providers/Microsoft.Compute/virtualMachines/ora/extensions/ora-vmext] Apply complete! Resources: 1 added, 0 changed, 0 destroyed. stdout on the target [root@ora ~]# cat /var/lib/waagent/custom-script/download/0/stdout Preparing... ######################################## Updating / installing... 
epel-release-7-12 ######################################## Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package cowsay.noarch 0:3.04-4.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: cowsay noarch 3.04-4.el7 epel 42 k Transaction Summary ================================================================================ Install 1 Package Total download size: 42 k Installed size: 77 k Downloading packages: Public key for cowsay-3.04-4.el7.noarch.rpm is not installed Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : cowsay-3.04-4.el7.noarch 1/1 Verifying : cowsay-3.04-4.el7.noarch 1/1 Installed: cowsay.noarch 0:3.04-4.el7 Complete! < ExaGridDba > ------------ \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || Remarks The script size limit is 262144 bytes base64 encoded, or 196608 bytes. "#!" determines the interpreter. "#!/bin/python" would start a python script. These azurerm_virtual_machine_extension parameters are not required: settings fileUris commandToExecute storageAccountName storageAccountKey protected_settings parameter "script" might not be mentioned in the Terraform documentation. Please refer to Use the Azure Custom Script Extension Version 2 with Linux virtual machines azurerm_virtual_machine_extension may be used during VM creation, or as a standalone administrative tool. Conclusion In Azure VM, it is possible to run a script without referring to a blob storage account.
Terraform
54,088,476
15
I have successfully created an autoscaling group using Terraform. I would like to now find a way to dynamically name the provisioned instances based on index value. For an aws_instance type, it can be easily done by: resource "aws_instance" "bar" { count = 3 tags { Name = "${var.instance_name_gridNode}${count.index + 1}" App-code = "${var.app-code}" PC-code = "${var.pc-code}" } } This will result in 3 instances named: 1) Node1 2) Node2 3) Node3 However as aws_autoscaling_group is dynamically provisioned (for both scaling in and out situations) how does one control the naming convention of the provisioned instances? resource "aws_autoscaling_group" "gridrouter_asg" { name = "mygridrouter" launch_configuration = "${aws_launch_configuration.gridGgr_lcfg.id}" min_size = 1 max_size = 2 health_check_grace_period = 150 desired_capacity = 1 vpc_zone_identifier = ["${var.subnet_id}"] health_check_type = "EC2" tags = [ { key = "Name" value = "${var.instance_name_gridGgr_auto}" propagate_at_launch = true }, ] }
AWS autoscaling groups can be tagged, as with many resources, and using the propagate_at_launch flag those tags will also be passed to the instances that the group creates. Unfortunately these tags are entirely static and the ASG itself has no way of tagging instances differently from each other. On top of this, the default scale-in policy will not remove the newest instances first, so even if you did tag your instances as Node1, Node2, Node3, then when the autoscaling group scaled in it's most likely (depending on criteria) to remove Node1, leaving you with Node2 and Node3. While it's possible to change the termination policy to NewestInstance so that it would remove Node3, this is unlikely to be an optimal scale-in policy. I'd question why you feel you need to tag ASG instances differently from each other, and maybe rethink how you manage your instances when they are more ephemeral, as is generally the case in modern clouds but more so when using autoscaling groups. If you really did want to tag instances differently for some specific reason, you could have the ASG not propagate the Name tag at launch and then have a Lambda function trigger on the scale-out event (either via a lifecycle hook or a CloudWatch event) to determine the tag value to use and then tag the instance with it.
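A rough sketch of the event wiring for that last option might look like the following (the Lambda function itself, and its IAM permission to call ec2:CreateTags, are assumed to exist elsewhere and are not shown):

resource "aws_cloudwatch_event_rule" "asg_launch" {
  name = "asg-instance-launch"

  event_pattern = <<PATTERN
{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance Launch Successful"],
  "detail": {
    "AutoScalingGroupName": ["${aws_autoscaling_group.gridrouter_asg.name}"]
  }
}
PATTERN
}

resource "aws_cloudwatch_event_target" "tag_instance" {
  rule = "${aws_cloudwatch_event_rule.asg_launch.name}"
  arn  = "${aws_lambda_function.tag_instance.arn}"
}

resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.tag_instance.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.asg_launch.arn}"
}

The function receives the instance ID in the event detail and can work out the next free NodeN name before tagging the instance.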
Terraform
50,502,919
15
I've got a Terraform module like this: module "helloworld" { source = "../service" } and ../service contains: resource "aws_cloudwatch_metric_alarm" "cpu_max" { comparison_operator = "GreaterThanOrEqualToThreshold" evaluation_periods = "2" ... etc } How do you override the service variables comparison_operator and evaluation_periods in your module? E.g. to set cpu_max to 4 is it as simple as aws_cloudwatch_metric_alarm .cpu_max.evaluation_periods = 4 in your module?
You have to use a variable with a default value. variable "evaluation_periods" { default = 4 } resource "aws_cloudwatch_metric_alarm" "cpu_max" { comparison_operator = "GreaterThanOrEqualToThreshold" evaluation_periods = "${var.evaluation_periods}" } And in your module module "helloworld" { source = "../service" evaluation_periods = 2 }
Terraform
45,097,380
15
I am trying to build out our AWS environments using Terraform but am hitting some issues scaling. I have a repository of just modules that I want to use repeatedly when building my environments and a second repository just to handle the actual implementations of those modules. I am aware of HashiCorp's Github page that has an example but there, each environment is one state file. I want to split environments out but then have multiple state files within each environment. When the state files get big, applying small updates takes way too long. Every example I've seen where multiple state files are used, the Terraform files are extremely un-DRY and not ideal. I would prefer to be able to have different variable values between environments but have the same configuration. Has anyone ever done anything like this? Am I missing something? I'm a bit frustrated because every Terraform example is never at scale and it makes it hard for n00b such as myself to start down the right path. Any help or suggestions is very much appreciated!
The idea of environment unfortunately tends to mean different things to different people and organizations. To some, it's simply creating multiple copies of some infrastructure -- possibly only temporary, or possibly long-lived -- to allow for testing and experimentation in one without affecting another (probably production) environment. For others, it's a first-class construct in a deployment architecture, with the environment serving as a container into which other applications and infrastructure are deployed. In this case, there are often multiple separate Terraform configurations that each have a set of resources in each environment, sharing data to create a larger system from smaller parts. Terraform has a feature called State Environments that serves the first of these use-cases by allowing multiple named states to exist concurrently for a given configuration, and allowing the user to switch between them using the terraform env commands to focus change operations on a particular state. The State Environments feature alone is not sufficient for the second use-case, since it only deals with multiple states in a single configuration. However, it can be used in conjunction with other Terraform features, making use of the ${terraform.env} interpolation value to deal with differences, to allow multiple state environments within a single configuration to interact with a corresponding set of state environments within another configuration. One "at scale" approach (relatively speaking) is described in my series of articles Terraform Environment+Application Pattern, which describes a generalization of a successful deployment architecture with many separate applications deployed together to form an environment. In that pattern, the environments themselves (which serve as the "container" for applications, as described above) are each created with a separate Terraform configuration, allowing each to differ in the details of how it is configured, but they each expose data in a standard way to allow multiple applications -- each using the State Environments feature -- to be deployed once for each environment using the same configuration. This compromise leads to some duplication between the environment configurations -- which can be mitigated by using Terraform modules to share patterns between them -- but these then serve as a foundation to allow other configurations to be generalized and deployed multiple times without such duplication.
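For example, a configuration using the ${terraform.env} interpolation mentioned above might distinguish per-environment objects by name; a minimal, illustrative sketch (the resource and bucket names are made up):

resource "aws_s3_bucket" "artifacts" {
  bucket = "myorg-artifacts-${terraform.env}"
}

Running terraform env new staging followed by terraform apply would then create a myorg-artifacts-staging bucket from the same configuration that produces myorg-artifacts-production in the production state environment.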
Terraform
44,288,602
15
I am new to terraform - I have created remote tfstate in s3, and now there are some manual changes too that are done in my AWS infrastructure. I need to import those manual changes into tfstate. I used the import command for some resources, but for some resources such as IAM policy etc, there is no such import command. Also some resources such as DB are changed with new parameters added, and I need to import them as well. When I try to import those changes it says: Error importing: 1 error(s) occurred: * Can't import aws_security_group.Q8SgProdAdminSshInt, would collide with an existing resource. Please remove or rename this resource before continuing. Any help would be appreciated. Thanks.
Before directly answering this question I think some context would help: Behind the scenes, Terraform maintains a state file that contains a mapping from the resources in your configuration to the objects in the underlying provider API. When you create a new object with Terraform, the id of the object that was created is automatically saved in the state so that future commands can locate the referenced object for read, update, and delete operations. terraform import, then, is a different way to create an entry in the state file. Rather than creating a new object and recording its id, instead the user provides an id on the command line. Terraform reads the object with that id and adds the result to the state file, after which it is indistinguishable in the state from a resource that Terraform created itself. So with all of that said, let's address your questions one-by-one. Importing Resources That Don't Support terraform import Since each resource requires a small amount of validation and data-fetching code to do an import, not all resources are supported for import at this time. Given what we know about what terraform import does from the above, in theory it's possible to skip Terraform's validation of the provided id and instead manually add the resource to the state. This is an advanced operation and must be done with care to avoid corrupting the state. First, retrieve the state into a local file that you'll use for your local work: terraform state pull >manual-import.tfstate This will create a file manual-import.tfstate that you can open in a text editor. It uses JSON syntax, so though its internal structure is not documented as a stable format we can carefully edit it as long as we remain consistent with the expected structure. It's simplest to locate an existing resource that is in the same module as where you want to import and duplicate and edit it. Let's assume we have a resources object like this: "resources": { "null_resource.foo": { "type": "null_resource", "depends_on": [], "primary": { "id": "5897853859325638329", "attributes": { "id": "5897853859325638329" }, "meta": {}, "tainted": false }, "deposed": [], "provider": "" } }, Each attribute within this resources object corresponds to a resource in your configuration. The attribute name is the type and name of the resource. In this case, the resource type is null_resource and the attribute name is foo. In your case you might see something like aws_instance.server here. The id attributes are, for many resources (but not all!), the main thing that needs to be populated. So we can duplicate this structure for a hypothetical IAM policy: "resources": { "null_resource.foo": { "type": "null_resource", "depends_on": [], "primary": { "id": "5897853859325638329", "attributes": { "id": "5897853859325638329" }, "meta": {}, "tainted": false }, "deposed": [], "provider": "" }, "aws_iam_policy.example": { "type": "aws_iam_policy", "depends_on": [], "primary": { "id": "?????", "attributes": { "id": "?????" }, "meta": {}, "tainted": false }, "deposed": [], "provider": "" } }, The challenge at this step is to figure out what sort of id this resource requires. The only sure-fire way to know this is to read the code, which tells me that this resource expects the id to be the full ARN of the policy. With that knowledge, we replace the two ????? sequences in the above example with the ARN of the policy we want to import. After making manual changes to the state it's necessary to update the serial number at the top-level of the file. 
Terraform expects that any new change will have a higher serial number, so we can increment this number. After completing the updates, we must upload the updated state file back into Terraform: terraform state push manual-import.tfstate Finally we can ask Terraform to refresh the state to make sure it worked: terraform refresh Again, this is a pretty risky process since the state file is Terraform's record of its relationship with the underlying system and it can be hard to recover if the content of this file is lost. It's often easier to simply replace a resource than to go to all of this effort, unless it's already serving a critical role in your infrastructure and there is no graceful migration strategy available. Imports Colliding With Existing Resources The error message given in your question is talking about an import "colliding" with an existing resource: Error importing: 1 error(s) occurred: * Can't import aws_security_group.Q8SgProdAdminSshInt, would collide with an existing resource. Please remove or rename this resource before continuing. The meaning of this message is that when Terraform tried to write the new resource to the state file it found a resource entry already present for the name aws_security_group.Q8SgProdAdminSshInt. This suggests that either it was already imported or that a new security group was already created by Terraform itself. You can inspect the attributes of the existing resource in state: terraform state show aws_security_group.Q8SgProdAdminSshInt Compare the data returned with the security group you were trying to import. If the ids match then there's nothing left to do, since the resource was already imported. If the ids don't match then you need to figure out which of the two objects is the one you want to keep. If you'd like to keep the one that Terraform already has, you can manually delete the one you were trying to import. If you'd like to keep the one you were trying to import instead, you can drop the unwanted one from the Terraform state to make way for the import to succeed: terraform state rm aws_security_group.Q8SgProdAdminSshInt Note that this just makes Terraform "forget" the resource; it will still exist in EC2, and will need to be deleted manually via the console, command line tools, or API. Be sure to note down its id before deleting it to ensure that you can find it in order to to clean it up.
Terraform
43,950,097
15
I have TF templates whose purpose is to create multiple copies of the same cloud infrastructure. For example you have multiple business units inside a big organization, and you want to build out the same basic networks. Or you want an easy way for a developer to spin up the stack that he's working on. The only difference between "tf apply" invokations is the variable BUSINESS_UNIT, for example, which is passed in as an environment variable. Is anyone else using a system like this, and if so, how do you manage the state files ?
You should use a Terraform Module. Creating a module is nothing special: just put any Terraform templates in a folder. What makes a module special is how you use it. Let's say you put the Terraform code for your infrastructure in the folder /terraform/modules/common-infra. Then, in the templates that actually define your live infrastructure (e.g. /terraform/live/business-units/main.tf), you could use the module as follows: module "business-unit-a" { source = "/terraform/modules/common-infra" } To create the infrastructure for multiple business units, you could use the same module multiple times: module "business-unit-a" { source = "/terraform/modules/common-infra" } module "business-unit-b" { source = "/terraform/modules/common-infra" } module "business-unit-c" { source = "/terraform/modules/common-infra" } If each business unit needs to customize some parameters, then all you need to do is define an input variable in the module (e.g. under /terraform/modules/common-infra/vars.tf): variable "business_unit_name" { description = "The name of the business unit" } Now you can set this variable to a different value each time you use the module: module "business-unit-a" { source = "/terraform/modules/common-infra" business_unit_name = "a" } module "business-unit-b" { source = "/terraform/modules/common-infra" business_unit_name = "b" } module "business-unit-c" { source = "/terraform/modules/common-infra" business_unit_name = "c" } For more information, see How to create reusable infrastructure with Terraform modules and Terraform: Up & Running.
Terraform
39,803,182
15
Problem The google_project documentation says that project_id is optional. project_id - (Optional) The project ID. If it is not provided, the provider project is used. However, Terraform complains that it is required. gcp.tf data "google_project" "project" { } output "project_number" { value = data.google_project.project.number } Error: project: required field is not set │ │ with data.google_project.project, │ on gcp.tf line 1, in data "google_project" "project": │ 1: data "google_project" "project" { Question Please help me understand whether this is a documentation defect and the argument is actually mandatory. Workaround Set the GOOGLE_PROJECT environment variable. export GOOGLE_PROJECT=... terraform apply
Your 'Workaround' is functionally equivalent to what the documentation suggests, namely that the provider project should be set, i.e.: provider "google" { project = "..." } You don't include your provider config but, I assume, it doesn't include the default project to be used. Either way, somewhere you need to define the default project. Otherwise, you should expect to get the error.
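As a minimal sketch (the project ID and region values are placeholders), setting the default project on the provider makes the data source work without any arguments:

provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}

data "google_project" "project" {
}

output "project_number" {
  value = data.google_project.project.number
}

Exporting GOOGLE_PROJECT, as in the question's workaround, is simply another way of supplying that same provider-level default.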
Terraform
70,674,928
14
I'm using minikube locally. The following is the .tf file I use to create my kubernetes cluster: provider "kubernetes" { config_path = "~/.kube/config" } resource "kubernetes_namespace" "tfs" { metadata { name = "tfs" # terraform-sandbox } } resource "kubernetes_deployment" "golang_webapp" { metadata { name = "golang-webapp" namespace = "tfs" labels = { app = "webapp" } } spec { replicas = 3 selector { match_labels = { app = "webapp" } } template { metadata { labels = { app = "webapp" } } spec { container { image = "golang-docker-example" name = "golang-webapp" image_pull_policy = "Never" # this is set so that kuberenetes wont try to download the image but use the localy built one liveness_probe { http_get { path = "/" port = 8080 } initial_delay_seconds = 15 period_seconds = 15 } readiness_probe { http_get { path = "/" port = 8080 } initial_delay_seconds = 3 period_seconds = 3 } } } } } } resource "kubernetes_service" "golang_webapp" { metadata { name = "golang-webapp" namespace = "tfs" labels = { app = "webapp_ingress" } } spec { selector = { app = kubernetes_deployment.golang_webapp.metadata.0.labels.app } port { port = 8080 target_port = 8080 protocol = "TCP" } # type = "ClusterIP" type = "NodePort" } } resource "kubernetes_ingress" "main_ingress" { metadata { name = "main-ingress" namespace = "tfs" } spec { rule { http { path { backend { service_name = "golang-webapp" service_port = 8080 } path = "/golang-webapp" } } } } } When executing terraform apply, I am successfully able to create all of the resources except for the ingress. The error is: Error: Failed to create Ingress 'tfs/main-ingress' because: the server could not find the requested resource (post ingresses.extensions) with kubernetes_ingress.main_ingress, on main.tf line 86, in resource "kubernetes_ingress" "main_ingress": 86: resource "kubernetes_ingress" "main_ingress" { When I try to create an ingress service with kubectl using the same configuration as the one above (only in .yaml and using the kubectl apply command) it works, so it seems that kubectl & minikube are able to create this type of ingress, but terraform cant for some reason... Thanks in advance for any help! Edit 1: adding the .yaml that I'm able to create the ingress with apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress namespace: tfs annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: golang-webapp port: number: 8080
The kubernetes_ingress resource generates an ingress with an apiVersion which is not supported by your kubernetes cluster. You have to use the kubernetes_ingress_v1 resource, which looks similar to the kubernetes_ingress resource with some differences. For your example, it will be like this: resource "kubernetes_ingress_v1" "jenkins-ingress" { metadata { name = "example-ingress" namespace = "tfs" annotations = { "nginx.ingress.kubernetes.io/rewrite-target" = "/$1" } } spec { rule { http { path { path = "/" backend { service { name = "golang-webapp" port { number = 8080 } } } } } } } }
Terraform
70,497,809
14
I'm creating a Security group using terraform, and when I'm running terraform plan. It is giving me an error like some fields are required, and all those fields are optional. Terraform Version: v1.0.5 AWS Provider version: v3.57.0 main.tf resource "aws_security_group" "sg_oregon" { name = "tf-sg" description = "Allow web traffics" vpc_id = aws_vpc.vpc_terraform.id ingress = [ { description = "HTTP" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] }, { description = "HTTPS" from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] }, { description = "SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ] egress = [ { description = "for all outgoing traffics" from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] ipv6_cidr_blocks = ["::/0"] } ] tags = { Name = "sg-for-subnet" } } error in console │ Inappropriate value for attribute "ingress": element 0: attributes "ipv6_cidr_blocks", "prefix_list_ids", "security_groups", and "self" are required. │ Inappropriate value for attribute "egress": element 0: attributes "prefix_list_ids", "security_groups", and "self" are required. I'm following this doc: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group Any help would be appreciated.
Since you are using Attributes as Blocks you have to provide values for all options: resource "aws_security_group" "sg_oregon" { name = "tf-sg" description = "Allow web traffics" vpc_id = aws_vpc.vpc_terraform.id ingress = [ { description = "HTTP" from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] ipv6_cidr_blocks = [] prefix_list_ids = [] security_groups = [] self = false }, { description = "HTTPS" from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] ipv6_cidr_blocks = [] prefix_list_ids = [] security_groups = [] self = false }, { description = "SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] ipv6_cidr_blocks = [] prefix_list_ids = [] security_groups = [] self = false } ] egress = [ { description = "for all outgoing traffics" from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] ipv6_cidr_blocks = ["::/0"] prefix_list_ids = [] security_groups = [] self = false } ] tags = { Name = "sg-for-subnet" } }
Terraform
69,079,945
14
When I run terraform plan it shows a list of changes made outside of Terraform, and at the end of the output it also reports "No changes. Your infrastructure matches the configuration.": Note: Objects have changed outside of Terraform Terraform detected the following changes made outside of Terraform since the last "terraform apply": # google_sql_database_instance.db1 has been changed ~ resource "google_sql_database_instance" "db1" { id = "db1" name = "db1" # (12 unchanged attributes hidden) .... whole list of objects to update .... .... Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes. ──────────────────────────────────────────────────────────────────────────────────────── No changes. Your infrastructure matches the configuration. Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan: terraform apply -refresh-only I'm not sure why it first says there are changes in the infrastructure but then also says that the configuration matches the infrastructure. I ran a test "Apply" and Terraform did not change anything, but I want to know why it is showing these two different statements and also want to ensure that nothing is changed accidentally.
When Terraform creates a plan, it does two separate operations for each of your resource instances: Read the latest values associated with the object from the remote system, to make sure that Terraform takes into account any changes you've made outside of Terraform. Compare the updated objects against the configuration to see if there are any differences, and if so to propose actions Terraform will take in order to make the remote objects match the configuration. The output you've shared is talking about both of those steps. Terraform first reports that when it read the latest values it detected that some things have already changed outside of Terraform, and explains what it detected. It then compared those updated objects against your configuration and found that your configuration already matches, and so Terraform doesn't need to make any additional changes to your infrastructure. The final paragraph of the output includes "your configuration already matches the changes detected above", which suggests that you have made some changes to the objects outside of Terraform but you've also updated the configuration to match. Therefore Terraform doesn't need to make any changes to the remote objects to make them match the configuration, because something other than Terraform already updated them.
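If some of those outside changes are expected to keep happening and you don't want future plans to react to them, one option the plan output itself hints at is ignore_changes; a minimal sketch, assuming (purely for illustration) that you wanted to ignore drift in the whole settings block of the instance:

resource "google_sql_database_instance" "db1" {
  # ... existing arguments ...

  lifecycle {
    ignore_changes = [
      settings, # illustrative: ignore drift in the settings block entirely
    ]
  }
}

Alternatively, terraform apply -refresh-only simply records the detected values in the state without planning any changes to the remote objects.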
Terraform
67,666,185
14
For clarification, what I'm trying to do is fire off a Fargate task when theres an item in a specific queue. I've used this tutorial to get pretty much where I am. This worked fine but the problem I ran into was every file upload (the structure of the s3 bucket is s3_bucket_name/{unknown_name}/known_file_names) was resulting in a task being triggered and I only want/need it to trigger once per {unknown_name} . I've since changed my configuration to add an item to a queue when it detects a test_file.txt file. Is it possible to trigger a fargate task on a queue like this? If so, how?
SQS doesn't trigger or "push" messages to anything. As mentioned in the comments, AWS Lambda has an SQS integration that can automatically poll SQS for you and trigger a Lambda function with new messages, which you could use to create your Fargate tasks. However I would recommend refactoring your Fargate task like this: Reconfigure the code running in your container to poll the SQS queue for messages. Run your task as an ECS service. Configure ECS service autoscaling to spin up instances of your task based on the depth of the SQS queue.
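A sketch of that third step in Terraform might look roughly like this (the cluster, service, and queue names are placeholders and the thresholds are arbitrary; a corresponding scale-in policy and alarm, not shown, would bring the count back down when the queue drains):

resource "aws_appautoscaling_target" "worker" {
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-worker-service"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 0
  max_capacity       = 5
}

resource "aws_appautoscaling_policy" "scale_out" {
  name               = "scale-out-on-queue-depth"
  policy_type        = "StepScaling"
  service_namespace  = aws_appautoscaling_target.worker.service_namespace
  resource_id        = aws_appautoscaling_target.worker.resource_id
  scalable_dimension = aws_appautoscaling_target.worker.scalable_dimension

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"

    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = 1
    }
  }
}

resource "aws_cloudwatch_metric_alarm" "queue_not_empty" {
  alarm_name          = "worker-queue-not-empty"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "ApproximateNumberOfMessagesVisible"
  namespace           = "AWS/SQS"
  period              = 60
  statistic           = "Maximum"
  threshold           = 0

  dimensions = {
    QueueName = "my-queue"
  }

  alarm_actions = [aws_appautoscaling_policy.scale_out.arn]
}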
Terraform
66,388,494
14
I am trying to edit Terraform configuration files with Python. I am parsing Terraform files (.tf) using python hcl2 library which returns a python dictionary. I want to add new key/value pairs or change some values in the dictionary. Directly writing to the file is not a good practice since the returned python dictionary is not in Hashicorp Configuration Language format. Also there can be multiple configuration files like variables.tf etc. which are linked together. Should I implement my own serializer which converts python dictionary to terraform configuration file or is there an easier way to do it?
The python-hcl2 library implements a parser for the HCL syntax, but it doesn't have a serializer, and its API is designed to drop all of the HCL specifics and retain only a basic Python data structure, so it doesn't seem to retain enough information to surgically modify the input without losing details such as comments, ordering of attributes, etc. At the time I'm writing this, the only HCL implementation that explicitly supports updating existing configuration files in-place is the Go package hclwrite. It allows callers to load in arbitrary HCL source, surgically modify parts of it, and then re-serialize that updated version with only minor whitespace normalization to the unchanged parts of the input. In principle it would be possible to port hclwrite to Python, or to implement a serializer from a dictionary like python-hcl2 returns if you are not concerned with preserving unchanged input, but both of these seem like a significant project. If you do decide to do it, one part that warrants careful attention is serialization of strings into HCL syntax, because the required escaping isn't exactly the same as any other language. You might wish to refer to the escapeQuotedStringLit function from hclwrite to see all of the cases to handle, so you can potentially implement compatible logic in Python.
Terraform
65,685,549
14
I have a file for creating terraform resources with helm helm.tf. In this file I create a honeycomb agent and need to pass in some watchers, so I'm using a yaml file for configuration. Here is the snippet from helm.tf: resource "helm_release" "honeycomb" { version = "0.11.0" depends_on = [module.eks] repository = "https://honeycombio.github.io/helm-charts" chart = "honeycomb" name = "honeycomb" values = [ file("modules/kubernetes/helm/honeycomb.yml") ] } and here is the yaml file agent: watchers: - labelSelector: "app=my-app" namespace: my-namespace dataset: {{$env}} parser: name: nginx dataset: {{$env}} options: log_format: "blah" Unfortunately my attempt at setting the variables with {{$x}} has not worked, so how would I send the env variable to the yaml values file? I have the variable available to me in the tf file but am unsure how to set it up in the values file. Thanks
You may use templatefile function main.tf resource "helm_release" "honeycomb" { version = "0.11.0" depends_on = [module.eks] repository = "https://honeycombio.github.io/helm-charts" chart = "honeycomb" name = "honeycomb" values = [ templatefile("modules/kubernetes/helm/honeycomb.yml", { env = "${var.env}" }) ] } honeycomb.yml agent: watchers: - labelSelector: "app=my-app" namespace: my-namespace dataset: "${env}" parser: name: nginx dataset: "${env}" options: log_format: "blah"
Terraform
64,696,721
14
I want use EFS with fargate but I have this error when the task start: ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: Failed to resolve "fs-xxxxx.efs.eu-west-1.amazonaws.com" - check that your file system ID is correct I have checked the file system ID, it is corrects...how can I have more info about this error? Could it be related to the security groups? This is the code that I use with terraform, I use two mount points for the two availability zones: resource "aws_efs_file_system" "efs_apache" { } resource "aws_efs_mount_target" "efs-mount" { count = 2 file_system_id = aws_efs_file_system.efs_apache.id subnet_id = sort(var.subnet_ids)[count.index] security_groups = [aws_security_group.efs.id] } resource "aws_efs_access_point" "efs-access-point" { file_system_id = aws_efs_file_system.efs_apache.id } resource "aws_security_group" "efs" { name = "${var.name}-efs-sg" description = "Allow traffic from self" vpc_id = var.vpc_id egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = 2049 to_port = 2049 protocol = "tcp" security_groups = [aws_security_group.fargate_sg.id] } } this is the fargate service: resource "aws_ecs_task_definition" "task_definition" { family = var.name requires_compatibilities = ["FARGATE"] network_mode = "awsvpc" execution_role_arn = aws_iam_role.task_execution_role.arn task_role_arn = aws_iam_role.task_role.arn cpu = var.cpu memory = var.memoryHardLimit volume { name = "efs-apache" efs_volume_configuration { file_system_id = aws_efs_file_system.efs_apache.id root_directory = "/" transit_encryption = "ENABLED" authorization_config { access_point_id = aws_efs_access_point.efs-access-point.id iam = "ENABLED" } } } depends_on = [aws_efs_file_system.efs_apache] container_definitions = <<EOF [ { "name": "${var.name}", "image": "${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.current.name}.amazonaws.com/${lower(var.project_name)}_app:latest", "memory": ${var.memoryHardLimit}, "memoryReservation": ${var.memorySoftLimit}, "cpu": ${var.cpu}, "essential": true, "command": [ "/bin/sh -c \"/app/start.sh" ], "entryPoint": [ "sh", "-c" ], "mountPoints": [ { "containerPath": "/var/www/sites_json", "sourceVolume": "efs-apache", "readOnly": false } ], "portMappings": [ { "containerPort": ${var.docker_container_port}, "hostPort": ${var.docker_container_port} } ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "${var.name}-Task-LogGroup", "awslogs-region": "${data.aws_region.current.name}", "awslogs-stream-prefix": "ecs" } } } ] EOF } How can I solve?
Make sure you have enabled DNS Resolution and DNS hostnames in your VPC. EFS needs both these options enabled to work since it relies on the DNS hostname to resolve the connection. This had me stuck for a while since most documentation on the internet focuses on the security groups for this error. The terraform AWS provider resource aws_vpc sets enable_dns_hostnames = false by default, so you'll need to explicitly set it to true. Your terraform VPC config should look something like this: resource "aws_vpc" "main" { cidr_block = "10.255.248.0/22" enable_dns_hostnames = true }
Terraform
64,432,002
14
In terraform there is an example to create EC2 machine in aws. # Create a new instance of the latest Ubuntu 20.04 on an # t3.micro node with an AWS Tag naming it "HelloWorld" provider "aws" { region = "us-west-2" } data "aws_ami" "ubuntu" { most_recent = true filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["099720109477"] # Canonical } resource "aws_instance" "web" { ami = data.aws_ami.ubuntu.id instance_type = "t3.micro" tags = { Name = "HelloWorld" } } But can I also run some scripts inside? like install jenkins? install docker, or just run command: sudo yum update -y during terraform apply operation? If so, I would much appropriate an example of something like that or guide resource.
Yes, you can. In AWS, you use UserData for that, which: can be used to perform common automated configuration tasks and even run scripts after the instance starts. In terraform, the corresponding attribute is user_data. To use it to install Jenkins you can try the following: resource "aws_instance" "web" { ami = data.aws_ami.ubuntu.id instance_type = "t3.micro" user_data = <<-EOL #!/bin/bash -xe apt update apt install openjdk-8-jdk --yes wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add - echo "deb https://pkg.jenkins.io/debian binary/" >> /etc/apt/sources.list apt update apt install -y jenkins systemctl status jenkins find /usr/lib/jvm/java-1.8* | head -n 3 EOL tags = { Name = "HelloWorld" } } Please note that the above code is an example and I can't guarantee it will work on Ubuntu 20.04, but it works on 18.04. Also, Jenkins listens on port 8080, so your security groups would need to allow it if you want to access Jenkins directly, without an ssh tunnel for instance.
Terraform
63,978,548
14
Since 1995, we have used an update mechanism which cleanly updates and removes software centrally stores all software meta-data internally to manage needs and artifacts from a single source of truth NEVER triggers itself arbitrarily. While we understand terraform has begun reaching out to a registry in a brave reinvention of that wheel without any of those features, we wish to disable it completely. Our current kit includes only one plugin: terraform-0.13.0-1.el7.harbottle.x86_64 golang-github-terraform-provider-vsphere-1.13.0-0.1.x86_64 The goal is never check the registry return an error if the given module is not installed and I'd be very grateful for good suggestions toward that end. Is there a setting I've overlooked, or can we fake it by telling it to look somewhere empty? Is there a -stay-in-your-lane switch? Clarification: the add-on package is a go-build package which delivers a single artifact /usr/bin/terraform-provider-vsphere and nothing else. This has worked wonderfully for all previous incarnations and may have only begun to act up in v13. Update: These things failed: terraform init -plugin-dir=/dev/shm terraform init -get-plugins=false terraform init -get=false setting terraform::required_providers::vsphere::source="" echo "disable_checkpoint = true" > ~/.terraformrc $ terraform init -get-plugins=false Initializing the backend... Initializing provider plugins... - Finding latest version of -/vsphere... - Finding latest version of hashicorp/vsphere... Update: I'm still a bit off: rpm -qlp golang-github-terraform-provider-vsphere /usr/share/terraform/plugins/registry.terraform.io/hashicorp/vsphere/1.14.0/linux_amd64/terraform-provider-vsphere I feel I'm really close. /usr/share/ is in the XDG default search path, and it DOES seem to find the location, but it seems to check the registry first/at-all, which is unexpected. Initializing provider plugins... - Finding latest version of hashicorp/vsphere... - Finding latest version of -/vsphere... - Installing hashicorp/vsphere v1.14.0... - Installed hashicorp/vsphere v1.14.0 (unauthenticated) Error: Failed to query available provider packages Are we sure it stops checking if it has something local, and that it does that by default? Did I read that right?
What you are describing here sounds like the intention of the Provider Installation settings in Terraform's CLI configuration file. Specifically, you can put your provider files in a local filesystem directory of your choice -- for the sake of this example, I'm going to arbitrarily choose /usr/local/lib/terraform, and then write the following in the CLI configuration file: provider_installation { filesystem_mirror { path = "/usr/local/lib/terraform" } } If you don't already have a CLI configuration file, you can put this in the file ~/.terraformrc. With the above configuration, your golang-github-terraform-provider-vsphere-1.13.0-0.1.x86_64 package would need to place the provider's executable at the following path (assuming that you're working with a Linux system): /usr/local/lib/terraform/registry.terraform.io/hashicorp/vsphere/1.30.0/linux_amd64/terraform-provider-vsphere_v1.13.0_x4 (The filename above is the one in the official vSphere provider release, but if you're building this yourself from source then it doesn't matter what exactly it's called as long as it starts with terraform-provider-vsphere.) It looks like you are in the process of completing an upgrade from Terraform v0.12, and so Terraform is also trying to install the legacy (un-namespaced) version of this provider, -/vsphere. Since you won't have that in your local directory the installation of that would fail, but with the knowledge that this provider is now published at hashicorp/vsphere we can avoid that by manually migrating it in the state, thus avoiding the need for Terraform to infer this automatically on the next terraform apply: terraform state replace-provider 'registry.terraform.io/-/vsphere' 'registry.terraform.io/hashicorp/vsphere' After you run this command your latest state snapshot will not be compatible with Terraform 0.12 anymore, so if you elect to abort your upgrade and return to 0.12 you will need to restore the previous version from a backup. If your state is not stored in a location that naturally retains historical versions, one way to get such a backup is to run terraform state pull with a Terraform 0.12 executable and save the result to a file. (By default, Terraform defers taking this action until terraform apply to avoid upgrading the state format until it would've been making other changes anyway.) The provider_installation configuraton above is an answer if you want to make this true for all future use of Terraform, which seems to be your goal here, but for completeness I also want to note that the following command should behave in an equivalent way to the result of the above configuration if you want to force a local directory only for one particular invocation of terraform init: terraform init -plugin-dir=/usr/local/lib/terraform Since you seem to be upgrading from Terraform 0.12, it might also interest you to know that Terraform 0.13's default installation behavior (without any special configuration) is the same as Terraform 0.12 with the exception of now expecting a different local directory structure than before, to represent the hierarchical provider namespace. (That is, to distinguish hashicorp/vsphere from a hypothetical othernamespace/vsphere.) Specifically, Terraform 0.13 (as with Terraform 0.12) will skip contacting the remote registry for any provider for which it can discover at least one version available in the local filesystem. 
It sounds like your package representing the provider was previously placing a terraform-provider-vsphere executable somewhere that Terraform 0.12 could find and use it. You can adapt that strategy to Terraform 0.13 by placing the executable at the following location: /usr/local/share/terraform/plugins/registry.terraform.io/hashicorp/vsphere/1.13.0/linux_amd64/terraform-provider-vsphere_v1.13.0_x4 (Again, the exact filename here isn't important as long as it starts with terraform-provider-vsphere.) /usr/local/share here is assuming one of the default data directories from the XDG Base Directory specification, but if you have XDG_DATA_HOME/XDG_DATA_DIRS overridden on your system then Terraform should respect that and look in the other locations you've listed. The presence of such a file, assuming you haven't overridden the default behavior with an explicit provider_installation block, will cause Terraform to behave as if you had written the following in the CLI configuration: provider_installation { filesystem_mirror { path = "/usr/local/share/terraform/plugins" include = ["hashicorp/vsphere"] } direct { exclude = ["hashicorp/vsphere"] } } This form of the configuration forces local installation only for the hashicorp/vsphere provider, thus mimicking what Terraform 0.12 would've done with a local plugin file terraform-provider-vsphere. You can get the more thorough behavior of never contacting remote registries with a configuration like the one I opened this answer with, which doesn't include a direct {} block at all.
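If it helps to see the wiring end-to-end, here is a minimal shell sketch of laying out the filesystem mirror described above; the version number and the _x4 suffix are assumptions based on the package named in the question, so adjust them to whatever your rpm actually ships:

# Create the hierarchical mirror directory expected by Terraform 0.13
mkdir -p /usr/local/lib/terraform/registry.terraform.io/hashicorp/vsphere/1.13.0/linux_amd64

# Copy the provider binary delivered by the rpm into the mirror
cp /usr/bin/terraform-provider-vsphere \
   /usr/local/lib/terraform/registry.terraform.io/hashicorp/vsphere/1.13.0/linux_amd64/terraform-provider-vsphere_v1.13.0_x4

# Make sure it is executable
chmod +x /usr/local/lib/terraform/registry.terraform.io/hashicorp/vsphere/1.13.0/linux_amd64/terraform-provider-vsphere_v1.13.0_x4

With that in place and the filesystem_mirror-only provider_installation block in ~/.terraformrc, terraform init should resolve the provider without contacting the registry.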
Terraform
63,680,319
14
I want to add terraform version 0.12.21 in an alpine container, but I can only add 0.11.0 using apk. If I try to add it as the desired version I get the following error: / # apk upgrade terraform==0.12.21-r0 OK: 192 MiB in 66 packages / # apk add terraform==0.12.21-r0 ERROR: unsatisfiable constraints: terraform-0.11.0-r0: breaks: world[terraform=0.12.21-r0] How do I fix this apk error?
I haven't found an apk solution, but I can just download the desired binary and replace the existing one with the following in the Dockerfile: # upgrade terraform to 0.12.21 RUN wget https://releases.hashicorp.com/terraform/0.12.21/terraform_0.12.21_linux_amd64.zip RUN unzip terraform_0.12.21_linux_amd64.zip && rm terraform_0.12.21_linux_amd64.zip RUN mv terraform /usr/bin/terraform
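A slightly tidier variant (a sketch, not tested against your exact base image) pins the version in a build argument and keeps everything in one layer so the zip file never persists in the image:

# requires wget and unzip in the image (e.g. apk add --no-cache wget unzip)
ARG TERRAFORM_VERSION=0.12.21
RUN wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip -o terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/bin \
    && rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && terraform version

Bumping Terraform later then only means changing the ARG value.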
Terraform
63,080,980
14
I have installed a version (0.12.24) of Terraform which is later than the required version (0.12.17) specified in our configuration. How can I downgrade to that earlier version? My system is Linux Ubuntu 18.04.
As long as you are on Linux, do the following in the terminal: rm -r $(which terraform) Install the previous version: wget https://releases.hashicorp.com/terraform/1.4.4/terraform_1.4.4_linux_amd64.zip unzip terraform_1.4.4_linux_amd64.zip mv terraform /usr/local/bin/terraform Then verify with: terraform --version That's it, my friend. EDIT: I've assumed people now use v1.4.5 so the previous version is v1.4.4.
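Since the question asks for 0.12.17 specifically, the same steps with that version pinned look like this (a sketch; adjust the destination path to wherever your terraform binary lives):

wget https://releases.hashicorp.com/terraform/0.12.17/terraform_0.12.17_linux_amd64.zip
unzip terraform_0.12.17_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version   # should now report Terraform v0.12.17

If you switch versions often, a version manager such as tfenv (tfenv install 0.12.17 followed by tfenv use 0.12.17) avoids the manual download entirely.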
Terraform
61,901,363
14
I'm trying to check if a variable exists on a template file using terraform template syntax, but I get error that This object does not have an attribute named "proxy_set_header. $ cat nginx.conf.tmpl %{ for location in jsondecode(locations) } location ${location.path} { %{ if location.proxy_set_header } proxy_set_header ${location.proxy_set_header}; %{ endif } } %{ endfor } I tried with if location.proxy_set_header != "" and if location.proxy_set_header without success. How to check if a variable exists with the String Templates?
If you are using Terraform 0.12.20 or later then you can use the new function can to concisely write a check like this: %{ for location in jsondecode(locations) } location ${location.path} { %{ if can(location.proxy_set_header) } proxy_set_header ${location.proxy_set_header}; %{ endif } } %{ endfor } The can function returns true if the given expression could evaluate without an error. The documentation does suggest preferring try in most cases, but in this particular situation your goal is to show nothing at all if that attribute isn't present, and so this equivalent approach with try is, I think, harder to understand for a future reader: %{ for location in jsondecode(locations) } location ${location.path} { ${ try("proxy_set_header ${location.proxy_set_header};", "") } } %{ endfor } As well as being (subjectively) more opaque as to the intent, this ignores the recommendation in the try docs of using it only with attribute lookup and type conversion expressions. Therefore I think the can usage above is justified due to its relative clarity, but either way should work.
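For context, here is a minimal sketch of how such a template might be driven end to end; the locations value and the local_file resource are illustrative assumptions, since the question only shows the template itself:

locals {
  locations = [
    { path = "/app", proxy_set_header = "Host $host" },
    { path = "/static" } # no proxy_set_header here, so can() returns false
  ]
}

resource "local_file" "nginx_conf" {
  filename = "${path.module}/nginx.conf"
  content = templatefile("${path.module}/nginx.conf.tmpl", {
    # the template calls jsondecode(locations), so pass a JSON string
    locations = jsonencode(local.locations)
  })
}

Rendering this produces a location block for each entry, with the proxy_set_header line emitted only where the attribute exists.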
Terraform
60,224,456
14
Within Octopus Deploy I've setup a Terraform Apply Step using their Apply a Terraform template In my Terraform main.tf file I want to use a connection to run an remote-exec on a Amazon Linux EC2 instance in AWS resource "aws_instance" "nginx" { ami = "${var.aws_ami}" instance_type = "t2.nano" key_name = "${var.key_name}" connection { type = "ssh" user = "ec2-user" private_key = "${var.aws_key_path}" } provisioner "remote-exec" { inline = [ "sudo amazon-linux-extras install epel -y", "sudo yum update -y", "sudo amazon-linux-extras install nginx1.12 -y", "sudo systemctl enable nginx.service", "sudo systemctl start nginx.service", "sudo systemctl status nginx.service" ] } } As part of the connection block we need to connect using an SSH key pair using a Private Key PEM to auth with the Public Key stored on AWS My Private Key is stored as a variable in my Project in Octopus deploy For my private key to be interpreted correctly in Terraform as a multi-line string I had to use 'here doc' syntax using a starting EOF and an ending EOF This syntax explanation can be found on the Terraform official documentation at https://www.terraform.io/docs/configuration-0-11/syntax.html This was my original problem that my variable syntax was falling over as I wasn't handling the multi-line PEM file correctly and I raised the ticket below with Octopus Deploy Support https://help.octopus.com/t/terraform-apply-step-pem-variable-set-to-unix-lf-ucs-2-le-bom/23659 Where they kindly were able to point me in the direction of the EOF syntax This all worked great on Terraform v0.11 but we've a lot of code here on this side that's been written in the latest HCL2 in v0.12 So I wanted to force Octopus Deploy to use a v0.12 binary rather than the prepackaged v0.11 that Octopus Deploy comes with. And they offer a built in Special var so you can use a different binary But when I run it with this binary the script blows up with the error below Error: Unterminated template string No closing marker was found for the string. August 6th 2019 14:54:07 Error Calamari.Integration.Processes.CommandLineException: The following command: "C:\Program Files\Octopus Deploy\Octopus\bin\terraform.exe" apply -no-color -auto-approve -var-file="octopus_vars.tfvars" August 6th 2019 14:54:07 Error With the working directory of: C:\Octopus\Work\20190806135350-47862-353\staging August 6th 2019 14:54:07 Error Failed with exit code: 1 August 6th 2019 14:54:07 Error Error: Unterminated template string August 6th 2019 14:54:07 Error on octopus_vars.tfvars line 34: I've had a look at the official documentation for v0.12 https://www.terraform.io/docs/configuration/syntax.html#terraform-syntax And I'm not sure if there is anything that helps in relation to how to manage multi-line that they had in v0.11 Here is the code block that worked in v0.11 successfully from my tfvars file aws_ami = "#{ami}" key_name = "#{awsPublicKey}" aws_private_key = <<-EOF #{testPrivateKey} -EOF The expected result when I ran this with the latest version of Terraform v0.12.6 was that it would function normally and run my Terraform Apply within Octopus Deploy My hope here is that someone from Hashicorp has a workaround for this as I see this was supposed to be fixed with https://github.com/hashicorp/terraform/pull/20281 But I'm using the latest binary at the time of writing this v0.12.6 downloaded today Any suggestions anyone on how to get this working in v0.12? Cheers
The correct syntax for a "flush heredoc" does not include a dash on the final marker: aws_key_path = <<-EOF #{martinTestPrivateKey} EOF If prior versions were accepting -EOF to end the heredoc then that unfortunately was a bug, which has now been fixed in Terraform 0.12 and so moving forward you must use the syntax as documented, with the marker alone on the final line.
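If generating a heredoc inside a tfvars file keeps causing friction, another option (a sketch, assuming the Octopus step can set environment variables and reusing the variable name from the answer) is to skip the tfvars entry and let Terraform pick the value up from the environment instead:

# Terraform automatically maps TF_VAR_<name> to var.<name>
export TF_VAR_aws_key_path='#{martinTestPrivateKey}'
terraform apply -var-file="octopus_vars.tfvars"

This keeps the multi-line private key out of the generated file entirely, so there is no heredoc to get wrong.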
Terraform
57,379,491
14
I made some experiments with terraform, kubernetes, cassandra and elassandra, I separated all by modules, but now I can't delete a specific module. I'm using gitlab-ci, and I store the terraform states on a AWS backend. This mean that, every time that I change the infrastructure in terraform files, after a git push, the infrastructure will be updated with an gitlab-ci that run terraform init, terraform plan and terraform apply. My terraform main file is this: # main.tf ########################################################################################################################################## # BACKEND # ########################################################################################################################################## terraform { backend "s3" {} } data "terraform_remote_state" "state" { backend = "s3" config { bucket = "${var.tf_state_bucket}" dynamodb_table = "${var.tf_state_table}" region = "${var.aws-region}" key = "${var.tf_key}" } } ########################################################################################################################################## # Modules # ########################################################################################################################################## # Cloud Providers: ----------------------------------------------------------------------------------------------------------------------- module "gke" { source = "./gke" project = "${var.gcloud_project}" workspace = "${terraform.workspace}" region = "${var.region}" zone = "${var.gcloud-zone}" username = "${var.username}" password = "${var.password}" } module "aws" { source = "./aws-config" aws-region = "${var.aws-region}" aws-access_key = "${var.aws-access_key}" aws-secret_key = "${var.aws-secret_key}" } # Elassandra: ---------------------------------------------------------------------------------------------------------------------------- module "k8s-elassandra" { source = "./k8s-elassandra" host = "${module.gke.host}" username = "${var.username}" password = "${var.password}" client_certificate = "${module.gke.client_certificate}" client_key = "${module.gke.client_key}" cluster_ca_certificate = "${module.gke.cluster_ca_certificate}" } # Cassandra: ---------------------------------------------------------------------------------------------------------------------------- module "k8s-cassandra" { source = "./k8s-cassandra" host = "${module.gke.host}" username = "${var.username}" password = "${var.password}" client_certificate = "${module.gke.client_certificate}" client_key = "${module.gke.client_key}" cluster_ca_certificate = "${module.gke.cluster_ca_certificate}" } This is a tree of my directory: . ├── aws-config │   ├── terraform_s3.tf │   └── variables.tf ├── gke │   ├── cluster.tf │   ├── gcloud_access_key.json │   ├── gcp.tf │   └── variables.tf ├── k8s-cassandra │   ├── k8s.tf │   ├── limit_ranges.tf │   ├── quotas.tf │   ├── services.tf │   ├── stateful_set.tf │   └── variables.tf ├── k8s-elassandra │   ├── k8s.tf │   ├── limit_ranges.tf │   ├── quotas.tf │   ├── services.tf │   ├── stateful_set.tf │   └── variables.tf ├── main.tf └── variables.tf I'm blocked here: -> I want to remove the module k8s-cassandra If I comment ou delete the module in main.tf (module "k8s-cassandra" {...), I receive this error: TERRAFORM PLAN... Acquiring state lock. This may take a few moments... Releasing state lock. This may take a few moments... 
Error: module.k8s-cassandra.kubernetes_stateful_set.cassandra: configuration for module.k8s-cassandra.provider.kubernetes is not present; a provider configuration block is required for all operations Inserting terraform destroy -target=module.k8s-cassandra -auto-approve between terraform init and terraform plan still doesn't work. Can anyone help me, please? Thanks :)
The meaning of this error message is that Terraform was relying on a provider "kubernetes" block inside the k8s-cassandra module in order to configure the Kubernetes provider. By removing the module from source code, you've implicitly removed that configuration and so the existing objects already present in the state cannot be deleted -- the provider configuration needed to do that is not present. Although Terraform allows provider blocks inside child modules for flexibility, the documentation recommends keeping all of them in the root module and passing the provider configurations by name into the child modules using a providers map, or by automatic inheritance by name. provider "kubernetes" { # global kubernetes provider config } module "k8s-cassandra" { # ...module arguments... # provider "kubernetes" is automatically inherited by default, but you # can also set it explicitly: providers = { "kubernetes" = "kubernetes" } } To get out of the conflict situation you are already in, though, the answer is to temporarily restore the module "k8s-cassandra" block and then destroy the objects it is managing before removing it, using the -target option: terraform destroy -target module.k8s-cassandra Once all of the objects managed by that module have been destroyed and removed from the state, you can then safely remove the module "k8s-cassandra" block from configuration. To prevent this from happening again, you should rework the root and child modules here so that the provider configurations are all in the root module, and child modules only inherit provider configurations passed in from the root. For more information, see Providers Within Modules in the documentation.
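For anyone reading this on Terraform 0.12 or later, the providers map now uses unquoted identifiers; a minimal sketch of the recommended root-module wiring would be:

provider "kubernetes" {
  # global kubernetes provider config lives only in the root module
}

module "k8s-cassandra" {
  source = "./k8s-cassandra"

  # inherited automatically if omitted, but can be passed explicitly
  providers = {
    kubernetes = kubernetes
  }
}

With the provider configured only in the root, removing the module block later no longer strands its resources without a provider.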
Terraform
54,518,488
14
Are there any scripts that automate persistent disks formatting and attaching to the Google Cloud VM instance, instead of doing formatting & mounting steps? The persistent disk is created with Terraform, which also creates a VM and attaches the disk to it with the attached_disk command. I am hoping to run a simple script on the VM instance start that would: check if the attached disk is formatted, and format if needed with ext4 check if the disk is mounted, and mount if not do nothing otherwise
Have you considered using a startup script on the instance (I presume you can also add a startup-script with Terraform)? You could use an if statement to discover if the disk is formatted, then if not, you could try running the formatting/mounting commands in the documentation you linked (I realise you have suggested you do not want to follow the manual steps in the documentation, but these can be integrated into the startup script to achieve the desired result). Running the following outputs an empty string if the disk is not formatted: sudo blkid /dev/sdb You could therefore use this in a startup script to discover if the disk is formatted, then perform formatting/mounting if that is not the case. For example, you could use something like this (Note*** If the disk is formatted but not mounted this could be dangerous and should not be used if your use case could involve existing disks which may have already been formatted): #!/bin/bash if sudo blkid /dev/sdb;then exit else sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb; \ sudo mkdir -p /mnt/disks/newdisk sudo mount -o discard,defaults /dev/sdb /mnt/disks/newdisk fi
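To tie this back to Terraform, the script can be shipped with the instance itself via metadata_startup_script; this is a rough sketch where the resource names, zone, image, and script filename are placeholders rather than values from the question:

resource "google_compute_disk" "data" {
  name = "data-disk"
  type = "pd-ssd"
  zone = "europe-west2-a"
  size = 100
}

resource "google_compute_instance" "example" {
  name         = "example-vm"
  machine_type = "e2-medium"
  zone         = "europe-west2-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }

  attached_disk {
    source = google_compute_disk.data.self_link
  }

  # runs on every boot; the blkid check makes the format step a no-op after the first run
  metadata_startup_script = file("${path.module}/format_and_mount.sh")
}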
Terraform
53,162,620
14
I have a terraform configuration that correctly creates a lambda function on aws with a zip file provided. My problem is that I always have to package the lambda first (I use the serverless package method for this), so I would like to execute a script that packages my function and moves the zip to the right directory before terraform creates the lambda function. Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :) When you add a depends_on = ["null_resource.serverless_execution"] to your lambda resource, you can ensure that packaging will be done before uploading the zip file. Example: resource "null_resource" "serverless_execution" { provisioner "local-exec" { command = "serverless package ..." } } resource "aws_lambda_function" "update_lambda" { depends_on = ["null_resource.serverless_execution"] filename = "${path.module}/path/to/package.zip" [...] } https://www.terraform.io/docs/provisioners/local-exec.html
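One caveat: a bare null_resource only runs its provisioner when it is first created. If you want repackaging to happen whenever the sources change, the usual pattern is a triggers map; this is a sketch in the same 0.11-style syntax, and the handler path is an assumption:

resource "null_resource" "serverless_execution" {
  # re-run the provisioner whenever the handler source changes
  triggers = {
    handler_hash = "${base64sha256(file("${path.module}/src/handler.py"))}"
  }

  provisioner "local-exec" {
    command = "serverless package ..."
  }
}

Any change to the hashed file forces the null_resource to be replaced, which re-runs the packaging command before the lambda is updated.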
Terraform
52,421,656
14
I'm provisioning a single windows server for testing with terraform in AWS. Every time i need to decrypt my windows password with my PEM file to connect. Instead, i chose the terraform argument get_password_data and stored my password_data in tfstate file. Now how do i decrypt the same with interpolation syntax rsadecrypt Please find my below terraform code ### Resource for EC2 instance creation ### resource "aws_instance" "ec2" { ami = "${var.ami}" instance_type = "${var.instance_type}" key_name = "${var.key_name}" subnet_id = "${var.subnet_id}" security_groups = ["${var.security_groups}"] availability_zone = "${var.availability_zone}" private_ip = "x.x.x.x" get_password_data = "true" connection { password = "${rsadecrypt(self.password_data)}" } root_block_device { volume_type = "${var.volume_type}" volume_size = "${var.volume_size}" delete_on_termination = "true" } tags { "Cost Center" = "R1" "Name" = "AD-test" "Purpose" = "Task" "Server Name" = "Active Directory" "SME Name" = "Ravi" } } output "instance_id" { value = "${aws_instance.ec2.id}" } ### Resource for EBS volume creation ### resource "aws_ebs_volume" "additional_vol" { availability_zone = "${var.availability_zone}" size = "${var.size}" type = "${var.type}" } ### Output of Volume ID ### output "vol_id" { value = "${aws_ebs_volume.additional_vol.id}" } ### Resource for Volume attachment ### resource "aws_volume_attachment" "attach_vol" { device_name = "${var.device_name}" volume_id = "${aws_ebs_volume.additional_vol.id}" instance_id = "${aws_instance.ec2.id}" skip_destroy = "true" }
The password is encrypted using the key_pair you specified when launching the instance, you still need to use it to decrypt as password_data is still just the base64 encoded encrypted password data. You should use ${rsadecrypt(self.password_data,file("/path/to/private_key.pem"))} This is for good reason. You really don't want just a base64 encoded password floating around in state. Short version: You are missing the second argument in the interpolation function.
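If you also want the decrypted password available after an apply (for example to hand to an operator), a sensitive output works; the key path here is a placeholder for wherever your PEM file lives:

output "windows_admin_password" {
  sensitive = true
  value     = "${rsadecrypt(aws_instance.ec2.password_data, file("/path/to/private_key.pem"))}"
}

Marking it sensitive keeps the plaintext out of normal plan/apply output, though it is still recoverable from the state file.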
Terraform
51,094,442
14
I am trying to generate a bunch of files from templates. I need to replace the hardcoded 1 with the count.index, not sure what format terraform will allow me to use. resource "local_file" "foo" { count = "${length(var.files)}" content = "${data.template_file.tenant_repo_multi.1.rendered}" #TODO: Replace 1 with count index. filename = "${element(var.files, count.index)}" } data "template_file" "tenant_repo_multi" { count = "${length(var.files)}" template = "${file("templates/${element(var.files, count.index)}")}" } variable "files" { type = "list" default = ["filebeat-config_filebeat.yml",...] } I am running with: Terraform v0.11.7 + provider.gitlab v1.0.0 + provider.local v1.1.0 + provider.template v1.0.0
You can iterate through the tenant_repo_multi data source like so - resource "local_file" "foo" { count = "${length(var.files)}" content = "${element(data.template_file.tenant_repo_multi.*.rendered, count.index)}" filename = "${element(var.files, count.index)}" } However, have you considered using the template_dir resource in the Terraform Template provider. An example below - resource "template_dir" "config" { source_dir = "./unrendered" destination_dir = "./rendered" vars = { message = "world" } }
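For readers on Terraform 0.12.6 or later, the same result reads more naturally with for_each and the built-in templatefile function, which removes the need for the template provider and the element() juggling entirely; a rough sketch, assuming the templates take no variables as in the question:

resource "local_file" "foo" {
  for_each = toset(var.files)

  filename = each.value
  content  = templatefile("${path.module}/templates/${each.value}", {})
}

Each file name becomes its own instance key, so adding or removing an entry in var.files only touches that one file.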
Terraform
50,301,523
14
I want to create reserved instances for long periods of time, e.g. with a one-year run time. Does anybody know if Terraform allows creating such reserved instances in AWS? I could not find anything in the Terraform documentation.
Reserved instances in AWS work on a first come first served basis. If you create any on demand instance that happens to match the criteria of your reserved instance then it will use your reserved instance quota first. The AWS docs also explain this: Reserved Instances are automatically applied to running On-Demand Instances provided that the specifications match. If you have no running On-Demand Instances that match the specifications of your Reserved Instance, the Reserved Instance is unused until you launch an instance with the required specifications. If you're launching an instance to take advantage of the billing benefit of a Reserved Instance, ensure that you specify the following information during launch: Platform: You must choose an Amazon Machine Image (AMI) that matches the platform (product description) of your Reserved Instance. For example, if you specified Linux/UNIX, you can launch an instance from an Amazon Linux AMI. Instance type: Specify the same instance type as your Reserved Instance; for example, t2.large. Availability Zone: If you purchased a Reserved Instance for a specific Availability Zone, you must launch the instance into the same Availability Zone. If you purchased a regional Reserved Instance, you can launch your instance into any Availability Zone. Tenancy: The tenancy of your instance must match the tenancy of the Reserved Instance; for example, dedicated or shared. For more information, see Dedicated Instances.
Terraform
48,751,593
14
My pipeline sh block: sh "set +e; /terraform/terraform plan -var aws_access_key=${aws_access_key} - var aws_secret_key=${aws_secret_key} -var aws_ami=${ami_id} -var aws_instance_type=${instance_type} -var aws_elb_security_group=${elb_sg} -var aws_ec2_security_group=${ec2_sg} -detailed-exitcode; echo \$? > status" exitCode = readFile('status').trim() echo "Terraform Plan Exit Code: ${exitCode}" output : + set +e + /terraform/terraform plan -var aws_access_key=**** -var aws_secret_key=**** -var aws_ami=ami-xxxxxxx + -var aws_instance_type=t2.medium -var aws_elb_security_group=sg-xxxx /terraform/selectdev/int/mp-frontend@tmp/durable-6c57c14c/script.sh: line 3: -var: command not found + -var aws_ec2_security_group=sg-axxx /terraform/selectdev/int/mp-frontend@tmp/durable-6c57c14c/script.sh: line 4: -var: command not found + -detailed-exitcode /terraform/selectdev/int/mp-frontend@tmp/durable-6c57c14c/script.sh: line 5: -detailed-exitcode: command not found + echo 127 I'm not sure why new line is being added to the command and If I do single quotes like sh '', variables are blank. what am I doing wrong ? I tried to do like below but it too adding new lines def command = $/....../$ res = sh(returnStdout: true, script: command)
First, FYI: single quotes skip variable interpolation in groovy If you want to have a multiple line script in a string, you need to escape endlines in a multi line variable. You need three things: Use triple double strings """. This allows you to have multi-line strings with interpolation (triple single quoted strings ''' let you do the same thing without interpolation). Escape endlines with \. This lets you insert newlines to format a long command. Wrap variables with double quotes (valid within triple double quotes, but you can also just escape the double quotes otherwise: \") For example as follows: (one argument per line for readability) sh("""set +e; /terraform/terraform plan \ -var aws_access_key="${aws_access_key}" \ -var aws_secret_key="${aws_secret_key}" \ -var aws_ami="${ami_id}" \ -var aws_instance_type="${instance_type}" \ -var aws_elb_security_group="${elb_sg}" \ -var aws_ec2_security_group="${ec2_sg}" \ -detailed-exitcode; echo \$? > status""")
Terraform
48,630,765
14
I need to define a resource in Terraform (v0.10.8) that has a list property that may or may not be empty depending on a variable, see volume_ids in the following definition: resource "digitalocean_droplet" "worker_node" { count = "${var.droplet_count}" [...] volume_ids = [ "${var.volume_size != 0 ? element(digitalocean_volume.worker.*.id, count.index) : ""}" ] } resource "digitalocean_volume" "worker" { count = "${var.volume_size != 0 ? var.droplet_count : 0}" [...] } } The solution I've come up with fails however in the case where the list should be empty (i.e., var.volume_size is 0): volume_ids = [ "${var.volume_size != 0 ? element(digitalocean_volume.worker.*.id, count.index) : ""}" ] The following Terraform error message is produced: * module.workers.digitalocean_droplet.worker_node[1]: element: element() may not be used with an empty list in: ${var.volume_size != 0 ? element(digitalocean_volume.worker.*.id, count.index) : ""} How should I correctly write my definition of volume_ids?
Unfortunately this is one of many language shortcomings in terraform. The hacky workaround is to tack a list containing an empty string onto your possibly empty list, so that element() always has something to index into. ${var.volume_size != 0 ? element(concat(digitalocean_volume.worker.*.id , list("")), count.index) : ""}
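For reference, Terraform 0.12 and later allows a conditional to return a list directly, so the concat trick is no longer needed; a sketch in 0.12 syntax, with the other droplet arguments left as in the question:

resource "digitalocean_droplet" "worker_node" {
  count = var.droplet_count
  # ... other arguments unchanged from the question ...

  volume_ids = var.volume_size != 0 ? [digitalocean_volume.worker[count.index].id] : []
}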
Terraform
47,412,837
14
I am trying to have a common user_data file for common tasks such as folder creation and certain package install and a separate user_data file for application specific configuration I am trying the below - user_data = "${data.template_file.userdata_common.rendered}", "${data.template_file.userdata_master.rendered}" With these configs - Common User Data Template data "template_file" "userdata_common" { template = "${file("${path.module}/userdata_common.sh")}" vars { "ALBTarget" = "${var.ALBTarget}" "s3bucket" = "${var.s3bucket}" "centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}" "centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}" } } Application Specific Config data "template_file" "userdata_master" { template = "${file("${path.module}/userdata_master.sh")}" vars { "ALBTarget" = "${var.ALBTarget}" "s3bucket" = "${var.s3bucket}" "centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}" "centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}" } } I get the below Error when i do Plan - Failed to load root config module: Error parsing /terraform/main.tf: key ${data.template_file.userdata_common.rendered}"' expected start of object ('{') or assignment ('=') Is this possible using Terraform (0.9.3)? If not, what's the best way to do this with Terraform?
Did you try template_cloudinit_config? Add the code below. data "template_cloudinit_config" "master" { gzip = true base64_encode = true # get common user_data part { filename = "common.cfg" content_type = "text/part-handler" content = "${data.template_file.userdata_common.rendered}" } # get master user_data part { filename = "master.cfg" content_type = "text/part-handler" content = "${data.template_file.userdata_master.rendered}" } } # sample code to use it. resource "aws_instance" "web" { ami = "ami-d05e75b8" instance_type = "t2.micro" user_data = "${data.template_cloudinit_config.master.rendered}" } Let me know if it works.
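On current Terraform versions the template provider is deprecated, and the same multipart mechanism lives in the hashicorp/cloudinit provider's cloudinit_config data source. A rough modern sketch of the same idea, where local.userdata_vars is a hypothetical map holding the template variables from the question:

locals {
  userdata_vars = {
    ALBTarget = var.ALBTarget
    s3bucket  = var.s3bucket
    # plus the centrify lookups from the question
  }
}

data "cloudinit_config" "master" {
  gzip          = true
  base64_encode = true

  part {
    filename     = "common.cfg"
    content_type = "text/x-shellscript"
    content      = templatefile("${path.module}/userdata_common.sh", local.userdata_vars)
  }

  part {
    filename     = "master.cfg"
    content_type = "text/x-shellscript"
    content      = templatefile("${path.module}/userdata_master.sh", local.userdata_vars)
  }
}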
Terraform
43,642,308
14
In terraform, long keys can be specified as follows: resource "aws_iam_role_policy" "foo-policy" { role = "${aws_iam_role.foo-role.name}" name = "foo-policy" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:*:*:*" ] } ] } EOF } This is a common pattern for IAM policy documents. The approach is documented here and is the example given in the AWS IAM role policy page on terraform. Is there a way to instead read the document from an external file? This has numerous advantages: you can use tools to generate the policy you can use linting tools to validate the policy JSON. Also editor syntax highlighting will work, showing JSON mistakes like trailing commas. you can use more advanced tools to validate the policy document syntax
You can use terraform's template_file data source for this. Simply write your policy out to a file in a path that your terraform scripts can access, and then create a template_file data source that references it. For example: data "template_file" "policy" { template = "${file("somepath/my-policy.json")}" } And then, in foo-policy, you would render it like so: policy = "${data.template_file.policy.rendered}" An additional benefit of template_file is that you can interpolate variables within the referenced file. For example, you could have variables like ${IAMUser} or ${AWSAccountNumber} in your policy and pass it in via the template_file vars option, which would allow you to reuse the policy file. Further Reading Terraform Docs - Configuring Data Sources
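On Terraform 0.12 and later there are two simpler options that avoid the template provider entirely: read the file directly when the JSON needs no interpolation, or use the built-in templatefile function when it does. A brief sketch, with the file paths and the account_id variable as placeholders:

resource "aws_iam_role_policy" "foo-policy" {
  role = aws_iam_role.foo-role.name
  name = "foo-policy"

  # no variables needed inside the JSON:
  policy = file("${path.module}/policies/foo-policy.json")

  # or, when the document needs interpolation:
  # policy = templatefile("${path.module}/policies/foo-policy.json.tpl", { account_id = var.account_id })
}

Either way the policy lives in its own .json file, so linters and editors can validate it as plain JSON.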
Terraform
43,526,544
14
I used to use multiple .sh files that ran different "terraform remote config" commands to switch between state files in buckets in different Google Cloud projects for different environments (dev, test and prod). With version 0.9.0, I understand that this now goes into a a .tf file: terraform { backend "gcs" { bucket = "terraform-state-test" path = "terraform.tfstate" project = "cloud-test" } } In version 0.9.0 there is now also the State Environment ("terraform env"): resource "google_container_cluster" "container_cluster" { initial_node_count = "${terraform.env == "prod" ? 5 : 1}" } But how should I now manage multiple environments in the same directory structure with the new backend configuration?
At the time of this writing, not all of the remote backends in Terraform have been updated to support state environments. For those that have, each backend has its own conventions for how to represent the now-multiple states in the data store. As of version 0.9.2, the "consul", "s3" and "local" backends have been updated. The "gcs" backend has not yet, but once it has the procedure described here will apply to that too. There's initially a "default" environment, but if you never run terraform apply with this environment selected then you can ignore it and name your environments whatever you want. To create a new environment called "production" and switch to it: terraform workspace new production This will establish an entirely separate state on the backend, so terraform plan should show that all of the resources need to be created fresh. You can switch between already-existing environments like this: terraform workspace select production Before 0.9, many teams (including yours, it sounds like) wrote wrapper scripts to simulate this behavior. It's likely that these scripts didn't follow exactly the same naming conventions in the storage backends, so some manual work will be required to migrate. One way to do that migration is to start off using the "local" backend, which stores state in a local directory called terraform.tfstate.d. While working locally you can create the environments you want and then carefully overwrite the empty state files in the local directory with the existing state files from your previous scripted solution. Once all of the local environments have appropriate states in place you can then change the backend block in the config to the appropriate remote backend and run terraform init to trigger a migration of all of the local environments into the new backend. After this, the terraform workspace select command will begin switching between the remote environments rather than the local ones. If your chosen remote backend doesn't yet support environments, it's best to continue with a scripted solution for the time being. This means replacing terraform remote config in your existing wrapper script with a use of the partial configuration pattern to pass environment-specific configuration into terraform init.
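For the partial configuration pattern mentioned at the end, the backend block stays empty in the configuration and the environment-specific values are supplied at init time. A sketch using the values from the question's backend block:

terraform {
  backend "gcs" {}
}

Then, per environment, the wrapper script runs something like:

terraform init \
  -backend-config="bucket=terraform-state-test" \
  -backend-config="path=terraform.tfstate" \
  -backend-config="project=cloud-test"

Swapping the -backend-config values per environment replaces what terraform remote config used to do.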
Terraform
43,048,050
14
I need to use regular expressions in my Terraform code. The documentation for the replace function says the string if wrapped in a forward slash can be treated as a regex. I've tried the following: Name = "${replace(var.string, var.search | lower(var.search), replace)}" I need to use regex to replace either the string or the lower case of the string with the replace string.
The Terraform docs for the replace function state that you need to wrap your search string in forward slashes for it to search for a regular expression and this is also seen in the code. Terraform uses the re2 library to handle regular expressions which does supposedly take a /i flag to make it case insensitive. However I couldn't seem to get that to work at all (even trying /search/i/) but it does support Perl style regular expressions unless in POSIX mode so simply prefixing your search variable with (?i) should work fine. A basic worked example looks like this: variable "string" { default = "Foo" } variable "search" { default = "/(?i)foo/" } variable "replace" { default = "bar" } resource "aws_instance" "example" { ami = "ami-123456" instance_type = "t2.micro" tags { Name = "${replace(var.string, var.search, var.replace)}" } }
Terraform
42,808,041
14
I am trying to access one module variable in another new module to get aws instance ids which are created in that module and use them in a module Cloud watch alerts which creates alarm in those instance ids. The structure is something like the below: Amodule # here this is used for creating kafka aws instances: main.tf kafkainstancevariables.tf Bmodule # here this is used for creating alarms in those kafka instances: main.tf cloudwatchalertsforkafkainstancesVariables.tf Outside the modules, the terraform main file from where all modules are called: main.tf variables.tf How to access the variables created in Amodule in Bmodule? Thank you!
You can use outputs to accomplish this. In your kafka module, you could define an output that looks something like this: output "instance_ids" { value = ["${aws_instance.kafka.*.id}"] } In another terraform file, let's assume you instantiated the module with something like: module "kafka" { source = "./modules/kafka" } You can then access that output as follows: instances = ["${module.kafka.instance_ids}"] If your modules are isolated from each other (i.e. your cloudwatch module doesn't instantiate your kafka module), you can pass the outputs as variables between modules: module "kafka" { source = "./modules/kafka" } module "cloudwatch" { source = "./modules/cloudwatch" instances = ["${module.kafka.instance_ids}"] } Of course, your "cloudwatch" module would have to declare the instances variable. See https://www.terraform.io/docs/modules/usage.html#outputs for more information on using outputs in modules.
Terraform
41,042,096
14
Using Terraform 0.7.7. I have a simple Terraform file with the following: provider "aws" { access_key = "${var.access_key}" secret_key = "${var.secret_key}" region = "${var.region}" } resource "aws_instance" "personal" { ami = "${lookup(var.amis, var.region)}" instance_type = "t2.micro" } resource "aws_eip" "ip" { instance = "${aws_instance.personal.id}" } resource "aws_key_pair" "personal" { key_name = "mschuchard-us-east" public_key = "${var.public_key}" } Terraform apply yields the following error: aws_key_pair.personal: Creating... fingerprint: "" => "<computed>" key_name: "" => "mschuchard-us-east" public_key: "" => "ssh-rsa pubkey hash mschuchard-us-east" aws_instance.personal: Creating... ami: "" => "ami-c481fad3" availability_zone: "" => "<computed>" ebs_block_device.#: "" => "<computed>" ephemeral_block_device.#: "" => "<computed>" instance_state: "" => "<computed>" instance_type: "" => "t2.micro" key_name: "" => "<computed>" network_interface_id: "" => "<computed>" placement_group: "" => "<computed>" private_dns: "" => "<computed>" private_ip: "" => "<computed>" public_dns: "" => "<computed>" public_ip: "" => "<computed>" root_block_device.#: "" => "<computed>" security_groups.#: "" => "<computed>" source_dest_check: "" => "true" subnet_id: "" => "<computed>" tenancy: "" => "<computed>" vpc_security_group_ids.#: "" => "<computed>" aws_instance.personal: Creation complete aws_eip.ip: Creating... allocation_id: "" => "<computed>" association_id: "" => "<computed>" domain: "" => "<computed>" instance: "" => "i-0ab94b58b0089697d" network_interface: "" => "<computed>" private_ip: "" => "<computed>" public_ip: "" => "<computed>" vpc: "" => "<computed>" aws_eip.ip: Creation complete Error applying plan: 1 error(s) occurred: * aws_key_pair.personal: Error import KeyPair: InvalidKeyPair.Duplicate: The keypair 'mschuchard-us-east' already exists. status code: 400, request id: 51950b9a-55e8-4901-bf35-4d2be234abbf The only help I found with googling was to blow away the *.tfstate files, which I tried and that did not help. I can launch an EC2 instance with the gui with this key pair and easily ssh into it, but Terraform is erroring when trying to use the same fully functional keypair.
The error is telling you that the keypair already exists in your AWS account but Terraform has no knowledge of it in its state files so is attempting to create it each time. You have two options available to you here. Firstly, you could simply delete it from the AWS account and allow Terraform to upload it and thus allow it to be managed by Terraform and be in its state files. Alternatively you could use the Terraform import command to import the pre-existing resource into your state file: terraform import aws_key_pair.personal mschuchard-us-east
Terraform
40,120,065
14
UPDATE: Been working on this off and on among other things. Cannot seem to get a working config w/ two subnets and an SSH bastion. Placing bounty for a full .tf file config that: * creates two private subnets * creates a bastion * spins an ec2 instance on each subnet configured via the bastion (run some arbitrary shell command via the bastion) * has an internet gateway configured * has a nat gateway for the hosts on the private subnets * has routes and security groups configured accordingly Original post: I am trying to learn Terraform and build a prototype. I have an AWS VPC configured via Terraform. In addition to a DMZ subnet, I have a public subnet 'web' that receives traffic from the internet. I have a private subnet 'app' that is not accessible from the internet. I am trying to configure a bastion host so that terraform can provision instances on the private 'app' subnet. I have not yet been able to get this to work. When I ssh in to the bastion, I cannot SSH from the bastion host to any instances within the private subnet. I suspect there is a routing problem. I have been building this prototype via several available examples and the documentation. Many of the examples use slightly different techniques and terraform routing definitions via the aws provider. Can someone please provide the ideal or proper way to define these three subnets (public 'web', public 'dmz' w/ a bastion, and private 'app') so that instances on the 'web' subnet can access the 'app' subnet and that the bastion host in the DMZ can provision instances in the private 'app' subnet? A snip of my configs are below: resource "aws_subnet" "dmz" { vpc_id = "${aws_vpc.vpc-poc.id}" cidr_block = "${var.cidr_block_dmz}" } resource "aws_route_table" "dmz" { vpc_id = "${aws_vpc.vpc-poc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.gateway.id}" } } resource "aws_route_table_association" "dmz" { subnet_id = "${aws_subnet.dmz.id}" route_table_id = "${aws_route_table.dmz.id}" } resource "aws_subnet" "web" { vpc_id = "${aws_vpc.vpc-poc.id}" cidr_block = "10.200.2.0/24" } resource "aws_route_table" "web" { vpc_id = "${aws_vpc.vpc-poc.id}" route { cidr_block = "0.0.0.0/0" instance_id = "${aws_instance.bastion.id}" } } resource "aws_route_table_association" "web" { subnet_id = "${aws_subnet.web.id}" route_table_id = "${aws_route_table.web.id}" } resource "aws_subnet" "app" { vpc_id = "${aws_vpc.vpc-poc.id}" cidr_block = "10.200.3.0/24" } resource "aws_route_table" "app" { vpc_id = "${aws_vpc.vpc-poc.id}" route { cidr_block = "0.0.0.0/0" instance_id = "${aws_instance.bastion.id}" } } resource "aws_route_table_association" "app" { subnet_id = "${aws_subnet.app.id}" route_table_id = "${aws_route_table.app.id}" }
Here is a snippet that may help you. This was untested but was pulled from one of my terraform files where I provision VMs in a private subnet. I know this works with one private subnet, I tried to implement two here like your original question. I jump through my NAT instances to hit and provision private subnet boxes with Terraform. If your security groups are set up correctly it does work. It took some experimentation for me. /* VPC creation */ resource "aws_vpc" "vpc_poc" { cidr_block = "10.200.0.0/16" } /* Internet gateway for the public subnets */ resource "aws_internet_gateway" "gateway" { vpc_id = "${aws_vpc.vpc_poc.id}" } /* DMZ subnet - public */ resource "aws_subnet" "dmz" { vpc_id = "${aws_vpc.vpc_poc.id}" cidr_block = "10.200.1.0/24" /* may help to be explicit here */ map_public_ip_on_launch = true /* this is recommended in the docs */ depends_on = ["aws_internet_gateway.gateway"] } resource "aws_route_table" "dmz" { vpc_id = "${aws_vpc.vpc_poc.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.gateway.id}" } } resource "aws_route_table_association" "dmz" { subnet_id = "${aws_subnet.dmz.id}" route_table_id = "${aws_route_table.dmz.id}" } /* Web subnet - public */ resource "aws_subnet" "web" { vpc_id = "${aws_vpc.vpc_poc.id}" cidr_block = "10.200.2.0/24" map_public_ip_on_launch = true depends_on = ["aws_internet_gateway.gateway"] } resource "aws_route_table" "web" { vpc_id = "${aws_vpc.vpc_poc.id}" route { cidr_block = "0.0.0.0/0" /* your public web subnet needs access to the gateway */ /* this was set to bastion before so you had a circular arg */ gateway_id = "${aws_internet_gateway.gateway.id}" } } resource "aws_route_table_association" "web" { subnet_id = "${aws_subnet.web.id}" route_table_id = "${aws_route_table.web.id}" } /* App subnet - private */ resource "aws_subnet" "app" { vpc_id = "${aws_vpc.vpc_poc.id}" cidr_block = "10.200.3.0/24" } /* Create route for DMZ Bastion */ resource "aws_route_table" "app" { vpc_id = "${aws_vpc.vpc_poc.id}" route { cidr_block = "0.0.0.0/0" /* this send traffic to the bastion to pass off */ instance_id = "${aws_instance.nat_dmz.id}" } } /* Create route for App Bastion */ resource "aws_route_table" "app" { vpc_id = "${aws_vpc.vpc_poc.id}" route { cidr_block = "0.0.0.0/0" /* this send traffic to the bastion to pass off */ instance_id = "${aws_instance.nat_web.id}" } } resource "aws_route_table_association" "app" { subnet_id = "${aws_subnet.app.id}" route_table_id = "${aws_route_table.app.id}" } /* Default security group */ resource "aws_security_group" "default" { name = "default-sg" description = "Default security group that allows inbound and outbound traffic from all instances in the VPC" vpc_id = "${aws_vpc.vpc_poc.id}" ingress { from_port = "0" to_port = "0" protocol = "-1" self = true } egress { from_port = "0" to_port = "0" protocol = "-1" self = true } } /* Security group for the nat server */ resource "aws_security_group" "nat" { name = "nat-sg" description = "Security group for nat instances that allows SSH and VPN traffic from internet. 
Also allows outbound HTTP[S]" vpc_id = "${aws_vpc.vpc_poc.id}" ingress { from_port = 80 to_port = 80 protocol = "tcp" /* this your private subnet cidr */ cidr_blocks = ["10.200.3.0/24"] } ingress { from_port = 443 to_port = 443 protocol = "tcp" /* this is your private subnet cidr */ cidr_blocks = ["10.200.3.0/24"] } ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = -1 to_port = -1 protocol = "icmp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 22 to_port = 22 protocol = "tcp" /* this is the vpc cidr block */ cidr_blocks = ["10.200.0.0/16"] } egress { from_port = -1 to_port = -1 protocol = "icmp" cidr_blocks = ["0.0.0.0/0"] } } /* Security group for the web */ resource "aws_security_group" "web" { name = "web-sg" description = "Security group for web that allows web traffic from internet" vpc_id = "${aws_vpc.vpc_poc.id}" ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } } /* Install deploy key for use with all of our provisioners */ resource "aws_key_pair" "deployer" { key_name = "deployer-key" public_key = "${file("~/.ssh/id_rsa")}" } /* Setup NAT in DMZ subnet */ resource "aws_instance" "nat_dmz" { ami = "ami-67a54423" availability_zone = "us-west-1a" instance_type = "m1.small" key_name = "${aws_key_pair.deployer.id}" /* Notice we are assigning the security group here */ security_groups = ["${aws_security_group.nat.id}"] /* this puts the instance in your public subnet, but translate to the private one */ subnet_id = "${aws_subnet.dmz.id}" /* this is really important for nat instance */ source_dest_check = false associate_public_ip_address = true } /* Give NAT EIP In DMZ */ resource "aws_eip" "nat_dmz" { instance = "${aws_instance.nat_dmz.id}" vpc = true } /* Setup NAT in Web subnet */ resource "aws_instance" "nat_web" { ami = "ami-67a54423" availability_zone = "us-west-1a" instance_type = "m1.small" key_name = "${aws_key_pair.deployer.id}" /* Notice we are assigning the security group here */ security_groups = ["${aws_security_group.nat.id}"] /* this puts the instance in your public subnet, but translate to the private one */ subnet_id = "${aws_subnet.web.id}" /* this is really important for nat instance */ source_dest_check = false associate_public_ip_address = true } /* Give NAT EIP In DMZ */ resource "aws_eip" "nat_web" { instance = "${aws_instance.nat_web.id}" vpc = true } /* Install server in private subnet and jump host to it with terraform */ resource "aws_instance" "private_box" { ami = "ami-d1315fb1" instance_type = "t2.large" key_name = "${aws_key_pair.deployer.id}" subnet_id = "${aws_subnet.api.id}" associate_public_ip_address = false /* this is what gives the box access to talk to the nat */ security_groups = ["${aws_security_group.nat.id}"] connection { /* connect through the nat instance to reach this box */ bastion_host = "${aws_eip.nat_dmz.public_ip}" bastion_user = "ec2-user" bastion_private_key = "${file("keys/terraform_rsa")}" /* connect to box here */ user = "ec2-user" host = "${self.private_ip}" private_key = "${file("~/.ssh/id_rsa")}" } }
Terraform
35,822,830
14
I am working on a aws stack and have some lambdas and s3 bucket ( sample code below) . how to generate zip file for lambda via terrarform. I have seen different styles and probably depends on the version of terraform as well. resource "aws_lambda_function" "my_lambda" { filename = "my_lambda_func.zip" source_code_hash = filebase64sha256("my_lambda_func.zip")
So to give a more up-to-date and use-case based answer, with version 2.3.0 of the hashicorp/archive provider you can apply the following: data "archive_file" "dynamodb_stream_lambda_function" { type = "zip" source_file = "../../lambda-dynamodb-streams/index.js" output_path = "lambda_function.zip" } resource "aws_lambda_function" "my_dynamodb_stream_lambda" { function_name = "my-dynamodb-stream-lambda" role = aws_iam_role.my_stream_lambda_role.arn handler = "index.handler" filename = data.archive_file.dynamodb_stream_lambda_function.output_path source_code_hash = data.archive_file.dynamodb_stream_lambda_function.output_base64sha256 runtime = "nodejs16.x" }
Terraform
71,992,754
13
I have two resources: resource "aws_lightsail_instance" "myserver-sig" { name = "myserver-Sig" availability_zone = "eu-west-2a" blueprint_id = "ubuntu_20_04" bundle_id = "nano_2_0" key_pair_name = "LightsailDefaultKeyPair" } and resource "aws_lightsail_instance_public_ports" "myserver-sig-public-ports" { instance_name = aws_lightsail_instance.myserver-sig.name port_info { protocol = "tcp" from_port = 443 to_port = 443 } port_info { protocol = "tcp" from_port = 80 to_port = 80 } depends_on = [ aws_lightsail_instance.myserver-sig, ] } When I first run terraform apply both resources are created. If I want to replace the aws_lightsail_instance with a new version then the aws_lightsail_instance will redeploy, but the aws_lightsail_instance_public_ports will not because the ports haven't changed. However as part of the deploy of aws_lightsail_instance it changes the public ports to close 443 and open 22. This means that the end state of the redeploy of the aws_lightsail_instance is that port 443 is closed. If I run terraform apply again then it will correctly replace aws_lightsail_instance_public_ports opening port 443 How do I force a recreation of the aws_lightsail_instance_public_ports resource so that I only have to run terraform apply once?
You can force the recreation (delete/create or -/+) by using the -replace=ADDRESS argument with terraform plan or terraform apply: terraform apply -replace=aws_lightsail_instance_public_ports.myserver-sig-public-ports This replaces the former workflow of terraform taint <resource_address> followed by a plan and apply. If you are using an older version of Terraform, then you would need to use taint instead: terraform taint aws_lightsail_instance_public_ports.myserver-sig-public-ports
Terraform
70,772,731
13
I am trying to build in Terraform a Web ACL resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/wafv2_web_acl This resource has the nested blocks rule->action->block and rule-> action->count I would like to have a variable which's type allows me to set the action to either count {} or block{} so that the two following configurations are possible: With block: resource "aws_wafv2_web_acl" "example" { ... rule { ... action { block {} } ... } With count: resource "aws_wafv2_web_acl" "example" { ... rule { ... action { count {} } ... } I can achieve this result with a boolean variable and dynamic blocks in a very non-declarative way so far. My question is, can the type of a variable reference the type of a nested block, so that the content of the nested block can be passed in a variable? What I am trying to achieve is something that would look similar to this (non working syntax): resource "aws_wafv2_web_acl" "example" { ... rule { ... action = var.action_block ... } } variable "action_block" { description = "Action of the rule" type = <whatever type is accepted by aws_wafv2_web_acl->rule->action> } so that it can be passed down in a similar manner to this module "my_waf" { source = "../modules/waf" action_block { block {} } } For reference, what I am trying to avoid: dynamic "action" { for_each = var.block ? [] : [1] content { count {} } } dynamic "action" { for_each = var.block ? [1] : [] content { block {} } } Thank you so much for your help!
The only marginal improvement I can imagine is to move the dynamic blocks one level deeper, to perhaps make it clear to a reader that the action block will always be present and it's the count or block blocks inside that have dynamic behavior: action { dynamic "count" { for_each = var.block ? [] : [1] content {} } dynamic "block" { for_each = var.block ? [1] : [] content {} } } There are some other ways you could formulate those two for_each expressions so that the input could have a different shape, but you'll need to write out a suitable type constraint for that variable yourself which matches whatever conditions you want to apply to it.
Terraform
70,382,612
13
I have a Terraform script that create an Azure Key Vault, imports my SSL certificate (3DES .pfx file with a password), and creates an Application Gateway with a HTTP listener. I'm trying to change this to a HTTPS listener that uses my SSL certificate from KeyVault. I've stepped through this process manually in Azure Portal and I have this working with PowerShell. Unfortunately I don't find Terraform's documentation clear on how this is supposed to be achieved. Here are relevant snippets of my Application Gateway and certificate resources: resource "azurerm_application_gateway" "appgw" { name = "my-appgw" location = "australiaeast" resource_group_name = "my-rg" http_listener { protocol = "https" ssl_certificate_name = "appgw-listener-cert" ... } identity { type = "UserAssigned" identity_ids = [azurerm_user_assigned_identity.appgw_uaid.id] } ssl_certificate { key_vault_secret_id = azurerm_key_vault_certificate.ssl_cert.secret_id name = "appgw-listener-cert" } ... } resource "azurerm_key_vault" "kv" { name = "my-kv" location = "australiaeast" resource_group_name = "my-rg" ... access_policy { object_id = data.azurerm_client_config.current.object_id tenant_id = data.azurerm_client_config.current.tenant_id certificate_permissions = [ "Create", "Delete", "DeleteIssuers", "Get", "GetIssuers", "Import", "List", "ListIssuers", "ManageContacts", "ManageIssuers", "Purge", "SetIssuers", "Update" ] key_permissions = [ "Backup", "Create", "Decrypt", "Delete", "Encrypt", "Get", "Import", "List", "Purge", "Recover", "Restore", "Sign", "UnwrapKey", "Update", "Verify", "WrapKey" ] secret_permissions = [ "Backup", "Delete", "Get", "List", "Purge", "Restore", "Restore", "Set" ] } access_policy { object_id = azurerm_user_assigned_identity.uaid_appgw.principal_id tenant_id = data.azurerm_client_config.current.tenant_id secret_permissions = [ "Get" ] } } resource "azurerm_key_vault_certificate" "ssl_cert" { name = "my-ssl-cert" key_vault_id = azurerm_key_vault.kv.id certificate { # These are stored as sensitive variables in Terraform Cloud # ssl_cert_b64 value was retrieved by: $ cat my-ssl-cert.pfx | base64 > o.txt contents = var.ssl_cert_b64 password = var.ssl_cert_passwd } certificate_policy { issuer_parameters { name = "Unknown" } key_properties { exportable = false key_size = 2048 key_type = "RSA" reuse_key = false } secret_properties { content_type = "application/x-pkcs12" } } } Here is the (sanitised) error I get in Terraform Cloud: Error: waiting for create/update of Application Gateway: (Name "my-appgw" / Resource Group "my-rg"): Code="ApplicationGatewayKeyVaultSecretException" Message="Problem occured while accessing and validating KeyVault Secrets associated with Application Gateway '/subscriptions/1324/resourceGroups/my-rg/providers/Microsoft.Network/applicationGateways/my-appgw'. See details below:" Details=[{"code":"ApplicationGatewaySslCertificateDoesNotHavePrivateKey","message":"Certificate /subscriptions/1324/resourceGroups/my-rg/providers/Microsoft.Network/applicationGateways/my-appgw/sslCertificates/appgw-listener-cert does not have Private Key."}] I downloaded the certificate from Key Vault and it appears to be a valid, not corrupted or otherwise broken. I don't understand why the error says it doesn't have a Private Key. Can someone point out what I've missed or I'm doing wrong?
I tested 2 scenarios in my environment : Scenario 1: Generating a new certificate in Keyvault and uploading it in application gateway ssl certificate. provider "azurerm" { features{} } data "azurerm_client_config" "current" {} data "azurerm_resource_group" "example"{ name = "ansumantest" } resource "azurerm_user_assigned_identity" "base" { resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location name = "mi-appgw-keyvault" } resource "azurerm_key_vault" "kv" { name = "ansumankeyvault01" location = data.azurerm_resource_group.example.location resource_group_name = data.azurerm_resource_group.example.name tenant_id = data.azurerm_client_config.current.tenant_id sku_name = "standard" access_policy { object_id = data.azurerm_client_config.current.object_id tenant_id = data.azurerm_client_config.current.tenant_id certificate_permissions = [ "Create", "Delete", "DeleteIssuers", "Get", "GetIssuers", "Import", "List", "ListIssuers", "ManageContacts", "ManageIssuers", "Purge", "SetIssuers", "Update" ] key_permissions = [ "Backup", "Create", "Decrypt", "Delete", "Encrypt", "Get", "Import", "List", "Purge", "Recover", "Restore", "Sign", "UnwrapKey", "Update", "Verify", "WrapKey" ] secret_permissions = [ "Backup", "Delete", "Get", "List", "Purge", "Restore", "Restore", "Set" ] } access_policy { object_id = azurerm_user_assigned_identity.base.principal_id tenant_id = data.azurerm_client_config.current.tenant_id secret_permissions = [ "Get" ] } } output "secret_identifier" { value = azurerm_key_vault_certificate.example.secret_id } resource "azurerm_key_vault_certificate" "example" { name = "generated-cert" key_vault_id = azurerm_key_vault.kv.id certificate_policy { issuer_parameters { name = "Self" } key_properties { exportable = true key_size = 2048 key_type = "RSA" reuse_key = true } lifetime_action { action { action_type = "AutoRenew" } trigger { days_before_expiry = 30 } } secret_properties { content_type = "application/x-pkcs12" } x509_certificate_properties { # Server Authentication = 1.3.6.1.5.5.7.3.1 # Client Authentication = 1.3.6.1.5.5.7.3.2 extended_key_usage = ["1.3.6.1.5.5.7.3.1"] key_usage = [ "cRLSign", "dataEncipherment", "digitalSignature", "keyAgreement", "keyCertSign", "keyEncipherment", ] subject_alternative_names { dns_names = ["internal.contoso.com", "domain.hello.world"] } subject = "CN=hello-world" validity_in_months = 12 } } } resource "azurerm_virtual_network" "example" { name = "example-network" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location address_space = ["10.254.0.0/16"] } resource "azurerm_subnet" "frontend" { name = "frontend" resource_group_name = data.azurerm_resource_group.example.name virtual_network_name = azurerm_virtual_network.example.name address_prefixes = ["10.254.0.0/24"] } resource "azurerm_subnet" "backend" { name = "backend" resource_group_name = data.azurerm_resource_group.example.name virtual_network_name = azurerm_virtual_network.example.name address_prefixes = ["10.254.2.0/24"] } resource "azurerm_public_ip" "example" { name = "example-pip" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location allocation_method = "Static" sku = "standard" } #&nbsp;since these variables are re-used - a locals block makes this more maintainable locals { backend_address_pool_name = "${azurerm_virtual_network.example.name}-beap" frontend_port_name = 
"${azurerm_virtual_network.example.name}-feport" frontend_ip_configuration_name = "${azurerm_virtual_network.example.name}-feip" http_setting_name = "${azurerm_virtual_network.example.name}-be-htst" listener_name = "${azurerm_virtual_network.example.name}-httplstn" request_routing_rule_name = "${azurerm_virtual_network.example.name}-rqrt" redirect_configuration_name = "${azurerm_virtual_network.example.name}-rdrcfg" } resource "null_resource" "previous" {} resource "time_sleep" "wait_240_seconds" { depends_on = [azurerm_key_vault.kv] create_duration = "240s" } resource "azurerm_application_gateway" "network" { name = "example-appgateway" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location sku { name = "Standard_v2" tier = "Standard_v2" capacity = 2 } gateway_ip_configuration { name = "my-gateway-ip-configuration" subnet_id = azurerm_subnet.frontend.id } frontend_port { name = local.frontend_port_name port = 443 } frontend_ip_configuration { name = local.frontend_ip_configuration_name public_ip_address_id = azurerm_public_ip.example.id } backend_address_pool { name = local.backend_address_pool_name } backend_http_settings { name = local.http_setting_name cookie_based_affinity = "Disabled" path = "/path1/" port = 443 protocol = "Https" request_timeout = 60 } http_listener { name = local.listener_name frontend_ip_configuration_name = local.frontend_ip_configuration_name frontend_port_name = local.frontend_port_name protocol = "Https" ssl_certificate_name = "app_listener" } identity { type = "UserAssigned" identity_ids = [azurerm_user_assigned_identity.base.id] } ssl_certificate { name = "app_listener" key_vault_secret_id = azurerm_key_vault_certificate.example.secret_id } request_routing_rule { name = local.request_routing_rule_name rule_type = "Basic" http_listener_name = local.listener_name backend_address_pool_name = local.backend_address_pool_name backend_http_settings_name = local.http_setting_name } depends_on = [time_sleep.wait_240_seconds] } Output: Scenario 2 : Using one certificate which I import from local machine to keyvault and using it in application gateway. 
provider "azurerm" { features{} } data "azurerm_client_config" "current" {} data "azurerm_resource_group" "example"{ name = "ansumantest" } resource "azurerm_user_assigned_identity" "base" { resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location name = "mi-appgw-keyvault" } resource "azurerm_key_vault" "kv" { name = "ansumankeyvault01" location = data.azurerm_resource_group.example.location resource_group_name = data.azurerm_resource_group.example.name tenant_id = data.azurerm_client_config.current.tenant_id sku_name = "standard" access_policy { object_id = data.azurerm_client_config.current.object_id tenant_id = data.azurerm_client_config.current.tenant_id certificate_permissions = [ "Create", "Delete", "DeleteIssuers", "Get", "GetIssuers", "Import", "List", "ListIssuers", "ManageContacts", "ManageIssuers", "Purge", "SetIssuers", "Update" ] key_permissions = [ "Backup", "Create", "Decrypt", "Delete", "Encrypt", "Get", "Import", "List", "Purge", "Recover", "Restore", "Sign", "UnwrapKey", "Update", "Verify", "WrapKey" ] secret_permissions = [ "Backup", "Delete", "Get", "List", "Purge", "Restore", "Restore", "Set" ] } access_policy { object_id = azurerm_user_assigned_identity.base.principal_id tenant_id = data.azurerm_client_config.current.tenant_id secret_permissions = [ "Get" ] } } output "secret_identifier" { value = azurerm_key_vault_certificate.example.secret_id } resource "azurerm_key_vault_certificate" "example" { name = "imported-cert" key_vault_id = azurerm_key_vault.kv.id certificate { contents = filebase64("C:/appgwlistner.pfx") password = "password" } certificate_policy { issuer_parameters { name = "Self" } key_properties { exportable = true key_size = 2048 key_type = "RSA" reuse_key = false } secret_properties { content_type = "application/x-pkcs12" } } } resource "azurerm_virtual_network" "example" { name = "example-network" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location address_space = ["10.254.0.0/16"] } resource "azurerm_subnet" "frontend" { name = "frontend" resource_group_name = data.azurerm_resource_group.example.name virtual_network_name = azurerm_virtual_network.example.name address_prefixes = ["10.254.0.0/24"] } resource "azurerm_subnet" "backend" { name = "backend" resource_group_name = data.azurerm_resource_group.example.name virtual_network_name = azurerm_virtual_network.example.name address_prefixes = ["10.254.2.0/24"] } resource "azurerm_public_ip" "example" { name = "example-pip" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location allocation_method = "Static" sku = "standard" } #&nbsp;since these variables are re-used - a locals block makes this more maintainable locals { backend_address_pool_name = "${azurerm_virtual_network.example.name}-beap" frontend_port_name = "${azurerm_virtual_network.example.name}-feport" frontend_ip_configuration_name = "${azurerm_virtual_network.example.name}-feip" http_setting_name = "${azurerm_virtual_network.example.name}-be-htst" listener_name = "${azurerm_virtual_network.example.name}-httplstn" request_routing_rule_name = "${azurerm_virtual_network.example.name}-rqrt" redirect_configuration_name = "${azurerm_virtual_network.example.name}-rdrcfg" } resource "null_resource" "previous" {} resource "time_sleep" "wait_240_seconds" { depends_on = [azurerm_key_vault.kv] create_duration = "240s" } resource "azurerm_application_gateway" 
"network" { name = "example-appgateway" resource_group_name = data.azurerm_resource_group.example.name location = data.azurerm_resource_group.example.location sku { name = "Standard_v2" tier = "Standard_v2" capacity = 2 } gateway_ip_configuration { name = "my-gateway-ip-configuration" subnet_id = azurerm_subnet.frontend.id } frontend_port { name = local.frontend_port_name port = 443 } frontend_ip_configuration { name = local.frontend_ip_configuration_name public_ip_address_id = azurerm_public_ip.example.id } backend_address_pool { name = local.backend_address_pool_name } backend_http_settings { name = local.http_setting_name cookie_based_affinity = "Disabled" path = "/path1/" port = 443 protocol = "Https" request_timeout = 60 } http_listener { name = local.listener_name frontend_ip_configuration_name = local.frontend_ip_configuration_name frontend_port_name = local.frontend_port_name protocol = "Https" ssl_certificate_name = "app_listener" } identity { type = "UserAssigned" identity_ids = [azurerm_user_assigned_identity.base.id] } ssl_certificate { name = "app_listener" key_vault_secret_id = azurerm_key_vault_certificate.example.secret_id } request_routing_rule { name = local.request_routing_rule_name rule_type = "Basic" http_listener_name = local.listener_name backend_address_pool_name = local.backend_address_pool_name backend_http_settings_name = local.http_setting_name } depends_on = [time_sleep.wait_240_seconds] } Outputs: Note: Please make sure to have the pfx certificate with private keys. While you are exporting a pfx certificate using a security certificate, please make sure to have the following propeties selected as shown below and then give a password and export it.
Terraform
69,193,030
13
I came across a pattern in couple of terraform code in Github. resource "aws_vpc" "this" I wanted to know how keyword this provides a particular advantage over a named resource. I can't find a Hashicorp documentation on this keyword. https://github.com/terraform-aws-modules/terraform-aws-vpc/blob/3210728ee26665fab6b1f07417bcb0e518573a1d/main.tf https://github.com/cloudposse/terraform-aws-vpn-connection/blob/master/context.tf
No, there is nothing special about this in terms of Terraform syntax or handling. It's just a name that may indicate you have only one VPC in your setup, but this is not enforced by any Terraform mechanism. Other common names are main or just vpc.
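To illustrate, here is a minimal sketch (resource names and CIDRs are made up): the label after the resource type is only a local name used for references inside the configuration, and "this" behaves exactly like any other label.

resource "aws_vpc" "this" {
  # "this" could just as well be "main" or "vpc"; only the references change
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "example" {
  # the local label is what you use when referencing the resource
  vpc_id     = aws_vpc.this.id
  cidr_block = "10.0.1.0/24"
}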
Terraform
69,126,254
13
Hi, I was wondering if we can create an SNS topic with an email subscription from Terraform, so it will be easy to set up alarms and have the SNS topic send alerts to an email address with a single “terraform apply” command. Thanks
resource "aws_sns_topic" "topic" { name = "topic-name" } resource "aws_sns_topic_subscription" "email-target" { topic_arn = aws_sns_topic.topic.arn protocol = "email" endpoint = "[email protected]" }
Terraform
67,348,642
13
Terraform v0.13.5, provider aws v3.7.0, backend: AWS S3+DynamoDB. terraform plan was aborted, and now it cannot acquire the state lock. I'm trying to release it manually but get an error: terraform force-unlock -force xxx-xxx-xx-dddd Failed to unlock state: failed to retrieve lock info: unexpected end of JSON input The state file looks complete and passes JSON syntax validation successfully. How can I fix that?
Solution: double-check that you're in the correct Terraform workspace.
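In case it helps others, here is the sequence I would use to check (the workspace name and lock ID below are placeholders; take the lock ID from the error output):

# list workspaces; the active one is marked with an asterisk
terraform workspace list

# switch to the workspace that actually owns the lock
terraform workspace select dev

# now the unlock can find the lock metadata in the right state
terraform force-unlock -force xxx-xxx-xx-dddd

The "unexpected end of JSON input" message typically just means the lock info could not be found where the currently selected workspace expects it.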
Terraform
64,694,279
13
I have a tf script for provisioning a Cloud SQL instance, along with a couple of dbs and an admin user. I have renamed the instance, hence a new instance was created but terraform is encountering issues when it comes to deleting the old one. Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion I have tried setting the deletion_protection to false but I keep getting the same error. Is there a way to check which resources need to have the deletion_protection set to false in order to be deleted? I have only added it to the google_sql_database_instance resource. My tf script: // Provision the Cloud SQL Instance resource "google_sql_database_instance" "instance-master" { name = "instance-db-${random_id.random_suffix_id.hex}" region = var.region database_version = "POSTGRES_12" project = var.project_id settings { availability_type = "REGIONAL" tier = "db-f1-micro" activation_policy = "ALWAYS" disk_type = "PD_SSD" ip_configuration { ipv4_enabled = var.is_public ? true : false private_network = var.network_self_link require_ssl = true dynamic "authorized_networks" { for_each = toset(var.is_public ? [1] : []) content { name = "Public Internet" value = "0.0.0.0/0" } } } backup_configuration { enabled = true } maintenance_window { day = 2 hour = 4 update_track = "stable" } dynamic "database_flags" { iterator = flag for_each = var.database_flags content { name = flag.key value = flag.value } } user_labels = var.default_labels } deletion_protection = false depends_on = [google_service_networking_connection.cloudsql-peering-connection, google_project_service.enable-sqladmin-api] } // Provision the databases resource "google_sql_database" "db" { name = "orders-placement" instance = google_sql_database_instance.instance-master.name project = var.project_id } // Provision a super user resource "google_sql_user" "admin-user" { name = "admin-user" instance = google_sql_database_instance.instance-master.name password = random_password.user-password.result project = var.project_id } // Get latest CA certificate locals { furthest_expiration_time = reverse(sort([for k, v in google_sql_database_instance.instance-master.server_ca_cert : v.expiration_time]))[0] latest_ca_cert = [for v in google_sql_database_instance.instance-master.server_ca_cert : v.cert if v.expiration_time == local.furthest_expiration_time] } // Get SSL certificate resource "google_sql_ssl_cert" "client_cert" { common_name = "instance-master-client" instance = google_sql_database_instance.instance-master.name }
It seems your code is going to recreate this SQL instance, but your current tfstate file still records the existing instance with a true value for the deletion_protection parameter. In this case you first need to change the value of this parameter to false, either manually in the tfstate file or by setting deletion_protection = false in the code and running terraform apply (beware: at this step your code shouldn't cause a recreation of the instance). After these manipulations, you can do anything with your SQL instance.
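A rough sketch of the two-step flow, assuming the resource from the question (name, region and tier here are placeholders; in practice you keep all of your existing settings):

# Step 1: keep the OLD instance name for one more apply and set
# deletion_protection = false so the value recorded in the Terraform state
# is updated without recreating anything.
resource "google_sql_database_instance" "instance-master" {
  name                = "old-instance-name"
  region              = "europe-west1"
  database_version    = "POSTGRES_12"
  deletion_protection = false

  settings {
    tier = "db-f1-micro"
  }
}

# Step 2: only after that apply succeeds, change `name` to the new value and
# apply again; Terraform can now destroy the old instance and create the new one.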
Terraform
64,611,122
13
I'm receiving this curious error message PlatformTaskDefinitionIncompatibilityException: The specified platform does not satisfy the task definition’s required capabilities I suspect it's something to do with this line although not quite sure file_system_id = aws_efs_file_system.main.id This is my script: provider "aws" { region = "us-east-1" profile = var.profile } ### Network # Fetch AZs in the current region data "aws_availability_zones" "available" {} resource "aws_vpc" "main" { cidr_block = "172.17.0.0/16" } # Create var.az_count private subnets, each in a different AZ resource "aws_subnet" "private" { count = "${var.az_count}" cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}" availability_zone = "${data.aws_availability_zones.available.names[count.index]}" vpc_id = "${aws_vpc.main.id}" } # Create var.az_count public subnets, each in a different AZ resource "aws_subnet" "public" { count = "${var.az_count}" cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, var.az_count + count.index)}" availability_zone = "${data.aws_availability_zones.available.names[count.index]}" vpc_id = "${aws_vpc.main.id}" map_public_ip_on_launch = true } # IGW for the public subnet resource "aws_internet_gateway" "gw" { vpc_id = "${aws_vpc.main.id}" } # Route the public subnet traffic through the IGW resource "aws_route" "internet_access" { route_table_id = "${aws_vpc.main.main_route_table_id}" destination_cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.gw.id}" } # Create a NAT gateway with an EIP for each private subnet to get internet connectivity resource "aws_eip" "gw" { count = "${var.az_count}" vpc = true depends_on = ["aws_internet_gateway.gw"] } resource "aws_nat_gateway" "gw" { count = "${var.az_count}" subnet_id = "${element(aws_subnet.public.*.id, count.index)}" allocation_id = "${element(aws_eip.gw.*.id, count.index)}" } # Create a new route table for the private subnets # And make it route non-local traffic through the NAT gateway to the internet resource "aws_route_table" "private" { count = "${var.az_count}" vpc_id = "${aws_vpc.main.id}" route { cidr_block = "0.0.0.0/0" nat_gateway_id = "${element(aws_nat_gateway.gw.*.id, count.index)}" } } # Explicitely associate the newly created route tables to the private subnets (so they don't default to the main route table) resource "aws_route_table_association" "private" { count = "${var.az_count}" subnet_id = "${element(aws_subnet.private.*.id, count.index)}" route_table_id = "${element(aws_route_table.private.*.id, count.index)}" } ### Security # ALB Security group # This is the group you need to edit if you want to restrict access to your application resource "aws_security_group" "lb" { name = "tf-ecs-alb" description = "controls access to the ALB" vpc_id = "${aws_vpc.main.id}" ingress { protocol = "tcp" from_port = 80 to_port = 80 cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } # Traffic to the ECS Cluster should only come from the ALB resource "aws_security_group" "ecs_tasks" { name = "tf-ecs-tasks" description = "allow inbound access from the ALB only" vpc_id = "${aws_vpc.main.id}" ingress { protocol = "tcp" from_port = "${var.app_port}" to_port = "${var.app_port}" security_groups = ["${aws_security_group.lb.id}"] } egress { protocol = "-1" from_port = 0 to_port = 0 cidr_blocks = ["0.0.0.0/0"] } } ### ALB resource "aws_alb" "main" { name = "tf-ecs-chat" subnets = aws_subnet.public.*.id security_groups = ["${aws_security_group.lb.id}"] } resource 
"aws_alb_target_group" "app" { name = "tf-ecs-chat" port = 80 protocol = "HTTP" vpc_id = "${aws_vpc.main.id}" target_type = "ip" } # Redirect all traffic from the ALB to the target group resource "aws_alb_listener" "front_end" { load_balancer_arn = "${aws_alb.main.id}" port = "80" protocol = "HTTP" default_action { target_group_arn = "${aws_alb_target_group.app.id}" type = "forward" } } ### ECS resource "aws_ecs_cluster" "main" { name = "tf-ecs-cluster" } resource "aws_ecs_task_definition" "app" { family = "app" network_mode = "awsvpc" requires_compatibilities = ["FARGATE"] cpu = "${var.fargate_cpu}" memory = "${var.fargate_memory}" task_role_arn = "${aws_iam_role.ecs_task_role_role.arn}" execution_role_arn = "${aws_iam_role.ecs_task_role_role.arn}" container_definitions = <<DEFINITION [ { "cpu": ${var.fargate_cpu}, "image": "${var.app_image}", "memory": ${var.fargate_memory}, "name": "app", "networkMode": "awsvpc", "portMappings": [ { "containerPort": ${var.app_port}, "hostPort": ${var.app_port} } ] } ] DEFINITION volume { name = "efs-html" efs_volume_configuration { file_system_id = aws_efs_file_system.main.id root_directory = "/opt/data" } } } resource "aws_ecs_service" "main" { name = "tf-ecs-service" cluster = "${aws_ecs_cluster.main.id}" task_definition = "${aws_ecs_task_definition.app.arn}" desired_count = "${var.app_count}" launch_type = "FARGATE" network_configuration { security_groups = ["${aws_security_group.ecs_tasks.id}"] subnets = aws_subnet.private.*.id } load_balancer { target_group_arn = "${aws_alb_target_group.app.id}" container_name = "app" container_port = "${var.app_port}" } depends_on = [ "aws_alb_listener.front_end", ] } # ECS roles & policies # Create the IAM task role for ECS Task definition resource "aws_iam_role" "ecs_task_role_role" { name = "test-ecs-task-role" assume_role_policy = "${file("ecs-task-role.json")}" tags = { Terraform = "true" } } # Create the AmazonECSTaskExecutionRolePolicy managed role resource "aws_iam_policy" "ecs_task_role_policy" { name = "test-ecs-AmazonECSTaskExecutionRolePolicy" description = "Provides access to other AWS service resources that are required to run Amazon ECS tasks" policy = "${file("ecs-task-policy.json")}" } # Assign the AmazonECSTaskExecutionRolePolicy managed role to ECS resource "aws_iam_role_policy_attachment" "ecs_task_policy_attachment" { role = "${aws_iam_role.ecs_task_role_role.name}" policy_arn = "${aws_iam_policy.ecs_task_role_policy.arn}" } resource "aws_efs_file_system" "main" { tags = { Name = "ECS-EFS-FS" } } resource "aws_efs_mount_target" "main" { count = "${var.subnets-count}" file_system_id = "${aws_efs_file_system.main.id}" subnet_id = "${element(var.subnets, count.index)}" } variables.tf variable "az_count" { description = "Number of AZs to cover in a given AWS region" default = "2" } variable "app_image" { description = "Docker image to run in the ECS cluster" default = "xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test1:nginx" } variable "app_port" { description = "Port exposed by the docker image to redirect traffic to" # default = 3000 default = 80 } variable "app_count" { description = "Number of docker containers to run" default = 2 } variable "fargate_cpu" { description = "Fargate instance CPU units to provision (1 vCPU = 1024 CPU units)" default = "256" } variable "fargate_memory" { description = "Fargate instance memory to provision (in MiB)" default = "512" } ################ variable "subnets" { type = "list" description = "list of subnets to mount the fs to" default = 
["subnet-xxxxxxx","subnet-xxxxxxx"] } variable "subnets-count" { type = "string" description = "number of subnets to mount to" default = 2 }
You simply need to pin your ECS service to the latest platform version: resource "aws_ecs_service" "service" { platform_version = "1.4.0" launch_type = "FARGATE" ... } The EFS feature is only available on platform version 1.4.0. When you don't specify platform_version, it defaults to LATEST, which at the time was set to 1.3.0 and doesn't allow EFS volumes. UPDATE: As of 1/21/22, it seems that the LATEST ECS platform version is 1.4.0, so explicitly specifying the ECS platform version is no longer necessary to have EFS mounts work. Per: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform-linux-fargate.html
Terraform
61,351,000
13
I didn't find anything useful in the docs. Can this (registering a domain name) be done with Terraform somehow?
Registering a domain name involves a commitment from a Domain Name Registrar for a minimum of 12 months. Tools like Terraform and AWS CloudFormation are used to create, update and delete infrastructure such as networks, EC2 instances and databases. While AWS does offer the ability to register domain names (which is done through gandi.net), this is not something you would do with tools like Terraform because you cannot simply "unregister" a domain name. You could choose to use such tools to add sub-domains to an existing domain. They just aren't appropriate for the initial purchase of a domain name.
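For example, managing a sub-domain record inside an already-registered domain is a good fit for Terraform. A minimal sketch (the zone name and IP address are placeholders):

# look up the hosted zone of the domain you already registered
data "aws_route53_zone" "main" {
  name = "example.com."
}

# manage a record for a sub-domain with Terraform
resource "aws_route53_record" "app" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "app.example.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}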
Terraform
60,591,937
13
I've been writing reusable modules for an AWS infrastructure. In creating a security group, my approach is to create a generic module for a security group and provide a list of ports in the control code. However, when using count it creates a security group each for every port. Is there a way around this to iterate a specific part like in this scenario? SG Module resource "aws_security_group" "this" { name = var.sg_name description = var.description vpc_id = var.vpc_id count = min(length(var.ingress_ports)) ingress { from_port = var.ingress_ports[count.index] to_port = var.ingress_ports[count.index] protocol = "tcp" cidr_blocks = ["10.0.0.0/8"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } Control Code module "qliksense_sg" { source = "modules/aws-sg" sg_name = "My-SG" description = "A security group" vpc_id = module.vpc.vpc_id ingress_ports = ["80", "443"] }
To do this in Terraform 0.12 you can use dynamic blocks. In fact, the example given in that documentation link is for adding ingress rules over a list of ports: resource "aws_security_group" "example" { name = "example" # can use expressions here dynamic "ingress" { for_each = var.service_ports content { from_port = ingress.value to_port = ingress.value protocol = "tcp" } } }
Terraform
59,154,055
13
I've deployed my infra using Terraform and I noticed that I have some interesting information in the state (terraform.tfstate) file of terraform which I would like to extract. For example $ terraform state show 'packet_device.worker' id = 6015bg2b-b8c4-4925-aad2-f0671d5d3b13 billing_cycle = hourly created = 2015-12-17T00:06:56Z facility = ewr1 ... which I would like to transform somehow to $ terraform state show 'packet_device.worker.id' 6015bg2b-b8c4-4925-aad2-f0671d5d3b13 But adding the id at the end doesn't seem to work. Any suggestions how I can achieve this behaviour?
You can utilize terraform show -json and jq to get a specific value out of a Terraform state file. terraform show -json <state_file> | jq '.values.root_module.resources[] | select(.address=="<terraform_resource_name>") | .values.<property_name>' You have a state file named terraform.tfstate and a Terraform resource as packet_device.worker and you want to get id. Then it would be as follows: terraform show -json terraform.tfstate | jq '.values.root_module.resources[] | select(.address=="packet_device.worker") | .values.id' terraform.tfstate also can be omitted since it is the default name for the state file.
Terraform
57,811,596
13
I have an object containing the list of subnets I want to create. variable "subnet-map" { default = { ec2 = [ { cidr_block = "10.0.1.0/24" availability_zone = "eu-west-1a" } ], lambda = [ { cidr_block = "10.0.5.0/24" availability_zone = "eu-west-1a" }, { cidr_block = "10.0.6.0/24" availability_zone = "eu-west-1b" }, { cidr_block = "10.0.7.0/24" availability_zone = "eu-west-1c" } ], secrets = [ { cidr_block = "10.0.8.0/24" availability_zone = "eu-west-1a" }, { cidr_block = "10.0.9.0/24" availability_zone = "eu-west-1b" }, { cidr_block = "10.0.10.0/24" availability_zone = "eu-west-1c" } ], rds = [ { cidr_block = "10.0.11.0/24" availability_zone = "eu-west-1a" }, { cidr_block = "10.0.12.0/24" availability_zone = "eu-west-1b" }, { cidr_block = "10.0.13.0/24" availability_zone = "eu-west-1c" } ] } } Earlier I was using the count loop construct. So I used to flatten the above structure into a list of objects locals { subnets = flatten([ for resource in keys(var.subnet-map) : [ for subnet in var.subnet-map[resource] : { resource = resource cidr_block = subnet.cidr_block availability_zone = subnet.availability_zone } ] ]) } And then I would create the resources by doing resource "aws_subnet" "aws-subnets" { count = length(local.subnets) vpc_id = aws_vpc.aws-vpc.id cidr_block = local.subnets[count.index].cidr_block availability_zone = local.subnets[count.index].availability_zone tags = { Name = "subnet-${local.subnets[count.index].resource}-${local.subnets[count.index].availability_zone}" } } Now I want to use the for_each loop. But I cannot figure out how to do it. This is what I've done so far. resource "aws_subnet" "subnets-dev" { for_each = var.subnet-map vpc_id = aws_vpc.vpc-dev.id cidr_block = each.value.cidr_block availability_zone = each.value.availability_zone tags = { Name = "subnet-dev-${each.value.resource}-${each.value.availability_zone}" environment = "dev" } } But it keeps giving an error saying Error: Unsupported attribute on vpc/main.tf line 93, in resource "aws_subnet" "subnets-dev": 93: Name = "subnet-dev-${each.value.resource}-${each.value.availability_zone}" |---------------- | each.value is tuple with 3 elements This value does not have any attributes. How could I fix this?
I'm not sure I fully follow all of what you tried here because your initial snippet of var.subnet-map shows it being a map of maps of lists of objects, but later on when you used for_each = var.subnet-map it seems to have treated it as a map of lists instead. Did you remove that extra level of maps (the "default" key) before trying for_each here? Working with your original definition of variable "subnet-map", your first step with for_each will be similar to what you did with count: you need to flatten the structure, this time into a map of objects rather than a list of objects. The easiest way to get there is to derive a map from your existing flattened list: locals { subnets = flatten([ for resource in keys(var.subnet-map) : [ for subnet in var.subnet-map[resource] : { resource = resource cidr_block = subnet.cidr_block availability_zone = subnet.availability_zone } ] ]) subnets_map = { for s in local.subnets: "${s.resource}:${s.availability_zone}" => s } } Here I assumed that your "resource" string and your availability zone together are a suitable unique identifier for a subnet. If not, you can adjust the "${s.resource}:${s.availability_zone}" part to whatever unique key you want to use for these. Now you can use the flattened map as the for_each map: resource "aws_subnet" "subnets-dev" { for_each = local.subnets_map vpc_id = aws_vpc.vpc-dev.id cidr_block = each.value.cidr_block availability_zone = each.value.availability_zone tags = { Name = "subnet-dev-${each.value.resource}-${each.value.availability_zone}" environment = "dev" } } This will give you instances with addresses like aws_subnet.subnets-dev["ec2:eu-west-1a"]. Note that if you are migrating from count with existing subnets that you wish to retain, you'll need to also do a one-time migration step to tell Terraform which indexes from the existing state correspond to which keys in the new configuration. For example, if (and only if) index 0 was previously the one for ec2 in eu-west-1a, the migration command for that one would be: terraform state mv 'aws_subnet.subnets-dev[0]' 'aws_subnet.subnets-dev["ec2:eu-west-1a"]' If you're not sure how they correlate, you can run terraform plan after adding for_each and look at the instances that Terraform is planning to destroy. If you work through each one of those in turn, taking the address Terraform currently knows along with the resource and availability zone names shown in the Name tag, you can migrate each of them to its new address so that Terraform will no longer think you're asking for it to destroy the numbered instances and replace them with named ones.
Terraform
57,570,505
13
I'm learning terraform, and want to setup an AWS infrastructure using the tool. We have 3 AWS environments, sandbox, staging, and production, and have existing infrastructure to support these environments. For example, we have 3 separate VPCs for each environment. I want to use terraform import to import the states of these resources, based on the environment I'm trying to setup. So I essentially want to do this, though I know this is not syntactically correct, but you get the idea. $ terraform import aws_vpc.my_vpc -var 'environment=sandbox' I therefore have my module setup like this vpc/main.tf ----------- provider "aws" { region = "us-east-1" } resource "aws_vpc" "my_vpc" { cidr_block = "" } vpc/variables.tf ---------------- variable "environment" { type map = map(string) default { sandbox = "vpc-1234" staging = "vpc-2345" production = "vpc-3456" } } So this means I essentially want to do $ terraform import aws_vpc.my_vpc vpc-1234 How can I achieve this?
I had the same issue and figured out that the order is important. This command works: $ terraform import -var 'environment=sandbox' aws_vpc.my_vpc vpc-1234
Terraform
57,187,782
13
I'm having a set of Terraform files and in particular one variables.tf file which sort of holds my variables like aws access key, aws access token etc. I want to now automate the resource creation on AWS using GitLab CI / CD. My plan is the following: Write a .gitlab-ci-yml file Have the terraform calls in the .gitlab-ci.yml file I know that I can have secret environment variables in GitLab, but I'm not sure how I can push those variables into my Terraform variables.tf file which looks like this now! # AWS Config variable "aws_access_key" { default = "YOUR_ADMIN_ACCESS_KEY" } variable "aws_secret_key" { default = "YOUR_ADMIN_SECRET_KEY" } variable "aws_region" { default = "us-west-2" } In my .gitlab-ci.yml, I have access to the secrets like this: - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}' - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}' - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}' How can I pipe it to my Terraform scripts? Any ideas? I would need to read the secrets from GitLab's environment and pass it on to the Terraform scripts!
Which executor are you using for your GitLab runners? You don't necessarily need to use the Docker executor but can use a runner installed on a bare-metal machine or in a VM. If you install the gettext package on the respective machine/VM as well you can use the same method as I described in Referencing gitlab secrets in Terraform for the Docker executor. Another possibility could be that you set job: stage: ... variables: TF_VAR_SECRET1: ${GITLAB_SECRET} or job: stage: ... script: - export TF_VAR_SECRET1=${GITLAB_SECRET} in your CI job configuration and interpolate these. Please see Getting an Environment Variable in Terraform configuration? as well
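To make the mapping explicit: Terraform reads any environment variable named TF_VAR_<name> as the value of variable "<name>". So if your CI job exports TF_VAR_aws_access_key and TF_VAR_aws_secret_key, the variables from the question can simply drop their hard-coded defaults, roughly like this:

# variables.tf - no secrets committed to the repo; values arrive from the
# TF_VAR_* environment variables exported by the GitLab CI job
variable "aws_access_key" {}
variable "aws_secret_key" {}

variable "aws_region" {
  default = "us-west-2"
}

Note that for AWS credentials specifically you may not need TF_VAR_* at all, because the AWS provider itself reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION from the environment.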
Terraform
56,461,518
13
I want to deploy my api gateway with terraform using a swagger file to describe my api. The swagger.yaml looks like this: swagger: '2.0' info: version: '1.0' title: "CodingTips" schemes: - https paths: "/api": get: description: "Get coding tips" produces: - application/json x-amazon-apigateway-integration: ${apiIntegration} responses: '200': description: "Codingtips were successfully requested" Terraform is giving me a BadRequestException saying that The REST API doesn't contain any methods. Because of this I am thinking that it is trying to deploy the REST api without waiting for the methods and integrations of this api to be created. This made me think in the direction of having to add DEPENDS_ON to the aws_api_gateway_deployment. However I do not know what to depend on since I don't define the method and integration resource using swagger. They should be automatically deducted from the swagger definition. Am I thinking in the right direction and if so, what do I have to make my aws_api_gateway_deployment depend on? Or is something else wrong with the way I am trying to deploy this api. My apigateway.tf file looks like this: resource "aws_api_gateway_rest_api" "codingtips-api-gateway" { name = "ServerlessExample" description = "Terraform Serverless Application Example" body = "${data.template_file.codingtips_api_swagger.rendered}" } locals{ "get_codingtips_arn" = "${aws_lambda_function.get-tips-lambda.invoke_arn}" "x-amazon-coding-tips-apigateway-integration" = <<EOF # uri = "${local.get_codingtips_arn}" passthroughBehavior: when_no_match httpMethod: POST type: aws_proxy credentials: "${aws_iam_role.api_gateway_role.arn}" EOF } data "template_file" codingtips_api_swagger{ template = "${file("./swagger.yaml")}" vars { apiIntegration = "${indent(8, local.x-amazon-coding-tips-apigateway-integration)}" } } resource "aws_api_gateway_deployment" "codingtips-api-gateway-deployment" { rest_api_id = "${aws_api_gateway_rest_api.codingtips-api-gateway.id}" stage_name = "test" } How can I fix the BadRequestException: The REST API doesn't contain any methods ?
I found out what was wrong. It is a syntactical error in the locals{} block. uri = should be uri: . Using a colon instead of an equal sign. The block then looks like this: locals{ "get_codingtips_arn" = "${aws_lambda_function.get-tips-lambda.invoke_arn}" "x-amazon-codingtips-get-apigateway-integration" = <<EOF # comment for new line uri: "${aws_lambda_function.get-tips-lambda.invoke_arn}" passthroughBehavior: when_no_match httpMethod: POST type: aws_proxy EOF } Researching this I found that it reads easier when you specify the x-amazon-apigateway-integration in the swagger.yaml like this: swagger: '2.0' info: version: '1.0' title: "CodingTips" schemes: - https paths: "/api": get: description: "Get coding tips" produces: - application/json responses: '200': description: "The codingtips request was successful." x-amazon-apigateway-integration: uri: ${uri_arn} passthroughBehavior: "when_no_match" httpMethod: "POST" type: "aws_proxy" The data{} and locals{} blocks in your terraform then look like: data "template_file" codingtips_api_swagger{ template = "${file("swagger.yaml")}" vars { uri_arn = "${local.get_codingtips_arn}" } } locals { "get_codingtips_arn" = "${aws_lambda_function.get-tips-lambda.invoke_arn}" }
Terraform
54,047,171
13
I am trying to build an AWS EC2 Red Hat instance using an AWS launch template with Terraform. I can create a launch template with a call to Terraform's aws_launch_template resource. My question is: how do I use Terraform to build an EC2 server with the created launch template? Which Terraform AWS provider resource do I call? Many thanks for your help!
Welcome to Stack Overflow! You can create an aws_autoscaling_group resource to make use of your new Launch Template. Please see the example here for more details. Code: resource "aws_launch_template" "foobar" { name_prefix = "foobar" image_id = "ami-1a2b3c" instance_type = "t2.micro" } resource "aws_autoscaling_group" "bar" { availability_zones = ["us-east-1a"] desired_capacity = 1 max_size = 1 min_size = 1 launch_template = { id = "${aws_launch_template.foobar.id}" version = "$$Latest" } }
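If you only need a single instance rather than an auto-scaling group, newer versions of the AWS provider also allow aws_instance to reference a launch template directly. An untested sketch (AMI ID is a placeholder, written in current HCL syntax):

resource "aws_launch_template" "foobar" {
  name_prefix   = "foobar"
  image_id      = "ami-1a2b3c"
  instance_type = "t2.micro"
}

resource "aws_instance" "single" {
  # ami and instance_type are taken from the launch template
  launch_template {
    id      = aws_launch_template.foobar.id
    version = "$Latest"
  }
}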
Terraform
53,749,816
13
I've been running containers on ECS, and using AWS Cloudwatch events to notify me when my tasks complete. All of the infrastructure has been created using Terraform. However, I'm unable to get the correct syntax in my event pattern so that I am only notified for non-zero exit codes. The following resource works great, and sends notifications to SNS every time one of my containers exits: resource "aws_cloudwatch_event_rule" "container-stopped-rule" { name = "container-stopped" description = "Notification for containers that exit for any reason. (error)." event_pattern = <<PATTERN { "source": [ "aws.ecs" ], "detail-type": [ "ECS Task State Change" ], "detail": { "lastStatus": [ "STOPPED" ], "stoppedReason" : [ "Essential container in task exited" ] } } PATTERN } However, I'm trying to modify the pattern slightly so that I'm only notified when a container exits with an error code- since we get so many notifications, we've started to tune out the emails and sometimes don't notice the email notifications where containers are exiting with errors: resource "aws_cloudwatch_event_rule" "container-stopped-rule" { name = "container-stopped" description = "Notification for containers with exit code of 1 (error)." event_pattern = <<PATTERN { "source": [ "aws.ecs" ], "detail-type": [ "ECS Task State Change" ], "detail": { "containers": [ { "exitCode": 1 } ], "lastStatus": [ "STOPPED" ], "stoppedReason" : [ "Essential container in task exited" ] } } PATTERN } This triggers the following error when I terraform apply: aws_cloudwatch_event_rule.container-stopped-rule: Updating CloudWatch Event Rule failed: InvalidEventPatternException: Event pattern is not valid. Reason: Match value must be String, number, true, false, or null at [Source: (String)"{"detail":{"containers":[{"exitCode":1}],"lastStatus":["STOPPED"],"stoppedReason":["Essential container in task exited"]},"detail-type":["ECS Task State Change"],"source":["aws.ecs"]}"; line: 1, column: 27] status code: 400 This is perplexing to me, since I'm following the exact structure laid out in the AWS CloudWatch documentation for containers. I've even attempted to put double quotes around 1 in case Terraform wants a string instead of a number. I also tried to use AWS Console to manually edit the event pattern JSON, but received this error: Validation error. Details: Event pattern contains invalid value (can only be a nonempty array or nonempty object) I'm honestly a bit stumped at this point and would appreciate any tips on where my syntax is incorrect.
The event pattern syntax is pretty weird, I ran into the same issue. The following will work: { "source": [ "aws.ecs" ], "detail-type": [ "ECS Task State Change" ], "detail": { "lastStatus": [ "STOPPED" ], "stoppedReason": [ "Essential container in task exited" ], "containers": { "exitCode": [ 1 ] } } } I used $.detail.group in the Input Transformer to get the task family name in the notification message.
Terraform
53,015,242
13
I am relatively new to AWS and the beast. After working on API Gateway to Lambda proxy integration I am getting Execution failed due to configuration error: Invalid permissions on Lambda function I followed below setup referred from really well documented terraform documentation and does exactly what was needed for me. But while testing on API Gateway console giving the above error. resource "aws_lambda_permission" "apigw" { statement_id = "AllowAPIGatewayInvoke" action = "lambda:InvokeFunction" function_name = "${aws_lambda_function.resource_name.arn}" principal = "apigateway.amazonaws.com" # The /*/* portion grants access from any method on any resource # within the API Gateway "REST API". source_arn = "${aws_api_gateway_deployment.resource_name_of_deployment.execution_arn}/*/*" }
A few learnings from API Gateway Lambda proxy integration. API Gateway is deployed in stages, and the ARN of the API Gateway for a deployed stage is somewhat different from the one used by the test console (at least that's what I got in the Terraform output). Many documentation pages and fixes for this problem suggest explicitly configuring the detailed path "arn:aws:execute-api:region_name:account_id:${aws_api_gateway_rest_api.api_resource.id}/*/*", which results in access being granted for the source arn:aws:execute-api:region:accountid:fu349z93pa/*/*. The Terraform documentation instead uses "${aws_api_gateway_deployment.deployment_rsc_name.execution_arn}", which results in access being granted for the source arn:aws:execute-api:region:accountid:fu349z93pa/stage/*/*. If you test from the API Gateway console with the latter, you end up with the same error and have to manually add the permission to Lambda or reselect the Lambda function name on the method integration console (which does the same thing). That configures two API Gateway sources with access to the Lambda: one with the /stage deployed ARN and the other with /*/METHOD/*, which is the one used by the test console. But if you call the API Gateway using the ARN of the stage environment from Postman, it works just fine without any manual updates to the infrastructure built with Terraform, and in most cases that is the one that matters. Even after fixing the first error (manually or not), the second challenge is "Malformed response from lambda". This one is fairly easy and well documented in the AWS docs: all we have to do is update the Lambda to respond with the specified format, e.g. add callback(null, { "statusCode": 200, "body" : JSON.stringify(sampleResponseJSON) }); in the Lambda's JavaScript handler. Once it is working end to end, we can always add error handling scenarios. Hopefully, this should save some time for beginners like me.
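A hedged Terraform sketch of the permission described above, using the execution_arn of the REST API itself rather than of a single deployment (resource names follow my setup and will differ in yours):

resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.resource_name.arn}"
  principal     = "apigateway.amazonaws.com"

  # api-id/*/* grants invoke for any stage, method and resource path,
  # which also covers calls made from the API Gateway test console
  source_arn = "${aws_api_gateway_rest_api.api_resource.execution_arn}/*/*"
}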
Terraform
52,210,516
13
If I get this right, lb_listener only accepts forward as valid action type. https://www.terraform.io/docs/providers/aws/r/lb_listener.html How do I configure a listener to redirect HTTP to HTTPS? i.e. this is the desired state in elb listener:
This functionality was added to the AWS provider and released with 1.33.0. Here's how you'd set the default action on a load balancer listener with the aws_lb_listener resource: resource "aws_lb" "front_end" { # ... } resource "aws_lb_listener" "front_end" { load_balancer_arn = "${aws_lb.front_end.arn}" port = "80" protocol = "HTTP" default_action { type = "redirect" redirect { port = "443" protocol = "HTTPS" status_code = "HTTP_301" } } } You can also add redirects and fixed type responses with individual load balancer listener rules in the aws_lb_listener_rule resource: resource "aws_lb_listener_rule" "redirect_http_to_https" { listener_arn = "${aws_lb_listener.front_end.arn}" action { type = "redirect" redirect { port = "443" protocol = "HTTPS" status_code = "HTTP_301" } } condition { host_header { values = ["my-service.*.terraform.io"] } } }
Terraform
51,767,917
13
I'm using terraform v.0.11.7. I want to create 4 subnets (2 public subnets, 2 private subnets). Here's the content of vars.tf: variable "region" { default = "ap-south-1" } variable "ami_id" { type = "map" default = "ami-d783a9b8" } variable "credentials" { default = "/root/.aws/credentials" } variable "vpc_cidr" { default = "10.0.0.0/16" } variable "pub_subnet_aza_cidr" { default = "10.0.10.0/24" } variable "pub_subnet_azc_cidr" { default = "10.0.20.0/24" } variable "pri_subnet_aza_cidr" { default = "10.0.30.0/24" } variable "pri_subnet_azc_cidr" { default = "10.0.40.0/24" } Now inside main.tf, I want to associate the first 2 public subnets with the public route table. How do I do that? resource "aws_subnet" "pub_subnet_aza" { vpc_cidr = "{aws_vpc.vpc.id}" cidr_block = "${var.pub_subnet_aza_cidr}" tags { Name = "Pub-Sunet-A" } availability_zone = "${data.aws_availability_zone.available.name[0]}" } resource "aws_subnet" "pub_subnet_azc" { vpc_cidr = "{aws_vpc.vpc.id}" cidr_block = "${var.pub_subnet_azc_cidr}" tags { Name = "Pub-Subnet-C" } availability_zone = "${data.aws_availability_zone.available.name[2]}" } resource "aws_route_table_association" "public" { subnet_id = "${aws_subnet.pub_subnet_aza.id}" # How to put pub_subnet_azc.id into here? route_table_id = "${aws_route_table.public.id}" }
Better use lists of subnets to reduce the amount of variables. Then you can also use count = length(var.subnets) to get 2 instances of the route table association resource and pick the correct one from the subnets list. variable "subnet_cidrs_public" { description = "Subnet CIDRs for public subnets (length must match configured availability_zones)" # this could be further simplified / computed using cidrsubnet() etc. # https://www.terraform.io/docs/configuration/interpolation.html#cidrsubnet-iprange-newbits-netnum- default = ["10.0.10.0/24", "10.0.20.0/24"] type = "list" } resource "aws_subnet" "public" { count = "${length(var.subnet_cidrs_public)}" vpc_id = "${aws_vpc.main.id}" cidr_block = "${var.subnet_cidrs_public[count.index]}" availability_zone = "${var.availability_zones[count.index]}" } resource "aws_route_table_association" "public" { count = "${length(var.subnet_cidrs_public)}" subnet_id = "${element(aws_subnet.public.*.id, count.index)}" route_table_id = "${aws_route_table.public.id}" } I see you've been reading availability zones via data, which is fine and you can still do. You just have to somehow set the association between a subnet and the AZ. I leave that up to you. Certainly more elegant would be to provision a subnet in every AZ of that region. Once we use cidrsubnet() to compute address spaces for the subnets, we could use length(data.availability_zones) as the driver for all the rest. Shouldn't be too complex. Here is the full code: provider "aws" { region = "eu-west-1" } variable "availability_zones" { description = "AZs in this region to use" default = ["eu-west-1a", "eu-west-1c"] type = "list" } variable "vpc_cidr" { default = "10.0.0.0/16" } variable "subnet_cidrs_public" { description = "Subnet CIDRs for public subnets (length must match configured availability_zones)" # this could be further simplified / computed using cidrsubnet() etc. # https://www.terraform.io/docs/configuration/interpolation.html#cidrsubnet-iprange-newbits-netnum- default = ["10.0.10.0/24", "10.0.20.0/24"] type = "list" } resource "aws_vpc" "main" { cidr_block = "${var.vpc_cidr}" tags { Name = "stackoverflow-51739482" } } resource "aws_subnet" "public" { count = "${length(var.subnet_cidrs_public)}" vpc_id = "${aws_vpc.main.id}" cidr_block = "${var.subnet_cidrs_public[count.index]}" availability_zone = "${var.availability_zones[count.index]}" } resource "aws_route_table" "public" { vpc_id = "${aws_vpc.main.id}" tags { Name = "public" } } resource "aws_route_table_association" "public" { count = "${length(var.subnet_cidrs_public)}" subnet_id = "${element(aws_subnet.public.*.id, count.index)}" route_table_id = "${aws_route_table.public.id}" }
Terraform
51,739,482
13
It's been somewhat long I'm trying to automate the deployment of an application gateway using Terraform but it simply fails with an error message. I have made sure all protocol settings to HTTPS. However, I doubt there is something fishy with the PFX certificate. Is it that I'm not supplying the authentication certificate due to which it's failing? Tried a lot over the web to get a solution but there are no mentions of this. Terraform Code: # Create a resource group resource "azurerm_resource_group" "rg" { name = "my-rg-application-gateway-12345" location = "West US" } # Create a application gateway in the web_servers resource group resource "azurerm_virtual_network" "vnet" { name = "my-vnet-12345" resource_group_name = "${azurerm_resource_group.rg.name}" address_space = ["10.254.0.0/16"] location = "${azurerm_resource_group.rg.location}" } resource "azurerm_subnet" "sub1" { name = "my-subnet-1" resource_group_name = "${azurerm_resource_group.rg.name}" virtual_network_name = "${azurerm_virtual_network.vnet.name}" address_prefix = "10.254.0.0/24" } resource "azurerm_subnet" "sub2" { name = "my-subnet-2" resource_group_name = "${azurerm_resource_group.rg.name}" virtual_network_name = "${azurerm_virtual_network.vnet.name}" address_prefix = "10.254.2.0/24" } resource "azurerm_public_ip" "pip" { name = "my-pip-12345" location = "${azurerm_resource_group.rg.location}" resource_group_name = "${azurerm_resource_group.rg.name}" public_ip_address_allocation = "dynamic" } # Create an application gateway resource "azurerm_application_gateway" "network" { name = "my-application-gateway-12345" resource_group_name = "${azurerm_resource_group.rg.name}" location = "West US" sku { name = "Standard_Small" tier = "Standard" capacity = 2 } gateway_ip_configuration { name = "my-gateway-ip-configuration" subnet_id = "${azurerm_virtual_network.vnet.id}/subnets/${azurerm_subnet.sub1.name}" } ssl_certificate { name = "certificate" data = "${base64encode(file("mycert.pfx"))}" password = "XXXXXXX" } frontend_port { name = "${azurerm_virtual_network.vnet.name}-feport" port = 80 } frontend_ip_configuration { name = "${azurerm_virtual_network.vnet.name}-feip" public_ip_address_id = "${azurerm_public_ip.pip.id}" } backend_address_pool { name = "${azurerm_virtual_network.vnet.name}-beap" } backend_http_settings { name = "${azurerm_virtual_network.vnet.name}-be-htst" cookie_based_affinity = "Disabled" port = 443 protocol = "Https" request_timeout = 1 } http_listener { name = "${azurerm_virtual_network.vnet.name}-httpslstn" frontend_ip_configuration_name = "${azurerm_virtual_network.vnet.name}-feip" frontend_port_name = "${azurerm_virtual_network.vnet.name}-feport" protocol = "https" } request_routing_rule { name = "${azurerm_virtual_network.vnet.name}-rqrt" rule_type = "Basic" http_listener_name = "${azurerm_virtual_network.vnet.name}-httpslstn" backend_address_pool_name = "${azurerm_virtual_network.vnet.name}-beap" backend_http_settings_name = "${azurerm_virtual_network.vnet.name}-be-htst" } } Error: Error: Error applying plan: 1 error(s) occurred: * azurerm_application_gateway.network: 1 error(s) occurred: * azurerm_application_gateway.network: Error Creating/Updating ApplicationGateway "my-application-gateway-12345" (Resource Group "my-rg-application-gateway-12345"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. 
Status=400 Code="ApplicationGatewayHttpsListenerMustReferenceSslCert" Message="Http Listener /subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/my-rg-application-gateway-12345/providers/Microsoft.Network/applicationGateways/my-application-gateway-12345/httpListeners/my-vnet-12345-httpslstn uses protocol Https. Ssl Certificate must be specified." Details=[] Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.
As mentioned in the azurerm_application_gateway docs, you need to add ssl_certificate_name to your http_listener block when using HTTPS, referencing the name of the ssl_certificate block.
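For illustration, the relevant blocks inside the azurerm_application_gateway resource would look roughly like this (names follow the question; the listener's ssl_certificate_name must match the ssl_certificate block's name):

ssl_certificate {
  name     = "certificate"
  data     = "${base64encode(file("mycert.pfx"))}"
  password = "XXXXXXX"
}

http_listener {
  name                           = "${azurerm_virtual_network.vnet.name}-httpslstn"
  frontend_ip_configuration_name = "${azurerm_virtual_network.vnet.name}-feip"
  frontend_port_name             = "${azurerm_virtual_network.vnet.name}-feport"
  protocol                       = "https"
  ssl_certificate_name           = "certificate"
}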
Terraform
48,825,236
13
I am trying to use a multiline string in the provisioner "remote-exec" block of my terraform script. Yet whenever I use the EOT syntax as outlined in the documentation and various examples I get an error that complains about having: invalid characters in heredoc anchor. Here is an example of a simple provisioner "remote-exec" that received this error (both types of EOT receive this error when attempted separately): provisioner "remote-exec" { inline = [ << EOT echo hi EOT, << EOT echo \ hi EOT, ] } Update: Here is the working solution, read carefully if you are having this issue because terraform is very picky when it comes to EOF: provisioner "remote-exec" { inline = [<<EOF echo foo echo bar EOF ] } Note that if you want to use EOF all the commands you use in a provisioner "remote-exec" block must be inside the EOF. You cannot have both EOF and non EOF its one or the other. The first line of EOF must begin like this, and you cannot have any whitespace in this line after <<EOF or else it will complain about having invalid characters in heredoc anchor: inline = [<<EOF Your EOF must then end like this with the EOF at the same indentation as the ] EOF ]
Heredocs in Terraform are particularly funny about the surrounding whitespace. Changing your example to the following seems to get rid of the heredoc specific errors: provisioner "remote-exec" { inline = [<<EOF echo hi EOF, <<EOF echo \ hi EOF ] } You shouldn't need multiple heredocs in here at all though as the inline array is a list of commands that should be ran on the remote host. Using a heredoc with commands across multiple lines should work fine for you: provisioner "remote-exec" { inline = [<<EOF echo foo echo bar EOF ] }
Terraform
37,886,759
13
My question is similar to this git hub post: https://github.com/hashicorp/terraform/issues/745 It is also related to another stack exchange post of mine: Terraform stalls while trying to get IP addresses of multiple instances? I am trying to bootstrap several servers and there are several commands I need to run on my instances that require the IP addresses of all the other instances. However I cannot access the variables that hold the IP addresses of my newly created instances until they are created. So when I try to run a provisioner "remote-exec" block like this: provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "echo ${openstack_compute_instance_v2.consul.0.network.0.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.1.network.1.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.2.network.2.fixed_ip_v4}" ] } Nothing happens because all the instances are waiting for all the other instances to finish being created and so nothing is created in the first place. So I need a way for my resources to be created and then run my provisioner "remote-exec" block commands after they are created and terraform can access the IP addresses of all my instances.
The solution is to create a resource "null_resource" "nameYouWant" { } and then run your commands inside that. They will run after the initial resources are created: resource "aws_instance" "consul" { count = 3 ami = "ami-ce5a9fa3" instance_type = "t2.micro" key_name = "ansible_aws" tags { Name = "consul" } } resource "null_resource" "configure-consul-ips" { count = 3 connection { user = "ubuntu" private_key="${file("/home/ubuntu/.ssh/id_rsa")}" agent = true timeout = "3m" } provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt" ] } } Also see the answer here: Terraform stalls while trying to get IP addresses of multiple instances? Thank you so much @ydaetskcor for the answer
Terraform
37,865,979
12
I created a YML pipeline using terraform . It uses a script task and returns in output the web app name steps: - script: | [......] terraform apply -input=false -auto-approve # Get the App Service name for the dev environment. WebAppNameDev=$(terraform output appservice_name_dev) # Write the WebAppNameDev variable to the pipeline. echo "##vso[task.setvariable variable=WebAppNameDev;isOutput=true]$WebAppNameDev" name: 'RunTerraform' The task works fine but when i deploy the webapp it crashes because seems variable $WebAppNameDev has double quotes. - task: AzureWebApp@1 displayName: 'Azure App Service Deploy: website' inputs: azureSubscription: 'MySubscription' appName: $(WebAppNameDev) package: '$(Pipeline.Workspace)/drop/*.zip' The error looks like: Got service connection details for Azure App Service:'"spikeapp-dev-6128"' ##[error]Error: Resource '"spikeapp-dev-6128"' doesn't exist. Resource should exist before deployment. How can i remove double quotes or fix the terraform output?
I solved it by adding the -raw parameter to terraform output: WebAppNameDev=$(terraform output -raw appservice_name_dev) Reference: https://www.terraform.io/docs/cli/commands/output.html
Terraform
66,935,287
12
I have a terraform list a = [1,2,3,4] Is there a way for me to apply a function (e.g. *2) on the list, to get b = [2,4,6,8] I was looking for an interpolation syntax, perhaps map(a, _*2), or even something like variable "b" { count = "${length(a)}" value = "${element(a, count.index)} * 2 } As far as I can see no such thing exists. Am I missing something?
As per @Rowan Jacob's answer, this is now possible in v0.12 using the new for expression. See: https://www.terraform.io/docs/configuration/expressions.html#for-expressions variable "a" { type = "list" default = [1,2,3,4] } locals { b = [for x in var.a : x * 2] } output "local_b" { value = "${local.b}" } gives Outputs: local_b = [2, 4, 6, 8,]
Terraform
51,267,625
12
I am trying to deploy a website container through Terraform. Everything goes right, just the task fails with STOPPED (CannotPullECRContainerError: AccessDeniedException) Here is a copy of my Terraform script: # Specify the provider and access details provider "aws" { region = "${var.aws_region}" access_key = "${var.access_key}" secret_key = "${var.secret_key}" } ## EC2 ### Network data "aws_availability_zones" "available" {} resource "aws_vpc" "main" { cidr_block = "10.10.0.0/16" } resource "aws_subnet" "main" { count = "${var.az_count}" cidr_block = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}" availability_zone = "${data.aws_availability_zones.available.names[count.index]}" vpc_id = "${aws_vpc.main.id}" } resource "aws_internet_gateway" "gw" { vpc_id = "${aws_vpc.main.id}" } resource "aws_route_table" "r" { vpc_id = "${aws_vpc.main.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.gw.id}" } } resource "aws_route_table_association" "a" { count = "${var.az_count}" subnet_id = "${element(aws_subnet.main.*.id, count.index)}" route_table_id = "${aws_route_table.r.id}" } ### Compute resource "aws_autoscaling_group" "app" { name = "tf-website-asg" vpc_zone_identifier = ["${aws_subnet.main.*.id}"] min_size = "${var.asg_min}" max_size = "${var.asg_max}" desired_capacity = "${var.asg_desired}" launch_configuration = "${aws_launch_configuration.app.name}" } data "template_file" "cloud_config" { template = "${file("${path.module}/cloud-config.yml")}" vars { aws_region = "${var.aws_region}" ecs_cluster_name = "${aws_ecs_cluster.main.name}" ecs_log_level = "info" ecs_agent_version = "latest" ecs_log_group_name = "${aws_cloudwatch_log_group.ecs.name}" } } data "aws_ami" "stable_coreos" { most_recent = true filter { name = "description" values = ["CoreOS Container Linux stable *"] } filter { name = "architecture" values = ["x86_64"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["595879546273"] # CoreOS } resource "aws_launch_configuration" "app" { security_groups = [ "${aws_security_group.instance_sg.id}", ] key_name = "${var.key_name}" image_id = "${data.aws_ami.stable_coreos.id}" instance_type = "${var.instance_type}" iam_instance_profile = "${aws_iam_instance_profile.app.name}" user_data = "${data.template_file.cloud_config.rendered}" associate_public_ip_address = true lifecycle { create_before_destroy = true } } ### Security resource "aws_security_group" "lb_sg" { description = "controls access to the application ELB" vpc_id = "${aws_vpc.main.id}" name = "tf-ecs-lbsg" ingress { protocol = "tcp" from_port = 80 to_port = 80 cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = [ "0.0.0.0/0", ] } } resource "aws_security_group" "instance_sg" { description = "controls direct access to application instances" vpc_id = "${aws_vpc.main.id}" name = "tf-ecs-instsg" ingress { protocol = "tcp" from_port = 22 to_port = 22 cidr_blocks = [ "${var.admin_cidr_ingress}", ] } ingress { protocol = "tcp" from_port = 80 to_port = 80 security_groups = [ "${aws_security_group.lb_sg.id}", ] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } ## ECS resource "aws_ecs_cluster" "main" { name = "terraform_website_ecs_cluster" } data "template_file" "task_definition" { template = "${file("${path.module}/task-definition.json")}" vars { image_url = "xxxxx.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest" container_name = "website" log_group_region = "${var.aws_region}" log_group_name = 
"${aws_cloudwatch_log_group.app.name}" } } resource "aws_ecs_task_definition" "website" { family = "tf_website_td" container_definitions = "${data.template_file.task_definition.rendered}" } resource "aws_ecs_service" "test" { name = "tf-ecs-website" cluster = "${aws_ecs_cluster.main.id}" task_definition = "${aws_ecs_task_definition.website.arn}" desired_count = 1 iam_role = "${aws_iam_role.ecs_service.name}" load_balancer { target_group_arn = "${aws_alb_target_group.test.id}" container_name = "website" container_port = "80" } depends_on = [ "aws_iam_role_policy.ecs_service", "aws_alb_listener.front_end", ] } ## IAM resource "aws_iam_role" "ecs_service" { name = "tf_website_ecs_role" assume_role_policy = <<EOF { "Version": "2008-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "ecs.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } resource "aws_iam_role_policy" "ecs_service" { name = "tf_website_ecs_policy" role = "${aws_iam_role.ecs_service.name}" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:Describe*", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "ecr:GetAuthorizationToken" ], "Resource": "*" } ] } EOF } resource "aws_iam_instance_profile" "app" { name = "tf-ecs-instprofile" role = "${aws_iam_role.app_instance.name}" } resource "aws_iam_role" "app_instance" { name = "tf-ecs-website-instance-role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } data "template_file" "instance_profile" { template = "${file("${path.module}/instance-profile-policy.json")}" vars { app_log_group_arn = "${aws_cloudwatch_log_group.app.arn}" ecs_log_group_arn = "${aws_cloudwatch_log_group.ecs.arn}" } } resource "aws_iam_role_policy" "instance" { name = "TfEcsInstanceRole" role = "${aws_iam_role.app_instance.name}" policy = "${data.template_file.instance_profile.rendered}" } ## ALB resource "aws_alb_target_group" "test" { name = "tf-website-ecs-website" port = 80 protocol = "HTTP" vpc_id = "${aws_vpc.main.id}" } resource "aws_alb" "main" { name = "tf-website-alb-ecs" subnets = ["${aws_subnet.main.*.id}"] security_groups = ["${aws_security_group.lb_sg.id}"] } resource "aws_alb_listener" "front_end" { load_balancer_arn = "${aws_alb.main.id}" port = "80" protocol = "HTTP" default_action { target_group_arn = "${aws_alb_target_group.test.id}" type = "forward" } } ## CloudWatch Logs resource "aws_cloudwatch_log_group" "ecs" { name = "tf-ecs-group/ecs-agent" } resource "aws_cloudwatch_log_group" "app" { name = "tf-ecs-group/app-website" } Thanks for the help
So I found how to fix the problem. I was missing the following rights in the policy: "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"
Terraform
48,540,828
12
Running terraform for creatind a key policy in AWS KMS I am getting the error: aws_kms_key.dyn_logs_server_side_cmk: MalformedPolicyDocumentException: The new key policy will not allow you to update the key policy in the future. status code: 400, request id: e34567896780780 There are many posts about this problem but nothing helped. So, my kms.tf file is as follows: provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region     = "${var.aws_region}" } resource "aws_kms_key" "dyn_logs_server_side_cmk" { description = "dyn-logs-sse-cmk-${var.environment}" enable_key_rotation = "true" policy = <<EOF { "Version":"2015-11-17", "Statement":[ { "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::${var.account_id}:root"}, "Action": "kms:*", "Resource": "*" } ] }EOF } That’s what I see in the output after terraform apply "dyn-vpc.plan" aws_kms_key.dyn_logs_server_side_cmk: Creating... arn:                 "" => "<computed>" description:         "" => "dyn-logs-server-dyn" enable_key_rotation: "" => "true" is_enabled:          "" => "true" key_id:              "" => "<computed>" key_usage:           "" => "<computed>" policy:              "" => "{\n   \"Version\":\"2015-11-17\",\n   \"Statement\":[\n      {\n         \"Sid\": \"Enable IAM User Permissions\",\n         \"Effect\": \"Allow\",\n         \"Principal\": {\"AWS\": \"arn:aws:iam::12345678901234:root\"},\n         \"Action\": \"kms:*\",\n         \"Resource\": \"*\"\n      }\n   ]\n}\n" aws_kms_key.dyn_logs_server_side_cmk: Still creating... (10s elapsed) aws_kms_key.dyn_logs_server_side_cmk: Still creating... (20s elapsed) Error applying plan: 1 error(s) occurred: * aws_kms_key.dyn_logs_server_side_cmk: 1 error(s) occurred: * aws_kms_key.dyn_logs_server_side_cmk: MalformedPolicyDocumentException: The new key policy will not allow you to update the key policy in the future.
In my case the account id was correct but the user creating the key wasn't included in the Enable IAM User Permissions statement. I had to do this resource "aws_kms_key" "dyn_logs_server_side_cmk" { description = "dyn-logs-sse-cmk-${var.environment}" enable_key_rotation = "true" policy = <<EOF { "Version":"2015-11-17", "Statement":[ { "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::${var.account_id}:root", "arn:aws:iam::${var.account_id}:user/system/terraform-user" ] }, "Action": "kms:*", "Resource": "*" } ] }EOF }
Terraform
48,509,193
12
Is there any way to start an AWS Database Migration Service full-load-and-cdc replication task through Terraform? Preferably, this would start automatically upon creation of the task. The AWS DMS console provides an option to "Start task on create", and the AWS CLI provides a start-replication-task command, but I'm not seeing similar options in the Terraform resources. The aws_dms_replication_task provides the cdc_start_time argument, but I think this may apply only to cdc tasks. I've tried setting this argument to a number of past/current/future timestamps with my full-load-and-cdc replication task, but the task never started (it was merely created and entered the ready state). I'd be glad to log a feature request to Terraform if this feature is not supported, but wanted to check with the community first to see if I was overlooking an existing way to do this today. (Note: This question has also been logged to the Terraform Google group.)
I've logged an issue for this feature request: Terraform AWS Provider #2083: Support Starting AWS Database Migration Service Replication Task
Terraform
46,920,760
12
How can I get Terraform 0.10.1 to support two different providers without having to run 'terraform init' every time for each provider? I am trying to use Terraform to 1) Provision an API server with the 'DigitalOcean' provider 2) Subsequently use the 'Docker' provider to spin up my containers Any suggestions? Do I need to write an orchestrating script that wraps Terraform?
Terraform's current design struggles with creating "multi-layer" architectures in a single configuration, due to the need to pass dynamic settings from one provider to another: resource "digitalocean_droplet" "example" { # (settings for a machine running docker) } provider "docker" { host = "tcp://${digitalocean_droplet.example.ipv4_address_private}:2376/" } As you saw in the documentation, passing dynamic values into provider configuration doesn't fully work. It does actually partially work if you use it with care, so one way to get this done is to use a config like the above and then solve the "chicken-and-egg" problem by forcing Terraform to create the droplet first: $ terraform plan -out=tfplan -target=digitalocean_droplet.example The above will create a plan that only deals with the droplet and any of its dependencies, ignoring the docker resources. Once the Docker droplet is up and running, you can then re-run Terraform as normal to complete the setup, which should then work as expected because the Droplet's ipv4_address_private attribute will then be known. As long as the droplet is never replaced, Terraform can be used as normal after this. Using -target is fiddly, and so the current recommendation is to split such systems up into multiple configurations, with one for each conceptual "layer". This does, however, require initializing two separate working directories, which you indicated in your question that you didn't want to do. This -target trick allows you to get it done within a single configuration, at the expense of an unconventional workflow to get it initially bootstrapped.
Terraform
45,734,925
12
This question is NOT answered. Someone mentioned environment variables. Can you elaborate on this? 5/28/2024 - Simplified the question (below): This is an oracle problem. I have 4 PCs. I need program 1 run on the one machine that has Drive E. Out of the remaining 3 that don't have drive E, I need program 2 run on ONLY one of the 3. For the other 2, don't run anything. This seems like a simple problem, but not in ansible. It keeps coming up. Especially in error conditions. I need a global variable. One that I can set when processing one host play, then check at a later time with another host. In a nutshell, so I can branch later in the playbook, depending on the variable. We have no control over custom software installation, but if it is installed, we have to put different software on other machines. To top it off, the installations vary, depending on the VM folder. My kingdom for a global var. The scope of variables relates ONLY to the current ansible_hostname. Yes, we have group_vars/all.yml as globals, but we can't set them in a play. If I set a variable, no other host's play/task can see it. I understand the scope of variables, but I want to SET a global variable that can be read throughout all playbook plays. The actual implementation is unimportant but variable access is (important). My Question: Is there a way to set a variable that can be checked when running a different task on another host? Something like setGlobalSpaceVar(myvar, true)? I know there isn't any such method, but I'm looking for a work-around. Rephrasing: set a variable in one tasks for one host, then later in another task for another host, read that variable. The only way I can think of is to change a file on the controller, but that seems bogus. An example The following relates to oracle backups and our local executable, but I'm keeping it generic. For below - Yes, I can do a run_once, but that won't answer my question. This variable access problem keeps coming up in different contexts. I have 4 xyz servers. I have 2 programs that need to be executed, but only on 2 different machines. I don't know which. The settings may be change for different VM environments. Our programOne is run on the server that has a drive E. I can find which server has drive E using ansible and do the play accordingly whenever I set a variable (driveE_machine). It only applies to that host. For that, the other 3 machines won't have driveE_machine set. In a later play, I need to execute another program on ONLY one of the other 3 machines. That means I need to set a variable that can be read by the other 2 hosts that didn't run the 2nd program. I'm not sure how to do it. Inventory file: [xyz] serverxyz[1:4].private.mystuff Playbook example: --- - name: stackoverflow variable question hosts: xyz gather_facts: no serial: 1 tasks: - name: find out who has drive E win_shell: dir e:\ register: adminPage ignore_errors: true # This sets a variable that can only be read for that host - name: set fact driveE_machine when rc is 0 set_fact: driveE_machine: "{{inventory_hostname}}" when: adminPage.rc == 0 - name: run program 1 include: tasks/program1.yml when: driveE_machine is defined # program2.yml executes program2 and needs to set some kind of variable # so this include can only be executed once for the other 3 machines # (not one that has driveE_machine defined and ??? - name: run program 2 include: tasks/program2.yml when: driveE_machine is undefined and ??? 
# please don't say run_once: true - that won't solve my variable access question Is there a way to set a variable that can be checked when running a task on another host?
No sure what you actually want, but you can set a fact for every host in a play with a single looped task (some simulation of global variable): playbook.yml --- - hosts: mytest gather_facts: no vars: tasks: # Set myvar fact for every host in a play - set_fact: myvar: "{{ inventory_hostname }}" delegate_to: "{{ item }}" with_items: "{{ play_hosts }}" run_once: yes # Ensure that myvar is a name of the first host - debug: msg: "{{ myvar }}" hosts [mytest] aaa ansible_connection=local bbb ansible_connection=local ccc ansible_connection=local result PLAY [mytest] ****************** META: ran handlers TASK [set_fact] ****************** ok: [aaa -> aaa] => (item=aaa) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "aaa"} ok: [aaa -> bbb] => (item=bbb) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "bbb"} ok: [aaa -> ccc] => (item=ccc) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "ccc"} TASK [debug] ****************** ok: [aaa] => { "msg": "aaa" } ok: [bbb] => { "msg": "aaa" } ok: [ccc] => { "msg": "aaa" }
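A related sketch, not part of the answer above: once a fact has been set on a single host, every other host in the play can read it through hostvars, which is usually enough to emulate a global variable. The host selection and variable name below are illustrative assumptions, not taken from the question.

- hosts: all
  gather_facts: no
  tasks:
    # Set the fact on exactly one host (here: the first host in the play)
    - set_fact:
        shared_flag: "{{ inventory_hostname }}"
      when: inventory_hostname == ansible_play_hosts | first

    # Any other host can read that host's fact via hostvars
    - debug:
        msg: "shared_flag is {{ hostvars[ansible_play_hosts | first].shared_flag }}"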
Ansible
47,167,446
24
I have been trying to write playbooks where I can run different tasks based on the architecture (i.e. amd64, arm, ppc64le) that the playbook is running on. I cannot figure out how to get the architecture of the system I am running it on. How do I determine the system's architecture in an Ansible playbook?
To get the architecture of the system At the command line: ansible HOST -m setup -a 'filter=ansible_architecture' For an x86 architecture host, this would return: HOST | SUCCESS => { "ansible_facts": { "ansible_architecture": "x86_64" }, "changed": false } Here’s a sample playbook that will print out the architecture of all hosts in your inventory: - name: print out hosts architectures hosts: all gather_facts: True tasks: - debug: var= ansible_architecture To run tasks based off the architecture Use a when clause: - name: task to run for x86 architecture shell: echo "x86 arch found here" when: ansible_architecture == "x86_64"
Ansible
44,713,880
24
I wrote an ansible task to iterate over a list of settings using with_items. Now all my settings are logged when I run ansible. It is very verbose and makes it hard to see what is happening. But, if I disable all the output with no_log, I will have no way to identify specific items when they fail. How could the output be improved — to show only an identifier for each item? Example task: - authorized_key: user: "{{ item.user }}" key: "{{ item.key }}" with_items: "{{ ssh_keys }}" Example output: TASK [sshkey-alan-sysop : ssh authorized keys] ********************************* ok: [brick] => (item={u'user': u'alan-sysop', u'key': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAgRe16yLZa8vbzsrxUpT5MdHoEYYd/awAnEWML4g+YoUvLDKr+zwmu78ze/E1NSipoZejXpggUYRVhh8MOiCX6qpUguBDWZFlvSCE/7uXWWg7Oht0f1kDS2xU7YiycPIzMN1dmUEFY9AixnN936Dq6nOtEzgBwjo66I1YC/5jrsQEqF19shx43A4DTFlPUz/PnsqHl2ESrkIk3e8zyidaPN2pRbA5iKzdvPW4E2W2tKw9ll40vqRXzaWIF7v293Ostwi1IPi2erlC777DhjZUhZ1VGXIR7FDAfANzalrMe6c/ZysiXewiUYgMw0I8Dh1LK3QMj9Kuo35S5E0Xj3TB alan-sysop@alan-laptop'})
There's loop_control for that: - authorized_key: user: "{{ item.user }}" key: "{{ item.key }}" with_items: "{{ ssh_keys }}" loop_control: label: "{{ item.user }}"
Ansible
42,832,530
24
Is it possible to define one notify block for several tasks? In next code snippet notify: restart tomcat defined 3 times, but I want to define it only one time and "apply" to list of tasks - name : template context.xml template: src: context.xml.j2 dest: /usr/share/tomcat/conf/context.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy server.xml copy: src: server.xml dest: /etc/tomcat/server.xml group: tomcat mode: 0664 notify: restart tomcat - name : copy atomikos-integration-extension copy: src: atomikos-integration-extension-3.7.1-20120529.jar dest: /usr/share/tomcat/ext-libs/ group: tomcat mode: 0664 notify: restart tomcat
No, you cannot. Notify sets a trigger to run the specified handler based on the status of the task. There is no "status for a block of tasks" in Ansible hence you cannot define notify for a block. Besides, it wouldn't change anything functionally, only influence the visual appeal (and I would claim by obscuring things rather than simplifying). The handler is run only once regardless of how many tasks triggered it.
Ansible
41,613,343
24
I've been having some trouble with restarting the SSH daemon with Ansible. I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64) tl;dr: There appears to be something wrong with the way I'm invoking the service syntax. Problem With Original Use Case (Handler) Playbook - hosts: all - remote_user: vagrant - tasks: ... - name: Forbid SSH root login sudo: yes lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="permitRootLogin no" state=present notify: - restart ssh ... - handlers: - name: restart ssh sudo: yes service: name=ssh state=restarted Output NOTIFIED: [restart ssh] failed: [default] => {"failed": true} FATAL: all hosts have already failed -- aborting The nginx handler completed successfully with nearly identical syntax. Task Also Fails Playbook - name: Restart SSH server sudo: yes service: name=ssh state=restarted Same output as the handler use case. Ad Hoc Command Also Fails Shell > ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted" Inventory 127.0.0.1:8022 Output 127.0.0.1 | FAILED >> { "failed": true, "msg": "" } Shell command in box works When I SSH in and run the usual command, everything works fine. > vagrant ssh > sudo service ssh restart ssh stop/waiting ssh start/running, process 7899 > echo $? 0 Command task also works Output TASK: [Restart SSH server] **************************************************** changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]} As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release. I just changed my handler to use the command module and moved on: - name: restart sshd command: service ssh restart
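For reference, this workaround targets the Ansible 1.9 bug discussed above; on later Ansible releases the service module form generally behaves as expected, so a sketch like the following should be preferable to shelling out (assuming the service is named ssh, as on Ubuntu):

- name: restart ssh
  become: yes
  service:
    name: ssh
    state: restarted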
Ansible
30,162,528
24
I am trying to implement a reducer for Hadoop Streaming using R. However, I need to figure out a way to access certain libraries that are not built in R, dplyr..etc. Based on my research seems like there are two approaches: (1) In the reducer code, install the required libraries to a temporary folder and they will be disposed when the session is done, like this: .libPaths(c(.libPaths(), temp <- tempdir())) install.packages("dplyr", lib=temp, repos='http://cran.us.r-project.org') library(dplyr) ... However, this approach will have a dramatic overhead depending on how many libraries you are trying to install. So most of the time will be wasted on installing libraries(sophisticated libraries like dplyr has tons of dependencies which will take minutes to install on a vanilla R session). So sounds like I need to install it before hand, which leads us to approach2. (2) My cluster is fairly big. And I have to use some tool like Ansible to make it work. So I prefer to have one Linux shell command to install the library. I have seen R CMD INSTALL... before, however, it feels like will only install packages from source file instead of doing install.packages() in R console, figure out the mirror, pull the source file, install it in one command. Can anyone show me how to use one command line in shell to non-interactively install a R package? (sorry for this much background knowledge, if anyone thinks I am not even following the right phylosophy, feel free to leave in the comment how this whole cluster R package should be managed.)
tl;dr Rscript -e 'install.packages("drat", repos="https://cloud.r-project.org")' You mentioned you are trying to install dplyr into custom lib location on your disk. Be aware that dplyr package does not support that. You can read more in dplyr#4641. Moreover if you are installing private package published in internal CRAN-like repository (created by drat or tools::write_PACKAGES), you can easily combine repos argument and resolve dependencies from CRAN automatically. Rscript -e 'install.packages("priv.pkg", repos=c("cran.priv","https://cloud.r-project.org"))' This is very handy feature of R repositories, although for production use I would recommend to cache packages from CRAN locally, and use those, so you will never be surprised by a breaking changes in your dependencies. For quality information about handling R in production I suggest to look into talk by Wit Jakuczun at WhyR2019 How to make R great for machine learning in (not only) Enterprise: slides, video.
Ansible
26,985,112
24
I've installed ansible on my Mac using pip as advised by ansible's documentation: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-macos However when I try to run ansible I get the following: zsh: command not found: ansible I've never had this problem when installing ansible before. pip-installing again tells me it's already installed under site packages: Requirement already satisfied: ansible in ./Library/Python/3.8/lib/python/site-packages (2.9.11) And my python installation in ~/.zshrc points to: # Add user python 3.7 to path export PATH="/usr/local/opt/python/libexec/bin:$PATH" Might be obvious to some but I can't figure out why this simple installation isn't working..
After installing ansible with python3 -m pip install --user ansible, I searched for the ansible binary and found it to be downloaded into ~/Library/Python/3.8/bin. The simplest way to figure this out is: $ cd ~ $ find . | grep ansible <lines omitted> ./Library/Python/3.8/bin/ansible <lines omitted> From there, it's pretty easy: just update your .bash_profile or .zshrc with export PATH="/path/to/Library/Python/3.8/bin:$PATH" And you should be good to go: $ source ~/.zshrc $ ansible --version ansible 2.10.8 config file = None configured module search path = ['/Users/dbove/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
Ansible
63,177,609
23
This is my ansible role: /roles /foo /tasks main.yml <----- I want to split this The main.yml file is really big, so I want to split it into multiple files, and call them in sequence. /roles /foo /tasks run-this-first.yml <--- 1st run-this-last.yml <--- last run-this-second.yml <--- 2nd How do I invoke those files, and how do I ensure they are run in order?
You can do it with include_tasks: /roles /foo /tasks main.yml run-this-first.yml <--- 1st run-this-last.yml <--- last run-this-second.yml <--- 2nd Notice that there is still a main.yml inside the tasks directory; it simply contains this: --- - include_tasks: run-this-first.yml - include_tasks: run-this-second.yml - include_tasks: run-this-last.yml
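If the files should always run in that fixed order and you want them resolved at parse time (so tags and --list-tasks see them statically), import_tasks is a possible alternative to include_tasks; a minimal sketch of the same main.yml:

---
- import_tasks: run-this-first.yml
- import_tasks: run-this-second.yml
- import_tasks: run-this-last.yml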
Ansible
57,648,261
23
I am automating Canonical Kubernetes installation with Ansible. The installation process requires snap to be present on the host. Is there a standard way to install snap packages with Ansible already?
The snap module is available since version 2.8 of Ansible (released May 2019): https://docs.ansible.com/ansible/latest/modules/snap_module.html#snap-module The required task will be: - name: Install conjure-up for Canonical Kubernetes community.general.snap: name: conjure-up classic: yes
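On Ansible releases older than 2.8, before the snap module existed, one possible fallback is to drive snap through the command module; the creates guard below assumes the snap exposes its binary under /snap/bin, which is the usual layout but still an assumption:

- name: Install conjure-up for Canonical Kubernetes (pre-2.8 fallback)
  command: snap install --classic conjure-up
  args:
    creates: /snap/bin/conjure-up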
Ansible
47,305,658
23
I have a directory: /users/rolando/myfile I want to copy "myfile" to hostname "targetserver" in directory: /home/rolando/myfile What is the syntax in the playbook to do this? Examples I found with the copy command look like it's more about copying a file from a source directory on a remote server to a target directory on the same remote server. The line in my playbook .yml I tried that failed: - copy: src='/users/rolando/myfile' dest='rolando@targetserver:/home/rolando/myfile' What am I doing wrong?
From the copy module synopsis: The copy module copies a file on the local box to remote locations. - hosts: targetserver tasks: - copy: src: /users/rolando/myfile dest: /home/rolando/myfile
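If the file should also land with a particular owner and mode on the target, the same task can set them; the values below are assumptions (and setting owner typically requires become: yes):

- hosts: targetserver
  become: yes
  tasks:
    - copy:
        src: /users/rolando/myfile
        dest: /home/rolando/myfile
        owner: rolando
        mode: '0644'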
Ansible
44,595,867
23
I have a service call that returns system status in json format. I want to use the ansible URI module to make the call and then inspect the response to decide whether the system is up or down {"id":"20161024140306","version":"5.6.1","status":"UP"} This would be the json that is returned This is the ansible task that makes a call: - name: check sonar web is up uri: url: http://sonarhost:9000/sonar/api/system/status method: GET return_content: yes status_code: 200 body_format: json register: data Question is how can I access data and inspect it as per ansible documentation this is how we store results of a call. I am not sure of the final step which is to check the status.
This works for me. - name: check sonar web is up uri: url: http://sonarhost:9000/sonar/api/system/status method: GET return_content: yes status_code: 200 body_format: json register: result until: result.json.status == "UP" retries: 10 delay: 30 Notice that result is an Ansible dictionary; when you set return_content=yes, the response is added to this dictionary and is accessible via the json key. Also ensure you have indented the task properly as shown above.
Ansible
40,235,550
23
I need to create or overwrite files on remote hosts. The modules lineinfile or blockinfile are useful when updating files, but not to create ones from scratch or completely overwrite existing ones. The obvious solution is to use copy but I would like to have as much as possible a standalone playbook, without files on the side. Is it possible to include in a playbook the content of the file to create? Maybe something along the lines of having a variable with the content of the file which can be used as the src=parameter for copy (I tried this but it does not work as src expects a local file)
Copy with content: tasks: - copy: content: | This is some not too complex content for a file dest: content.txt But as per the Ansible docs: This is for simple values; for anything complex or with formatting, please switch to the template module.
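For content built from variables rather than typed literally, one sketch that keeps the playbook standalone is to render a data structure straight into the file with a filter; my_settings here is an assumed variable defined elsewhere (e.g. in group_vars):

- copy:
    content: "{{ my_settings | to_nice_yaml }}"
    dest: /etc/myapp/settings.yml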
Ansible
38,479,423
23
What is the difference between the Ansible template module and the Ansible copy module?
While very similar, template serves an extra function. copy takes a file from the host, "as-is", and copies it to the remote destination. template takes a file (template) from the host, substitutes variables based on Jinja2 rendering, and copies it to the remote destination. You could use template to copy a file without any template formatting from the host to the remote destination. An example of choosing template over copy is when you need to build a custom config file based on parameters from the host (or elsewhere), such as a web config file that takes host/credential properties from a database instance. Note the same could be achieved by using copy/lineinfile--this is just a different way of doing so.
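A minimal side-by-side sketch (the file names are illustrative): copy ships a file verbatim, while template renders Jinja2 expressions before writing the result out:

- name: Ship a static file as-is
  copy:
    src: files/static.conf
    dest: /etc/myapp/static.conf

- name: Render a Jinja2 template with host-specific values
  template:
    src: templates/app.conf.j2
    dest: /etc/myapp/app.conf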
Ansible
37,675,262
23
How do I run Ansible without a hosts file? Something like: $ ansible --"Some Options" IP -a 'uptime'
you can do like this: ansible all -i "<hostname-or-ip>," -a 'uptime' Note the , at the end of the IP address, or it will be considered a hosts inventory filename. Here is an example for reference: ansible all -i "192.168.33.100," -a 'uptime' 192.168.33.100 | SUCCESS | rc=0 >> 12:05:10 up 10 min, 1 user, load average: 0.46, 0.23, 0.08
Ansible
37,652,464
23
I'm working on a role that only needs to gather a single fact. Performance is a concern, and I know that gathering facts is time-consuming. I'm looking for some way to filter gather_facts inside a playbook, which would allow me to gather only the required facts. This is possible using the setup core module: ansible -m setup -a 'filter=ansible_hostname' my_host 10.200.0.127 | success >> { "ansible_facts": { "ansible_hostname": "my_host" }, "changed": false } Is it possible to use this feature inside the playbook? Something like this? - hosts: all sudo: yes gather_facts: True filter: "filter=ansible_*" PS: The code above throws a syntax exception. EDIT 1: If someone needs to get the hostname, there's also another useful variable, inventory_hostname.
Yes, that's possible, but not in the default behavior of gathering facts. Having set gather_facts to true simply calls the setup module as very first task of the play. This way you do not have any way to parameterize the setup module call. But you can disable the default behavior and call setup yourself with the filter parameter. - hosts: all sudo: yes gather_facts: False tasks: - setup: filter: ansible_* Since you're working on a role and might not want to have this setup call in your role, you could make use of pre_tasks. - hosts: all sudo: yes gather_facts: False pre_tasks: - setup: filter: ansible_* roles: - your_role_here
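On newer Ansible releases there is also a play-level gather_subset keyword, which keeps fact gathering enabled but restricts what the implicit setup call collects; a sketch, assuming the role only needs the minimal facts:

- hosts: all
  gather_facts: yes
  gather_subset:
    - '!all'   # collect only the minimal subset of facts
  roles:
    - your_role_here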
Ansible
34,485,286
23
How can an Ansible playbook register in a variable the result of including another playbook? For example, would the following register the result of executing tasks/foo.yml in result_of_foo? tasks: - include: tasks/foo.yml - register: result_of_foo How else can Ansible record the result of a task sequence?
The short answer is that this can't be done. The register statement is used to store the output of a single task into a variable. The exact contents of the registered variable can vary widely depending on the type of task (for example a shell task will include stdout & stderr output from the command you run in the registered variable, while the stat task will provide details of the file that is passed to the task). If you have an include file with an arbitrary number of tasks within it then Ansible would have no way of knowing what to store in the variable in your example. Each individual task within your include file can register variables, and you can reference those variables elsewhere, so there's really no need to even do something like this.
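A common workaround sketch: register inside the included file itself; registered variables persist as host facts for the rest of the play, so the calling playbook can inspect them after the include (the command and names below are illustrative):

# tasks/foo.yml
- command: /usr/bin/true
  register: foo_result

# back in the playbook, after the include
- debug:
    var: foo_result.rc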
Ansible
33,701,062
23
Hi, I am new to Jinja2 and trying to use regular expressions as shown below {% if ansible_hostname == 'uat' %} {% set server = 'thinkingmonster.com' %} {% else %} {% set server = 'define yourself' %} {% endif %} {% if {{ server }} match('*thinking*') %} {% set ssl_certificate = 'akash' %} {% elif {{ server }} match( '*sleeping*')%} {% set ssl_certificate = 'akashthakur' %} {% endif %} Based on the value of "server" I would like to evaluate which certificates to use, i.e. if the domain contains the "thinking" keyword then use these certificates, and if it contains the "sleeping" keyword then use that certificate. But I didn't find any Jinja2 filter supporting this. Please help me. I found some Python code and I'm sure that can work, but how do I use Python in Jinja2 templates?
Jinja2 can quite easily do substr checks with a simple 'in' comparison, e.g. {% set server = 'www.thinkingmonster.com' %} {% if 'thinking' in server %} do something... {% endif %} So your substring regex filter isn't required. However if you want more advanced regex matching, then there are in fact filters available in ansible - see the regex filters in http://docs.ansible.com/playbooks_filters.html#other-useful-filters - funnily enough, your match syntax above is nearly exactly right. +1 for Bereal's answer though, it gives a nice alternative in the form of a map.
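If the goal is just to pick a certificate name inside a play rather than in a template, the same 'in' check can be done with an inline Jinja2 conditional; a sketch, assuming server is already defined as a variable and reusing the certificate names from the question:

- set_fact:
    ssl_certificate: "{{ 'akash' if 'thinking' in server else 'akashthakur' }}"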
Ansible
30,413,616
23
I am trying to create a task in Ansible which executes a shell command to run an executable in daemon mode using &. Something like the following: - name: Start daemon shell: myexeprogram arg1 arg2 & What I am seeing is that if I keep the &, the task returns immediately and the process is not started. If I remove the &, the Ansible task waits for quite some time without returning. I'd appreciate suggestions on the proper way to start a program in daemon mode through Ansible. Please note that I don't want to run this as a service, but as an ad hoc background process based on certain conditions.
Running a program with '&' does not make it a daemon, it just runs in the background. To make a "true daemon" your program should perform the steps described here. If your program is written in C, you can call the daemon() function, which will do it for you. Then you can start your program even without '&' at the end and it will run as a daemon. The other option is to launch your program with the daemon utility, which should do the job as well. - name: Start daemon shell: daemon -- myexeprogram arg1 arg2
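Another commonly used pattern, not covered above, is to background the process while redirecting its output so the SSH session can close cleanly; this still does not produce a proper daemon, so treat it as an ad hoc sketch (the log path is an assumption):

- name: Start program in the background without waiting for it
  shell: nohup myexeprogram arg1 arg2 > /tmp/myexeprogram.log 2>&1 &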
Ansible
29,806,673
23
I need to set up Apache/mod_wsgi in Centos 6.5 so my main YAML file is as such: --- - hosts: dev tasks: - name: Updates yum installed packages yum: name=* state=latest - hosts: dev roles: - { role: apache } This should update all yum-installed packages then execute the apache role. The apache role is configured to install Apache/mod_wsgi, set Apache to start at boot time and restart it. The following are the contents of roles/apache/tasks/main.yml: --- - name: Installs httpd and mod_wsgi yum: name={{ item }} state=latest with_items: - httpd - mod_wsgi notify: - enable httpd - restart httpd And the handlers in roles/apache/handlers/main.yml: --- - name: enable httpd service: name=httpd enabled=yes - name: restart httpd service: name=httpd state=restarted The handlers do not seem to run since the following output is given when I execute the playbook: PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [Updates yum installed packages] **************************************** ok: [dev.example.com] PLAY [dev] ******************************************************************** GATHERING FACTS *************************************************************** ok: [dev.example.com] TASK: [apache | Installs httpd and mod_wsgi] ********************************** ok: [dev.example.com] => (item=httpd,mod_wsgi) PLAY RECAP ******************************************************************** dev.example.com : ok=4 changed=0 unreachable=0 failed=0 And when I vagrant ssh into the virtual machine, sudo service httpd status shows httpd is stopped and sudo chkconfig --list shows it has not been enabled to be started by init. I'm just starting out with Ansible, so is there something obvious I could be missing?
Well, to answer my own question, I realized that there's a subtle point I missed: http://docs.ansible.com/playbooks_intro.html#handlers-running-operations-on-change Specifically, the notify signal is produced only if the task introduces a change. So for my use case I think I'll go with enabling and starting Apache in standalone tasks instead of relying on change signal handlers.
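For reference, the standalone-task version hinted at above might look like this; both tasks are idempotent, so they only report changed when the state actually differs:

- name: Enable httpd at boot
  service:
    name: httpd
    enabled: yes

- name: Ensure httpd is running
  service:
    name: httpd
    state: started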
Ansible
24,732,627
23
I try to write the playbook.yml for my vagrant machine and I'm faced with the following problem. Ansible prompt me to set these variables and I set these variables to null/false/no/[just enter], but the roles is executed no matter! How can I prevent this behavior? I just want no actions if no vars are set.. --- - name: Deploy Webserver hosts: webservers vars_prompt: run_common: "Run common tasks?" run_wordpress: "Run Wordpress tasks?" run_yii: "Run Yii tasks?" run_mariadb: "Run MariaDB tasks?" run_nginx: "Run Nginx tasks?" run_php5: "Run PHP5 tasks?" roles: - { role: common, when: run_common is defined } - { role: mariadb, when: run_mariadb is defined } - { role: wordpress, when: run_wordpress is defined } - { role: yii, when: run_yii is defined } - { role: nginx, when: run_nginx is defined } - { role: php5, when: run_php5 is defined }
I believe the variables will always be defined when you use vars_prompt, so "is defined" will always be true. What you probably want is something along these lines: - name: Deploy Webserver hosts: webservers vars_prompt: - name: run_common prompt: "Product release version" default: "Y" roles: - { role: common, when: run_common == "Y" } Edit: To answer your question, no it does not throw an error. I made a slightly different version and tested it using ansible 1.4.4: - name: Deploy Webserver hosts: localohst vars_prompt: - name: run_common prompt: "Product release version" default: "N" roles: - { role: common, when: run_common == "Y" or run_common == "y" } And roles/common/tasks/main.yml contains: - local_action: debug msg="Debug Message" If you run the above example and just hit Enter, accepting the default, then the role is skipped: Product release version [N]: PLAY [Deploy Webserver] ******************************************************* GATHERING FACTS *************************************************************** ok: [localhost] TASK: [common | debug msg="Debug Message"] ************************************ skipping: [localhost] PLAY RECAP ******************************************************************** localhost : ok=1 changed=0 unreachable=0 failed=0 But if you run this and enter Y or y when prompted then the role is executed as desired: Product release version [N]:y PLAY [Deploy Webserver] ******************************************************* GATHERING FACTS *************************************************************** ok: [localhost] TASK: [common | debug msg="Debug Message"] ************************************ ok: [localhost] => { "item": "", "msg": "Debug Message" } PLAY RECAP ******************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0
Ansible
21,063,159
23
If I run apt, I can update the package cache: apt: name: postgresql state: present update_cache: yes I'm now trying to use the generic package command, but I don't see a way to do this. package: name: postgresql state: present Do I have to run an explicit command to run apt-get update, or can I do this using the package module?
This is not possible. As of this writing, the package module only handles package presence, so you have to call the distribution-specific module (apt in this case) directly to refresh the cache.
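A hedged workaround is to refresh the cache with the distribution-specific module first, guarded by a fact check, and keep package for the install itself; the Debian guard below is an assumption about the target hosts:

- name: Refresh the apt cache on Debian-family hosts
  apt:
    update_cache: yes
  when: ansible_os_family == "Debian"

- name: Install postgresql via the generic package module
  package:
    name: postgresql
    state: present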
Ansible
49,087,220
22
I'm looking for an appropriate Ansible role or Ansible YAML file for installing Node.js LTS on an Ubuntu 16.04.3 xenial system. I tried more than 10 Ansible roles from Galaxy but didn't find any of them working (they throw errors such as "potentially dangerous to add this PPA", etc.). Can anyone provide an Ansible playbook or suggest a role to install Node.js LTS on Ubuntu 16.04?
Here is the working example: --- - hosts: all gather_facts: yes become: yes vars: NODEJS_VERSION: "8" tasks: - name: Install the gpg key for nodejs LTS apt_key: url: "https://deb.nodesource.com/gpgkey/nodesource.gpg.key" state: present - name: Install the nodejs LTS repos apt_repository: repo: "deb https://deb.nodesource.com/node_{{ NODEJS_VERSION }}.x {{ ansible_distribution_release }} main" state: present update_cache: yes - name: Install the nodejs apt: name: nodejs state: present Hope it will help you
Ansible
45,840,664
22
I am trying to setup a Django project in vagrant using ansible. I have used the following code for installing the pip packages: - name: Setup Virtualenv pip: virtualenv={{ virtualenv_path }} virtualenv_python=python3 requirements={{ virtualenv_path }}/requirements.txt I need to use python3 for the django project and even though I have explicitly mentioned to use python3, it is installing the pip packages via pip2. I have ensured that python3 is installed on the virtual machine. Please, help me install the packages via pip3.
Had the same issue. There is a workaround using the executable option: - name: Install and upgrade pip pip: name: pip extra_args: --upgrade executable: pip3
Ansible
44,455,240
22
I have two variables: a and b. I want to assign a value to a variable c based on which of a or b contains the greater numerical value. This is what I tried: - set_fact: c: "test1" when: a <= b - set_fact: c: "test2" when: b <= a It looks like it always sets c to test1, not test2.
Using an if-else expression: - set_fact: c: "{{ 'test1' if (a >= b) else 'test2' }}" Using the ternary filter: - set_fact: c: "{{ (a >= b) | ternary('test1', 'test2') }}" Using your own code, which is correct (see the notice below). Either of the above methods requires both variables used in the comparison to be of the same type to give sensible results, and for a numerical comparison they require both values to be integers. The code from the question: - set_fact: c: "test1" when: a <= b - set_fact: c: "test2" when: b <= a works properly in two cases: both a and b are integers; both a and b are strings containing numerical values with the same number of digits. However, it produces unexpected results when: one of the values is a string and the other an integer; the string values contain numerical values with different numbers of digits.
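One way to sidestep the type caveat is to cast both values explicitly, for example with the int filter, so that string-typed numbers still compare numerically; a sketch:

- set_fact:
    c: "{{ 'test1' if (a | int >= b | int) else 'test2' }}"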
Ansible
42,660,653
22
I believe the Ansible copy module can take a whole bunch of "files" and copy them in one hit. This I believe can be achieved by copying a directory recursively. Can the Ansible template module take a whole bunch of "templates" and deploy them in one hit? Is there such a thing as deploying a folder of templates and applying them recursively?
The template module itself runs the action on a single file, but you can use with_filetree to loop recursively over a specified path: - name: Ensure directory structure exists ansible.builtin.file: path: '{{ templates_destination }}/{{ item.path }}' state: directory with_community.general.filetree: '{{ templates_source }}' when: item.state == 'directory' - name: Ensure files are populated from templates ansible.builtin.template: src: '{{ item.src }}' dest: '{{ templates_destination }}/{{ item.path }}' with_community.general.filetree: '{{ templates_source }}' when: item.state == 'file' And for templates in a single directory you can use with_fileglob.
Ansible
41,667,864
22
I have the following role in my Ansible playbook to determine the installed version of Packer and conditionally install it if it doesn't match the version of a local variable: --- # detect packer version - name: determine packer version shell: /usr/local/bin/packer -v || true register: packer_installed_version - name: install packer cli tools unarchive: src: https://releases.hashicorp.com/packer/{{ packer_version }}/packer_{{ packer_version }}_linux_amd64.zip dest: /usr/local/bin copy: no when: packer_installed_version.stdout != packer_version The problem/annoyance is that Ansible marks this step as having "changed": I'd like gather this fact without marking something as changed so I can know reliably at the end of my playbook execution if anything has, in fact, changed. Is there a better way to go about what I'm doing above?
From the Ansible docs: Overriding The Changed Result New in version 1.3. When a shell/command or other module runs it will typically report “changed” status based on whether it thinks it affected machine state. Sometimes you will know, based on the return code or output that it did not make any changes, and wish to override the “changed” result such that it does not appear in report output or does not cause handlers to fire: tasks: - shell: /usr/bin/billybass --mode="take me to the river" register: bass_result changed_when: "bass_result.rc != 2" # this will never report 'changed' status - shell: wall 'beep' changed_when: False In your case you would want: --- # detect packer version - name: determine packer version shell: /usr/local/bin/packer -v || true register: packer_installed_version changed_when: False - name: install packer cli tools unarchive: src: https://releases.hashicorp.com/packer/{{ packer_version }}/packer_{{ packer_version }}_linux_amd64.zip dest: /usr/local/bin copy: no when: packer_installed_version.stdout != packer_version
Ansible
37,057,086
22
I'm using Ansible to add a user to a variety of servers. Some of the servers have different UNIX groups defined. I'd like to find a way for Ansible to check for the existence of a group that I specify, and if that group exists, add it to a User's secondary groups list (but ignore the statement it if the group does not exist). Any thoughts on how I might do this with Ansible? Here is my starting point. Command ansible-playbook -i 'localhost,' -c local ansible_user.yml ansible_user.yml --- - hosts: all user: root become: yes vars: password: "!" user: testa tasks: - name: add user user: name="{{user}}" state=present password="{{password}}" shell=/bin/bash append=yes comment="test User" Updated: based on the solution suggested by @udondan, I was able to get this working with the following additional tasks. - name: Check if user exists shell: /usr/bin/getent group | awk -F":" '{print $1}' register: etc_groups - name: Add secondary Groups to user user: name="{{user}}" groups="{{item}}" append=yes when: '"{{item}}" in etc_groups.stdout_lines' with_items: - sudo - wheel
The getent module can be used to read /etc/group - name: Determine available groups getent: database: group - name: Add additional groups to user user: name="{{user}}" groups="{{item}}" append=yes when: item in ansible_facts.getent_group with_items: - sudo - wheel
Ansible
35,807,868
22
I have a playbook that runs differently in Ansible 1.9.x and 2.0. I would like to check the currently running Ansible version in my playbook to prevent someone from running it with an old one. I don't think this is the best solution: - local_action: command ansible --version register: version What would you suggest?
Ansible provides a global dict called ansible_version, dict contains the following "ansible_version": { "full": "2.7.4", "major": 2, "minor": 7, "revision": 4, "string": "2.7.4" } you can use any of the following ansible_version.full, ansible_version.major or any other combination in creating conditional statements to check the version of ansible that's installed. example playbook: using this dict and a when statement. --- - hosts: localhost tasks: - name: Print message if ansible version is greater than 2.7.0 debug: msg: "Ansible version is {{ ansible_version.full }}" when: ansible_version.full >= "2.7.4"
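Plain string comparison of version numbers can mislead (for example "2.10" sorts before "2.7" lexically), so on reasonably recent Ansible the version test is the safer form of the same condition; a sketch:

- name: Print message if ansible version is at least 2.7.4
  debug:
    msg: "Ansible version is {{ ansible_version.full }}"
  when: ansible_version.full is version('2.7.4', '>=')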
Ansible
34,809,845
22
What are the pros and cons of using the Ansible synchronize vs. copy modules? As far as I can tell, synchronize has all the functionality that copy does but may be much faster, so I'm considering changing everything to use synchronize. The only downside of synchronize is that rsync is required, which seems fairly ubiquitous in the Linux environment.
The differences are pretty similar to traditional rsync vs scp. Rsync has more features and is often faster, however it's a little bit trickier to setup and has more knobs to turn. Additionally, https://docs.ansible.com/ansible/copy_module.html states: The “copy” module recursively copy facility does not scale to lots (>hundreds) of files. For alternative, see synchronize module, which is a wrapper around rsync.
Ansible
32,468,350
22
In my Ansible script, I want to generate UUIDs on the fly and use them later on. Here is my approach: - shell: echo uuidgen with_sequence: count=5 register: uuid_list - uri: url: http://www.myapi.com method: POST body: "{{ item.item.stdout }}" with_items: uuid_list.result However I get the following error: fatal: [localhost] => One or more undefined variables: 'str object' has no attribute 'stdout' How can I solve this issue?
In Ansible 1.9 there is a new filter, to_uuid, which, given a string, will return an Ansible domain-specific UUID. You can find the usage here: https://docs.ansible.com/playbooks_filters.html#other-useful-filters
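A sketch of how that filter could slot into the original loop; note that to_uuid is deterministic (it derives the UUID from the given string), so the seed must vary per item if distinct values are needed, and the seed prefix below is an arbitrary assumption:

- uri:
    url: http://www.myapi.com
    method: POST
    body: "{{ ('my-seed-' ~ item) | to_uuid }}"
  with_sequence: count=5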
Ansible
30,516,011
22