Dataset columns: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k).
I am wondering how can we stop and restart the AWS ec2 instance created using terraform. is there any way to do that?
Since comments have a length limit, I am posting this as an answer using local-exec. I assume you have already run aws configure or aws configure --profile test with the AWS CLI. Here is a complete example that reboots (or stops) an instance; change the VPC security group ID, subnet, key name, etc. to suit your environment: provider "aws" { region = "us-west-2" profile = "test" } resource "aws_instance" "ec2" { ami = "ami-0f2176987ee50226e" instance_type = "t2.micro" associate_public_ip_address = false subnet_id = "subnet-45454566645" vpc_security_group_ids = ["sg-45454545454"] key_name = "mytest-ec2key" tags = { Name = "Test EC2 Instance" } } resource "null_resource" "reboot_instance" { provisioner "local-exec" { on_failure = "fail" interpreter = ["/bin/bash", "-c"] command = <<EOT echo -e "\x1B[31m Warning! Restarting instance having id ${aws_instance.ec2.id}.................. \x1B[0m" # aws ec2 reboot-instances --instance-ids ${aws_instance.ec2.id} --profile test # To stop the instance aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test echo "***************************************Rebooted****************************************************" EOT } # this trigger runs the script on every apply; change it if needed triggers = { always_run = "${timestamp()}" } } Now run terraform apply. Once the instance is created and you later want to reboot or stop it, run terraform apply -target=null_resource.reboot_instance and check the logs.
Terraform
57,158,310
12
Does anyone know if it's possible with possibly code snipits representing whether I can create a map variable within a map variable in terraform variables? variable "var" { type = map default = { firstchoice = { firstAChoice ="foo" firstBChoice = "bar" } secondchoice = { secondAChoice = "foobar" secondBChoice = "barfoo" } } } If anyone has any insight to whether this is possible or any documentation that elaborates that would be great.
Yes, it's possible to have a map as the value of a map key. Your variable just needed the right indentation. I am also including ways to access that variable. variable "var" { default = { firstchoice = { firstAChoice = "foo" firstBChoice = "bar" } secondchoice = { secondAChoice = "foobar" secondBChoice = "barfoo" } } } To access the entire map value of the map key firstchoice, you can try the following: value = "${var.var["firstchoice"]}" output: { firstAChoice = foo firstBChoice = bar } To access a specific key of that map (for example firstAChoice), you can try: value = "${lookup(var.var["firstchoice"],"firstAChoice")}" output: foo
Terraform
56,776,130
12
I'm trying to build an ElasticSearch cluster using Terraform but i'm not able to assign more that 1 subnet! That's really weird cause in the documentation there is this : https://www.terraform.io/docs/providers/aws/r/elasticsearch_domain.html#subnet_ids subnet_ids - (Required) List of VPC Subnet IDs for the Elasticsearch domain endpoints to be created in. But when i try to do that i'm getting this error : Error: ValidationException: You must specify exactly one subnet This is my code : resource "aws_elasticsearch_domain" "es" { domain_name = "${var.es_domain}-${var.environment}" elasticsearch_version = "${var.es_version}" cluster_config { instance_type = "${var.es_instance_type}" instance_count = "${var.es_instance_count}" } vpc_options { subnet_ids = ["${data.aws_subnet.private_1.id}", "${data.aws_subnet.private_2.id}"] security_group_ids = ["${aws_security_group.es.id}"] } snapshot_options { automated_snapshot_start_hour = "${var.es_automated_spanshot_start_hour}" } ebs_options { ebs_enabled = true volume_type = "standard" volume_size = "20" } access_policies = <<CONFIG { "Version": "2012-10-17", "Statement": [ { "Action": "es:*", "Principal": "*", "Effect": "Allow", "Resource": "arn:aws:es:${var.aws_region}:${data.aws_caller_identity.current.account_id}:domain/${var.es_domain}/*" } ] } CONFIG } I'm using terraform v0.12.2 Thanks for your help.
You're missing the zone_awareness_enabled parameter in cluster_config, which is required when running a multi-AZ Elasticsearch cluster (i.e. when you pass more than one subnet).
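A minimal sketch of the adjusted cluster_config, assuming two subnets and an AWS provider version that supports the zone_awareness_config block (availability_zone_count should match the number of subnets, and instance_count is typically a multiple of it):

cluster_config {
  instance_type  = "${var.es_instance_type}"
  instance_count = "${var.es_instance_count}"

  # Required for multi-AZ (two-subnet) deployments
  zone_awareness_enabled = true

  zone_awareness_config {
    availability_zone_count = 2
  }
}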
Terraform
56,594,965
12
I'd like to use Terraform to create AWS Cognito User Pool with one test user. Creating a user pool is quite straightforward: resource "aws_cognito_user_pool" "users" { name = "${var.cognito_user_pool_name}" admin_create_user_config { allow_admin_create_user_only = true unused_account_validity_days = 7 } } However, I cannot find a resource that creates AWS Cognito user. It is doable with AWS Cli aws cognito-idp admin-create-user --user-pool-id <value> --username <value> Any idea on how to do it with Terraform?
In order to automate things, it can be done in terraform using a null_resource and local_exec provisioner to execute your aws cli command e.g. resource "aws_cognito_user_pool" "pool" { name = "mypool" } resource "null_resource" "cognito_user" { triggers = { user_pool_id = aws_cognito_user_pool.pool.id } provisioner "local-exec" { command = "aws cognito-idp admin-create-user --user-pool-id ${aws_cognito_user_pool.pool.id} --username myuser" } }
Terraform
55,087,715
12
I have set-up a terraform project with a remote back-end on GCP. Now when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file in \home\mike\.config\gcloud\credentials.json In my terraform project I have the following data referring to the remote state: data "terraform_remote_state" "project_id" { backend = "gcs" workspace = "${terraform.workspace}" config { bucket = "${var.bucket_name}" prefix = "${var.prefix_project}" } } and I specify the cloud provider with a the details of my credentials file. provider "google" { version = "~> 1.16" project = "${data.terraform_remote_state.project_id.project_id}" region = "${var.region}" credentials = "${file(var.credentials)}" } However, this runs into data.terraform_remote_state.project_id: data.terraform_remote_state.project_id: error initializing backend: storage.NewClient() failed: dialing: google: could not find default credentials. if I add export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json I do get it to run as desired. My issue is that I would like to specify the credentials in the terraform files as I am running the terraform commands in an automated way from a python script where I cannot set the environment variables. How can I let terraform know where the credentials are without setting the env variable?
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login. Error message in my case: Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information. It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
Terraform
55,068,363
12
I can't seem to get an SSL certificate from ACM working on API-Gateway, Route53, using terraform. There seems to be an interdependency problem. data "aws_route53_zone" "root_domain" { name = "${var.route53_root_domain_name}" private_zone = false } # The domain name to use with api-gateway resource "aws_api_gateway_domain_name" "domain_name" { domain_name = "${var.route53_sub_domain_name}" certificate_arn = "${aws_acm_certificate.cert.arn}" } resource "aws_route53_record" "sub_domain" { name = "${var.route53_sub_domain_name}" type = "A" zone_id = "${data.aws_route53_zone.root_domain.zone_id}" alias { name = "${aws_api_gateway_domain_name.domain_name.cloudfront_domain_name}" zone_id = "${aws_api_gateway_domain_name.domain_name.cloudfront_zone_id}" evaluate_target_health = false } } resource "aws_acm_certificate" "cert" { # api-gateway / cloudfront certificates need to use the us-east-1 region provider = "aws.cloudfront-acm-certs" domain_name = "${var.route53_sub_domain_name}" validation_method = "DNS" lifecycle { create_before_destroy = true } } resource "aws_route53_record" "cert_validation" { name = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_name}" type = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_type}" zone_id = "${aws_route53_record.sub_domain.zone_id}" records = ["${aws_acm_certificate.cert.domain_validation_options.0.resource_record_value}"] ttl = 60 } resource "aws_acm_certificate_validation" "cert" { # api-gateway / cloudfront certificates need to use the us-east-1 region provider = "aws.cloudfront-acm-certs" certificate_arn = "${aws_acm_certificate.cert.arn}" validation_record_fqdns = ["${aws_route53_record.cert_validation.fqdn}"] } The problem appears to be that: aws_api_gateway_domain_name requires aws_acm_certificate aws_acm_certificate has to be validated, so step 3 aws_route53_record.cert_validation requires aws_route53_record.sub_domain aws_route53_record.subdomain requires aws_api_gateway_domain_name Go to 1 Everytime I try to use the configuration given, I get the following error: aws_api_gateway_domain_name.domain_name: Error creating API Gateway Domain Name: BadRequestException: Unable to associate certificate arn:aws:acm:us-east-1:yyyy:certificate/zzzz with CloudFront. This error may prevent the domain name audit-log.taspli.com from being used in API Gateway for up to 40 minutes. Please ensure the certificate domain name matches the requested domain name, and that this user has permission to call cloudfront:UpdateDistribution on '*' resources. status code: 400, request id: xxxx
I seem to have fixed the problem by adding the certificate validation records to the root domain instead of the sub domain. Therefore breaking the cyclic dependency. The problem appears to be that the sub domain can't be created without the certificate and the certificate can't be validated without the sub domain. So the situation is stuck and unresolvable. You could manually create the sub domain, but then whats the point in automation if you have to make manual efforts to solve problems. So I tried adding the cert validation records to the root. Suddenly it starts to work, because the root domain is something that is created externally to the project. A sort of global infrastructure project which can be handled externally. Then your individual projects can hang off of that infrastructure on a case-by-case basis. Here is the terraform configuration which worked: data "aws_route53_zone" "root_domain" { name = "${var.route53_root_domain_name}" private_zone = false } # The domain name to use with api-gateway resource "aws_api_gateway_domain_name" "domain_name" { domain_name = "${var.route53_sub_domain_name}" certificate_arn = "${aws_acm_certificate.cert.arn}" } resource "aws_route53_record" "sub_domain" { name = "${var.route53_sub_domain_name}" type = "A" zone_id = "${data.aws_route53_zone.root_domain.zone_id}" alias { name = "${aws_api_gateway_domain_name.domain_name.cloudfront_domain_name}" zone_id = "${aws_api_gateway_domain_name.domain_name.cloudfront_zone_id}" evaluate_target_health = false } } resource "aws_acm_certificate" "cert" { # api-gateway / cloudfront certificates need to use the us-east-1 region provider = "aws.cloudfront-acm-certs" domain_name = "${var.route53_sub_domain_name}" validation_method = "DNS" } resource "aws_route53_record" "cert_validation" { name = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_name}" type = "${aws_acm_certificate.cert.domain_validation_options.0.resource_record_type}" zone_id = "${data.aws_route53_zone.root_domain.zone_id}" records = ["${aws_acm_certificate.cert.domain_validation_options.0.resource_record_value}"] ttl = 60 } resource "aws_acm_certificate_validation" "cert" { # api-gateway / cloudfront certificates need to use the us-east-1 region provider = "aws.cloudfront-acm-certs" certificate_arn = "${aws_acm_certificate.cert.arn}" validation_record_fqdns = ["${aws_route53_record.cert_validation.fqdn}"] timeouts { create = "45m" } }
Terraform
55,031,167
12
I am testing out some Terraform code to create a Kubernetes cluster so I chose the smallest/cheapest VM resource "azurerm_kubernetes_cluster" "k8s" { name = "${var.cluster_name}" location = "${azurerm_resource_group.resource_group.location}" resource_group_name = "${azurerm_resource_group.resource_group.name}" dns_prefix = "${var.dns_prefix}" agent_pool_profile { name = "agentpool" count = "${var.agent_count}" vm_size = "Standard_B1s" os_type = "Linux" os_disk_size_gb = "${var.agent_disk_size}" } service_principal { client_id = "${var.client_id}" client_secret = "${var.client_secret}" } } However, when I terraform apply I get this error message back from azure: "The VM SKU chosen for this cluster Standard_B1s does not have enough CPU/memory to run as an AKS node." How do I list the valid VM SKUs for AKS nodes and sort them by cost?
You need to select an instance size with at least 3.5 GB of memory. Read "A note on node size" from this blog. You can list VM sizes and prices on the Azure pricing site; currently the cheapest that qualifies is Standard_B2s with 4 GB of RAM. You can also sort by cost directly in the Azure portal.
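As a hedged example of listing candidate sizes from the CLI (the size list does not include pricing, so cost still has to come from the pricing page or the portal; the region and memory threshold below are arbitrary):

# List all VM sizes available in a region
az vm list-sizes --location westeurope --output table

# Only sizes with at least 4 GB of memory
az vm list-sizes --location westeurope \
  --query "[?memoryInMb >= \`4096\`].{name:name, cores:numberOfCores, memoryInMb:memoryInMb}" \
  --output table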
Terraform
54,876,474
12
So, i've got Aurora MySql cluster with one RDS MySql instance provisioned. The obstacle occurs with the AWS underlying API allowing only for 1 logical DB to be created. Thus, I was wondering if any of you already had experience with such deployment coz I am running away from having to use Mysql client CLI for this step, would really like to automate it if possible. Any ideas?
Terraform has a Mysql provider https://www.terraform.io/docs/providers/mysql/index.html: # Configure the MySQL provider provider "mysql" { endpoint = "my-database.example.com:3306" username = "app-user" password = "app-password" } # Create a Database resource "mysql_database" "app" { name = "my_awesome_app" } So you can create your AWS db cluster/instance and then use the mysql provider to create another database: # Create a database server resource "aws_db_instance" "default" { engine = "mysql" engine_version = "5.6.17" instance_class = "db.t1.micro" name = "initial_db" username = "rootuser" password = "rootpasswd" # etc, etc; see aws_db_instance docs for more } # Configure the MySQL provider based on the outcome of # creating the aws_db_instance. provider "mysql" { endpoint = "${aws_db_instance.default.endpoint}" username = "${aws_db_instance.default.username}" password = "${aws_db_instance.default.password}" } # Create the second database beside "initial_db" # using the aws_db_instance resource above. resource "mysql_database" "app" { name = "another_db" }
Terraform
52,542,244
12
I've got a very simple piece of Terraform code: provider "aws" { region = "eu-west-1" } module ec2 { source = "./ec2_instance" name = "EC2 Instance 1" } where the module is: variable "name" { default = "Default Name from ec2_instance.tf" } resource "aws_instance" "example" { ami = "ami-e5083683" instance_type = "t2.nano" subnet_id = "subnet-3e976259" associate_public_ip_address = true security_groups = [ "sg-7310e10b" ] tags { Name = "${var.name}" } } When I first run it I get this output: security_groups.#: "" => "1" security_groups.1642973399: "" => "sg-7310e10b" However, the next time I try a plan I get: security_groups.#: "0" => "1" (forces new resource) security_groups.1642973399: "" => "sg-7310e10b" (forces new resource) What gives?!
You are incorrectly assigning a vpc_security_group_id into security_groups, instead of into vpc_security_group_ids. Change security_groups = [ "sg-7310e10b" ] to vpc_security_group_ids = [ "sg-7310e10b" ] and everything will be ok.
Terraform
51,496,944
12
AWS ALB supports rules based on matching both host and path conditions in the same rule. You can also create rules that combine host-based routing and path-based routing. I've checked the console and the UI does indeed allow for selecting host and path conditions in the same rule. Terraform aws_alb_listener_rule seems to support host OR path conditions. Must be one of path-pattern for path based routing or host-header for host based routing. Emphasis mine Is there a way to Terraform an ALB rule that only triggers when both the request hostname and path match some criteria?
You can specify two conditions, which results in an AND of the two conditions: resource "aws_alb_listener_rule" "host_header_rule" { condition { field = "host-header" values = ["some.host.name"] } condition { field = "path-pattern" values = ["/some-path/*"] } # etc. }
Terraform
46,304,015
12
I am trying to provision some AWS resources, specifically an API Gateway which is connected to a Lambda. I am using Terraform v0.8.8. I have a module which provisions the Lambda and returns the lambda function ARN as an output, which I then provide as a parameter to the following API Gateway provisioning code (which is based on the example in the TF docs): provider "aws" { access_key = "${var.access_key}" secret_key = "${var.secret_key}" region = "${var.region}" } # Variables variable "myregion" { default = "eu-west-2" } variable "accountId" { default = "" } variable "lambdaArn" { default = "" } variable "stageName" { default = "lab" } # API Gateway resource "aws_api_gateway_rest_api" "api" { name = "myapi" } resource "aws_api_gateway_method" "method" { rest_api_id = "${aws_api_gateway_rest_api.api.id}" resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}" http_method = "GET" authorization = "NONE" } resource "aws_api_gateway_integration" "integration" { rest_api_id = "${aws_api_gateway_rest_api.api.id}" resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}" http_method = "${aws_api_gateway_method.method.http_method}" integration_http_method = "POST" type = "AWS" uri = "arn:aws:apigateway:${var.myregion}:lambda:path/2015-03-31/functions/${var.lambdaArn}/invocations" } # Lambda resource "aws_lambda_permission" "apigw_lambda" { statement_id = "AllowExecutionFromAPIGateway" action = "lambda:InvokeFunction" function_name = "${var.lambdaArn}" principal = "apigateway.amazonaws.com" source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}/resourcepath/subresourcepath" } resource "aws_api_gateway_deployment" "deployment" { rest_api_id = "${aws_api_gateway_rest_api.api.id}" stage_name = "${var.stageName}" } When I run the above from scratch (i.e. when none of the resources exist) I get the following error: Error applying plan: 1 error(s) occurred: * aws_api_gateway_deployment.deployment: Error creating API Gateway Deployment: BadRequestException: No integration defined for method status code: 400, request id: 15604135-03f5-11e7-8321-f5a75dc2b0a3 Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure. If I perform a 2nd TF application it consistently applies successfully, but every time I destroy I then receive the above error upon the first application. This caused me to wonder if there's a dependency that I need to explicitly declare somewhere, I discovered #7486, which describes a similar pattern (although relating to an aws_api_gateway_integration_response rather than an aws_api_gateway_deployment). I tried manually adding an explicit dependency from the aws_api_gateway_deployment to the aws_api_gateway_integration but this had no effect. Grateful for any thoughts, including whether this may indeed be a TF bug in which case I will raise it in the issue tracker. I thought I'd check with the community before doing so in case I'm missing something obvious. Many thanks, Edd P.S. I've asked this question on the Terraform user group but this seems to get very little in the way of responses, I'm yet to figure out the cause of the issue hence now asking here.
You are right about the explicit dependency declaration. Normally Terraform would be able to figure out the relationships and schedule create/update/delete operations accordingly to that - this is mostly possible because of the interpolation mechanisms under the hood (${resource_type.ref_name.attribute}). You can display the relationships affecting this in a graph via terraform graph. Unfortunately in this specific case there's no direct relationship between API Gateway Deployments and Integrations - meaning the API interface for managing API Gateway resources doesn't require you to reference integration ID or anything like that to create deployment and the api_gateway_deployment resource in turn doesn't require that either. The documentation for aws_api_gateway_deployment does mention this caveat at the top of the page. Admittedly the Deployment not only requires the method to exist, but integration too. Here's how you can modify your code to get around it: resource "aws_api_gateway_deployment" "deployment" { rest_api_id = "${aws_api_gateway_rest_api.api.id}" stage_name = "${var.stageName}" depends_on = ["aws_api_gateway_method.method", "aws_api_gateway_integration.integration"] } Theoretically the "aws_api_gateway_method.method" is redundant since the integration already references the method in the config: http_method = "${aws_api_gateway_method.method.http_method}" so it will be scheduled for creation/update prior to the integration either way, but if you were to change that to something like http_method = "GET" then it would become necessary. I have submitted PR to update the docs accordingly.
Terraform
42,760,387
12
Let's say that I have a public hosted zone names example.com.. I use the following piece of Terraform code to dynamically fetch the hosted zone id based on the name as per the docs. data "aws_route53_zone" "main" { name = "example.com." # Notice the dot!!! private_zone = false } During terraform plan it comes up with this error: Error refreshing state: 1 error(s) occurred: * data.aws_route53_zone.main: no matching Route53Zone found Is there a bug that I should report or am I missing something?
The aws_route53_zone data source will list all the hosted zones in the account that Terraform has permissions to view. If you are trying to reference a zone in another account then you can do this by creating a role/user in the account with the zone that has permissions to list all the zones (route53:ListHostedZones*,route53:GetHostedZone*) and then having a second "provider" be used for this data source. So you might have something like this: provider "aws" { # ... access keys etc/assume role block } # DNS account provider "aws" { alias = "dns_zones" # ... access keys etc/assume role block } data "aws_route53_zone" "main" { provider = "aws.dns_zones" name = "example.com." # Notice the dot!!! private_zone = false } resource "aws_route53_record" "www" { zone_id = "${data.aws_route53_zone.main.zone_id}" name = "www.${data.aws_route53_zone.main.name}" ... }
Terraform
41,631,966
12
I'm facing a choice terraform of gcloud deployment manager. Both tools provide similar functionality and unfortunately lacks all resources. For example: gcloud can create service account (terraform cannot) terraform can manage DNS record set (gcloud cannot) and many others ... Questions: Can you recommend one tool over the other? What do you think, which tool will have a richer set of available resources in long run? Which solution are you using in your projects?
Someone may say this is not a question you should ask on Stack Overflow, but I will answer anyway. It is possible to combine multiple tools. The primary tool you should run is Terraform: use Terraform to manage all resources it supports natively, and use the external provider to invoke gcloud (or anything else) for the rest. It will not always be elegant, but it gets the work done. In practice I use the same approach to invoke the AWS CLI via the external provider.
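As one possible sketch of the glue (not from the original answer): instead of the external data source, a null_resource with a local-exec provisioner can shell out to gcloud for write operations Terraform does not cover, such as creating a service account. The account ID and project below are made up.

resource "null_resource" "service_account" {
  provisioner "local-exec" {
    command = "gcloud iam service-accounts create my-app-sa --display-name 'My App SA' --project my-gcp-project"
  }
}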
Terraform
41,040,306
12
My question is similar to this git hub post, but unfortunately it is unsolved: https://github.com/hashicorp/terraform/issues/550 I want a simple way to give sudo privileges to the commands run in the provisioner "remote-exec" { } block of my terraform scripts. I am coming from an ansible background that has the sudo: yes option that allows any commands ansible runs to run commands with sudo privileges when using the --ask-sudo-pass optional in my ansible-playbook run commands. I would like to do something like that in the provisioner "remote-exec" block of my terraform script. Here is the provisioner "remote-exec" block I want to run: provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl" ] } When I run this in my terraform apply I see the following lines appear in the output of this command: openstack_compute_instance_v2.test.0 (remote-exec): [sudo] password for myUserName: openstack_compute_instance_v2.test.1 (remote-exec): [sudo] password for myUserName: openstack_compute_instance_v2.test.2 (remote-exec): [sudo] password for myUserName: Then it just gives me an infinite number of these: openstack_compute_instance_v2.test.0: Still creating... openstack_compute_instance_v2.test.1: Still creating... openstack_compute_instance_v2.test.2: Still creating... So how do I fix this and let terraform run sudo commands? Note: The connection for my provisioner "remote-exec" block cannot be root, so even though that would be a simple solution its not what I can use.
The answer was to use the following syntax in my first sudo command: "echo yourPW | sudo -S someCommand" This bypasses the sudo password prompt and enters the password directly into the command. I already had my sudo password as a variable "${var.pw}" so running my sudo commands was the simple matter of changing my first command to: provisioner "remote-exec" { inline = [ "echo ${var.pw} | sudo -S apt-get update", "sudo apt-get install -y curl" ] }
Terraform
37,847,273
12
I'm trying to create an S3 bucket using Terraform, but keep getting Access Denied errors. I have the following Terraform code: resource "aws_s3_bucket" "prod_media" { bucket = var.prod_media_bucket acl = "public-read" } resource "aws_s3_bucket_cors_configuration" "prod_media" { bucket = aws_s3_bucket.prod_media.id cors_rule { allowed_headers = ["*"] allowed_methods = ["GET", "HEAD"] allowed_origins = ["*"] expose_headers = ["ETag"] max_age_seconds = 3000 } } resource "aws_s3_bucket_acl" "prod_media" { bucket = aws_s3_bucket.prod_media.id acl = "public-read" } resource "aws_iam_user" "prod_media_bucket" { name = "prod-media-bucket" } resource "aws_s3_bucket_policy" "prod_media_bucket" { bucket = aws_s3_bucket.prod_media.id policy = jsonencode({ Version = "2012-10-17" Statement = [ { Principal = "*" Action = [ "s3:*", ] Effect = "Allow" Resource = [ "arn:aws:s3:::${var.prod_media_bucket}", "arn:aws:s3:::${var.prod_media_bucket}/*" ] }, { Sid = "PublicReadGetObject" Principal = "*" Action = [ "s3:GetObject", ] Effect = "Allow" Resource = [ "arn:aws:s3:::${var.prod_media_bucket}", "arn:aws:s3:::${var.prod_media_bucket}/*" ] }, ] }) } resource "aws_iam_user_policy" "prod_media_bucket" { user = aws_iam_user.prod_media_bucket.name policy = aws_s3_bucket_policy.prod_media_bucket.id } resource "aws_iam_access_key" "prod_media_bucket" { user = aws_iam_user.prod_media_bucket.name } Whenever I run terraform apply I get the following error: ╷ │ Error: error creating S3 bucket ACL for prod-media-8675309: AccessDenied: Access Denied │ status code: 403, request id: XNW2R0KWFYB3KB9R, host id: CuBMdZSaJJgu+0Rprzlptt7oRsjMxBNNHJPhFq98ROGC9l9BUmfmv5YxYZuxf/V3GJBoiGJKJkg= │ │ with aws_s3_bucket_acl.prod_media, │ on s3.tf line 18, in resource "aws_s3_bucket_acl" "prod_media": │ 18: resource "aws_s3_bucket_acl" "prod_media" { │ ╵ ╷ │ Error: Error putting S3 policy: AccessDenied: Access Denied │ status code: 403, request id: XNW60T7SQXW1Y4SV, host id: AyGS46L37yIcI4JwddrjHo4GRF7T9JrnfD8TGNdUhpO5uLOWBbgY3+c4opoQTFc2jRdHtXwkqO8= │ │ with aws_s3_bucket_policy.prod_media_bucket, │ on s3.tf line 28, in resource "aws_s3_bucket_policy" "prod_media_bucket": │ 28: resource "aws_s3_bucket_policy" "prod_media_bucket" { │ The account running the Terraform has Administrator Access to all resources. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] } Please help identify what is causing this error.
There are few issues in your code: acl attribute of aws_s3_bucket is deprecated and shouldn't be used. You don't have aws_s3_bucket_ownership_controls You don't have aws_s3_bucket_public_access_block You are missing relevant depends_on aws_iam_user_policy can't use aws_s3_bucket_policy.prod_media_bucket.id (its not even clear what do you want to accomplish here, so I removed it from the code below). The working code is: resource "aws_s3_bucket" "prod_media" { bucket = var.prod_media_bucket } resource "aws_s3_bucket_cors_configuration" "prod_media" { bucket = aws_s3_bucket.prod_media.id cors_rule { allowed_headers = ["*"] allowed_methods = ["GET", "HEAD"] allowed_origins = ["*"] expose_headers = ["ETag"] max_age_seconds = 3000 } } resource "aws_s3_bucket_acl" "prod_media" { bucket = aws_s3_bucket.prod_media.id acl = "public-read" depends_on = [aws_s3_bucket_ownership_controls.s3_bucket_acl_ownership] } resource "aws_s3_bucket_ownership_controls" "s3_bucket_acl_ownership" { bucket = aws_s3_bucket.prod_media.id rule { object_ownership = "BucketOwnerPreferred" } depends_on = [aws_s3_bucket_public_access_block.example] } resource "aws_iam_user" "prod_media_bucket" { name = "prod-media-bucket" } resource "aws_s3_bucket_public_access_block" "example" { bucket = aws_s3_bucket.prod_media.id block_public_acls = false block_public_policy = false ignore_public_acls = false restrict_public_buckets = false } resource "aws_s3_bucket_policy" "prod_media_bucket" { bucket = aws_s3_bucket.prod_media.id policy = jsonencode({ Version = "2012-10-17" Statement = [ { Principal = "*" Action = [ "s3:*", ] Effect = "Allow" Resource = [ "arn:aws:s3:::${var.prod_media_bucket}", "arn:aws:s3:::${var.prod_media_bucket}/*" ] }, { Sid = "PublicReadGetObject" Principal = "*" Action = [ "s3:GetObject", ] Effect = "Allow" Resource = [ "arn:aws:s3:::${var.prod_media_bucket}", "arn:aws:s3:::${var.prod_media_bucket}/*" ] }, ] }) depends_on = [aws_s3_bucket_public_access_block.example] }
Terraform
76,419,099
11
We have a few terraform configurations for which we use s3 as the backend. We have multiple AWS accounts, one for each of our environments. In all the environments and across multiple region, we have different s3 bucket & dynamodb_table names used which as of now do not follow a valid convention and make it difficult to identify the purpose of buckets from its name. We now want to follow a convention based naming for all the s3 terraform state buckets. For this I will need to migrate the state of existing terraform resources to the new s3 buckets. I am not sure what would be the best way to achieve this without having to run destroy on old state bucket and apply on the new one. A few options that I could think of were as below but I am not sure if they are the right thing to do. Rename the s3 bucket: I understand this is not possible (or is it?) Move the s3 bucket: A solution to renaming is as mentioned here. Not sure if this approach will disturb the setup we have. State migration: The terraform source code is Git controlled but .terraform is not. So, I can make changes to the source code with new bucket, commit it and create a new git tag from this commit. Now when actually doing the migration, I would checkout old git tag being used in my environment, run terraform init, checkout the new git tag and run terraform init again which would ideally ask me if I want to do a migration and do the needful. This process is something similar to what terraform suggests here, but what I am not sure is, will this approach work in the kind of movement that I am expecting to do. P.S.: I assume renaming the DynamoDB table or just using a new one instead of old would work out of the box as I would be making sure that when doing the state migration, I do not have any live terraform runs in progress.
Did you try copying the state file from the old bucket to the new bucket and then changing the S3 bucket in the Terraform backend configuration?
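A hedged sketch of that flow (bucket names, key and region are placeholders; alternatively you can skip the manual copy, change only the backend block, and let a plain terraform init prompt you to migrate the existing state):

# 1. Copy the existing state object into the new, convention-based bucket
aws s3 cp s3://old-tf-state-bucket/env/prod/terraform.tfstate \
          s3://team-prod-use1-tf-state/env/prod/terraform.tfstate

# 2. Update the backend configuration to point at the new bucket and lock table
#    terraform {
#      backend "s3" {
#        bucket         = "team-prod-use1-tf-state"
#        key            = "env/prod/terraform.tfstate"
#        region         = "us-east-1"
#        dynamodb_table = "team-prod-use1-tf-locks"
#      }
#    }

# 3. Re-initialize against the new backend without migrating (state already copied)
terraform init -reconfigure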
Terraform
69,735,414
11
I have a single main.tf at the root and different modules under it for different parts of my Azure cloud e.g. main.tf - apim - firewall - ftp - function The main.tf passes variable down to the various modules e.g. resource group name or a map of tags. During development I have been investigating certain functionality using the portal and I don't have it in terraform yet. e.g. working out how best to add a mock service in the web module If I now want to update a different module (e.g. update firewall rules) terraform will suggest destroying the added service. How can I do terraform plan/apply for just a single module?
You can target only the module by specifying the module namespace as the target argument in your plan and apply commands: terraform plan -target=module.<declared module name> For example, if your module declaration was: module "the_firewall" { source = "${path.root}/firewall" } then the command would be: terraform plan -target=module.the_firewall to only target that module declaration. Note that this should only be used in development/testing scenarios, which it seems you already are focused on according to the question.
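If more than one module needs to be included in the same run, the -target flag can be repeated (module names below are illustrative):

terraform plan  -target=module.the_firewall -target=module.the_ftp
terraform apply -target=module.the_firewall -target=module.the_ftp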
Terraform
68,408,060
11
I have a dynamic block like so: dynamic "origin" { for_each = var.ordered_cache_behaviors content { domain_name = "${origin.value.s3_target}.s3.amazonaws.com" origin_id = "S3-${origin.value.s3_target}" } } My list is defined like so: "ordered_cache_behaviors": [ { "path_pattern": "/my-app*", "s3_target": "${local.default_s3_target}", "ingress": "external" } ] In my dynamic block I want to render the block ONLY if this condition is true origin.value.s3_target !== var.default_s3_target how and where do I add the conditional to my dynamic block? Note, the rendering of the block is controlled by the value of the currently iterated object not on some variable that excludes the for loop altogether. I want to iterate over everything and conditionally exclude some items. So writing it in say Javascript it would look like this: for (origin in ordered_cache_behaviors) { if (origin.s3_target !== default_s3_target) { renderContent(); } else { console.log('Content was skipped!'); } }
The dynamic block for_each argument expects to receive a collection that has one element for each block you want to generate, so the best way to think about your problem is to think about producing a filtered version of var.ordered_cache_behaviors that only contains the elements you want to use to create blocks. The usual way to filter the elements of a collection is a for expression which includes an if clause. For example: dynamic "origin" { for_each = [ for b in var.ordered_cache_behaviors : b if b.s3_target != var.default_s3_target ] content { domain_name = "${origin.value.s3_target}.s3.amazonaws.com" origin_id = "S3-${origin.value.s3_target}" } } The result of that for expression will be a new list containing only the subset of source elements whose s3_target attribute differs from var.default_s3_target, matching the condition in your question. If none of them satisfy that condition then the resulting list would be zero-length and thus Terraform will generate no origin blocks at all.
Terraform
67,644,692
11
I am trying to increase size of my root volume for my ami ami-0d013c5896434b38a - I am using Terraform to provision this. Just to clarify - I have only one instance. And I want to make sure that if I need to increase the disk space, I don't have to destroy the machine first. Elasticity (EC2) is my reason to believe that it's doable. Does anyone know whether this is doable? Yes, I could simply do terraform plan and do a dry-run, but just double-checking.
I'm running Terraform 1.0.1 and would like to change my volume_size from 20gb to 30gb. After run terraform apply [...] # aws_instance.typo3_staging_1 will be updated in-place ~ resource "aws_instance" "staging_1" { id = "i-0eb2f8af6c8ac4125" tags = { "Name" = "Staging 1" "Team" = "DevOps" } # (28 unchanged attributes hidden) ~ root_block_device { tags = {} ~ volume_size = 20 -> 30 # (8 unchanged attributes hidden) } # (4 unchanged blocks hidden) } Plan: 0 to add, 1 to change, 0 to destroy. [...] I see that terraform will not destroy the system. Now a simple 'yes' change the volume. After ~33sec the root_block_device has been changed. A login on the ec2 shows nothing has been changed. df shows the old 20gb size of the root partition. But a simple sudo reboot increased the disk space by 10gb without destoring the current system. All docker containers on that instance runs as expected. Perfect. My Terraform resource config for such aws_instance is: resource "aws_instance" "staging_1" { instance_type = "t3.medium" ebs_optimized = true ami = "ami-001183208be54f75c" key_name = aws_key_pair.master_key.key_name subnet_id = aws_subnet.web_development_private_a.id vpc_security_group_ids = [aws_security_group.ec2_staging.id] root_block_device { volume_size = 30 # in GB <<----- I increased this! volume_type = "gp3" encrypted = true kms_key_id = data.aws_kms_key.customer_master_key.arn } # This is for T3 only (doesn't apply to M5/R5/...) # standard: Baseline of 20% or 30% CPU. Short bursts of 100% CPU are possible, but under a budget. Throttled, if budget is 0. # unlimited: Always 100% CPU possible, but costs are higher, if over burst budget. credit_specification { cpu_credits = "unlimited" } metadata_options { http_endpoint = "enabled" http_tokens = "required" } lifecycle { prevent_destroy = true } tags = { Name = "Staging 1" Team = "DevOps" } volume_tags = { Name = "Staging 1" Team = "DevOps" } }
Terraform
67,210,801
11
I'm trying to call multiple modules from terragrunt. I understand that currently, terragrunt doesn't support multiple sources and we can only call one module at a time. So, I created a main.tf file to frontend multiple modules. # main.tf module "foo" { source = "../modules/vpc" } module "bar" { source = "../modules/s3" } Inside terragrunt calling main.tf as a source, thinking that will call module foo and module bar. # terragrunt.hcl terraform { source = "./main.tf } inputs { } Is this possible using terragrunt? I need to group multiple terraform modules at once.
In short, yes, the two file snippets you've posted would work.

"terragrunt doesn't support multiple sources and we can only call one module at a time."

Longer answer: It's useful to think of the terraform { ... } block in your terragrunt.hcl as a pointer to a "root terraform module". This root module is just any other terraform module, but is special because it is at the root, or the top, of all your terraform config. So terragrunt only supports one root module, but that root module can use as many additional modules as you need. The power terragrunt gives us is the ability to re-use these root modules. In vanilla Terraform you cannot re-use these root modules. In your example the terragrunt file is pointing to a root module in the same directory (./main.tf). This works just fine, but because we use terragrunt to keep things DRY we would normally put this root module in a different directory, perhaps even in a git repo, and reference it appropriately in the terragrunt.hcl file. Here is a quick diagram:

         +-------------------------------+
         |        /my/root-module        |
         |                               |
         |            main.tf            |
         |                               |
         +---------------+---------------+
                         |
+------------------+     |     +----------------+
| /my/modules/vpc  |     |     | /my/modules/s3 |
|                  |     |     |                |
|   module "foo" <-+-----+-----> module "bar"   |
|                  |           |                |
+------------------+           +----------------+

Not shown is the terragrunt.hcl file that would point to /my/root-module; this can be somewhere else on disk or in git.
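A hedged example of what that remote-root-module setup might look like in terragrunt.hcl (the repository URL, path and tag are made up):

terraform {
  source = "git::git@github.com:my-org/terraform-root-modules.git//root-module?ref=v0.1.0"
}

inputs = {
  # values passed to the root module's variables
}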
Terraform
66,362,864
11
I'm trying to upgrade from terraform 0.12 to 0.13. it seems to have no specific problem of syntax when I run terraform 0.13upgrade nothing is changed. only a file version.tf is added +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + } + } + required_version = ">= 0.13" +} and when I run terraform plan I got Error: Could not load plugin Plugin reinitialization required. Please run "terraform init". Plugins are external binaries that Terraform uses to access and manipulate resources. The configuration provided requires plugins which can't be located, don't satisfy the version constraints, or are otherwise incompatible. Terraform automatically discovers provider requirements from your configuration, including providers used in child modules. To see the requirements and constraints, run "terraform providers". 2 problems: - Failed to instantiate provider "registry.terraform.io/-/aws" to obtain schema: unknown provider "registry.terraform.io/-/aws" - Failed to instantiate provider "registry.terraform.io/-/template" to obtain schema: unknown provider "registry.terraform.io/-/template" running terraform providers shows Providers required by configuration: . ├── provider[registry.terraform.io/hashicorp/aws] ├── module.bastion │   ├── provider[registry.terraform.io/hashicorp/template] │   └── provider[registry.terraform.io/hashicorp/aws] └── module.vpc └── provider[registry.terraform.io/hashicorp/aws] >= 2.68.* Providers required by state: provider[registry.terraform.io/-/aws] provider[registry.terraform.io/-/template] So my guess is form some reason I have -/aws instead of hashicorp/aws in my tfstate, however I can't find this specific string at all in the tfstate. I tried: running terraform init terraform init -reconfigure deleting the .terraform folder deleting the ~/.terraform.d folder So I'm running out of ideas on how to solve this problem
I followed the steps here terraform state replace-provider registry.terraform.io/-/template registry.terraform.io/hashicorp/template terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws and it fixed my problem.
Terraform
65,583,605
11
Problem: I want to be able to granularly create/modify a PostgreSQL CloudSQL instance in Google Cloud Platform with Terraform. Currently there is a setting tier = "<instance_type>" Example: Taken from Terraform documentation name = "master-instance" database_version = "POSTGRES_11" region = "us-central1" settings { # Second-generation instance tiers are based on the machine # type. See argument reference below. tier = "db-f1-micro" } } Summary: How Can I modify this to match what I have now? Can I create a custom image to use in GCP? I see there is a way to make a custom image here, but how would I use it in Terraform? Current settings in CloudSQL
The instance tier is the machine type and and for custom machine types you can set the values in that variable like this: db-custom-<CPUs>-<Memory_in_MB> so for example in your case would be: name = "master-instance" database_version = "POSTGRES_11" region = "us-central1" settings { # Second-generation instance tiers are based on the machine # type. See argument reference below. tier = "db-custom-12-61440" } } I replicated it on my project and with this values I was able to create an instance with 12 CPUs and 60 GB memory
Terraform
64,682,873
11
I have a sample map like below and am trying to remove any accounts that have a key2 value matching 'bong'. So the starting map would look like this: sample_map={ account1 = { key1 ="foo" key2 ="bar" } account2 = { key1 ="bing" key2 ="bong" } } And the end result should look like this: new_map={ account1 = { key1 ="foo" key2 ="bar" } } I've tried manipulating the following for loop but it only works if var.exclude matches the label (not a key). new_map = { for k, v in var.sample_map : k => v if ! contains(var.exclude, k) }
You were almost there, if I understand correctly. It should be: contains(values(v), var.exclude) The working example is below: variable "sample_map" { default ={ account1 = { key1 ="foo" key2 ="bar" } account2 = { key1 ="bing" key2 ="bong" } } } variable "exclude" { default = "bong" } output "test" { value = { for k, v in var.sample_map: k => v if ! contains(values(v), var.exclude) } } Which gives: test = { "account1" = { "key1" = "foo" "key2" = "bar" } }
Terraform
63,463,671
11
I am using Terraform for most of my infrastructure, but at the same time I'm using the serverless framework to define some Lambda functions. Serverless uses CloudFormation under the hood where I need access to some ARNs for resources created by Terraform. My idea was to create a CloudFormation stack in Terraform and export all of the value that I need, but it complains that it cannot create a stack without any resources. I don't want to define any resources in CloudFormation, only the outputs, so I though maybe there is a way to define some dummy resource, but I couldn't find any. Is there a way to work around this issue? If not, I'm also open to other suggestions for getting parameters passed from Terraform to CloudFormation.
You can use AWS::CloudFormation::WaitConditionHandle for this. Example: Resources: NullResource: Type: AWS::CloudFormation::WaitConditionHandle
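A hedged sketch of how that might look from the Terraform side, exporting a Terraform-managed ARN through a CloudFormation stack whose only resource is the wait condition handle (the stack name, output name and referenced aws_sqs_queue resource are made up):

resource "aws_cloudformation_stack" "terraform_outputs" {
  name = "terraform-outputs"

  template_body = <<-EOT
    Resources:
      NullResource:
        Type: AWS::CloudFormation::WaitConditionHandle
    Outputs:
      MyQueueArn:
        Value: "${aws_sqs_queue.my_queue.arn}"
        Export:
          Name: my-queue-arn
  EOT
}

The serverless framework side can then read the value with Fn::ImportValue, or with its ${cf:terraform-outputs.MyQueueArn} variable syntax.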
Terraform
62,990,653
11
I have seen several links, but I have to see an example. I have: resource "aws_iam_role" "role" { name = "role" assume_role_policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1590217939125", "Action": "s3:*", "Effect": "Allow", "Resource": "arn:aws:s3:::wwe" }, { "Sid": "Stmt1590217939125", "Action": "s3:*", "Effect": "Allow", "Resource": "arn:aws:s3:::wwe/*" }, { "Sid": "Stmt1577967806846", "Action": [ "secretsmanager:DescribeSecret", "secretsmanager:GetRandomPassword", "secretsmanager:GetResourcePolicy", "secretsmanager:GetSecretValue", "secretsmanager:ListSecretVersionIds", "secretsmanager:ListSecrets" ], "Effect": "Allow", "Resource": "*" } ] } EOF tags = { Name = wwe Environment = STAGE } } When I am making, terraform apply I see this: # aws_iam_role.role will be created + resource "aws_iam_role" "role" { + arn = (known after apply) + assume_role_policy = jsonencode( { + Statement = [ + { + Action = "s3:*" + Effect = "Allow" + Resource = "arn:aws:s3:::wwe" + Sid = "Stmt1590217939125" }, + { + Action = "s3:*" + Effect = "Allow" + Resource = "arn:aws:s3:::wwe/*" + Sid = "Stmt1590217939125" }, + { + Action = [ + "secretsmanager:DescribeSecret", + "secretsmanager:GetRandomPassword", + "secretsmanager:GetResourcePolicy", + "secretsmanager:GetSecretValue", + "secretsmanager:ListSecretVersionIds", + "secretsmanager:ListSecrets", ] + Effect = "Allow" + Resource = "*" + Sid = "Stmt1577967806846" }, ] + Version = "2012-10-17" } ) + create_date = (known after apply) + force_detach_policies = false + id = (known after apply) + max_session_duration = 3600 + name = "role" + path = "/" + tags = { + "Environment" = "STAGE" + "Name" = "wwe" } + unique_id = (known after apply) } After, when I am writing yes, I see: Error: Error creating IAM Role role: MalformedPolicyDocument: Has prohibited field Resource status code: 400 Where, I have an error ? Please don't post links, to the same questions. I don't understand, where I have an error, Could You please write an example, where I have an error, If it possible. Thanks for Your attention.
One issue is that you have two statements with the same Sid: Stmt1590217939125. Sids must be unique. From the docs: In IAM, the Sid value must be unique within a JSON policy. The second issue is that assume_role_policy is for a trust policy. Trust policies do not have Resource. They have different form. For instance: assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } To add your policies to the role, you have to use aws_iam_role_policy_attachment. For example, you could do: resource "aws_iam_policy" "policy" { name = "my-role" description = "My policy" policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1590217939128", "Action": "s3:*", "Effect": "Allow", "Resource": "arn:aws:s3:::wwe" }, { "Sid": "Stmt1590217939125", "Action": "s3:*", "Effect": "Allow", "Resource": "arn:aws:s3:::wwe/*" }, { "Sid": "Stmt1577967806846", "Action": [ "secretsmanager:DescribeSecret", "secretsmanager:GetRandomPassword", "secretsmanager:GetResourcePolicy", "secretsmanager:GetSecretValue", "secretsmanager:ListSecretVersionIds", "secretsmanager:ListSecrets" ], "Effect": "Allow", "Resource": "*" } ] } EOF } resource "aws_iam_role_policy_attachment" "test-attach" { role = "${aws_iam_role.role.name}" policy_arn = "${aws_iam_policy.policy.arn}" }
Terraform
61,971,160
11
I'm trying to import an existing resources into the terraform state. I used the following: terraform import azurerm_resource_group.main_rg /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-cm-main The resource group exist in the subscription with the name and ID. but when I run the command, I get this error: Error: Cannot import non-existent remote object do I need to do anything special in my script before I ran this command?
I've seen this problem as well. For me the fix was to set the correct subscription in the Azure CLI; for some reason Terraform was trying to find the resource via the az CLI in the wrong subscription. az account list -o table az account set -s <subscription-name-or-id>
Terraform
60,723,765
11
I want to deploy my terraform infrastructure with an Azure DevOps pipeline, but I'm running into a problem with the storage account firewall. Here an example for a storage account: resource "azurerm_storage_account" "storage_account" { name = "mystorageaccount" resource_group_name = "myresourcegroup" ... network_rules { default_action = "Deny" bypass = ["AzureServices", "Logging"] ip_rules = ["192.1.1.1"] } } The initial creation of the storage account is successful, but because of the firewall rule all further actions, for example adding a container, fail with a not authorized exception. Unfortunately adding a bypass rule for "AzureServices" does not work. The reason I have to add the firewall rule is because of company security guidelines, so I cannot just remove it. Is there a way to handle storage account firewall rules with azure devops?
For Terraform I would suggest running your own agent pools. The agent pools for production environments should be separate from non-production ones and should be located in separate vNets. Then add a network rule to your Storage Account to allow access from the agent pool subnet. The same applies to most other services when you use Service Endpoints as well. //EDIT: Check some fresh best practices for creating Terraform pipelines.
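A hedged sketch of that network rule, assuming the agent pool subnet already has the Microsoft.Storage service endpoint enabled and is referenced as azurerm_subnet.agents (a made-up name):

resource "azurerm_storage_account" "storage_account" {
  name                = "mystorageaccount"
  resource_group_name = "myresourcegroup"
  # ...

  network_rules {
    default_action             = "Deny"
    bypass                     = ["AzureServices", "Logging"]
    ip_rules                   = ["192.1.1.1"]
    virtual_network_subnet_ids = [azurerm_subnet.agents.id]
  }
}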
Terraform
60,486,835
11
I looked at the documentation of both azurerm_app_service and azurerm_application_insights and I just do not see a way to tie them. Yet on the App Service page in the portal there is a link to Application Insights, currently grayed out: So, how do I enable it with terraform?
You need numerous app settings to get this to work properly as intended. The ones I had to add to get it all working were: "APPINSIGHTS_INSTRUMENTATIONKEY" "APPINSIGHTS_PROFILERFEATURE_VERSION" "APPINSIGHTS_SNAPSHOTFEATURE_VERSION" "APPLICATIONINSIGHTS_CONNECTION_STRING" "ApplicationInsightsAgent_EXTENSION_VERSION" "DiagnosticServices_EXTENSION_VERSION" "InstrumentationEngine_EXTENSION_VERSION" "SnapshotDebugger_EXTENSION_VERSION" "XDT_MicrosoftApplicationInsights_BaseExtensions" "XDT_MicrosoftApplicationInsights_Mode"
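A hedged sketch of wiring those settings up in Terraform (resource names are invented; APPINSIGHTS_INSTRUMENTATIONKEY and APPLICATIONINSIGHTS_CONNECTION_STRING come from the azurerm_application_insights resource, while the remaining values shown are the ones the portal typically sets and may need adjusting for your setup):

resource "azurerm_application_insights" "app_insights" {
  name                = "my-app-insights"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  application_type    = "web"
}

resource "azurerm_app_service" "app" {
  name                = "my-app-service"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan.id

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY"                  = azurerm_application_insights.app_insights.instrumentation_key
    "APPLICATIONINSIGHTS_CONNECTION_STRING"           = azurerm_application_insights.app_insights.connection_string
    "ApplicationInsightsAgent_EXTENSION_VERSION"      = "~2"
    "APPINSIGHTS_PROFILERFEATURE_VERSION"             = "1.0.0"
    "APPINSIGHTS_SNAPSHOTFEATURE_VERSION"             = "1.0.0"
    "DiagnosticServices_EXTENSION_VERSION"            = "~3"
    "InstrumentationEngine_EXTENSION_VERSION"         = "disabled"
    "SnapshotDebugger_EXTENSION_VERSION"              = "disabled"
    "XDT_MicrosoftApplicationInsights_BaseExtensions" = "disabled"
    "XDT_MicrosoftApplicationInsights_Mode"           = "recommended"
  }
}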
Terraform
60,175,600
11
I am trying to serve a static content using AWS API Gateway. When I attempt to invoke the URL, both from the test page and from curl, I get the error: "Execution failed due to configuration error: statusCode should be an integer which defined in request template". This is my configuration on Terraform: resource "aws_api_gateway_rest_api" "raspberry_api" { name = "raspberry_api" } resource "aws_acm_certificate" "raspberry_alexa_mirko_io" { domain_name = "raspberry.alexa.mirko.io" validation_method = "DNS" lifecycle { create_before_destroy = true } } resource "aws_route53_record" "raspberry_alexa_mirko_io_cert_validation" { name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_name type = aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_type zone_id = var.route53_zone_id records = [aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_value] ttl = 60 } resource "aws_route53_record" "raspberry_alexa_mirko_io" { zone_id = var.route53_zone_id name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_name type = "A" alias { name = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.cloudfront_domain_name zone_id = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.cloudfront_zone_id evaluate_target_health = true } } resource "aws_acm_certificate_validation" "raspberry_alexa_mirko_io" { certificate_arn = aws_acm_certificate.raspberry_alexa_mirko_io.arn validation_record_fqdns = [aws_route53_record.raspberry_alexa_mirko_io_cert_validation.fqdn] provider = aws.useast1 } resource "aws_api_gateway_domain_name" "raspberry_alexa_mirko_io" { certificate_arn = aws_acm_certificate_validation.raspberry_alexa_mirko_io.certificate_arn domain_name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_name } resource "aws_api_gateway_base_path_mapping" "raspberry_alexa_mirko_io_base_path_mapping" { api_id = aws_api_gateway_rest_api.raspberry_api.id domain_name = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.domain_name } resource "aws_api_gateway_resource" "home" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id parent_id = aws_api_gateway_rest_api.raspberry_api.root_resource_id path_part = "login" } resource "aws_api_gateway_method" "login" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id resource_id = aws_api_gateway_resource.home.id http_method = "GET" authorization = "NONE" } resource "aws_api_gateway_integration" "integration" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id resource_id = aws_api_gateway_resource.subscribe_raspberry.id http_method = aws_api_gateway_method.subscribe.http_method integration_http_method = "POST" type = "AWS_PROXY" uri = aws_lambda_function.raspberry_lambda.invoke_arn # This was just a failed attempt. 
It did not fix anything request_templates = { "text/html" = "{\"statusCode\": 200}" } } resource "aws_api_gateway_integration" "login_page" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id resource_id = aws_api_gateway_resource.home.id http_method = aws_api_gateway_method.login.http_method type = "MOCK" timeout_milliseconds = 29000 } resource "aws_api_gateway_method_response" "response_200" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id resource_id = aws_api_gateway_resource.home.id http_method = aws_api_gateway_method.login.http_method status_code = "200" } resource "aws_api_gateway_integration_response" "login_page" { rest_api_id = aws_api_gateway_rest_api.raspberry_api.id resource_id = aws_api_gateway_resource.home.id http_method = aws_api_gateway_method.login.http_method status_code = aws_api_gateway_method_response.response_200.status_code response_templates = { "text/html" = data.template_file.login_page.rendered } } resource "aws_api_gateway_deployment" "example" { depends_on = [ aws_api_gateway_integration.login_page ] rest_api_id = aws_api_gateway_rest_api.raspberry_api.id stage_name = "production" } I have followed the instructions as in this blog, with no success.
Just to repost the excellent answer from TheClassic here, the format seems to be: request_templates = { "application/json" = jsonencode( { statusCode = 200 } ) } I had this same problem as well, and it looks like this works.
Terraform
59,911,777
11
I'm using Terraform to launch my cloud environments. It seems that even minor configuration change affects many of the resources behind the scenes. For example, In cases where I create AWS instances - a small change will lead to auto-generation of all the instances: -/+ aws_instance.DC (new resource required) id: "i-075deb0aaa57c2d" => <computed> (forces new resource) <----- How can we avoid that? ami: "ami-01e306baaaa0a6f65" => "ami-01e306baaaa0a6f65" arn: "arn:aws:ec2:ap-southeast-2:857671114786:instance/i-075deb0aaa57c2d" => <computed> associate_public_ip_address: "false" => <computed> availability_zone: "ap-southeast-2a" => <computed> . . My question is related specifically to AWS as the provider: How can we avoid the destruction/creation of resources each time? Maybe a relevant flag in Terraform? Related threads: Terraform > ipv6_address_count: "" => "0" (forces new resource) terraform > forces new resource on security group Edit: Diving inside the plan output it seems that there was a change in one of the resources: security_groups.#: "0" => "1" (forces new resource) security_groups.837544107: "" => "sg-0892062659392afa9" (forces new resource) Question is still relevant from the perspective of how to avoid the re-creation.
Terraform resources only force a new resource if there's no clear upgrade path when modifying a resource to match the new configuration. This is done at the provider level by setting the ForceNew: true flag on the parameter. An example is shown with the ami parameter on the aws_instance resource: Schema: map[string]*schema.Schema{ "ami": { Type: schema.TypeString, Required: true, ForceNew: true, }, This tells Terraform that if the ami parameter is changed then it shouldn't attempt to perform an update but instead destroy the resource and create a new one. You can override the destroy then create behaviour with the create_before_destroy lifecycle configuration block: resource "aws_instance" "example" { # ... lifecycle { create_before_destroy = true } } In the event you changed the ami or some other parameter that can't be updated then Terraform would then create a new instance and then destroy the old one. How you handle zero downtime upgrades of resources can be tricky and is largely determined on what the resource is and how you handle it. There's some more information about that in the official blog. In your very specific use case with it being the security_groups that has changed this is mentioned on the aws_instance resource docs: NOTE: If you are creating Instances in a VPC, use vpc_security_group_ids instead. This is because Terraform's AWS provider and the EC2 API that Terraform is using is backwards compatible with old EC2 Classic AWS accounts that predate VPCs. With those accounts you could create instances outside of VPCs but you couldn't change the security groups of the instance after it was created. If you wanted to change ingress/egress for the instance you needed to work within the group(s) you had attached to the instance already. With VPC based instances AWS allowed users to modify instance security groups without replacing the instance and so exposed a different way of specifying this in the API. If you move to using vpc_security_group_ids instead of security_groups then you will be able to modify these without replacing your instances.
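A hedged sketch of the suggested switch for a VPC-based instance; the AMI and security group ID are taken from the question, the instance type is illustrative:

resource "aws_instance" "DC" {
  ami           = "ami-01e306baaaa0a6f65"
  instance_type = "t2.micro" # illustrative

  # Unlike security_groups, this attribute can be updated in place on VPC instances
  vpc_security_group_ids = ["sg-0892062659392afa9"]

  lifecycle {
    # Only needed if you also want replacements (e.g. AMI changes) to create first, destroy second
    create_before_destroy = true
  }
}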
Terraform
59,309,243
11
I have terraform directory structure as below: terraform/ main.tf modules outputs.tf provider.tf variables.tf ./modules: compute network resourcegroup ./modules/compute: main.tf outputs.tf variables.tf ./modules/network: main.tf outputs.tf variables.tf ./modules/resourcegroup: main.tf outputs.tf variables.tf resourcegroup module config files as below: Purpose: In this module, I am referencing an existing resource group which I would like to utilized to create a Virtual machine and its associated objects. main.tf data "azurerm_resource_group" "tf-rg-external" { name = var.rg_name } variables.tf variable "rg_name" { type = string } network module Purpose: I would like to use resource group from resourcegroup module to be referenced in this module. That way, I define at one place and use it in root and other modules example, compute, app service, aks etc main.tf # Reference existing Virtual Network data "azurerm_virtual_network" "tf-vn" { name = var.vnet_name resource_group_name = module.resource_groups.external_rg_name } # Reference existing subnet data "azurerm_subnet" "tf-sn" { name = var.subnet_name virtual_network_name = data.azurerm_virtual_network.tf-vn.name resource_group_name = module.resource_groups.external_rg_name } variables.tf # Declare env variable variable "vnet_name" { type = string } variable "subnet_name" { type = string } compute module. Purpose: To define all attributes for compute(VM). The idea is, root module will use this module to spin up different VM roles. main.tf module "vm_iis" { source = "Azure/compute/azurerm" location = data.resourcegroup.tf-rg-external.location vnet_subnet_id = data.network.tf-sn.id admin_password = var.admin_password data_sa_type = var.data_sa_type delete_os_disk_on_termination = var.delete_os_disk_on_termination nb_instances = var.nb_instances nb_public_ip = var.nb_public_ip public_ip_address_allocation = var.public_ip_address_allocation resource_group_name = data.resourcegroup.tf-rg-external.name . . . } variables.tf variable "admin_password" { type = string } variable "admin_username" { type = string } variable "boot_diagnostics" { type = bool } variable "boot_diagnostics_sa_type" { type = string }... terraform root module. Purpose: This should utilize modules defined to create a variety of VMs of different sizes and host names main.tf: module "sql_vm" { source = "./modules/compute/" #location = data.resourcegroup.tf-rg-external.location #vnet_subnet_id = data.network.tf-sn.id public_ip_address_allocation = var.public_ip_address_allocation #resource_group_name = data.resourcegroup.tf-rg-external.name storage_account_type = var.storage_account_type vm_hostname = var.vm_hostname } variables.tf: Declares all variables in main.tf file. Note: I have intentionally hard coded the variables in root module main/variable file. This is just get the communication between the modules right. Correct approach to understand and use modules. However, when I run terraform plan in the root module. I get the error below: Error: Reference to undeclared resource on modules/compute/main.tf line 3, in module "vm_iis": 3: location = data.resourcegroup.tf-rg-external.location A data resource "resourcegroup" "tf-rg-external" has not been declared in sql_vm. Error: Reference to undeclared resource on modules/compute/main.tf line 4, in module "vm_iis": 4: vnet_subnet_id = data.network.tf-sn.id A data resource "network" "tf-sn" has not been declared in sql_vm. 
Error: Reference to undeclared resource on modules/compute/main.tf line 22, in module "vm_iis": 22: resource_group_name = data.resourcegroup.tf-rg-external.name A data resource "resourcegroup" "tf-rg-external" has not been declared in sql_vm. What is the issue and how to resolve it? Also, possible to create different (roles) vms by some loop? example sql-vm, iis-vm, testvm, abcvm? What is going to change is their hostnames and vm sizes. ========== Post answer changes ========== I updated the values for subnet, resource group and location in compute/main.tf and terraform/main.tf as like below: location = module.resourcegroup.tf-rg-external-location vnet_subnet_id = module.network.subnet-id resource_group_name = module.resourcegroup.tf-rg-external-name My outputs.tf file in resourcegroup and network modules look like below: outputs.tf of network module output "subnet-id" { value = "data.network.tf-sn.id" } outputs.tf of resourcegroup module output "tf-rg-external-location" { value = data.resourcegroup.tf-rg-external.location } output "tf-rg-external-name" { value = data.resourcegroup.tf-rg-external.name } I'm unfortunately still getting errors like below Error: Unsupported argument on main.tf line 3, in module "sql_vm": 3: location = module.resourcegroup.tf-rg-external-location An argument named "location" is not expected here. Error: Unsupported argument on main.tf line 4, in module "sql_vm": 4: vnet_subnet_id = module.network.subnet-id An argument named "vnet_subnet_id" is not expected here. Error: Unsupported argument on main.tf line 5, in module "sql_vm": 5: resource_group_name = module.resourcegroup.tf-rg-external-name An argument named "resource_group_name" is not expected here. So, it appears that we should not be referencing them in the root module? Also, where their variables should be defined as in root modules variables.tf file as I believe you can override values for a variable of modules in the root module? Forgive me if I am appearing as stupid. I'm trying to understand how it works in real life. After last commit and public repo, error's are as below Error: Reference to undeclared module on main.tf line 3, in module "sql_vm": 3: location = module.resourcegroup.tf-rg-external-location No module call named "resourcegroup" is declared in the root module. Error: Reference to undeclared module on main.tf line 4, in module "sql_vm": 4: vnet_subnet_id = module.network.subnet-id No module call named "network" is declared in the root module. Error: Reference to undeclared module on main.tf line 5, in module "sql_vm": 5: resource_group_name = module.resourcegroup.tf-rg-external-name No module call named "resourcegroup" is declared in the root module. Error: Reference to undeclared module on modules/compute/main.tf line 3, in module "vm_iis": 3: location = module.resourcegroup.tf-rg-external-location No module call named "resourcegroup" is declared in sql_vm. Error: Reference to undeclared module on modules/compute/main.tf line 4, in module "vm_iis": 4: vnet_subnet_id = module.network.subnet-id No module call named "network" is declared in sql_vm. Error: Reference to undeclared module on modules/compute/main.tf line 5, in module "vm_iis": 5: resource_group_name = module.resourcegroup.tf-rg-external-name No module call named "resourcegroup" is declared in sql_vm. 
Error: Reference to undeclared module on modules/network/main.tf line 5, in data "azurerm_virtual_network" "tf-vn": 5: resource_group_name = module.resource_groups.external_rg_name No module call named "resource_groups" is declared in test2. Error: Reference to undeclared resource on modules/resourcegroup/outputs.tf line 2, in output "tf-rg-external-location": 2: value = data.resourcegroup.tf-rg-external.location A data resource "resourcegroup" "tf-rg-external" has not been declared in test1. Error: Reference to undeclared resource on modules/resourcegroup/outputs.tf line 5, in output "tf-rg-external-name": 5: value = data.resourcegroup.tf-rg-external.name A data resource "resourcegroup" "tf-rg-external" has not been declared in test1.
Go through the repo you are working on (https://github.com/ameyaagashe/help_me_cross/tree/d7485d2a3db339723e9c791e592b2f1dbc1f0788) . It makes sense for me now. The problem is, you mix the idea on how to use public modules with your own created modules. In fact, you needn't set any modules to reference other public terraform registry modules. Move all codes in sub-modules (module/compute, module/network, module/resourcegroup) to top folder (<repo_root>/main.tf). such as (codes are not validated, just for reference) data "azurerm_resource_group" "tf-rg-external" { name = var.rg_name } data "azurerm_virtual_network" "tf-vn" { name = var.vnet_name resource_group_name = var.rg_name } # Reference existing subnet data "azurerm_subnet" "tf-sn" { name = var.subnet_name virtual_network_name = data.azurerm_virtual_network.tf-vn.name resource_group_name = var.rg_name } module "sql_vm" { source = "Azure/compute/azurerm" location = data.azurerm_resource_group.tf-rg-external.location vnet_subnet_id = data.azurerm_virtual_network.tf-vn.subnets resource_group_name = data.azurerm_resource_group.tf-rg-external.name admin_password = var.admin_password admin_username = var.admin_username boot_diagnostics = var.boot_diagnostics boot_diagnostics_sa_type = var.boot_diagnostics_sa_type data_disk = var.data_disk data_disk_size_gb = var.data_disk_size_gb data_sa_type = var.data_sa_type delete_os_disk_on_termination = var.delete_os_disk_on_termination enable_accelerated_networking = var.enable_accelerated_networking # flag is_windows_image is required only when you use a custom image to spin up a VM # is_windows_image # flag vm_os_id is required only when you are using custom image # you need to provide id of your custom image # vm_os_id nb_instances = var.nb_instances nb_public_ip = var.nb_public_ip public_ip_address_allocation = var.public_ip_address_allocation storage_account_type = var.storage_account_type vm_hostname = var.vm_hostname vm_os_offer = var.vm_os_offer vm_os_publisher = var.vm_os_publisher # vm_os_simple is to be used is you do not wish to specify offer, publisher and sku # vm_os_simple = UbuntuServer, WindowsServer, RHEL, openSUSE-Leap, CentOS, Debian, CoreOS and SLES vm_os_sku = var.vm_os_sku vm_os_version = var.vm_os_version vm_size = var.vm_size }
Terraform
59,081,428
11
If I create a variable definition like this: variable "aws_ecs_config" { type = object({ cpu = number memory = number ecs_image_address = string }) logs = { type = object({ group = string region = string stream_prefix = string }) } } How can I reuse that definition in multiple places without copy-pasting?
It is not possible to re-use variable declarations in Terraform. If variables in different modules will have the same type, that type must be repeated in each module. Terraform has a structural type system rather than a nominal type system, so types themselves are not named and instead are matched/constrained by their form. When passing values between modules, we can use type constraints to create conventions between a family of related modules, but there is no mechanism to define a type (or declare a variable) in one place and reuse it in other places. Terraform's type constraint mechanism considers any object with at least the attributes in the type constraint to be valid, so it's not necessary to define an exhaustive object type every time. For example, if you define a variable with the following type: object({ name = string }) The following object value is acceptable for that type constraint, because it has a name attribute of the correct type regardless of any other attributes it defines: { name = "foo" other = "bar" } For that reason, it can be better to limit the variable declaration in each module to only the subset of attributes that particular module actually requires, which reduces the coupling between the modules: they only need to be compatible to the extent that their attribute names overlap, and don't need to directly bind to one another.
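As a rough illustration of that convention (module paths and attribute names are hypothetical), two modules can each declare only what they use and still accept the same object from the caller:

# modules/compute/variables.tf
variable "ecs_config" {
  type = object({
    cpu    = number
    memory = number
  })
}

# modules/logging/variables.tf
variable "ecs_config" {
  type = object({
    logs = object({
      group  = string
      region = string
    })
  })
}

# root module
locals {
  ecs_config = {
    cpu    = 256
    memory = 512
    logs   = { group = "app", region = "us-east-1" }
  }
}

module "compute" {
  source     = "./modules/compute"
  ecs_config = local.ecs_config
}

module "logging" {
  source     = "./modules/logging"
  ecs_config = local.ecs_config
}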
Terraform
58,772,935
11
In my current terraform configuration I am using a static JSON file and importing into terraform using the file function to create an AWS IAM policy. Terraform code: resource "aws_iam_policy" "example" { policy = "${file("policy.json")}" } AWS IAM Policy definition in JSON file (policy.json): { "Version": "2012-10-17", "Id": "key-consolepolicy-2", "Statement": [ { "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:root" }, "Action": "kms:*", "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::777788889999:root" ] }, "Action": [ "kms:Decrypt" ], "Resource": "*" }, { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::444455556666:root" ] }, "Action": [ "kms:Decrypt" ], "Resource": "*" } ] } My goal is to use a list of account numbers stored in a terraform variable and use that to dynamically build the aws_iam_policy resource in terraform. My first idea was to try and use the terraform jsonencode function. However, it looks like there might be a way to implement this using the new terraform dynamic expressions foreach loop. The sticking point seems to be appending a variable number of resource blocks in the IAM policy. Pseudo code below: var account_number_list = ["123","456","789"] policy = {"Statement":[]} for each account_number in account_number_list: policy["Statement"].append(policy block with account_number var reference) Any help is appreciated. Best, Andrew
The aws_iam_policy_document data source from aws gives you a way to create json policies all in terraform, without needing to import raw json from a file or from a multiline string. Because you define your policy statements all in terraform, it has the benefit of letting you use looping/filtering on your principals array. In your example, you could do something like: data "aws_iam_policy_document" "example_doc" { statement { sid = "Enable IAM User Permissions" effect = "Allow" actions = [ "kms:*" ] resources = [ "*" ] principals { type = "AWS" identifiers = [ for account_id in account_number_list: account_id ] } } statement { ...other statements... } } resource "aws_iam_policy" "example" { // For terraform >=0.12 policy = data.aws_iam_policy_document.example_doc.json // For terraform <0.12 policy = "${data.aws_iam_policy_document.example_doc.json}" }
Terraform
57,824,936
11
I am new to templates. I am trying to change Terraform modules to flex to as many “nameservers” as needed. How can I iterate through the values of a variable? Right now I am doing: template.tf variable "nameserver" { type = list(string) } nameservers = [ "174.15.22.20", "174.15.12.21" ] nameserver_1 = element(var.nameservers, 0) #nameserver_1=174.15.22.20 nameserver_2 = element(var.nameservers, 1) #nameserver_2=174.15.12.21 user_data.yaml.tpl nameserver ${nameserver_1} nameserver ${nameserver_2} I want to do something like: template.tf vars = { count = length(var.nameserver) for nameserver in nameservers: nameserver_$(count.index)= ${element(var.nameserver, count.index)} } user_data.yaml.tpl for nameserver in nameservers: nameserver ${nameserver_[count.index]} But I am unable to figure out the right way to do this in template.tf and user_data.yaml.tpl. Any help would be appreciated!
From what you've shown of template.tf I'm guessing that vars = { ... } declaration is inside a data "template_file" block. The template_file data source is primarily there for Terraform 0.11 compatibility and it only supports string values for the template variables, but since you are using Terraform 0.12 you can use the new templatefile function instead, which makes this easier by supporting values of any type. From the template name you used I'm guessing that you intend to use this result to assign to user_data, in which case the syntax for doing that in templatefile would look something like this: user_data = templatefile("${path.module}/user_data.yaml.tpl", { nameservers = var.nameservers }) In your user_data.yaml.tpl file: %{ for s in nameservers ~} nameserver ${s} %{ endfor ~} The %{ ... } sequences here are Terraform template syntax. That same syntax is also available directly in the main configuration file, so for a template this small you might prefer to just write the template inline to keep things simpler: user_data = <<-EOT %{ for s in var.nameservers ~} nameserver ${s} %{ endfor ~} EOT The template syntax is the same here, but because this is in the main .tf configuration file rather than in a separate template file we can just refer directly to var.nameservers here, rather than building a separate map of template variables. The name you gave your template file seems to suggest that you are generating YAML, though the template you showed doesn't actually generate valid YAML. If you are intending the result to be YAML, you have some other options in Terraform that might be better depending on your goals: Firstly, JSON is a subset of YAML, so you could ask Terraform to JSON-encode your data instead, and then the YAML parser in your instance (if it is YAML-spec-compliant) should be able to parse it: user_data = jsonencode({ nameservers = var.nameservers }) An advantage of this approach is that you can let Terraform's jsonencode function worry about the JSON syntax, escaping, etc and you can just pass it the data structure you want to represent. Using templates instead might require you to handle quoting or escaping of values if they might contain significant punctuation. Recent versions of Terraform also have a yamlencode function, but at the time of writing it's experimental and the exact details of how it formats its output might change in future releases. I would not recommend to use it as user_data right now because if the syntax details do change in a future version then that would cause your instance to be planned for replacement. In a future version of Terraform that output should be stabilized, once the team has enough feedback from real-world use to be confident that its YAML formatting decisions are acceptable for a broad set of use-cases.
Terraform
57,561,084
11
CloudFormation provides AllowedValues for Parameters, which says that the value of the parameter must come from this list. How can I achieve this with Terraform variables? The list variable type does not provide this functionality. So, in the case where I want my variable to take only one of two possible values, how can I achieve this with Terraform? The CloudFormation snippet that I want to replicate is: "ParameterName": { "Description": "desc", "Type": "String", "Default": true, "AllowedValues": [ "true", "false" ] }
I don't know of an official way, but there's an interesting technique described in a Terraform issue: variable "values_list" { description = "acceptable values" type = "list" default = ["true", "false"] } variable "somevar" { description = "must be true or false" } resource "null_resource" "is_variable_value_valid" { count = "${contains(var.values_list, var.somevar) == true ? 0 : 1}" "ERROR: The somevar value can only be: true or false" = true } Update: Terraform now offers custom validation rules in Terraform 0.13: variable "somevar" { type = string description = "must be true or false" validation { condition = can(regex("^(true|false)$", var.somevar)) error_message = "Must be true or false." } }
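If the allowed values are kept in a list, a contains()-based condition is a possible alternative to the regex (again Terraform 0.13+; the variable name follows the answer):

variable "somevar" {
  type        = string
  description = "must be true or false"

  validation {
    condition     = contains(["true", "false"], var.somevar)
    error_message = "Must be true or false."
  }
}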
Terraform
54,254,524
11
I'm trying to create a CodeBuild project using Terraform, but when I build I'm getting the following error on the DOWNLOAD_SOURCE step: CLIENT_ERROR: repository not found for primary source and source version This project uses a CodeCommit repository as the source. It's odd because all of the links to the repository from the CodeCommit console GUI work fine for this build - I can see the commits, click on the link and get to the CodeCommit repo, etc so the Source setup seems to be fine. The policy used for the build has "codecommit:GitPull" permissions on the repository. Strangely, if I go to the build in the console and uncheck the "Allow AWS CodeBuild to modify this service role so it can be used with this build project" checkbox then Update Sources, the build will work! But I can't find any way to set this from Terraform, and it will default back on if you go back to the Update Sources screen. Here is the Terraform code I'm using to create the build. # IAM role for CodeBuild resource "aws_iam_role" "codebuild_myapp_build_role" { name = "mycompany-codebuild-myapp-build-service-role" description = "Managed by Terraform" path = "/service-role/" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "codebuild.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } # IAM policy for the CodeBuild role resource "aws_iam_policy" "codebuild_myapp_build_policy" { name = "mycompany-codebuild-policy-myapp-build-us-east-1" description = "Managed by Terraform" policy = <<POLICY { "Version": "2012-10-17", "Statement": [ { "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:CompleteLayerUpload", "ecr:GetAuthorizationToken", "ecr:InitiateLayerUpload", "ecr:PutImage", "ecr:UploadLayerPart" ], "Resource": "*", "Effect": "Allow" }, { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "logs:CreateLogStream", "codecommit:GitPull", "logs:PutLogEvents", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build", "arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*", "arn:aws:s3:::codepipeline-us-east-1-*", "arn:aws:codecommit:us-east-1:000000000000:mycompany-devops-us-east-1" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": [ "arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build", "arn:aws:logs:us-east-1:000000000000:log-group:/aws/codebuild/myapp-build:*" ] } ] } POLICY } # attach the policy resource "aws_iam_role_policy_attachment" "codebuild_myapp_build_policy_att" { role = "${aws_iam_role.codebuild_myapp_build_role.name}" policy_arn = "${aws_iam_policy.codebuild_myapp_build_policy.arn}" } # codebuild project resource "aws_codebuild_project" "codebuild_myapp_build" { name = "myapp-build" build_timeout = "60" service_role = "${aws_iam_role.codebuild_myapp_build_role.arn}" artifacts { type = "NO_ARTIFACTS" } environment { compute_type = "BUILD_GENERAL1_SMALL" image = "aws/codebuild/docker:17.09.0" type = "LINUX_CONTAINER" privileged_mode = "true" environment_variable { "name" = "AWS_DEFAULT_REGION" "value" = "us-east-1" } environment_variable { "name" = "AWS_ACCOUNT_ID" "value" = "000000000000" } environment_variable { "name" = "IMAGE_REPO_NAME" "value" = "myapp-build" } environment_variable { "name" = "IMAGE_TAG" "value" = "latest" } environment_variable { "name" = "DOCKERFILE_PATH" "value" = "docker/codebuild/myapp_build_agent" } } source { type = "CODECOMMIT" location 
= "mycompany-devops-us-east-1" git_clone_depth = "1" buildspec = "docker/myapp/myapp_build/buildspec.yml" } tags { Name = "myapp-build" Environment = "${var.env_name}" Region = "${var.aws_region}" ResourceType = "CodeBuild Project" ManagedBy = "Terraform" } }
Your problem is the specification of the source: source { type = "CODECOMMIT" location = "mycompany-devops-us-east-1" Here's the relevant part of the Amazon documentation for the source location, with emphasis added: For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the build spec (for example, https://git-codecommit.region-ID.amazonaws.com/v1/repos/repo-name ). In your case, that is probably something like this, using the 'clone URL' found in the CodeCommit console: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/mycompany-devops-us-east-1 I ran into this while using a private GitHub repository source. In my case I gave the repository URL rather than the clone link, so the problem was very similar: bad: https://github.com/privaterepo/reponame good: https://github.com/privaterepo/reponame.git
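A hedged sketch of the corrected block, optionally using the aws_codecommit_repository data source so the clone URL is not hand-assembled (repository and buildspec names are taken from the question):

data "aws_codecommit_repository" "source" {
  repository_name = "mycompany-devops-us-east-1"
}

resource "aws_codebuild_project" "codebuild_myapp_build" {
  # ... other arguments unchanged ...

  source {
    type            = "CODECOMMIT"
    location        = "${data.aws_codecommit_repository.source.clone_url_http}"
    git_clone_depth = "1"
    buildspec       = "docker/myapp/myapp_build/buildspec.yml"
  }
}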
Terraform
53,785,769
11
I am facing an issue in Terraform where I want to read the details of an existing resource (r1) created via the AWS web console. I use those details in the creation of a new resource (r2) via Terraform. The problem is that Terraform tries to destroy and recreate r1, which is not desired because it will fail. How can I avoid destroying and recreating r1 when I run terraform apply? Here is how I am doing it: main.tf resource "aws_lb" "r1"{ } ... resource "aws_api_gateway_integration" "r2" { type = "HTTP" uri = "${aws_lb.r1.dns_name}}/o/v1/multi/get/m/content" } First I import that resource: terraform import aws_lb.r1 {my_arn} Next I apply: terraform apply Error: aws_lb.r1: Error deleting LB: ResourceInUse: Load balancer 'my_arn' cannot be deleted because it is currently associated with another service
The import statement is meant for taking control over existing resources in your Terraform setup. If your only intention is to derive information on existing resources (outside of your Terraform control), data sources are designed specifically for this need: data "aws_lb" "r1" { name = "lb_foo" arn = "some_specific_arn" #you can use any selector you wish to query the correct LB } resource "aws_api_gateway_integration" "r2" { type = "HTTP" uri = "${data.aws_lb.r1.dns_name}/o/v1/multi/get/m/content" }
Terraform
53,514,465
11
I would like to replace the 3 independent variables (dev_id, prod_id, stage_id) with a single list containing all three values, and iterate over them when building the policy. Is this something Terraform can do? data "aws_iam_policy_document" "iam_policy_document_dynamodb" { statement { effect = "Allow" resources = ["arn:aws:dynamodb:${var.region}:${var.account_id}:table:${var.dynamodb_table_name}"] actions = [ "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem", ] principals { type = "AWS" identifiers = [ "arn:aws:iam::${var.dev_id}:root", "arn:aws:iam::${var.prod_id}:root", "arn:aws:iam::${var.stage_id}:root" ] } } } I looked into loops and interpolation, but it seems that 99% of the time the iteration is done with "count", which only works for the creation of multiple resources (I hope I am not wrong about that). For example, I tried principals { count = "${length(var.list)}" identifiers = ["arn:aws:iam::${var.list[count.index]}"] } but that was unsuccessful. Is there some way of achieving the final goal of replacing those 3 variables with a single list (or map) and iterating over them?
Given you have the list of account ids, have you tried this? var "accounts" { default = ["123", "456", "789"] type = "list" } locals { accounts_arn = "${formatlist("arn:aws:iam::%s", var.accounts)}" } Then in your policy document: principals { type = "AWS" identifiers = ["${local.accounts_arn}"] } I haven't actually tried it, but can't think of a reason it wouldn't work.
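One hedged detail: the question's principals end in :root, so the format string needs to carry that suffix too, for example:

locals {
  accounts_arn = "${formatlist("arn:aws:iam::%s:root", var.accounts)}"
}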
Terraform
52,837,358
11
I have defined the following Terraform module: module "lambda" { source = "../lambda" region = "us-west-1" account = "${var.account}" } How can I take advantage of the module name to set the source parameter with an interpolation? I would like something like: module "lambda" { source = "../${this.name}" region = "us-west-1" account = "${var.account}" }
You can derive the module's own directory name from its path: locals { module = basename(abspath(path.module)) } and then reference it wherever the name is needed: { ... some-id = local.module ... }
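A small hedged sketch of how that local might be used inside a module; the resource and naming scheme are purely illustrative:

locals {
  module_name = basename(abspath(path.module))
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "artifacts-${local.module_name}"
}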
Terraform
52,603,758
11
I have been trying to use the same Terraform stack to deploy resources in multiple Azure subscriptions. I also need to pass parameters between these resources in different subscriptions. I tried to use multiple providers, but that is not supported: Error: provider.azurerm: multiple configurations present; only one configuration is allowed per provider If you have a way or an idea on how to accomplish this, please let me know.
You can use multiple provider configurations by using alias (see the provider documentation). # The default provider configuration provider "azurerm" { subscription_id = "xxxxxxxxxx" } # Additional provider configuration for the second subscription provider "azurerm" { alias = "y" subscription_id = "yyyyyyyyyyy" } And then specify the alternative provider wherever you want to use it: resource "azurerm_resource_group" "network_x" { name = "production" location = "West US" } resource "azurerm_resource_group" "network_y" { provider = "azurerm.y" name = "production" location = "West US" }
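If parts of the stack live in their own modules, the aliased configuration can also be handed to a child module; a sketch with a hypothetical module and output (Terraform 0.12+ syntax):

module "network_y" {
  source = "./modules/network" # hypothetical

  providers = {
    azurerm = azurerm.y # everything in the module targets subscription y
  }
}

# Outputs from that module can feed resources in the default subscription,
# which is one way to pass values between the two subscriptions.
output "y_vnet_id" {
  value = module.network_y.vnet_id # hypothetical output
}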
Terraform
51,714,639
11
I'm creating a bucket using a module; how can I find the ARN for that bucket? First I create the module: module "testbucket" { source = "github.com/tomfa/terraform-sandbox/s3-webfiles-bucket" aws_region = "${var.aws_region}" aws_access_key = "${var.aws_access_key}" aws_secret_key = "${var.aws_secret_key}" bucket_name = "${var.bucket_name}-test" } Then, in the policy, I need access to the ARN: { "Sid": "accessToS3", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "${aws_s3_bucket.${var.bucket_name}.arn}", ] } I am not referencing the ARN correctly, so I get the error: Error: resource 'aws_iam_role_policy.policy_Demo_lambda' config: unknown resource 'aws_s3_bucket.mybukk' referenced in variable aws_s3_bucket.mybukk.arn How can I access the ARN? Thanks
Your module will need to have an outputs.tf file, looking like this: output "bucket_arn" { value = "${aws_s3_bucket.RESOURCE_NAME.arn}" } Please note that you will have to replace RESOURCE_NAME with the name of the terraform S3 bucket resource. For example, if your resource looks like this: resource "aws_s3_bucket" "b" {... then you will need to replace RESOURCE_NAME to b then you can call it in your policy: { "Sid": "accessToS3", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "${module.testbucket.bucket_arn}", ] }
Terraform
51,278,407
11
I have created an EC2 instance on AWS using Terraform. What I want is to add a user at the OS level and provide a particular key to be added to its ~/.ssh/authorized_keys file. The aws_instance documentation does not seem to list this functionality. Is there a way to go about this? Edit: I think a way to do this is via the remote-exec provisioner, but since I have already created my EC2 resource I need a way to force-run it.
Following up on comments and edits, what you are looking for might look like this: resource "aws_instance" "default" { ... provisioner "remote-exec" { inline = [ "sudo useradd someuser" ] connection { type = "ssh" user = "ubuntu" private_key = "${file("yourkey.pem")}" } } provisioner "file" { source = "authorized_keys" destination = "/home/someuser/.ssh/authorized_keys" connection { type = "ssh" user = "ubuntu" private_key = "${file("yourkey.pem")}" } } provisioner "remote-exec" { inline = [ "sudo chown someuser:someuser /home/someuser/.ssh/authorized_keys", "sudo chmod 0600 /home/someuser/.ssh/authorized_keys" ] connection { type = "ssh" user = "ubuntu" private_key = "${file("yourkey.pem")}" } } ... } Create the user Upload your authorized keys file Set the appropriate permissions on the file for the user You could also do this all in one remote-exec depending on how you want to handle setting up the authorized_keys file
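Since the question mentions the instance already exists, one hedged workaround is to hang the same provisioners off a null_resource with a trigger so they can be (re)run without touching the instance (0.12+ syntax; the trigger and user name are illustrative):

resource "null_resource" "add_user" {
  # Re-run whenever the key material changes
  triggers = {
    authorized_keys = filemd5("authorized_keys")
  }

  connection {
    type        = "ssh"
    host        = aws_instance.default.public_ip
    user        = "ubuntu"
    private_key = file("yourkey.pem")
  }

  provisioner "remote-exec" {
    inline = [
      "sudo useradd -m someuser || true",
      "sudo mkdir -p /home/someuser/.ssh",
    ]
  }

  provisioner "file" {
    source      = "authorized_keys"
    destination = "/tmp/authorized_keys"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /tmp/authorized_keys /home/someuser/.ssh/authorized_keys",
      "sudo chown -R someuser:someuser /home/someuser/.ssh",
      "sudo chmod 0600 /home/someuser/.ssh/authorized_keys",
    ]
  }
}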
Terraform
50,947,490
11
Right now I have the following in my main.tf: resource "aws_lambda_function" "terraform_lambda" { filename = "tf_lambda.zip" function_name = "tf_lambda" role = "lambda_basic_execution" handler = "tf_lambda.lambda_handler" source_code_hash = "${base64sha256(file("tf_lambda.zip"))}" runtime = "python3.6" } My directory structure is like so: . |-- main.tf |-- tf_lambda.zip |-- tf_lambda └── tf_lambda.py When I run terraform apply and then, in the console, go to the Lambda that was created, the code section is empty and it invites me to upload a zip file. How do I make sure the code actually gets uploaded?
You may also try this using archive_file, https://www.terraform.io/docs/providers/archive/d/archive_file.html So that when you run "terraform apply" the file will be re-zipped and uploaded. data "archive_file" "zipit" { type = "zip" source_file = "tf_lambda/tf_lambda.py" output_path = "tf_lambda.zip" } resource "aws_lambda_function" "terraform_lambda" { function_name = "tf_lambda" role = "lambda_basic_execution" handler = "tf_lambda.lambda_handler" filename = "tf_lambda.zip" source_code_hash = "${data.archive_file.zipit.output_base64sha256}" runtime = "python3.6" }
Terraform
50,357,651
11
Is it correct to assume that Terraform runs things in the order they are defined in main.tf? I understand that the triggers option can be used to define ordering, but if a block such as data "external" cannot use triggers, how can I define the execution order? For example, I would like things to run in this order: get_my_public_ip -> ec2 -> db -> test_http_status main.tf is as follows: data "external" "get_my_public_ip" { program = ["sh", "scripts/get_my_public_ip.sh"] } module "ec2" { ... } module "db" { ... } data "external" "test_http_status" { program = ["sh", "scripts/test_http_status.sh"] }
I can only provide feedback on the code you provided but one way to ensure the test_status command is run once the DB is ready is to use a depends_on within a null_resource resource "null_resource" "test_status" { depends_on = ["module.db.id"] #or any output variable provisioner "local-exec" { command = "scripts/test_http_status.sh" } } But as @JimB already mentioned terraform isn't procedural so ensuring order isn't possible.
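On newer Terraform (0.13+), the same idea can be expressed with direct references rather than quoted strings; here is a hedged sketch where the module output name is hypothetical:

resource "null_resource" "test_status" {
  # Wait for the entire db module to converge first
  depends_on = [module.db]

  # Re-run the check when the endpoint changes (output name is hypothetical)
  triggers = {
    db_endpoint = module.db.endpoint
  }

  provisioner "local-exec" {
    command = "sh scripts/test_http_status.sh"
  }
}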
Terraform
49,641,484
11
I've been trying to create a terraform script for creating a cognito user pool and identity pool with a linked auth and unauth role, but I can't find a good example of doing this. Here is what I have so far: cognito.tf: resource "aws_cognito_user_pool" "pool" { name = "Sample User Pool" admin_create_user_config { allow_admin_create_user_only = false } /* More stuff here, not included*/ } resource "aws_cognito_user_pool_client" "client" { name = "client" user_pool_id = "${aws_cognito_user_pool.pool.id}" generate_secret = true explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"] } resource "aws_cognito_identity_pool" "main" { identity_pool_name = "SampleIdentityPool" allow_unauthenticated_identities = false cognito_identity_providers { client_id = "${aws_cognito_user_pool_client.id}" provider_name = "" server_side_token_check = true } } So, I want to tack an auth role and an unauth role to this, but I'm still trying to get my head around how to define and link IAM roles in terraform, but here is what I have so far: resource "aws_cognito_identity_pool_roles_attachment" "main" { identity_pool_id = "${aws_cognito_identity_pool.main.id}" roles { "authenticated" = <<EOF { actions = ["sts:AssumeRoleWithWebIdentity"] principals { type = "Federated" identifiers = ["cognito-identity.amazonaws.com"] } condition { test = "StringEquals" variable = "cognito-identity.amazonaws.com:aud" values = ["${aws_cognito_identity_pool.main.id}"] } condition { test = "ForAnyValue:StringLike" variable = "cognito-identity.amazonaws.com:amr" values = ["authenticated"] } } EOF "unauthenticated" = <<EOF { actions = ["sts:AssumeRoleWithWebIdentity"] principals { type = "Federated" identifiers = ["cognito-identity.amazonaws.com"] } condition { test = "StringEquals" variable = "cognito-identity.amazonaws.com:aud" values = ["${aws_cognito_identity_pool.main.id}"] } } EOF } } This however, doesn't work. It creates the pools and client correctly, but doesn't attach anything to auth/unauth roles. I can't figure out what I'm missing, and I can't find any examples of how to do this correctly other than by using the AWS console. Any help on working this out correctly in terraform would be much appreciated!
After messing around with this for a few days, i finally figured it out. I was merely getting confused with "Assume Role Policy" and "Policy". Once I had that sorted out, it worked. Here is (roughly) what I have now. I'll put it here in hopes that it will save someone trying to figure this out for the first time a lot of grief. For User Pool: resource "aws_cognito_user_pool" "pool" { name = "Sample Pool" /* ... Lots more attributes */ } For User Pool Client: resource "aws_cognito_user_pool_client" "client" { name = "client" user_pool_id = aws_cognito_user_pool.pool.id generate_secret = true explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"] } For Identity Pool: resource "aws_cognito_identity_pool" "main" { identity_pool_name = "SampleIdentities" allow_unauthenticated_identities = false cognito_identity_providers { client_id = aws_cognito_user_pool_client.client.id provider_name = aws_cognito_user_pool.pool.endpoint server_side_token_check = true } } Attach Roles to Identity Pool: resource "aws_cognito_identity_pool_roles_attachment" "main" { identity_pool_id = aws_cognito_identity_pool.main.id roles = { authenticated = aws_iam_role.auth_iam_role.arn unauthenticated = aws_iam_role.unauth_iam_role.arn } } And, finally, the roles and policies: resource "aws_iam_role" "auth_iam_role" { name = "auth_iam_role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Federated": "cognito-identity.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_iam_role" "unauth_iam_role" { name = "unauth_iam_role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Federated": "cognito-identity.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_iam_role_policy" "web_iam_unauth_role_policy" { name = "web_iam_unauth_role_policy" role = aws_iam_role.unauth_iam_role.id policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Action": "*", "Effect": "Deny", "Resource": "*" } ] } EOF } Note: Edited for updated terraform language changes that don't require "${...}" around references any more
Terraform
48,451,755
11
Let's say I've used Terraform to build my infrastructure and my tfstate gets deleted for some reason. This means I already have my resources defined in tf files, I just need to re-import everything. Does this have to be a manual process? For example this is how I import an EC2 instance: terraform import aws_instance.web i-123456 If I have to do that for every resource, that's quite painful (might as well delete everything and start over). If I already have my tf files is there a way to just import all the resources that have been defined in them? For example I needed the instance ID in order to import that instance. Can the Terraform import command just read my tf file and find the resource mapped to "aws_instance.web"? In order to do this Terraform would need to have a mapping of that aws instance to the resource in the tf file- this is of course the purpose of the tfstate. But does Terraform have a way of also tagging resources with their resource mappings? So I can do an import against a tf file and terraform just dynamically reads the tf file and finds the physical resources corresponding to the tf resources by unique tags?
No, there's no way to do this natively in Terraform; and even if you scripted a way together - I don't think it'd be very reliable and you wouldn't be able to trust that it selected the right resource (At least not easily). Terraform says multiple times in the documentation that you need to protect your state file; this is why remote stores, such as S3 are encouraged for anything you care about; and also why this is a selling point of Terraform Enterprise.
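For what it's worth, a minimal sketch of the remote-state setup the answer alludes to, assuming an S3 bucket (ideally with versioning enabled) and a DynamoDB lock table that already exist; all names are placeholders:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # placeholder
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"       # placeholder; enables state locking
  }
}

With bucket versioning on, a deleted or corrupted state file can usually be recovered from an earlier object version instead of re-importing everything by hand.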
Terraform
47,613,926
11
I use Terraform to manage resources of Google Cloud Functions. But while the initial deployment of the cloud function worked, further deployments with changed cloud function source code (the source archive sourcecode.zip) were not redeployed when I ran terraform apply after updating the source archive. The storage bucket object gets updated, but this does not trigger an update/redeployment of the cloud function resource. Is this an error in the provider? Is there a way to redeploy a function in Terraform when the code changes? The simplified source code I am using: resource "google_storage_bucket" "cloud_function_source_bucket" { name = "${local.project}-function-bucket" location = local.region uniform_bucket_level_access = true } resource "google_storage_bucket_object" "function_source_archive" { name = "sourcecode.zip" bucket = google_storage_bucket.cloud_function_source_bucket.name source = "./../../../sourcecode.zip" } resource "google_cloudfunctions_function" "test_function" { name = "test_func" runtime = "python39" region = local.region project = local.project available_memory_mb = 256 source_archive_bucket = google_storage_bucket.cloud_function_source_bucket.name source_archive_object = google_storage_bucket_object.function_source_archive.name trigger_http = true entry_point = "trigger_endpoint" service_account_email = google_service_account.function_service_account.email vpc_connector = "projects/${local.project}/locations/${local.region}/connectors/serverless-main" vpc_connector_egress_settings = "ALL_TRAFFIC" ingress_settings = "ALLOW_ALL" }
You can append the MD5 or SHA256 checksum of the zip's content to the bucket object's name. That will trigger recreation of the cloud function whenever the source code changes, via ${data.archive_file.function_src.output_md5}: data "archive_file" "function_src" { type = "zip" source_dir = "SOURCECODE_PATH/sourcecode" output_path = "./SAVING/PATH/sourcecode.zip" } resource "google_storage_bucket_object" "function_source_archive" { name = "sourcecode.${data.archive_file.function_src.output_md5}.zip" bucket = google_storage_bucket.cloud_function_source_bucket.name source = data.archive_file.function_src.output_path } You can read more about the Terraform archive provider here - terraform archive_file
Terraform
71,320,503
10
I am looking for a way to invalidate the CloudFront distribution cache using Terraform. I could not find any information in the docs. Is this possible and if so, how?
There is no built-in support within the aws_cloudfront_distribution or aws_cloudfront_cache_policy resource for cache invalidation. As a last resort, the local-exec provisioner can be used. Typically, from my experience, the cache is invalidated within the CI/CD pipeline using the AWS CLI create-invalidation command. However, if this must be done within Terraform, you can use the local-exec provisioner to run commands on the local machine running Terraform after the resource has been created/updated. We can use this to run the above CLI invalidation command to invalidate the distribution cache. Use the self object to access all of the CloudFront distribution's attributes, including self.id to reference the CloudFront distribution ID for the invalidation. Example: resource "aws_cloudfront_distribution" "s3_distribution" { # ... provisioner "local-exec" { command = "aws cloudfront create-invalidation --distribution-id ${self.id} --paths '...'" } }
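If the invalidation should be driven by content changes rather than by changes to the distribution itself, a null_resource with a trigger is another hedged option (the trigger file and paths are illustrative):

resource "null_resource" "invalidate_cache" {
  # Re-run whenever the deployed artifact changes
  triggers = {
    content_hash = filemd5("dist/index.html")
  }

  provisioner "local-exec" {
    command = "aws cloudfront create-invalidation --distribution-id ${aws_cloudfront_distribution.s3_distribution.id} --paths '/*'"
  }
}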
Terraform
69,794,727
10
I have an API Gateway setup using Terraform. I need to be able to visit the API Gateway on the base path, i.e, without the stage name appended to the base URL. https://{api_id}.execute-api.{region}.amazonaws.com/ <- acceptable https://{api_id}.execute-api.{region}.amazonaws.com/{StageName} <- not acceptable I would do this from the console by creating a default deployment stage like here. I looked but could not find anything in the terraform documentation here for stages I want to be able to do this by creating the default stage, not using a aws_api_gateway_domain_name resource
From the AWS documentation: You can create a $default stage that is served from the base of your API's URL—for example, https://{api_id}.execute-api.{region}.amazonaws.com/. You use this URL to invoke an API stage. The Terraform documentation doesn't mention this, but you can create a stage with $default as the stage name. Calling the base path should then use that stage. resource "aws_apigatewayv2_stage" "example" { api_id = aws_apigatewayv2_api.example.id name = "$default" }
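For context, a minimal HTTP API sketch where the $default stage auto-deploys, so the base URL always serves the latest configuration (names are illustrative):

resource "aws_apigatewayv2_api" "example" {
  name          = "example-http-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.example.id
  name        = "$default"
  auto_deploy = true
}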
Terraform
66,977,149
10
I'm creating 4 VMs through count in azurerm_virtual_machine, but I want to create only one public IP and associate it with the first VM. Is that possible, and if so, how? Below is my template file: resource "azurerm_network_interface" "nics" { count = 4 name = ... location = ... resource_group_name = ... ip_configuration { subnet_id = ... private_ip_address_allocation = "Static" private_ip_address = ... } } resource "azurerm_public_ip" "public_ip" { name = ... location = ... resource_group_name = ... } resource "azurerm_virtual_machine" "vms" { count = 4 network_interface_ids = [element(azurerm_network_interface.nics.*.id, count.index)] } I have already gone through the questions below, but they create multiple public IPs and add them to all VMs. multiple-vms-with-public-ip set-dynamic-ip attach-public-ip
Public IPs are created using azurerm_public_ip: resource "azurerm_public_ip" "public_ip" { name = "acceptanceTestPublicIp1" resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location allocation_method = "Dynamic" } With that in place, you can attach it to a single NIC in your azurerm_network_interface by using a Conditional Expression (note that the first NIC has count.index == 0): resource "azurerm_network_interface" "nics" { count = 4 name = ... location = ... resource_group_name = ... ip_configuration { subnet_id = ... private_ip_address_allocation = "Static" private_ip_address = ... public_ip_address_id = count.index == 0 ? azurerm_public_ip.public_ip.id : null } }
Terraform
65,935,259
10
I am trying to define a terraform output block that returns the ARN of a Lambda function. The Lambda is defined in a sub-module. According to the documentation it seems like the lambda should just have an ARN attribute already: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_function#arn Using that as a source I thought I should be able to do the following: output "lambda_arn" { value = module.aws_lambda_function.arn } This generates the following error: Error: Unsupported attribute on main.tf line 19, in output "lambda_arn": 19: value = module.aws_lambda_function.arn This object does not have an attribute named "arn". I would appreciate any input, thanks.
Documentation is correct. Data source data.aws_lambda_function has arn attribute. However, you are trying to access the arn from a custom module module.aws_lambda_function. To do this you have to define output arn in your module. So in your module you should have something like this: data "aws_lambda_function" "existing" { function_name = "function-to-get" } output "arn" { value = data.aws_lambda_function.existing.arn } Then if you have your module called aws_lambda_function: module "aws_lambda_function" { source = "path-to-module" } you will be able to access the arn: module.aws_lambda_function.arn
Terraform
65,798,783
10
I'm having issues iterating over a list of objects within a template interpreted by the templatefile function. I have the following var: variable "destinations" { description = "A list of EML Channel Destinations." type = list(object({ id = string url = string })) } This is passed in to the templatefile function as destinations. The snippet of template relevant is this: Destinations: %{ for dest in destinations ~} - Id: ${dest.id} Settings: URL: ${dest.url} %{ endfor } When planning Terraform this gives an error of: Error: "template_body" contains an invalid YAML: yaml: line 26: did not find expected key I have tried switching the template code to the following: Destinations: %{ for id, url in destinations ~} - Id: ${id} Settings: URL: ${url} %{ endfor } Which gives a different error: Call to function "templatefile" failed: ../../local-tfmodules/eml/templates/eml.yaml.tmpl:25,20-23: Invalid template interpolation value; Cannot include the given value in a string template: string required., and 2 other diagnostic(s). [!] something went wrong when creating the environment TF plan I get the impression my iterating over the data type here is somehow incorrect but I cannot fathom how and I cannot find any docs about this at all. Here is a cut down example of how I'm calling this module: module "eml" { source = "../../local-tfmodules/eml" name = "my_eml" destinations = [ { id = "6" url = "https://example.com" }, { id = "7" url = "https://example.net" } ] <cut> }
I've just found (after crafting a small Terraform module to test templatefile output only) that the original config DOES work (at least in TF v0.12.29). The errors given are a bit of a Red Herring - the issue is to do with indentation within the template, e.g. instead of: Destinations: %{ for destination in destinations ~} - Id: ${destination.id} Settings: URL: ${destination.url} %{ endfor ~} it should be: Destinations: %{~ for destination in destinations ~} - Id: ${destination.id} Settings: URL: ${destination.url} %{~ endfor ~} Notice the extra tilde's (~) at the beginning of the Terraform directives. This makes the Yaml alignment work correctly (you get some lines incorrectly indented and some blank lines). After this the original code in my question works as I expected it to & produces valid yaml.
Terraform
64,651,270
10
When trying to create an ELB (classic load balancer) in AWS via Terraform, I am passing a list of public subnet IDs that were created by another module. In this case I have 4 subnets spanned across 3 AZs, and 2 of the subnets are in az-1a. When I try to run Terraform, I get an error saying the same AZ can't be used twice for an ELB. resource "aws_elb" "loadbalancer" { name = "loadbalancer-terraform" subnets = var.public_subnets listener { instance_port = 80 instance_protocol = "http" lb_port = 80 lb_protocol = "http" } depends_on = [aws_autoscaling_group.private_ec2] } Is there any way I can select subnets from the given list such that I only get subnet IDs from distinct AZs? subnetid1 -- az1-a subnetid2 -- az1-b subnetid3 -- az1-c subnetid4 -- az1-a Now I need to get as output either subnets 1, 2 and 3, or subnets 2, 3 and 4.
It sounds like this problem decomposes into two smaller problems: Determine the availability zone of each of the subnets. For each distinct availability zone, choose any one of the subnets that belongs to it. (I'm assuming here that there is no reason to prefer one subnet over another if both are in the same AZ.) For step one, if we don't already have the subnets in question managed by the current configuration (which seems to be the case here -- you are receiving them from an input variable) then we can use the aws_subnet data source to read information about a subnet given its ID. Because you have more than one subnet here, we'll use resource for_each to look up each one. data "aws_subnet" "public" { for_each = toset(var.public_subnets) id = each.key } The above will make data.aws_subnet.public appear as a map from subnet id to subnet object, and the subnet objects each have availability_zone attributes specifying which zone each subnet belongs to. For our second step it's more convenient to invert that mapping, so that the keys are availability zones and the values are subnet ids: locals { availability_zone_subnets = { for s in data.aws_subnet.public : s.availability_zone => s.id... } } The above is a for expression, which in this case is using the ... suffix to activate grouping mode, because we're expecting to find more than one subnet per availability zone. As a result of this, local.availability_zone_subnets will be a map from availability zone name to a list of one or more subnet ids, like this: { "az1-a" = ["subnetid1", "subnetid4"] "az1-b" = ["subnetid2"] "az1-c" = ["subnetid3"] } This gets us the information we need to implement the second part of the problem: choosing any one of the elements from each of those lists. The easiest definition of "any one" is to take the first one, by using [0] to take the first element. resource "aws_elb" "loadbalancer" { depends_on = [aws_autoscaling_group.private_ec2] name = "loadbalancer-terraform" subnets = [for subnet_ids in local.availability_zone_subnets : subnet_ids[0]] listener { instance_port = 80 instance_protocol = "http" lb_port = 80 lb_protocol = "http" } } There are some caveats of the above solution which are important to consider: Taking the first element of each list of subnet ids means that the configuration could potentially be sensitive to the order of elements in var.public_subnets, but this particular combination above implicitly avoids that with the toset(var.public_subnets) in the initial for_each, which discards the original ordering of var.public_subnets and causes all of the downstream expressions to order the results by a lexical sort of the subnet ids. In other words, this will choose the subnet whose id is the "lowest" when doing a lexical sort. I don't really like it when that sort of decision is left implicit, because it can be confusing to future maintainers who might change the design and be surprised to see it now choosing a different subnet for each availability zone. I can see a couple different ways to mitigate that, and I'd probably do both if I were writing a long-lived module: Make sure variable "public_subnets" has type = set(string) for its type constraint, rather than type = list(string), to be explicit that this module discards the ordering of the subnets as given by the caller. If you do this, you can change toset(var.public_subnets) to just var.public_subnets, because it will already be a set. 
In the final for expression to choose the first subnet for each availability zone, include an explicit call to sort. This call is redundant with how the rest of this is implemented in my example, but I think it's a good clue to a future reader that it's using a lexical sort to decide which of the subnets to use: subnets = [ for subnet_ids in local.availability_zone_subnets : sort(subnet_ids)[0] ] Neither of those changes will actually affect the behavior immediately, but additions like this can be helpful to future maintainers as they read a module they might not be previously familiar with, so they don't need to read the entire module to understand a smaller part of it.
Terraform
63,727,252
10
A while ago I created a serverless Azure SQL resource in Terraform using the azurerm_sql_database block. Then in March, in azurerm version 2.3 they came out with the azurerm_mssql_database block, which as I understand is intended to replace azurerm_sql_database. I need to change the auto_pause_delay_in_minutes setting, which is only available in azurerm_mssql_database. So I guess I need to upgrade now, before there's any official guidance (that I can find) on how to perform the upgrade. If I perform these steps: Replace azurerm_sql_database with azurerm_mssql_database Remove resource_group_name Remove location Replace requested_service_objective_name with sku_name Replace server_name with server_id Then terraform tries to delete my database and create a new one, and I get an error like "A resource with the ID [id] already exists - to be managed via Terraform this resource needs to be imported into the State." How do I perform the upgrade and set auto_pause_delay_in_minutes without deleting my database?
The old resource in Azure needs to be imported into the new resource definition in terraform. Then the old resource state in terraform needs remove. See the following walk through. Modify for whatever additional parameters you need, it is the same workflow. First build the azurerm_sql_database resource: # cat .\main.tf provider "azurerm" { version = "~>2.19.0" features {} } resource "azurerm_resource_group" "example" { name = "example-resources" location = "East US" } resource "azurerm_sql_server" "example" { name = "pearcecexamplesqlserver" resource_group_name = azurerm_resource_group.example.name location = "East US" version = "12.0" administrator_login = "4dm1n157r470r" administrator_login_password = "4-v3ry-53cr37-p455w0rd" } resource "azurerm_sql_database" "example" { name = "pearcecexamplesqldatabase" resource_group_name = azurerm_resource_group.example.name location = "East US" server_name = azurerm_sql_server.example.name } Terraform Apply -- Assuming a clean creation Change the resource to azurerm_mssql_database and update the parameters cat .\main.tf provider "azurerm" { version = "~>2.19.0" features {} } resource "azurerm_resource_group" "example" { name = "example-resources" location = "East US" } resource "azurerm_sql_server" "example" { name = "pearcecexamplesqlserver" resource_group_name = azurerm_resource_group.example.name location = "East US" version = "12.0" administrator_login = "4dm1n157r470r" administrator_login_password = "4-v3ry-53cr37-p455w0rd" } resource "azurerm_mssql_database" "example" { name = "pearcecexamplesqldatabase" server_id = azurerm_sql_server.example.id } Terraform Apply -- Uh oh # terraform apply azurerm_resource_group.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources] azurerm_sql_database.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase] azurerm_sql_server.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver] An execution plan has been generated and is shown below. 
Resource actions are indicated with the following symbols: + create - destroy Terraform will perform the following actions: # azurerm_mssql_database.example will be created + resource "azurerm_mssql_database" "example" { + auto_pause_delay_in_minutes = (known after apply) + collation = (known after apply) + create_mode = (known after apply) + creation_source_database_id = (known after apply) + id = (known after apply) + license_type = (known after apply) + max_size_gb = (known after apply) + min_capacity = (known after apply) + name = "pearcecexamplesqldatabase" + read_replica_count = (known after apply) + read_scale = (known after apply) + restore_point_in_time = (known after apply) + sample_name = (known after apply) + server_id = "/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver" + sku_name = (known after apply) + zone_redundant = (known after apply) + threat_detection_policy { + disabled_alerts = (known after apply) + email_account_admins = (known after apply) + email_addresses = (known after apply) + retention_days = (known after apply) + state = (known after apply) + storage_account_access_key = (sensitive value) + storage_endpoint = (known after apply) + use_server_default = (known after apply) } } # azurerm_sql_database.example will be destroyed - resource "azurerm_sql_database" "example" { - collation = "SQL_Latin1_General_CP1_CI_AS" -> null - create_mode = "Default" -> null - creation_date = "2020-07-31T17:54:48.453Z" -> null - default_secondary_location = "West US" -> null - edition = "GeneralPurpose" -> null - id = "/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase" -> null - location = "eastus" -> null - max_size_bytes = "34359738368" -> null - name = "pearcecexamplesqldatabase" -> null - read_scale = false -> null - requested_service_objective_id = "f21733ad-9b9b-4d4e-a4fa-94a133c41718" -> null - requested_service_objective_name = "GP_Gen5_2" -> null - resource_group_name = "example-resources" -> null - server_name = "pearcecexamplesqlserver" -> null - tags = {} -> null - zone_redundant = false -> null - threat_detection_policy { - disabled_alerts = [] -> null - email_account_admins = "Disabled" -> null - email_addresses = [] -> null - retention_days = 0 -> null - state = "Disabled" -> null - use_server_default = "Disabled" -> null } } Plan: 1 to add, 0 to change, 1 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: Apply cancelled Terraform Import -- Import the resource # terraform import azurerm_mssql_database.example /subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase azurerm_mssql_database.example: Importing from ID "/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase"... azurerm_mssql_database.example: Import prepared! Prepared azurerm_mssql_database for import azurerm_mssql_database.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase] Import successful! The resources that were imported are shown above. 
These resources are now in your Terraform state and will henceforth be managed by Terraform. Terraform State Remove -- Remove the old state terraform state rm azurerm_sql_database.example Removed azurerm_sql_database.example Successfully removed 1 resource instance(s) Terraform Apply - Clean # terraform apply azurerm_resource_group.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources] azurerm_sql_server.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver] azurerm_mssql_database.example: Refreshing state... [id=/subscriptions/redacted/resourceGroups/example-resources/providers/Microsoft.Sql/servers/pearcecexamplesqlserver/databases/pearcecexamplesqldatabase] Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Terraform
63,194,330
10
I tried to create an AWS security group with multiple inbound rules, Normally we need to multiple ingresses in the sg for multiple inbound rules. Instead of creating multiple ingress rules separately, I tried to create a list of ingress and so that I can easily reuse the module for different applications. PFB, module/sg/sg.tf >> resource "aws_security_group" "ec2_security_groups" { name = var.name_security_groups vpc_id = var.vpc_id } module/sg/rules.tf >> resource "aws_security_group_rule" "ingress_rules" { count = lenght(var.ingress_rules) type = "ingress" from_port = var.ingress_rules[count.index][0] to_port = var.ingress_rules[count.index][1] protocol = var.ingress_rules[count.index][2] cidr_blocks = var.ingress_rules[count.index][3] description = var.ingress_rules[count.index][4] security_group_id = aws_security_group.ec2_security_groups.id } module/sg/variable.tf >> variable "vpc_id" { } variable "name_security_groups" { } variable "ingress_rules" { type = list(string) } In the application folder, application/dev/sg.tf >> module "sg_test" { source = "../modules/sg" vpc_id = "vpc-xxxxxxxxx" name_security_groups = "sg_test" ingress_rules = var.sg_ingress_rules } application/dev/variable.tf >> variable "sg_ingress_rules" { type = list(string) default = { [22, 22, "tcp", "1.2.3.4/32", "test"] [23, 23, "tcp", "1.2.3.4/32", "test"] } } Error: Error: Missing attribute value on test-sgs.tf line 21, in variable "sg_ingress_rules": 20: 21: 22: Expected an attribute value, introduced by an equals sign ("="). Please help to correct this or if there is any other method please suggest. Regards,
Thanks @apparentlymart, who helped to solve this in the Terraform discussion forum. The security group rule: resource "aws_security_group_rule" "ingress_rules" { count = length(var.ingress_rules) type = "ingress" from_port = var.ingress_rules[count.index].from_port to_port = var.ingress_rules[count.index].to_port protocol = var.ingress_rules[count.index].protocol cidr_blocks = [var.ingress_rules[count.index].cidr_block] description = var.ingress_rules[count.index].description security_group_id = aws_security_group.ec2_security_groups.id } And the variable: variable "sg_ingress_rules" { type = list(object({ from_port = number to_port = number protocol = string cidr_block = string description = string })) default = [ { from_port = 22 to_port = 22 protocol = "tcp" cidr_block = "1.2.3.4/32" description = "test" }, { from_port = 23 to_port = 23 protocol = "tcp" cidr_block = "1.2.3.4/32" description = "test" }, ] }
Terraform
62,575,544
10
I have written a terraform configuration with variable definition like: variable "GOOGLE_CLOUD_REGION" { type = string } When I run terraform plan I am asked to fill in this variable even though this variable is set within my environment. Is there a way to tell terraform to work with current env vars? Or do I have to export them and pass them somehow manually one-by-one?
You can define the environment variable TF_VAR_GOOGLE_CLOUD_REGION to set that variable. If you are using bash, it might look like this: export TF_VAR_GOOGLE_CLOUD_REGION="$GOOGLE_CLOUD_REGION" terraform apply ... From Environment Variables under Configuration Language: Input Variables. As a fallback for the other ways of defining variables, Terraform searches the environment of its own process for environment variables named TF_VAR_ followed by the name of a declared variable. This can be useful when running Terraform in automation, or when running a sequence of Terraform commands in succession with the same variables. For example, at a bash prompt on a Unix system: $ export TF_VAR_image_id=ami-abc123 $ terraform plan ...
Terraform
62,482,719
10
I'm trying to get tf 0.12.x new dynamic feature to work with a nested map, config is below. As you can see below (simplified for this) I'm defining all the variables and adding variable required_resource_access which contains a map. I was hoping to use new dynamic feature to create read this map in a nested dyanmic block. variable prefix { description = "Prefix to applied to all top level resources" default = "abx" } variable suffix { description = "Suffix to applied to all valid top level resources, usually this is 2 letter region code such as we (westeurope), ne (northeurope)." default = "we" } variable env { description = "3 letter environment code appied to all top level resources" default = "dev" } variable location { description = "Where to create all resources in Azure" default = "westeurope" } variable available_to_other_tenants { default = false } variable oauth2_allow_implicit_flow { default = true } variable public_client { default = false } # other option is native variable application_type { default = "webapp/api" } variable required_resource_access { type = list(object({ resource_app_id = string resource_access = object({ id = string type = string }) })) default = [{ resource_app_id = "00000003-0000-0000-c000-000000000000" resource_access = { id = "7ab1d382-f21e-4acd-a863-ba3e13f7da61" type = "Role" } }] } variable reply_urls { default = [] } variable group_membership_claims { default = "All" } resource "azuread_application" "bootstrap" { name = "${var.prefix}-${var.env}-spn" homepage = "http://${var.prefix}-${var.env}-spn" identifier_uris = ["http://${var.prefix}-${var.env}-spn"] reply_urls = var.reply_urls available_to_other_tenants = var.available_to_other_tenants oauth2_allow_implicit_flow = var.oauth2_allow_implicit_flow type = var.application_type group_membership_claims = var.group_membership_claims dynamic "required_resource_access" { for_each = var.required_resource_access content { resource_app_id = required_resource_access.value["resource_app_id"] dynamic "resource_access" { for_each = required_resource_access.value["resource_access"] content { id = resource_access.value["id"] type = resource_access.value["type"] } } } } } But for reasons beyond my knowledge it keeps giving me this error (notice it's priting it twice as well), I've tried a few other options but this is the closest I managed to get where it would at least give me a meaningful error. ------------------------------------------------------------------------ Error: Invalid index on pe_kubernetes.tf line 24, in resource "azuread_application" "bootstrap": 24: id = resource_access.value["id"] |---------------- | resource_access.value is "7ab1d382-f21e-4acd-a863-ba3e13f7da61" This value does not have any indices. Error: Invalid index on pe_kubernetes.tf line 24, in resource "azuread_application" "bootstrap": 24: id = resource_access.value["id"] |---------------- | resource_access.value is "Role" This value does not have any indices. Error: Invalid index on pe_kubernetes.tf line 25, in resource "azuread_application" "bootstrap": 25: type = resource_access.value["type"] |---------------- | resource_access.value is "7ab1d382-f21e-4acd-a863-ba3e13f7da61" This value does not have any indices. Error: Invalid index on pe_kubernetes.tf line 25, in resource "azuread_application" "bootstrap": 25: type = resource_access.value["type"] |---------------- | resource_access.value is "Role" This value does not have any indices. 
Spent the best part of 2 days on this with no luck so any help or pointers would be much appreciated!
I had some time to test my comment... If I change the resource_access to a list it works. See code below: variable required_resource_access { type = list(object({ resource_app_id = string resource_access = list(object({ id = string type = string })) })) default = [{ resource_app_id = "00000003-0000-0000-c000-000000000000" resource_access = [{ id = "7ab1d382-f21e-4acd-a863-ba3e13f7da61" type = "Role" }] }] } resource "azuread_application" "bootstrap" { name = "test" type = "webapp/api" group_membership_claims = "All" dynamic "required_resource_access" { for_each = var.required_resource_access content { resource_app_id = required_resource_access.value["resource_app_id"] dynamic "resource_access" { for_each = required_resource_access.value["resource_access"] content { id = resource_access.value["id"] type = resource_access.value["type"] } } } } } And the plan shows: Terraform will perform the following actions: # azuread_application.bootstrap will be created + resource "azuread_application" "bootstrap" { + application_id = (known after apply) + available_to_other_tenants = false + group_membership_claims = "All" + homepage = (known after apply) + id = (known after apply) + identifier_uris = (known after apply) + name = "test" + oauth2_allow_implicit_flow = true + object_id = (known after apply) + owners = (known after apply) + public_client = (known after apply) + reply_urls = (known after apply) + type = "webapp/api" + oauth2_permissions { + admin_consent_description = (known after apply) ... } + required_resource_access { + resource_app_id = "00000003-0000-0000-c000-000000000000" + resource_access { + id = "7ab1d382-f21e-4acd-a863-ba3e13f7da61" + type = "Role" } } } Plan: 1 to add, 0 to change, 0 to destroy. I removed a lot of your variables an some of the optional Arguments for azuread_application to keep the code as small as possible, but the same principle applies to your code, use lists on for_each or it will loop on the object properties.
Terraform
62,221,306
10
I have a terraform file which fails when I run terraform plan and I get the error: Error: Cycle: module.hosting.data.template_file.bucket_policy, module.hosting.aws_s3_bucket.website It makes sense since the bucket refers to the policy and vice versa: data "template_file" "bucket_policy" { template = file("${path.module}/policy.json") vars = { bucket = aws_s3_bucket.website.arn } } resource "aws_s3_bucket" "website" { bucket = "xxx-website" website { index_document = "index.html" } policy = data.template_file.bucket_policy.rendered } How can I avoid this bidirectional reference?
You can use the aws_s3_bucket_policy resource. This allows you to create the resources without a circular dependency. This way, Terraform can: Create the bucket Create the template file, using the bucket ARN Create the policy, referring back to the template file and attaching it to the bucket. The code would look something like this: data "template_file" "bucket_policy" { template = file("${path.module}/policy.json") vars = { bucket = aws_s3_bucket.website.arn } } resource "aws_s3_bucket" "website" { bucket = "xxx-website" website { index_document = "index.html" } } resource "aws_s3_bucket_policy" "b" { bucket = "${aws_s3_bucket.website.id}" policy = data.template_file.bucket_policy.rendered }
Terraform
61,869,536
10
As the terraform azurerm provider misses support for azure webapp access restrictions (see github issue). We use a null_resource with local-exec to apply a access restriction: provisioner "local-exec" { command = <<COMMAND az webapp config access-restriction add --subscription ${self.triggers.subscription_id} --resource-group ${self.triggers.resource_group} \ --name ${self.triggers.web_app_name} --rule-name 'allow application gateway' --action Allow --vnet-name ${self.triggers.vnet_name} \ --subnet ${self.triggers.subnet_name} --priority 100 COMMAND } Our terraform code is then later run by an azure DevOps Pipeline, which uses a Service Connection (with Service Principal) to authenticate with Azure. The following task is trying to apply the terraform resources: - task: TerraformCLI@0 displayName: "Terraform apply" inputs: command: 'apply' commandOptions: '--var-file="./environments/${{ parameters.environment }}.tfvars"' workingDirectory: '$(System.DefaultWorkingDirectory)/${{ parameters.projectFolder }}' environmentServiceName: 'shared-${{ parameters.environment }}-001' which results in the following Error: Error: Error running command ' az webapp config access-restriction remove --subscription shared-staging-001 --resource-group rg-hub-network-staging \ --name landing-webapp-hub --rule-name 'allow application gateway' ': exit status 1. Output: Subscription 'shared-staging-001' not recognized. Command group 'webapp config access-restriction' is in preview. It may be changed/removed in a future release. Please run 'az login' to setup account. No we tried to replace the TerraformCLI@0 Task with either a plain bash script or a AzureCLI@2 Task. We could not get az login to work in a plain bash script due to the missing Infos. The approach described here does not work either. Running the terraform commands inside a AzureCLI@2 Task looks promissing but causes some strange errors related to the service principal login: - task: AzureCLI@2 displayName: "Terraform init" inputs: azureSubscription: shared-${{ parameters.environment }}-001 scriptType: bash scriptLocation: inlineScript inlineScript: | terraform init --backend-config="./environments/${{ parameters.environment }}_backend.tfvars" This causes the following error: Initializing modules... - app-gateway in modules/app-gateway - dummy1 in modules/BRZ365-AppService - dummy2 in modules/BRZ365-AppService - hub-network in modules/hub-network - landing_zone_app in modules/BRZ365-AppService - squad-area in modules/squad-area Initializing the backend... Error: Error building ARM Config: Authenticating using the Azure CLI is only supported as a User (not a Service Principal). To authenticate to Azure using a Service Principal, you can use the separate 'Authenticate using a Service Principal' auth method - instructions for which can be found here: Alternatively you can authenticate using the Azure CLI by using a User Account.
I finally got this to work with the AzureCLI approach I described in the first post. I use addSpnToEnvironment (it adds the service principal credentials to the environment, as described in the documentation) and set the ARM_* environment variables that Terraform requires. - task: AzureCLI@2 displayName: "Terraform" inputs: azureSubscription: shared-${{ parameters.environment }}-001 scriptType: bash addSpnToEnvironment: true scriptLocation: inlineScript inlineScript: | export ARM_CLIENT_ID=$servicePrincipalId export ARM_CLIENT_SECRET=$servicePrincipalKey export ARM_TENANT_ID=$tenantId terraform init .....
Terraform
61,268,776
10
Currently, I'm working on a requirement to make Terraform Tags for AWS resources more modular. In this instance, there will be one tag 'Function' that will be unique to each resource and the rest of the tags to be attached will apply to all resources. What I'm trying to do is combine the unique 'Function' value with the other tags for each resource. Here's what I've got so far: tags = { Resource = "Example", "${var.tags} This tags value is defined as a map in the variables.tf file like so: variable "tags" { type = map description = "Tags for infrastructure resources." } and populated in the tfvars file with: tags = { "Product" = "Name", "Application" = "App", "Owner" = "Email" } When I run TF Plan, however, I'm getting an error: Expected an attribute value, introduced by an equals sign ("="). How can variables be combined like this in Terraform? Thanks in advance for your help.
I tried to use a map, and it does work with newer Terraform versions. The line below works for me: tags = "${merge(var.resource_tags, {a="bb"})}"
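For illustration, here is a minimal sketch of how the merged map can be used inside a resource block (Terraform 0.12+ syntax; the bucket resource and the "Function" value are hypothetical, only var.tags comes from the question):

variable "tags" {
  type        = map(string)
  description = "Tags for infrastructure resources."
}

# Hypothetical resource: the per-resource "Function" tag is merged on top of the shared map.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
  tags   = merge(var.tags, { Function = "Example" })
}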
Terraform
60,045,338
10
I created RDS instance using aws_db_instance (main.tf): resource "aws_db_instance" "default" { identifier = "${module.config.database["db_inst_name"]}" allocated_storage = 20 storage_type = "gp2" engine = "mysql" engine_version = "5.7" instance_class = "db.t3.micro" name = "${module.config.database["db_name_prefix"]}${terraform.workspace}" username = "${module.config.database["db_username"]}" password = "${module.config.database["db_password"]}" parameter_group_name = "default.mysql5.7" skip_final_snapshot = true } Can I also create database schemas from file schema.sql with terraform apply? $ tree -L 1 . ├── main.tf └── schema.sql
You can use a provisioner (https://www.terraform.io/docs/provisioners/index.html) for that: resource "aws_db_instance" "default" { identifier = module.config.database["db_inst_name"] allocated_storage = 20 storage_type = "gp2" engine = "mysql" engine_version = "5.7" instance_class = "db.t3.micro" name = "${module.config.database["db_name_prefix"]}${terraform.workspace}" username = module.config.database["db_username"] password = module.config.database["db_password"] parameter_group_name = "default.mysql5.7" skip_final_snapshot = true provisioner "local-exec" { command = "mysql --host=${self.address} --port=${self.port} --user=${self.username} --password=${self.password} < ./schema.sql" } } #Apply scheme by using bastion host resource "aws_db_instance" "default_bastion" { identifier = module.config.database["db_inst_name"] allocated_storage = 20 storage_type = "gp2" engine = "mysql" engine_version = "5.7" instance_class = "db.t3.micro" name = "${module.config.database["db_name_prefix"]}${terraform.workspace}" username = module.config.database["db_username"] password = module.config.database["db_password"] parameter_group_name = "default.mysql5.7" skip_final_snapshot = true provisioner "file" { connection { user = "ec2-user" host = "bastion.example.com" private_key = file("~/.ssh/ec2_cert.pem") } source = "./schema.sql" destination = "~" } provisioner "remote-exec" { connection { user = "ec2-user" host = "bastion.example.com" private_key = file("~/.ssh/ec2_cert.pem") } command = "mysql --host=${self.address} --port=${self.port} --user=${self.username} --password=${self.password} < ~/schema.sql" } } mysql client needs to be installed on your device. If you don't have direct access to your DB, there is also a remote-exec provisioner, where you can use a bastion host (transfer file to remote place with file provisioner first). If your schema is not to complex, you could also use the MySQL provider of terraform: https://www.terraform.io/docs/providers/mysql/index.html
Terraform
59,922,023
10
I have the following terraform module to setup app services under the same plan: provider "azurerm" { } variable "env" { type = string description = "The SDLC environment (qa, dev, prod, etc...)" } variable "appsvc_names" { type = list(string) description = "The names of the app services to create under the same app service plan" } locals { location = "eastus2" resource_group_name = "app505-dfpg-${var.env}-web-${local.location}" acr_name = "app505dfpgnedeploycr88836" } resource "azurerm_app_service_plan" "asp" { name = "${local.resource_group_name}-asp" location = local.location resource_group_name = local.resource_group_name kind = "Linux" reserved = true sku { tier = "Basic" size = "B1" } } resource "azurerm_app_service" "appsvc" { for_each = toset(var.appsvc_names) name = "${local.resource_group_name}-${each.value}-appsvc" location = local.location resource_group_name = local.resource_group_name app_service_plan_id = azurerm_app_service_plan.asp.id site_config { linux_fx_version = "DOCKER|${local.acr_name}/${each.value}:latest" } app_settings = { DOCKER_REGISTRY_SERVER_URL = "https://${local.acr_name}.azurecr.io" } } output "hostnames" { value = { for appsvc in azurerm_app_service.appsvc: appsvc.name => appsvc.default_site_hostname } } I am invoking it through the following configuration: terraform { backend "azurerm" { } } locals { appsvc_names = ["gateway"] } module "web" { source = "../../modules/web" env = "qa" appsvc_names = local.appsvc_names } output "hostnames" { description = "The hostnames of the created app services" value = module.web.hostnames } The container registry has the images I need: C:\> az acr login --name app505dfpgnedeploycr88836 Login Succeeded C:\> az acr repository list --name app505dfpgnedeploycr88836 [ "gateway" ] C:\> az acr repository show-tags --name app505dfpgnedeploycr88836 --repository gateway [ "latest" ] C:\> When I apply the terraform configuration everything is created fine, but inspecting the created app service resource in Azure Portal reveals that its Container Settings show no docker image: Now, I can manually switch to another ACR and then back to the one I want only to get this: Cannot perform credential operations for /subscriptions/0f1c414a-a389-47df-aab8-a351876ecd47/resourceGroups/app505-dfpg-ne-deploy-eastus2/providers/Microsoft.ContainerRegistry/registries/app505dfpgnedeploycr88836 as admin user is disabled. Kindly enable admin user as per docs: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account This is confusing me. According to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account the admin user should not be used and so my ACR does not have one. On the other hand, I understand that I need somehow configure the app service to authenticate with the ACR. What is the right way to do it then?
So this is now possible since the v2.71 version of the Azure RM provider. A couple of things have to happen... Assign a Managed Identity to the application (can also use User Assigned but a bit more work) Set the site_config.acr_use_managed_identity_credentials property to true Grant the application's identity ACRPull rights on the container. Below is a modified version of the code above, not tested but should be ok provider "azurerm" { } variable "env" { type = string description = "The SDLC environment (qa, dev, prod, etc...)" } variable "appsvc_names" { type = list(string) description = "The names of the app services to create under the same app service plan" } locals { location = "eastus2" resource_group_name = "app505-dfpg-${var.env}-web-${local.location}" acr_name = "app505dfpgnedeploycr88836" } resource "azurerm_app_service_plan" "asp" { name = "${local.resource_group_name}-asp" location = local.location resource_group_name = local.resource_group_name kind = "Linux" reserved = true sku { tier = "Basic" size = "B1" } } resource "azurerm_app_service" "appsvc" { for_each = toset(var.appsvc_names) name = "${local.resource_group_name}-${each.value}-appsvc" location = local.location resource_group_name = local.resource_group_name app_service_plan_id = azurerm_app_service_plan.asp.id site_config { linux_fx_version = "DOCKER|${local.acr_name}/${each.value}:latest" acr_use_managed_identity_credentials = true } app_settings = { DOCKER_REGISTRY_SERVER_URL = "https://${local.acr_name}.azurecr.io" } identity { type = "SystemAssigned" } } data "azurerm_container_registry" "this" { name = local.acr_name resource_group_name = local.resource_group_name } resource "azurerm_role_assignment" "acr" { for_each = azurerm_app_service.appsvc role_definition_name = "AcrPull" scope = azurerm_container_registry.this.id principal_id = each.value.identity[0].principal_id } output "hostnames" { value = { for appsvc in azurerm_app_service.appsvc: appsvc.name => appsvc.default_site_hostname } } EDITED 21 Dec 2021 The MS documentation issue regarding the value being reset by Azure has now been resolved and you can also control Managed Identity via the portal.
Terraform
59,914,397
10
I'm trying to get terraform to add an "A" record to my dns zone in GCP. Efforts to do so result in an error: "update server is not set". A similar error is described here. So I gather from comments made there that I need an update item in my dns provider. Which I dutifully tried to provide. provider "dns" { update { server = "xxx.xxx.x.x" } } Except that I have no idea what IP goes in there, and my first attempts have failed. Will I need other settings? I note in the documentation the following format... provider "dns" { update { server = "192.168.0.1" key_name = "example.com." key_algorithm = "hmac-md5" key_secret = "3VwZXJzZWNyZXQ=" } } I don't understand where these settings come from. Update: Martin's advice (accepted answer below) worked like a charm. For the next person struggling with this, the trick was to use google_dns_record_set instead of dns_a_record_set.
The dns provider is implementing the standard DNS update protocol defined in RFC 2136: Dynamic Updates in the Domain Name System, which tends to be implemented by self-hosted DNS server software like BIND. In that case, the credentials would be configured on the server side by the BIND operator and then you'd in turn pass the given credentials into the provider. Unfortunately, as DNS has tended towards being a managed service provided for you by various vendors, most of these vendors have chosen to ignore RFC 2136 and implement their own proprietary APIs instead. For that reason, the management capabilities of Terraform's dns provider are incompatible with most managed DNS products. Instead, we manage these using a vendor-specific provider. In your case, since you are apparently using Google Cloud DNS, you'd manage your DNS zones and records using resource types from the google Terraform provider. Specifically: google_dns_managed_zone for the zone itself google_dns_record_set for recordsets within the zone Here is a minimal example to get started: resource "google_dns_managed_zone" "example" { name = "example" dns_name = "example.com." } resource "google_dns_record_set" "example" { managed_zone = google_dns_managed_zone.example.name name = "www.${google_dns_managed_zone.example.dns_name}" type = "A" rrdatas = ["10.1.2.1", "10.1.2.2"] ttl = 300 } A key advantage of these vendors using vendor-specific APIs is that the management operations integrate with the authentication mechanisms used for the rest of their APIs, and so as long as your Google Cloud Platform provider has credentials with sufficient privileges to manage these objects you shouldn't need any additional provider configuration for this. Terraform has provider support for a number of different managed DNS vendors, so folks not using Google Cloud DNS will hopefully find that their chosen vendor is also supported in a similar way, by browsing the available providers.
Terraform
59,759,132
10
I'm writing a terraform module which should be reused across different environments. In order to make things simple, here's a basic example of calling a module from one of the environments root module: ##QA-resources.tf module "some_module" { source = "./path/to/module" } some_variable = ${module.some_module.some_output} The problem is that when a module was already created Terraform throws an error of: Error creating [resource-type] [resource-name]: EntityAlreadyExists: [resource-type] with [resource-name] already exists. status code: 409, request id: ... This is happening when the module was created under the scope of external terraform.tfstate and one of the resources has a unique field like 'Name'. In my case, it happened while trying to use an IAM module which already created an role with that specific name, but it can happen in many other cases (I don't want the discussion to be specific to my use case). I would expect that if one of the module's resources exist, no failure will occur and the module's outputs would be available to the root module. Any suggestions how to manage this (maybe using specific command or a flag)? A few related threads I found: Terraform doesn't reuse an AWS Role it just created and fails? what is the best way to solve EntityAlreadyExists error in terraform? Terraform error EntityAlreadyExists: Role with name iam_for_lambda already exists Edit For @Martin Atkins request here's the resource which caused the error. It is a basic role for an AWS EKS cluster which have 2 policies attached (passed via var.policies): resource "aws_iam_role" "k8s_role" { name = "k8s-role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "eks.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_iam_role_policy_attachment" "role-policy-attach" { role = "${aws_iam_role.k8s_role.name}" count = "${length(var.policies)}" policy_arn = "${element(var.policies, count.index)}" } This role was wrapped as a module and was passed to the root module. The error mentioned above in blockquotes occurred because the role already exist while the root module tried to create it.
In Terraform's view, every object is either managed by Terraform or not. Terraform avoids implicitly taking ownership of existing objects because if it were to do that then when you subsequently run terraform destroy you may end up inadvertently destroying something you didn't intend Terraform to be managing. In your case, that means that you need to decide whether the role named k8s-role is managed by Terraform or not, and if you have more than one Terraform configuration you will need to choose exactly one configuration to manage that object. In your one Terraform configuration that will manage the object, you can use a resource "aws_iam_role" to specify that. If any other configurations need to access it, or if it will not be managed with Terraform at all, then you can just refer to the role name k8s-role directly in the situations where it is needed. If you need more information about that role than just its name then you can use the aws_iam_role data source to fetch that information without declaring that you want to manage the object: data "aws_iam_role" "k8s" { name = "k8s-role" } For example, if you need to use the ARN of this role then you could access the arn attribute of this data resource using data.aws_iam_role.k8s.arn. Finally, if your role is not currently managed by Terraform but you would like to put it under Terraform's ownership, you can explicitly tell Terraform to start managing that existing object by importing it to create the association between the existing object and your resource block: terraform import aws_iam_role.k8s_role k8s-role
Terraform
58,891,274
10
I've tried to get all subnet ids to add aws batch with terraform with following code: data "aws_subnet_ids" "test_subnet_ids" { vpc_id = "default" } data "aws_subnet" "test_subnet" { count = "${length(data.aws_subnet_ids.test_subnet_ids.ids)}" id = "${tolist(data.aws_subnet_ids.test_subnet_ids.ids)[count.index]}" } output "subnet_cidr_blocks" { value = ["${data.aws_subnet.test_subnet.*.id}"] } Fortunately, it was working fine when I've tested like that. But when I tried to integrate with batch terraform like: resource "aws_batch_compute_environment" "test-qr-processor" { compute_environment_name = "test-qr-processor-test" compute_resources { instance_role = "${aws_iam_instance_profile.test-ec2-role.arn}" instance_type = [ "optimal" ] max_vcpus = 256 min_vcpus = 0 security_group_ids = [ "${aws_security_group.test-processor-batch.id}" ] subnets = ["${data.aws_subnet.test_subnet.*.id}"] type = "EC2" } service_role = "${aws_iam_role.test-batch-service-role.arn}" type = "MANAGED" depends_on = [ "aws_iam_role_policy_attachment.test-batch-service-role" ] } I've encountered following error message, Error: Incorrect attribute value type on terraform.tf line 142, in resource "aws_batch_compute_environment" "test-processor": 142: subnets = ["${data.aws_subnet.test_subnet.*.id}"] Inappropriate value for attribute "subnets": element 0: string required. Please let me know why, thanks.
"${data.aws_subnet.test_subnet.*.id}" is already string array type. you should input value without [ ] write code like : subnets = "${data.aws_subnet.test_subnet.*.id}" See : Here's A document about Resource: aws_batch_compute_environment
Terraform
58,404,831
10
I am trying to build multiple vnets in Azure using Terraform 0.12+ and its new for_each and running into some trouble. I was hoping that the new capabilities would allow me to create a generic network module that takes in a complex variable but I perhaps have reached its limit or am just not thinking it through correctly.. Essentially I my variable is built like variable "networks" { type = list(object({ name = string, newbits = number, netnum = number, subnets = list(object({ name = string, newbits = number, netnum = number}))})) } You can see that its an array of networks with a subarray of the subnets for that network. Doing it this way would make it easy to document the network without the extra lines of the terraform resource requirements so our network team can easily adjust/expand without needing to worry about knowing the HCL. I can perform the necessary functions of building the multiple vnet resources using count and its index, but I would like to use the for_each as it allows for indexing off of the key rather than a count which could change over time (requiring redeployment which we cannot do). network object networks = [ { # x.x.1.0/24 name = "DMZ", newbits = "8", netnum = "1", subnets = [ { # x.x.1.0/25 name = "DMZ" newbits = "9", netnum = "2" } ] }, { # x.x.33.0/24 name = "Intermediary" newbits = "8", netnum = "33", subnets = [ { # x.x.33.0/25 name = "subnet1" newbits = "9", netnum = "66" }, { # x.x.33.128/25 name = "subnet2" newbits = "9", netnum = "67" } ] } ] I have tried and successfully built the vnets with a for_each by changing the object into a map and then using the each.key and each.value (doing a cidrsubnet for the each.value) but the problem lies in making the subnets. locals { vnets = { for vnet in var.networks: vnet.name => cidrsubnet(var.root_cidr, vnet.newbits, vnet.netnum) } } Since the map does not include those subnets it I am just banging my head against the wall. Does anyone have any suggestions? Or am I really making this overly complex when I dont need to be? The resource creation that works, but no subnets resource "azurerm_virtual_network" "vnets" { for_each = local.vnets name = each.key address_space = [each.value] location = azurerm_resource_group.network.location resource_group_name = azurerm_resource_group.network.name } I was hoping I could use a dynamic block and perhaps filtering it to match the each.key inside of the network resource. I also tried doing it with its own subnet resource after but just cant figure it out. This is the whole resource I was hoping would work resource "azurerm_virtual_network" "vnets" { for_each = local.vnets name = "99999-Nucleus-${each.key}" address_space = [each.value] location = azurerm_resource_group.network.location resource_group_name = azurerm_resource_group.network.name dynamic "subnet" { for_each = [for vnet in var.networks: [for s in vnet.subnets: { name = s.name prefix = cidrsubnet(var.root_cidr, s.newbits, s.netnum) }] if var.networks.name == each.key] content { name = subnet.value.name address_prefix = subnet.value.prefix } } }
Constructing this intermediate local.vnets map here is making this problem a little harder to solve, because it's throwing away all of the other information in those objects and thus making it hard to use that other information inside the resource "azurerm_virtual_network" "vnets" block. Instead, if we use repetition over the original var.networks value then we can have the subnets list available directly from inside each.value: resource "azurerm_virtual_network" "vnets" { for_each = { for n in var.networks : n.name => n } location = azurerm_resource_group.network.location resource_group_name = azurerm_resource_group.network.name name = "99999-Nucleus-${each.key}" address_space = [cidrsubnet(var.root_cidr, each.value.newbits, each.value.netnum)] dynamic "subnet" { for_each = each.value.subnets content { name = subnet.value.name address_prefix = cidrsubnet(var.root_cidr, subnet.value.newbits, subnet.value.netnum) } } }
Terraform
57,890,909
10
I use TerraForm as infrastructure framework in my application. Below is the configuration I use to deploy python code to lambda. It does three steps: 1. zip all dependencies and source code in a zip file; 2. upload the zipped file to s3 bucket; 3. deploy to lambda function. But what happens is the deploy command terraform apply will fail with below error: Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de on config.tf line 48, in resource "aws_lambda_function" "test_lambda": 48: resource "aws_lambda_function" "test_lambda" { Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594 on config.tf line 67, in resource "aws_lambda_function" "praw_crawler": 67: resource "aws_lambda_function" "praw_crawler" { It means the deploy file doesn't exist in s3 bucket. But it success in the second time when I run the command. It seems like a timing issue. After upload the zip file to s3 bucket, the zip file doesn't exist in s3 bucket. That's why the first time deploy failed. But after a few seconds later, the second command finishes successfully and very quick. Is there anything wrong in my configuration file? The full terraform configuration file can be found: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
You need to set up the dependencies properly to achieve this; otherwise it will fail. First, zip the files: # Zip the Lambda function on the fly data "archive_file" "source" { type = "zip" source_dir = "../lambda-functions/loadbalancer-to-es" output_path = "../lambda-functions/loadbalancer-to-es.zip" } Then upload the zip to S3, referencing the archive with source = "${data.archive_file.source.output_path}", which makes the upload depend on the zip: # upload zip to s3 and then update lambda function from s3 resource "aws_s3_bucket_object" "file_upload" { bucket = "${aws_s3_bucket.bucket.id}" key = "lambda-functions/loadbalancer-to-es.zip" source = "${data.archive_file.source.output_path}" # this makes it depend on the zip } Then you are good to go to deploy the Lambda. The line that creates the dependency is s3_key = "${aws_s3_bucket_object.file_upload.key}": resource "aws_lambda_function" "elb_logs_to_elasticsearch" { function_name = "alb-logs-to-elk" description = "elb-logs-to-elasticsearch" s3_bucket = "${var.env_prefix_name}${var.s3_suffix}" s3_key = "${aws_s3_bucket_object.file_upload.key}" # this makes it depend on the uploaded object key memory_size = 1024 timeout = 900 timeouts { create = "30m" } runtime = "nodejs8.10" role = "${aws_iam_role.role.arn}" source_code_hash = "${base64sha256(data.archive_file.source.output_path)}" handler = "index.handler" }
Terraform
57,145,037
10
Because of a timeout issue, terraform failed to create an ec2 instance. In order to recover from it I manually removed the ec2 instance from the aws console as well as from the terraform state file. However, now it tries to recreate the instance profile: + aws_iam_instance_profile.server id: <computed> arn: <computed> create_date: <computed> name: "server-profile" path: "/" role: "server-role" roles.#: <computed> unique_id: <computed> Therefore I want to locate it in the aws console and remove it, but I don't know where to find it. Where can I locate aws_iam_instance_profile.server?
I couldn't find the instance profile in the IAM section in the console as described by Slushysnowman. This solved my issue: aws iam delete-instance-profile --instance-profile-name 'your-profile-name'
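If the delete call fails because a role is still attached to the profile, the role has to be detached first. A sketch using the names shown in the plan output from the question:

# detach the role, then delete the now-empty instance profile
aws iam remove-role-from-instance-profile --instance-profile-name 'server-profile' --role-name 'server-role'
aws iam delete-instance-profile --instance-profile-name 'server-profile'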
Terraform
56,931,561
10
I'm trying to create an RDS Cluster Aurora-MySQL with one instance in it. I get this error: "InvalidParameterValue: The engine mode provisioned you requested is currently unavailable" I tried using "serverless" and get the same error. Region: Ireland (eu-west-1) Any suggestions?
This error is also encountered when incorrectly trying to configure a serverless v2 configuration. It's a bit unintuitive, but engine_mode = "serverless" works for v1 and engine_mode = "provisioned" is required for v2. To ensure that you have a serverless v2 cluster you need: engine_mode = "provisioned" instance_class = "db.serverless" engine_version = "15.2" Note, the engine version will change over time. You can see which ones are available by using this CLI command: aws rds describe-orderable-db-instance-options --engine aurora-postgresql --db-instance-class db.serverless \ --region us-east-1 --query 'OrderableDBInstanceOptions[].[EngineVersion]' --output text
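For illustration, a minimal Terraform sketch that puts those settings together for an Aurora MySQL serverless v2 cluster. The resource names, credentials, capacity range and engine version are assumptions, so verify what is available in your region:

resource "aws_rds_cluster" "example" {
  cluster_identifier  = "example-aurora"
  engine              = "aurora-mysql"
  engine_version      = "8.0.mysql_aurora.3.02.0" # assumption, check availability in your region
  engine_mode         = "provisioned"             # required for serverless v2
  master_username     = "admin"
  master_password     = "change-me"
  skip_final_snapshot = true

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 2.0
  }
}

resource "aws_rds_cluster_instance" "example" {
  cluster_identifier = aws_rds_cluster.example.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.example.engine
  engine_version     = aws_rds_cluster.example.engine_version
}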
Terraform
56,626,196
10
I'm creating a Kubernetes Service Account using terraform and trying to output the token from the Kubernetes Secret that it creates. resource "kubernetes_service_account" "ci" { metadata { name = "ci" } } data "kubernetes_secret" "ci" { metadata { name = "${kubernetes_service_account.ci.default_secret_name}" } } output "ci_token" { value = "${data.kubernetes_secret.ci.data.token}" } According to the docs this should make the data block defer getting its values until the 'apply' phase because of the computed value of default_secret_name, but when I run terraform apply it gives me this error: Error: Error running plan: 1 error(s) occurred: * output.ci_token: Resource 'data.kubernetes_secret.ci' does not have attribute 'data.token' for variable 'data.kubernetes_secret.ci.data.token' Adding depends_on to the kubernetes_secret data block doesn't make any difference. If I comment out the output block, it creates the resources fine, then I can uncomment it, apply again, and everything acts normally, since the Kubernetes Secret exists already. I've also made a Github issue here. Update The accepted answer does solve this problem, but I omitted another output to simplify the question, which doesn't work with this solution: output "ci_crt" { value = "${data.kubernetes_secret.ci.data.ca.crt}" } * output.ci_ca: lookup: lookup failed to find 'ca.crt' in: ${lookup(data.kubernetes_secret.ci.data, "ca.crt")} This particular issue is reported here due to a bug in Terraform, which is fixed in version 0.12.
This works: resource "kubernetes_service_account" "ci" { metadata { name = "ci" } } data "kubernetes_secret" "ci" { metadata { name = kubernetes_service_account.ci.default_secret_name } } output "ci_token" { sensitive = true value = lookup(data.kubernetes_secret.ci.data, "token") }
Terraform
56,080,359
10
I'm confused as to how I should use terraform to connect Athena to my Glue Catalog database. I use resource "aws_glue_catalog_database" "catalog_database" { name = "${var.glue_db_name}" } resource "aws_glue_crawler" "datalake_crawler" { database_name = "${var.glue_db_name}" name = "${var.crawler_name}" role = "${aws_iam_role.crawler_iam_role.name}" description = "${var.crawler_description}" table_prefix = "${var.table_prefix}" schedule = "${var.schedule}" s3_target { path = "s3://${var.data_bucket_name[0]}" } s3_target { path = "s3://${var.data_bucket_name[1]}" } } to create a Glue DB and the crawler to crawl an s3 bucket (here only two), but I don't know how I link the Athena query service to the Glue DB. In the terraform documentation for Athena, there doesn't appear to be a way to connect Athena to a Glue catalog but only to an S3 Bucket. Clearly, however, Athena can be integrated with Glue. How can I terraform an Athena database to use my Glue catalog as its data source rather than an S3 bucket?
Our current basic setup for having Glue crawl one S3 bucket and create/update a table in a Glue DB, which can then be queried in Athena, looks like this: Crawler role and role policy: The assume_role_policy of the IAM role needs only Glue as principal The IAM role policy allows actions for Glue, S3, and logs The Glue actions and resources can probably be narrowed down to the ones really needed The S3 actions are limited to those needed by the crawler resource "aws_iam_role" "glue_crawler_role" { name = "analytics_glue_crawler_role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "glue.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_iam_role_policy" "glue_crawler_role_policy" { name = "analytics_glue_crawler_role_policy" role = "${aws_iam_role.glue_crawler_role.id}" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "glue:*", ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:GetBucketAcl", "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::analytics-product-data", "arn:aws:s3:::analytics-product-data/*", ] }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:*:*:/aws-glue/*" ] } ] } EOF } S3 Bucket, Glue Database and Crawler: resource "aws_s3_bucket" "product_bucket" { bucket = "analytics-product-data" acl = "private" } resource "aws_glue_catalog_database" "analytics_db" { name = "inventory-analytics-db" } resource "aws_glue_crawler" "product_crawler" { database_name = "${aws_glue_catalog_database.analytics_db.name}" name = "analytics-product-crawler" role = "${aws_iam_role.glue_crawler_role.arn}" schedule = "cron(0 0 * * ? *)" configuration = "{\"Version\": 1.0, \"CrawlerOutput\": { \"Partitions\": { \"AddOrUpdateBehavior\": \"InheritFromTable\" }, \"Tables\": {\"AddOrUpdateBehavior\": \"MergeNewColumns\" } } }" schema_change_policy { delete_behavior = "DELETE_FROM_DATABASE" } s3_target { path = "s3://${aws_s3_bucket.product_bucket.bucket}/products" } }
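One note on the Athena side: Athena picks up Glue Data Catalog databases automatically, so the only Athena-specific piece you may want to provision is a workgroup with a query results location. A sketch, with a hypothetical results bucket:

resource "aws_s3_bucket" "athena_results" {
  bucket = "analytics-athena-query-results"
  acl    = "private"
}

resource "aws_athena_workgroup" "analytics" {
  name = "analytics"

  configuration {
    result_configuration {
      output_location = "s3://${aws_s3_bucket.athena_results.bucket}/output/"
    }
  }
}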
Terraform
55,129,035
10
When you look at terraform's docs for security group, you can see that there is an option to define a security_groups argument inside the ingress/egress security rules. It seems quite strange to me, but maybe I'm missing something here. I saw this post but there are no real-world use cases mentioned. My question is: in which cases would we want to use this kind of configuration?
You can use this syntax to apply those ingress/egress rules to any infrastructure that belongs to a particular security group. This Terraform code, for example: ingress { from_port = "80" to_port = "80" protocol = "tcp" security_groups = [ "${aws_security_group.elb_sg.id}", ] } will allow HTTP access to any infrastructure that belongs to the elb_sg security group. This is helpful if you've got a large amount of infrastructure that needs to have the ingress/egress access and don't want to name all of the parts individually. Another example: you could create a security group for an Elastic Search cluster, and then state that all elements of an EC2 app server security group should have ingress/egress access to that cluster by using this syntax.
Terraform
55,032,506
10
Is there a possibility to add a SQL user to Azure SQL via terraform? https://www.mssqltips.com/sqlservertip/5242/adding-users-to-azure-sql-databases/ Or is there a better suggestion for how to create a SQL user? Thanks
Yes you can do it from Terraform if that is what you want to happen. I would use a null resource provider in Terraform to execute the commands from the box that is running Terraform. You could use PowerShell, CMD, etc. to connect to the database after it is created and create your user account. Here is an example of how to use the null resource provider. I would imagine it would look something like this; in this example I am using the SqlServer PowerShell module. resource "null_resource" "create-sql-user" { provisioner "local-exec" { command = "Add-SqlLogin -LoginName ${var.loginName} -LoginType ${var.loginType}" interpreter = ["PowerShell", "-Command"] } depends_on = ["azurerm_sql_database.demo"] } You could of course do it with the CLI tools instead, but this approach guarantees it is part of your Terraform deployment.
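If you would rather create a contained database user than a server login, a similar local-exec sketch with sqlcmd could look like the following. The server and database resource names, the variables and the user name are assumptions, not a definitive implementation:

resource "null_resource" "create-sql-db-user" {
  # hypothetical resource and variable names
  provisioner "local-exec" {
    command = "sqlcmd -S ${azurerm_sql_server.demo.fully_qualified_domain_name} -d ${azurerm_sql_database.demo.name} -U ${var.adminLogin} -P ${var.adminPassword} -Q \"CREATE USER [app_user] WITH PASSWORD = '${var.appUserPassword}'; ALTER ROLE db_datareader ADD MEMBER [app_user];\""
  }
  depends_on = ["azurerm_sql_database.demo"]
}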
Terraform
54,326,033
10
When I try to enable a private IP on my Cloud SQL instance (Postgresql 9.6) I get the follwoing error message: Network association failed due to the following error: set Service Networking service account as servicenetworking.serviceAgent role on consumer project I have a VPC which I select in the "Associated Network" dropdown and I chose a managed services network too which I have already setup so in theory it should all work. I cannot find anything under IAM that relates to the error message, either a service account or even the servicenetworking.serviceAgent permission. Update Including the relevant terraform snippets ## VPC Setup resource "google_compute_network" "my_network" { project = "${var.project_id}" name = "vpc-play" auto_create_subnetworks = "false" routing_mode = "REGIONAL" } # There is a bunch of subnets linked to this network which are not included here ## Managed services network resource "google_compute_global_address" "default" { name = "google-managed-services-vpc-${var.project_id}" project = "${var.project_id}" provider = "google-beta" ip_version = "IPV4" prefix_length = 16 address_type = "INTERNAL" purpose = "VPC_PEERING" network = "${google_compute_network.my_network.self_link}" } ## Error occurs on this step ## Error is : google_service_networking_connection.private_vpc_connection: set Service Networking service account as servicenetworking.serviceAgent role on consumer project resource "google_service_networking_connection" "private_vpc_connection" { provider = "google-beta" network = "${google_compute_network.my_network.self_link}" service = "servicenetworking.googleapis.com" reserved_peering_ranges = ["${google_compute_global_address.default.name}"] } ## Database configuration <-- omitted private ip stuff for now as doesn't even get to creation of this, error in previous step resource "google_sql_database_instance" "my_db" { depends_on = ["google_service_networking_connection.private_vpc_connection"] name = "my_db" project = "${var.project_id}" database_version = "POSTGRES_9_6" region = "${var.region}" lifecycle { prevent_destroy = true } settings { tier = "db-f1-micro" backup_configuration { enabled = true start_time = "02:00" } maintenance_window { day = 1 hour = 3 update_track = "stable" } ip_configuration { authorized_networks = [ { name = "office" value = "${var.my_ip}" }, ] } disk_size = 10 availability_type = "ZONAL" location_preference { zone = "${var.zone}" } } }
The Terraform code to create a Cloud SQL instance with Private IP has some errors. The first one is that the ${google_compute_network.private_network.self_link} variable get the entire name of the network, that means that will be something like www.googleapis.com/compute/v1/projects/PROJECT-ID/global/networks/testnw2. This value is not allowed in the field google_compute_global_address.private_ip_address.network, so, you need to change ${google_compute_network.private_network.self_link} to ${google_compute_network.private_network.name}. Another error is that the format in google_sql_database_instance.instance.settings.ip_configuration.private_network should be projects/PROJECT_ID/global/networks/NW_ID. so you need to change the field to projects/[PROJECT_ID]/global/networks/${google_compute_network.private_network.name} in order to work. The third error, and also, the one that you shared in your initial message, you need to set a service account in the Terraform code to have the proper privileges to avoid this error. Please, check the first lines of the shared code. The fourth error is that you need to do this using the google-beta provider, not the google default one As discussed in the comment that I posted, I saw the "An Unknown Error occurred" error before using that Terraform code, this error refers to an error when doing the VPC peering. I understand that is frustrating to troubleshoot this, because it doesn't show any useful information, but if you open a ticket in Google Cloud Platform Support we will be able to check the real error using our internal tools. As promised, this is the code that I'm using to create a private network and attach it to a Google Cloud SQL instance on creation. provider "google-beta" { credentials = "${file("CREDENTIALS.json")}" project = "PROJECT-ID" region = "us-central1" } resource "google_compute_network" "private_network" { name = "testnw" } resource "google_compute_global_address" "private_ip_address" { provider="google-beta" name = "${google_compute_network.private_network.name}" purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 network = "${google_compute_network.private_network.name}" } resource "google_service_networking_connection" "private_vpc_connection" { provider="google-beta" network = "${google_compute_network.private_network.self_link}" service = "servicenetworking.googleapis.com" reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"] } resource "google_sql_database_instance" "instance" { provider="google-beta" depends_on = ["google_service_networking_connection.private_vpc_connection"] name = "privateinstance" region = "us-central1" settings { tier = "db-f1-micro" ip_configuration { ipv4_enabled = "false" private_network = "projects/PROJECT-ID/global/networks/${google_compute_network.private_network.name}" } } }
Terraform
54,278,828
10
I have the following deploy.tf file: provider "aws" { region = "us-east-1" } provider "aws" { alias = "us_west_1" region = "us-west-2" } resource "aws_us_east_1" "my_test" { # provider = "aws.us_east_1" count = 1 ami = "ami-0820..." instance_type = "t2.micro" } resource "aws_us_west_1" "my_test" { provider = "aws.us_west_1" count = 1 ami = "ami-0d74..." instance_type = "t2.micro" } I am trying to use it to deploy 2 servers, one in each region. I keep getting errors like: aws_us_east_1.narc_test: Provider doesn't support resource: aws_us_east_1 I have tried setting alias's for both provider blocks, and referring to the correct region in a number of different ways. I've read up on multi region support, and some answers suggest this can be accomplished with modules, however, this is a simple test, and I'd like to keep it simple. Is this currently possible?
Yes, Terraform can create resources in different regions even inside just one file, and there is no need to use modules for your test scenario. Your error is caused by a typo: if you want to launch an EC2 instance, the resource type you create is aws_instance, not aws_us_west_1 or aws_us_east_1. Terraform does not know those resource types because they simply do not exist. Change both to aws_instance and you should be good to go! Additionally, you should name the two resources differently instead of using my_test for both, to avoid a naming collision.
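For reference, a minimal sketch of what the corrected file could look like, reusing the (truncated) AMI IDs from the question and keeping the 0.11-style syntax; the alias name is just an example:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_west_2"
  region = "us-west-2"
}

resource "aws_instance" "my_test_east" {
  count         = 1
  ami           = "ami-0820..."
  instance_type = "t2.micro"
}

resource "aws_instance" "my_test_west" {
  provider      = "aws.us_west_2"
  count         = 1
  ami           = "ami-0d74..."
  instance_type = "t2.micro"
}
The first instance uses the default (un-aliased) provider, while the second one is pinned to the aliased us-west-2 provider via the provider argument.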
Terraform
53,981,403
10
In my main.tf I have the following: data "template_file" "lambda_script_temp_file" { template = "${file("../../../fn/lambda_script.py")}" } data "template_file" "library_temp_file" { template = "${file("../../../library.py")}" } data "template_file" "init_temp_file" { template = "${file("../../../__init__.py")}" } data "archive_file" "lambda_resources_zip" { type = "zip" output_path = "${path.module}/lambda_resources.zip" source { content = "${data.template_file.lambda_script_temp_file.rendered}" filename = "lambda_script.py" } source { content = "${data.template_file.library_temp_file.rendered}" filename = "library.py" } source { content = "${data.template_file.init_temp_file.rendered}" filename = "__init__.py" } } resource "aws_lambda_function" "MyLambdaFunction" { filename = "${data.archive_file.lambda_resources_zip.output_path}" function_name = "awesome_lambda" role = "${var.my_role_arn}" handler = "lambda_script.lambda_handler" runtime = "python3.6" timeout = "300" } The problem is when I modify one of the source files, say lambda_script.py, upon a new terraform apply, even though the archive file (lambda_resources_zip) gets updated, the lambda function's script does not get updated (the new archive file does not get uploaded). I know that in order to avoid this, I could first run terraform destroy but that is not an option for my use case. *I'm using Terraform v0.11.10
I resolved the issue by adding the following line to the resource definition: source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}" When the source files are modified, the hash value changes, which triggers Terraform to upload the new archive and update the function code.
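Put together with the resource from the question, the full block would look roughly like this (nothing else needs to change):
resource "aws_lambda_function" "MyLambdaFunction" {
  filename         = "${data.archive_file.lambda_resources_zip.output_path}"
  source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
  function_name    = "awesome_lambda"
  role             = "${var.my_role_arn}"
  handler          = "lambda_script.lambda_handler"
  runtime          = "python3.6"
  timeout          = "300"
}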
Terraform
53,477,485
10
I'm trying to set up Terraform for use with GCP and I'm having trouble creating a new project from the gcloud cli: Terraform Lab The command I'm using is gcloud projects create testproject The error I get over and over is: ERROR: (gcloud.projects.create) Project creation failed. The project ID you specified is already in use by another project. Please try an alternative ID. Here's what I did so far: I created an "organization" and a user in Cloud Identity Logged into GCP console in the browser with the user I created The user has "Organization Administrator" role Using the Cloud Shell or gcloud configured on my home computer, I am not able to create a new project. I am able to do things like "gcloud projects list" and "gcloud organizations list" successfully in both cases (cloud shell & local gcloud install) I have tried this with different project ID names that are within the format requirements (eg 6-30 chars, lowercase, etc). I can also confirm that the project IDs do not exist. However, I am able to successfully create projects via GCP web console (https://console.cloud.google.com) (using the same IAM account configured in gcloud cli) I have tried "gcloud init" several times making sure I am using the right IAM account, just in case. Here's the error I get when I try to create a new project from the "gcloud init" command: Enter a Project ID. Note that a Project ID CANNOT be changed later. Project IDs must be 6-30 characters (lowercase ASCII, digits, or hyphens) in length and start with a lowercase letter. vincetest WARNING: Project creation failed: HttpError accessing <https://cloudresourcemanager.googleapis.com/v1/projects?alt=json>: response: <{'status': '409', 'content-length': '268', 'x-xss -protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 28 Sep 2018 18:38:11 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{ "error": { "code": 409, "message": "Requested entity already exists", "status": "ALREADY_EXISTS", "details": [ { "@type": "type.googleapis.com/google.rpc.ResourceInfo", "resourceName": "projects/vincetest" } ] } } > Creating the project from the web page console worked fine.
Project IDs are globally unique across all projects. That means that if any user ever had a project with that ID, you cannot use it. testproject is pretty common, so it's not surprising it's already taken. Try a more unique ID. One common technique is to use your organization's name as a prefix.
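For example (the prefix and name below are made up - pick anything that is likely to be globally unique):
# optionally attach it to your organization with --organization
gcloud projects create examplecorp-vincetest-01 --name="Vince Test" --organization=ORGANIZATION_ID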
Terraform
52,561,383
10
So I have an application that runs terraform apply in a directory, and can also run terraform destroy. While testing the application, I accidentally interrupted the process while running apply. Now it seems to be stuck with a partially created instance: it recognizes the name of the instance I was creating/destroying, and when I try to apply it says that an instance of that name already exists, but destroy says there is nothing to destroy. So I can't do either. Is there any way to unsnarl this?
I'm afraid that the only option is to:
1. Execute terraform state rm RESOURCE, for example: terraform state rm aws_ebs_volume.volume.
2. Manually remove the partially created resource from your cloud provider.
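A rough sequence, assuming the half-created resource is the aws_instance from your configuration (adjust the address to whatever terraform state list reports):
terraform state list                     # find the exact resource address
terraform state rm aws_instance.example  # drop it from the state file
# then delete the orphaned instance manually (AWS console or CLI)
# and run terraform apply again from a clean slate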
Terraform
50,127,327
10
I'm using Terraform to provision some resources on AWS. Running the "plan" step of Terraform fails with the following vague error (for example): Error: Error loading state: AccessDenied: Access Denied status code: 403, request id: ABCDEF12345678, host id: SOMELONGBASE64LOOKINGSTRING=== Given a request id and a host id is it possible to see more in depth what went wrong? Setting TF_LOG=DEBUG (or some other level) seems to help, but I was curious if there is a CLI command to get more information from CloudTrail or something. Thanks!
Terraform won't have any privileged information about the access denial, but AWS does. Because you mentioned S3 was the problem I based my answer on finding the S3 request id. You have a couple options to find the request given a request id in AWS. Create a trail in AWS CloudTrail. CloudTrail will log the API calls (including request id) at the bucket level by default. If the request was for a specific object, you need to enable S3 data events when you create the trail. Turn on S3 server access logs. You can manually search for the request id in the log files in S3 or use Athena. For CloudTrail, you can also configure CloudWatch Logs and search within the Log Group that gets created via the search bar. CloudTrail records API calls from all services, not just S3. It could be a useful tool for diagnosing issues besides those related to S3. Note that there can be an up to 15-minute delay for logs to appear in CloudTrail.
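As a rough illustration, once server access logging is enabled on the state bucket you can pull the logs down and search them for the request id from the error (the bucket and prefix names here are placeholders):
aws s3 sync s3://my-access-logs-bucket/state-bucket-logs/ ./logs/
grep -r "ABCDEF12345678" ./logs/
The matching log line includes the requester, the operation, the key and the error code, which usually points straight at the missing IAM permission.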
Terraform
49,517,645
10
I have the following Terraform resource for configuring an Azure app service: resource "azurerm_app_service" "app_service" { name = "Test-App-Service-3479112" location = "${azurerm_resource_group.resource_group.location}" resource_group_name = "${azurerm_resource_group.resource_group.name}" app_service_plan_id = "${azurerm_app_service_plan.app_service_plan.id}" site_config { dotnet_framework_version = "v4.0" remote_debugging_version = "VS2012" } app_settings { "ASPNETCORE_ENVIRONMENT" = "test" "WEBSITE_NODE_DEFAULT_VERSION" = "4.4.7" } } I am attempting to add a CORS origin value to be utilized in my resource. Is there way to add this in Terraform, or if there is not, how could I go about configuring this in my Terraform file (possibly with the Azure SDK)?
All, this is now available here: https://www.terraform.io/docs/providers/azurerm/r/app_service.html#cors And here is my working example:
resource "azurerm_app_service" "my-app" {
  name                = "${var.api_name}"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group}"
  app_service_plan_id = "${azurerm_app_service_plan.api_asp.id}"

  site_config {
    cors {
      allowed_origins = ["https://${var.ui_name}${var.dns_suffix}"]
    }
  }

  identity = {
    type = "SystemAssigned"
  }
}
Terraform
47,718,205
10
I need to create several new EC2, RDS, etc.using Terraform, in an existing AWS VPC. and the existing subnet, security group, iam, etc. they are not created by Terraform. it is created manually. I heard the right way is to use terraform import (it is correct?). To test how terraform import works, I first tested how to import an existing EC2 in stead of an existing VPC, Because I do not want to accidentally change anything In an exist VPC. before running terraform import aws_instance.example i-XXXXXXXXXX It looks like I need to create a very detailed EC2 resource in my ec2.tf file, such as: resource "aws_instance" "example" { iam_instance_profile = XXXXXXXXXX instance_type = XXXXXXX ami = XXXXXXX tags { Name = XXXXX Department = XXXX .... } } if I just write: resource "aws_instance" "example" { } it showed I missed ami and instance type, if I write: resource "aws_instance" "example" { instance_type = XXXXXXX ami = XXXXXXX } then running "terraform apply" will change tags of my existing EC2 to nothing, change iam profile to nothing. I have not tried how to import existing vpc, subnet, security group yet. I am afraid if I try, I have to put a lot of information of the existing vpc, subnet, security group, etc. my system is complex. is it expected that I need to indicate so many details in my terraform code? isn't there a way so that I just simply indicate the id of existing stuff like vpc's id, and my new stuff will be created based on the existing id? sth. like: data "aws_subnet" "public" { id = XXXXXXX } resource "aws_instance" "example" { instance_type = "t2.micro" ami = "${var.master_ami}" ...... subnet_id = "${aws_subnet.public.id}" }
You can leave the body of the resource blank during the import, but you'll need to go back in and fill in the specific details once it's been imported. You can look at the imported resource with the terraform show command, and fill in all of the resource details, so when you try to run terraform plan it should show no changes needed. But, to answer your question, yes you can use your existing resources without having to import them. Just create a variables file that holds your existing resource ids that you need for your new resources, and then you can then reference the ones you need. So you could have a .vars file with something like: variable "ami_id" { description = "AMI ID" default = "ami-xxxxxxxx" } variable "subnet_prv1" { description = "Private Subnet 1" default = "subnet-xxxxxx" } Then in your main.tf to create the resource: resource "aws_instance" "example" { instance_type = "t2.micro" ami = "${var.ami_id}" ...... subnet_id = "${var.subnet_prv1}" } Just one way to go about it. There are others, which you can read up on in the terraform docs for AWS variables
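Another way to go about it, instead of hard-coding the ids in a variables file, is to look the existing resources up with data sources; a sketch (the ids are placeholders):
data "aws_vpc" "existing" {
  id = "vpc-xxxxxxxx"
}

data "aws_subnet" "public" {
  id = "subnet-xxxxxxxx"
}

resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "${var.master_ami}"
  subnet_id     = "${data.aws_subnet.public.id}"
}
Anywhere you need the VPC itself you can then reference data.aws_vpc.existing.id, without Terraform ever managing (or risking changes to) those existing resources.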
Terraform
47,665,428
10
I've followed an excellent guide (Serverless Stack) that creates a typical CRUD serverless infrastructure with a react frontend. It's using the Serverless Framework for AWS. What I don't like is that to bootstrap the setup, there is a lot of manual clicking in GUIs (mostly Amazon's console interface) involved. I.e. the setup is not version controlled and is not easily reproducible. It would not be easy to extend it with a CI/CD process etc. In this example the following resources need to be setup manually: AWS Cognito User Pool AWS Cognite User Pool Application AWS Cognito Federated Identity Pool AWS DynamoDB instance AWS S3 buckets (x3) (this also hosts the frontend) AWS CloudFront distribution AWS Route53 zone file The only resources that are being built from code are the serverless functions (lambdas) themselves, as well as API Gateway instances. This is what the serverless framework does using its serverless.yml file. But all of the above resources are not automatically created. They sometimes need to be referenced to using their ARNs, but they are not being created by the serverless.yml configuration. Running such a system in production (which relies heavily on the manual creation of services through GUIs) would seem risky. I was thinking that a solution for this would be to use Terraform or Cloudformation. But the Serverless Framework itself is using Cloudformation for the setup of Lambdas already, though not for other resources. So how would one eliminate this gap? In other words, how would one rebuilt the entire setup described at Serverless Stack in code? It would seem strange, and perhaps not possible, to have CloudFormation setup Serverless, which then has its own Cloudformation templates to setup lambdas. It might make more sense to extend the Serverless Framework to not just define the functions and API Gateways that need to be created on a serverless deploy, but also other resources like a DynamoDB or a Cognito User Pool. Are there any examples or attempts of people doing this already?
I agree that documentation on this would make an excellent pull request here. You're correct that serverless is using CloudFormation under the hood. The framework does expose the underlying CloudFormation machinery to you, by way of the resources key of your serverless.yml. I think the intent of the framework is that you would put the rest of these resources (Cognito stuff, S3, etc.) in the resources: section of your serverless.yml file, using regular old CloudFormation syntax. For example, this file will create a DynamoDB table and S3 bucket, in addition to the serverless function: service: aws-nodejs # NOTE: update this with your service name provider: name: aws runtime: nodejs6.10 functions: hello: handler: handler.deletecustomer events: - http: path: /deletecustomer method: post cors: true resources: Resources: tablenotes: Type: AWS::DynamoDB::Table Properties: AttributeDefinitions: - AttributeName: noteId AttributeType: S - AttributeName: userId AttributeType: S KeySchema: - AttributeName: userId KeyType: HASH - AttributeName: noteId KeyType: RANGE ProvisionedThroughput: ReadCapacityUnits: '5' WriteCapacityUnits: '5' mysamplebucket: Type: AWS::S3::Bucket Properties: WebsiteConfiguration: IndexDocument: index.html ErrorDocument: error.html AccessControl: Private VersioningConfiguration: Status: Suspended If you're new to CloudFormation, I'd also recommend taking a peek at CloudFormer.
Terraform
46,861,678
10
I have the need to create and manage multiple customer environments in AWS and I'm wanting to leverage Terraform to deploy all of the necessary resources. Each customer environment is basically the same with the exception of the URL they use to access one of the servers. I have put together a Terraform configuration that deploys all of the resources for a given customer. BUT... How do I take that same configuration and apply it to the next customer without copying the entire Terraform directory and duplicating that for every customer. (I could have 100's of these) I've heard workspaces and modules or both. Anyone seen a best-practice article out there on this? Thx
You should modularize your code; then you can easily reuse that module (for example from a git repository) with different variables for each customer. In this case, for each customer you end up with only a small file that configures the shared module. Have one directory for each customer, with a Terraform file that loads the module(s) and configures them. If you run terraform apply in that directory, then the state will also live in that directory. To make sure your team can also deploy and make changes, it is suggested to use a remote backend such as S3, so the state is written there instead. Note that you have to configure a backend for each customer in their respective directory, and make sure the backends for different customers don't clash (for example, use a different key/path in S3 for each). Nicki Watt gave a good presentation on this; the video and slides are available online.
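A sketch of what a per-customer directory could contain (the repository URL, bucket and variable names are made up):
# customers/acme/main.tf
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "customers/acme/terraform.tfstate"
    region = "us-east-1"
  }
}

module "customer_env" {
  source       = "git::https://github.com/example/terraform-customer-env.git?ref=v1.0.0"
  customer_url = "acme.example.com"
}
Each customer gets its own key in the state bucket, and the only things that differ between directories are the handful of module inputs.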
Terraform
46,266,357
10
I am new to Terraform and I ran into some issue when trying to use environment variables with .tf file, I tried to use terraform.tfvars / variables.tf. ./terraform apply -var-file="terraform.tfvars" Failed to load root config module: Error parsing variables.tf: At 54:17: illegal char What am I missing here? Terraform Version: Terraform v0.9.2 main.tf: provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" allowed_account_ids = ["${var.aws_account_id}"] } resource "aws_instance" "db" { ami = "ami-49c9295" instance_type = "t2.micro" tags { Name = "test" } connection { user = "ubuntu" } security_groups = ["sg-ccc943b0"] availability_zone = "${var.availability_zone}" subnet_id = "${var.subnet_id}" } terraform.tfvars: aws_profile = "default" aws_access_key = "xxxxxx" aws_secret_key = "xxxxxx" aws_account_id = "xxxxxx" key_name = "keyname" key_path = "/home/user/.ssh/user.pem" aws_region = "us-east-1" subnet_id = "subnet-51997e7a" vpc_security_group_ids = "mysql" instance_type = "t2.xlarge" availability_zone = "us-east-1a" variables.tf: variable "key_name" { description = "Name of the SSH keypair to use in AWS." default = "keypairname" } variable "key_path" { description = "Path to the private portion of the SSH key specified." default = "/home/user/.ssh/mypem.pem" } variable "aws_region" { description = "AWS region to launch servers." default = "us-east-1" } variable "aws_access_key" { decscription = "AWS Access Key" default = "xxxxxx" } variable "aws_secret_key" { description = "AWS Secret Key" default = "xxxxxx" } variable "aws_account_id" { description = "AWS Account ID" default = "xxxxxx" } variable "subnet_id" { description = "Subnet ID to use in VPC" default = "subnet-51997e7a" } variable "vpc_security_group_ids" { description = "vpc_security_group_ids" default = "sec" } variable "instance_type" { description = "Instance type" default = "t2.xlarge" } variable "instance_name" { description = "Instance Name" default = "test" } variable "availability_zone" { description = "availability_zone" default = "us-east-1a" } variable "aws_amis" { default = { "us-east-1": "ami-49c9295f", "eu-west-1": "ami-49c9295f", "us-west-1": "ami-49c9295f", "us-west-2": "ami-49c9295f" } } Update After removing variable "aws_amis" section from variables.tf, I ran into another issue: Failed to load root config module: Error loading variables.tf: 1 error(s) occurred: * variable[aws_access_key]: invalid key: decscription
The aws_amis variable being used as a lookup map looks incorrectly formatted to me. Instead it should probably be of the format:
variable "aws_amis" {
  default = {
    us-east-1 = "ami-49c9295f"
    eu-west-1 = "ami-49c9295f"
    us-west-1 = "ami-49c9295f"
    us-west-2 = "ami-49c9295f"
  }
}
Regarding the error in your update: that one is caused by the misspelled decscription key in the aws_access_key variable block - it should be description. As an aside, Terraform will look for a terraform.tfvars file by default, so you can drop the -var-file="terraform.tfvars". You'll need to pass the -var-file option if you want to use a differently named file (such as prod.tfvars), but for this you can omit it.
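To actually consume the map in the instance resource you can then do something like this (using lookup, matching the 0.9-era syntax in the question):
resource "aws_instance" "db" {
  ami           = "${lookup(var.aws_amis, var.aws_region)}"
  instance_type = "${var.instance_type}"
  # ...rest of the arguments as in the original main.tf
}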
Terraform
43,392,090
10
Scenario: I am running an AWS autoscaling group (ASG), and I have changed the associated launch configuration during terraform apply. The ASG stays unaffected. How do I recreate now the instances in that ASG (i.e., replace them one-by-one to do a rolling replace), which then is based on the changed/new launch configuration? What I've tried: With terraform taint one can mark resources to be destroyed and recreated during the next apply. However, I don't want to taint the autoscaling group (which is a resource, and single instances are not in this case), but single instances in it. Is there a way to taint single instances or am I thinking in the wrong direction?
The normal thing to do here is to use Terraform's lifecycle management to force it to create new resources before destroying the old ones. In this case you might set your launch configuration and autoscaling group up something like this: resource "aws_launch_configuration" "as_conf" { name_prefix = "terraform-lc-example-" image_id = "${var.ami_id}" instance_type = "t1.micro" lifecycle { create_before_destroy = true } } resource "aws_autoscaling_group" "bar" { name = "terraform-asg-example-${aws_launch_configuration.as_conf.name}" launch_configuration = "${aws_launch_configuration.as_conf.name}" lifecycle { create_before_destroy = true } } Then if you change the ami_id variable to use another AMI Terraform will realise it has to change the launch configuration and so create a new one before destroying the old one. The new name generated by the new LC is then interpolated in the ASG name forcing a new ASG to be rebuilt. As you are using create_before_destroy Terraform will create the new LC and ASG and wait for the new ASG to reach the desired capacity (which can be configured with health checks) before destroying the old ASG and then the old LC. This will flip all the instances in the ASG at once. So if you had a minimum capacity of 2 in the ASG then this will create 2 more instances and as soon as both of those pass health checks then the 2 older instances will be destroyed. In the event you are using an ELB with the ASG then it will join the 2 new instances to the ELB so, temporarily, you will have all 4 instances in service before then destroying the older 2.
Terraform
39,345,609
10
I want to create 2 VPC security groups. One for the Bastion host of the VPC and one for the Private subnet. # BASTION # resource "aws_security_group" "VPC-BastionSG" { name = "VPC-BastionSG" description = "The sec group for the Bastion instance" vpc_id = "aws_vpc.VPC.id" ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["my.super.ip/32"] } egress { # Access to the Private subnet from the bastion host[ssh] from_port = 22 to_port = 22 protocol = "tcp" security_groups = ["${aws_security_group.VPC-PrivateSG.id}"] } egress { # Access to the Private subnet from the bastion host[jenkins] from_port = 8686 to_port = 8686 protocol = "tcp" security_groups = ["${aws_security_group.VPC-PrivateSG.id}"] } tags = { Name = "VPC-BastionSG" } } # PRIVATE # resource "aws_security_group" "VPC-PrivateSG" { name = "VPC-PrivateSG" description = "The sec group for the private subnet" vpc_id = "aws_vpc.VPC.id" ingress { from_port = 22 to_port = 22 protocol = "tcp" security_groups = ["${aws_security_group.VPC-BastionSG.id}"] } ingress { from_port = 80 to_port = 80 protocol = "tcp" security_groups = ["${aws_security_group.VPC-PublicSG.id}"] } ingress { from_port = 443 to_port = 443 protocol = "tcp" security_groups = ["${aws_security_group.VPC-PublicSG.id}"] } ingress { from_port = 3306 to_port = 3306 protocol = "tcp" security_groups = ["${aws_security_group.VPC-PublicSG.id}"] } ingress { from_port = 8686 to_port = 8686 protocol = "tcp" security_groups = ["${aws_security_group.VPC-BastionSG.id}"] } ingress { # ALL TRAFFIC from the same subnet from_port = 0 to_port = 0 protocol = "-1" self = true } egress { # ALL TRAFFIC to outside world from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "VPC-PrivateSG" } } When I terraform plan it, this error is returned: **`Error configuring: 1 error(s) occurred: * Cycle: aws_security_group.VPC-BastionSG, aws_security_group.VPC-PrivateSG`** If I comment out the ingress rules for the BastionSG from the PrivateSG the plan executes fine. Also, if I comment out the egress rules for the PrivateSG from the BastionSG it also executes fine. The AWS Scenario 2 for building a VPC with Public/Private subnets and Bastion host describes the architecture I am trying to setup. I have the exact same settings configured via the AWS console and it plays fine. Why isn't Terraform accepting it? Is there another way to connect the Bastion security group with the Private security group? EDIT As I understand there is a circular reference between the two sec groups that somehow needs to break even though in AWS it is valid. So, I thought of allowing all outbound traffic (0.0.0.0/0) from the Bastion sec group and not specifying it to individual security groups. Would it have a bad security impact?
Terraform attempts to build a dependency chain for all of the resources defined in the folder that it is working on. Doing this enables it to work out if it needs to build things in a specific order and is pretty key to how it all works. Your example is going to fail because you have a cyclic dependency (as Terraform helpfully points out) where each security group is dependent on the other one being created already. Sometimes these can be tricky to solve and may mean you need to rethink what you're trying to do (as you mention, one option would be to simply allow all egress traffic out from the bastion host and only restrict the ingress traffic on the private instances) but in this case you have the option of using the aws_security_group_rule resource in combination with the aws_security_group resource. This means we can define empty security groups with no rules in them at first which we can then use as targets for the security group rules we create for the groups. A quick example might look something like this: resource "aws_security_group" "bastion" { name = "bastion" description = "Bastion security group" } resource "aws_security_group_rule" "bastion-to-private-ssh-egress" { type = "egress" from_port = 22 to_port = 22 protocol = "tcp" security_group_id = "${aws_security_group.bastion.id}" source_security_group_id = "${aws_security_group.private.id}" } resource "aws_security_group" "private" { name = "private" description = "Private security group" } resource "aws_security_group_rule" "private-from-bastion-ssh-ingress" { type = "ingress" from_port = 22 to_port = 22 protocol = "tcp" security_group_id = "${aws_security_group.private.id}" source_security_group_id = "${aws_security_group.bastion.id}" } Now, Terraform can see that the dependency chain says that both security groups must be created before either of those security group rules as both of them are dependent on the groups already having been created.
Terraform
38,246,326
10
I have an drawing app for Android and I am currently trying to add a real eraser to it. Before, I had just used white paint for an eraser, but that won't do anymore since now I allow background colors and images. I do this by having an image view underneath my transparent canvas. The problem that I am facing is that whenever I enable my eraser, it draws a solid black trail while I have my finger down, but once I release it goes to transparent. See the screen shots below: This is how it looks while my finger is on the screen - a solid black trail This is what it looks like once I remove my finger from the screen So, it seems like I am getting close, but I can't find the right combination of settings to avoid the black trail while my finger is touching while erasing. Here are some relevant code snippets: onDraw @Override protected void onDraw(Canvas canvas) { canvas.drawColor(Color.TRANSPARENT); canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint); canvas.drawPath(mPath, mPaint); canvas.drawPath(mPreviewPath, mPaint); } onTouchEvent @Override public boolean onTouchEvent(MotionEvent event) { float currentX = event.getX(); float currentY = event.getY(); switch (event.getAction()) { case MotionEvent.ACTION_DOWN: touchStart(currentX, currentY); invalidate(); break; case MotionEvent.ACTION_MOVE: touchMove(currentX, currentY); invalidate(); break; case MotionEvent.ACTION_UP: touchUp(currentX, currentY); invalidate(); break; } return true; } Current attempt at eraser settings public void startEraser() { mPaint.setAlpha(0); mColor = Color.TRANSPARENT; mPaint.setColor(Color.TRANSPARENT); mPaint.setStrokeWidth(mBrushSize); mPaint.setStyle(Paint.Style.STROKE); mPaint.setMaskFilter(null); mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR)); mPaint.setAntiAlias(true); } There are several other posts about erasers, but most of them just say to use PorterDuff.Mode.CLEAR, setMakFilter(null) and that that should work. In my case, it doesn't. No matter what I try, I get the black trail first and then the desired result only after I release. I can provide more code if necessary.
I could suggest you to read the official sample of FingerPaint.java It exactly matches what you are trying to achieve here. To not show the trail when you erase content, take a look at the onDraw() method and the eraserMode variable: @Override protected void onDraw(Canvas canvas) { canvas.drawColor(0xFFAAAAAA); canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint); if (!eraserMode) { canvas.drawPath(mPath, mPaint); } } boolean eraserMode = false; @Override public boolean onOptionsItemSelected(MenuItem item) { eraserMode = false; mPaint.setXfermode(null); mPaint.setAlpha(0xFF); switch (item.getItemId()) { /*...*/ case ERASE_MENU_ID: // Add this line eraserMode = true; mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR)); return true; /*...*/ } return super.onOptionsItemSelected(item); }
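Translated to the code in the question, a minimal sketch could look like this. It assumes your touchUp() already commits mPath into mBitmap through the bitmap's own Canvas, and that you reset the flag and the xfermode when switching back to the normal brush:
private boolean mEraserMode = false;

public void startEraser() {
    mEraserMode = true;
    mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
    mPaint.setStrokeWidth(mBrushSize);
    mPaint.setStyle(Paint.Style.STROKE);
    mPaint.setMaskFilter(null);
    mPaint.setAntiAlias(true);
}

@Override
protected void onDraw(Canvas canvas) {
    canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint);
    // While erasing, don't preview the live path on the view's canvas:
    // a CLEAR path punches a transparent hole in the view layer, which
    // renders as the black trail you are seeing until the touch is released.
    if (!mEraserMode) {
        canvas.drawPath(mPath, mPaint);
        canvas.drawPath(mPreviewPath, mPaint);
    }
}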
Eraser
25,094,845
19
My pipelines and schedulers were running smoothly without any problems. After I went out to lunch, I changed the number of epochs a neural network would run, saved the .yaml file again and left it in the bucket named "budgetff". Afterwards, everything stopped working. I have no clue where these errors are coming from; the code within the components doesn't even seem to start. I've made several different components without any success, because they all fail at this step. If it helps, I installed kfp with --pre and did the imports like this:
import kfp.v2.dsl, kfp.v2.compiler
from kfp.v2.dsl import Artifact, Dataset, Input, Metrics, Model, Output
kfp-2.0.0-beta.15 is the kfp version running on Vertex AI, and I'm using Kubeflow with @kfp.v2.dsl.component decorators. I was trying to just run my pipelines, forcing a run on the scheduler. When that didn't work, I tried running from the notebook as well.
The cause is that the latest version of requests does not support urllib3 2.0.0. This is fixed in kfp-2.0.0b16 (see PR with the change), so you can either upgrade to that, or create a new image that downgrades urllib. Maybe this is triggered by versions of requests-toolbelt and/or urllib3 that were both released in the past few days (May 1 and May 4, 2023, resp). I have fixed this by building a new container with the following Dockerfile (I use Python 3.9 but use whatever you want): FROM python:3.9 RUN pip install urllib3==1.26.15 requests-toolbelt==0.10.1 I recommend to build the image using Cloud Build and specify it as the base image for the component.
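For illustration, the image can be built and pushed with Cloud Build and then used as the component base image (the project and tag names here are placeholders), or you can simply upgrade the SDK once the fixed release is available:
# build and push the patched image from the directory containing the Dockerfile
gcloud builds submit --tag gcr.io/MY_PROJECT/kfp-base:urllib3-fix .

# or upgrade the SDK that compiles/runs your pipelines
pip install "kfp==2.0.0b16"
The pushed image can then be referenced through the base_image argument of your component decorator.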
Kubeflow
76,175,487
37
While running a Kubeflow pipeline whose code uses TensorFlow 2.0, the following warning is displayed at the end of each epoch:
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Also, after some epochs it stops showing logs and fails with this error:
This step is in Failed state with this message: The node was low on resource: memory. Container main was using 100213872Ki, which exceeds its request of 0. Container wait was using 25056Ki, which exceeds its request of 0.
In my case, the batch_size and steps_per_epoch did not match. For example:
his = Test_model.fit_generator(datagen.flow(trainrancrop_images, trainrancrop_labels, batch_size=batchsize),
                               steps_per_epoch=len(trainrancrop_images)/batchsize,
                               validation_data=(test_images, test_labels),
                               epochs=1,
                               callbacks=[callback])
The batch_size in datagen.flow must correspond to the steps_per_epoch in Test_model.fit_generator (in my case I used the wrong value for steps_per_epoch). This is one possible cause of the error: the problem arises when the batch size and the number of steps (iterations) per epoch don't correspond. A float result can also be a problem when you compute the steps by dividing, so check your code for this. Good luck :)
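A small sketch of keeping the two values consistent (variable names taken from the snippet above):
import math

batchsize = 32
steps_per_epoch = math.ceil(len(trainrancrop_images) / batchsize)  # integer, no float leftovers

his = Test_model.fit_generator(
    datagen.flow(trainrancrop_images, trainrancrop_labels, batch_size=batchsize),
    steps_per_epoch=steps_per_epoch,
    validation_data=(test_images, test_labels),
    epochs=1,
    callbacks=[callback])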
Kubeflow
60,000,573
18
I'm exploring Kubeflow as an option to deploy and connect various components of a typical ML pipeline. I'm using docker containers as Kubeflow components and so far I've been unable to successfully use ContainerOp.file_outputs object to pass results between components. Based on my understanding of the feature, creating and saving to a file that's declared as one of the file_outputs of a component should cause it to persist and be accessible for reading by the following component. This is how I attempted to declare this in my pipeline python code: import kfp.dsl as dsl import kfp.gcp as gcp @dsl.pipeline(name='kubeflow demo') def pipeline(project_id='kubeflow-demo-254012'): data_collector = dsl.ContainerOp( name='data collector', image='eu.gcr.io/kubeflow-demo-254012/data-collector', arguments=[ "--project_id", project_id ], file_outputs={ "output": '/output.txt' } ) data_preprocessor = dsl.ContainerOp( name='data preprocessor', image='eu.gcr.io/kubeflow-demo-254012/data-preprocessor', arguments=[ "--project_id", project_id ] ) data_preprocessor.after(data_collector) #TODO: add other components if __name__ == '__main__': import kfp.compiler as compiler compiler.Compiler().compile(pipeline, __file__ + '.tar.gz') In the python code for the data-collector.py component I fetch the dataset then write it to output.txt. I'm able to read from the file within the same component but not inside data-preprocessor.py where I get a FileNotFoundError. Is the use of file_outputs invalid for container-based Kubeflow components or am I incorrectly using it in my code? If it's not an option in my case, is it possible to programmatically create Kubernetes volumes inside the pipeline declaration python code and use them instead of file_outputs?
Files created in one Kubeflow pipeline component are local to the container. To reference it in the subsequent steps, you would need to pass it as: data_preprocessor = dsl.ContainerOp( name='data preprocessor', image='eu.gcr.io/kubeflow-demo-254012/data-preprocessor', arguments=["--fetched_dataset", data_collector.outputs['output'], "--project_id", project_id, ] Note: data_collector.outputs['output'] will contain the actual string contents of the file /output.txt (not a path to the file). If you want for it to contain the path of the file, you'll need to write the dataset to shared storage (like s3, or a mounted PVC volume) and write the path/link to the shared storage to /output.txt. data_preprocessor can then read the dataset based on the path.
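A minimal sketch of both sides, assuming the dataset itself is written to a GCS bucket (the path is a placeholder):
# data-collector.py: write the *location* of the dataset into the declared output file
dataset_uri = "gs://my-bucket/datasets/raw.csv"   # upload the data here first
with open("/output.txt", "w") as f:
    f.write(dataset_uri)

# data-preprocessor.py: the contents of output.txt arrive as a plain argument
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--project_id")
parser.add_argument("--fetched_dataset")
args = parser.parse_args()
dataset_uri = args.fetched_dataset   # "gs://my-bucket/datasets/raw.csv"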
Kubeflow
58,150,368
11
I am trying to find when it makes sense to create your own Kubeflow MLOps platform: If you are Tensorflow only shop, do you still need Kubeflow? Why not TFX only? Orchestration can be done with Airflow. Why use Kubeflow if all you are using scikit-learn as it does not support GPU, distributed training anyways? Orchestration can be done with Airflow. If you are convinced to use Kubeflow, cloud providers (Azure and GCP) are delivering ML pipeline concept (Google is using Kubeflow under the hood) as managed services. When it makes sense to deploy your own Kubeflow environment then? Even if you have a requirement to deploy on-prem, you have the option to use the cloud resources (nodes and data on cloud) to train your models, and only deploy the model to on-prem. Thus, using Azure or GCP AI Platform as managed service makes the most sense to deliver ML pipelines?
Building an MLOps platform is an action companies take in order to accelerate and manage the workflow of their data scientists in production. This workflow is reflected in ML pipelines, and includes the 3 main tasks of feature engineering, training and serving. Feature engineering and model training are tasks which require a pipeline orchestrator, as they have dependencies of subsequent tasks and that makes the whole pipeline prone to errors. Software building pipelines are different from data pipelines, which are in turn different from ML pipelines. A software CI/CD flow compiles the code to deploy-able artifacts and accelerates the software delivery process. So, code in, artifact out. It's being achieved by the invocation of compilation tasks, execution of tests and deployment of the artifact. Dominant orchestrators for such pipelines are Jenkins, Gitlab-CI, etc. A data processing flow gets raw data and performs transformation to create features, aggregations, counts, etc. So data in, data out. This is achieved by the invokation of remote distributed tasks, which perform data transformations by storing intermediate artifacts in data repositories. Tools for such pipelines are Airflow, Luigi and some hadoop ecosystem solutions. In the machine learning flow, the ML engineer writes code to train models, uses the data to evaluate them and then observes how they perform in production in order to improve them. So code and data in, model out. Hence the implementation of such a workflow requires a combination of the orchestration technologies we've discussed above. TFX present this pipeline and proposes the use of components that perform these subsequent tasks. It defines a modern, complete ML pipeline, from building the features, to running the training, evaluating the results, deploying and serving the model in production Kubernetes is the most advanced system for orchestrating containers, the defacto tool to run workloads in production, the cloud-agnostic solution to save you from a cloud vendor lock-in and hence optimize your costs. Kubeflow is positioned as the way to do ML in Kubernetes, by implementing TFX. Eventually it handling the code and data in, model out. It provides a coding environment by implementing jupyter notebooks in the form of kubernetes resources, called notebooks. All cloud providers are onboard with the project and implement their data loading mechanisms across KF's components. The orchestration is implemented via KF pipelines and the serving of the model via KF serving. The metadata across its components are specified in the specs of the kubernetes resources throughout the platform. In Kubeflow, the TFX components exist in the form of reusable tasks, implemented as containers. The management of the lifecycle of these components is achieved through Argo, the orchestrator of KF pipelines. Argo implements these workflows as kubernetes CRDs. In a workflow spec we define the dag tasks, the TFX components as containers, the metadata which will be written in the metadata store, etc. The execution of these workflows is happening nicely using standard kubernetes resources like pods, as well as custom resource definitions like experiments. That makes the implementation of the pipeline and the components language-agnostic, unline Airflow which implements the tasks in python only. These tasks and their lifecycle is then managed natively by kubernetes, without the need to use duct-tape solutions like Airflow's kubernetes-operator. 
Since everything is implemented as kubernetes resources, everything is a yaml and so the most Git friendly configuration you can find. Good luck trying to enforce version control in Airflow's dag directory. The deployment and management of the model in production is done via KF serving using the CRD of inferenceservice. It utilizes Istio's secure access to the models via its virtualservices, serverless resources using Knative Serving's scale-from-zero pods, revisions for versioning, prometheus metrics for observability, logs in ELK for debugging and more. Running models in production could not be more SRE friendly than that. On the topic of splitting training/serving between cloud and on-premise, the use of kubernetes is even more important, as it abstracts the custom infrastructure implementation of each provider, and so provides a unified environment to the developer/ml engineer.
Kubeflow
60,787,646
11
That simple. Moving my layout into a fluid territory, working on scalable images. Using the img tag and setting max-width to 100% works perfectly, but i'd rather use a div with the image set as its background. The issue I'm running into is that the image doesn't scale to the size of the div it's in the background of. Any way to add markup to the background code to set it to 100% width of it's container? #one { background: url('../img/blahblah.jpg') no-repeat; max-width: 100%; }
As thirtydot said, you can use the CSS3 background-size syntax. For example:
-o-background-size: 35% auto;
-webkit-background-size: 35% auto;
-moz-background-size: 35% auto;
background-size: 35% auto;
However, as also stated by thirtydot, this does not work in IE6, 7 and 8. See the following link for more information about background-size: http://www.w3.org/TR/css3-background/#the-background-size
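Applied to the selector in the question, scaling the background to the full width of the div would look something like this (use cover instead of 100% auto if you also want the height filled):
#one {
    background: url('../img/blahblah.jpg') no-repeat;
    max-width: 100%;
    -o-background-size: 100% auto;
    -webkit-background-size: 100% auto;
    -moz-background-size: 100% auto;
    background-size: 100% auto;
}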
Fluid
6,300,749
59
I'm trying to create an optional foreign key using Entity Framework 7 and the Fluent API. In EF 6.x we had the option to add this using .WithOptional or .HasOptional, but I can't find any equivalent functionality in EF 7. Any ideas? Br, Inx
Found the answer: you can pass false as a parameter to .IsRequired(). For instance:
EntityShortcut<ContentEntity>()
    .HasMany(e => e.Children)
    .WithOne(e => e.Parent)
    .IsRequired();
That would be a required relation, while
EntityShortcut<ContentEntity>()
    .HasMany(e => e.Children)
    .WithOne(e => e.Parent)
    .IsRequired(false);
would NOT be a required relation. FYI:
private static EntityTypeBuilder<T> EntityShortcut<T>() where T : class
{
    return _modelBuilder.Entity<T>();
}
Fluid
34,578,981
20
I've noticed in my own work that 3 fluid columns fill out their parent element much better when their widths are set to 33.333% as opposed to just 33%. I've also noticed when researching various CSS frameworks (i.e. bootstrap.css) that they have 14 decimal places specified on their column widths! That seems like it would be either excessive or clever... but I don't know which. So what is the value/benefit of having so many decimal places? From what I have gathered, there is an open debate on whether you should avoid decimal places or take advantage of them and I want to know if this should be of interest to me or to just not worry about it.
It is required in some cases. I'm working on a site using Twitter Bootstrap which has 6 divs stretching the full width of the site. If I make the width of each one 16.66%, a noticeable gap is left at the end; if I make the width 16.67%, one of the divs is pushed onto the line below. This meant that to get the divs to fill the full space, I had to set the width to 16.6667%, which works perfectly in Chrome and Firefox, but it seems that Safari and IE round the decimal down to 2 places, so I'm left with a gap when using them. So sometimes it might seem excessive, but other times it is actually needed. Dave
Fluid
14,364,485
19
I am trying to write the following if condition in fluid but it is not working as I would hope. Condition As part of a for loop I want to check if the item is the first one or 4th, 8th etc I would have thought the following would work but it display the code for every iteration. <f:if condition="{logoIterator.isFirst} || {logoIterator.cycle % 4} == 0"> I have managed to get it working with a nested if but it just feels wrong having the same section of code twice and also having the cycle check use a <f:else> instead of == 0 <f:if condition="{logoIterator.isFirst}"> <f:then> Do Something </f:then> <f:else> <f:if condition="{logoIterator.cycle} % 4"> <f:else> Do Something </f:else> </f:if> </f:else> </f:if>
TYPO3 v8 Updated the answer for TYPO3 v8. This is quoted from Claus answer below: Updating this information with current situation: On TYPO3v8 and later, the following syntax is supported which fits perfectly with your use case: <f:if condition="{logoIterator.isFirst}"> <f:then>First</f:then> <f:else if="{logoIterator.cycle % 4}">n4th</f:else> <f:else if="{logoIterator.cycle % 8}">n8th</f:else> <f:else>Not first, not n4th, not n8th - fallback/normal</f:else> </f:if> In addition there is support for syntax like this: <f:if condition="{logoIterator.isFirst} || {logoIterator.cycle} % 4"> Is first or n4th </f:if> Which can be more appropriate for some cases (in particular when using a condition in inline syntax where you can't expand to tag mode in order to gain access to the f:else with the new if argument). TYPO3 6.2 LTS and 7 LTS For more complex if-Conditions (like several or/and combinations) you can add your own ViewHelper in your_extension/Classes/ViewHelpers/. You just have to extend Fluids AbstractConditionViewHelper. The simple if-ViewHelper that shipps with Fluid looks like this: class IfViewHelper extends \TYPO3\CMS\Fluid\Core\ViewHelper\AbstractConditionViewHelper { /** * renders <f:then> child if $condition is true, otherwise renders <f:else> child. * * @param boolean $condition View helper condition * @return string the rendered string * @api */ public function render($condition) { if ($condition) { return $this->renderThenChild(); } else { return $this->renderElseChild(); } } } All you have to do in your own ViewHelper is to add more parameter than $condition, like $or, $and, $not etc. Then you just write your if-Conditions in php and render either the then or else child. For your Example, you can go with something like this: class ExtendedIfViewHelper extends \TYPO3\CMS\Fluid\Core\ViewHelper\AbstractConditionViewHelper { /** * renders <f:then> child if $condition or $or is true, otherwise renders <f:else> child. * * @param boolean $condition View helper condition * @param boolean $or View helper condition * @return string the rendered string */ public function render($condition, $or) { if ($condition || $or) { return $this->renderThenChild(); } else { return $this->renderElseChild(); } } } The File would be in your_extension/Classes/ViewHelpers/ExtendedIfViewHelper.php Then you have to add your namespace in the Fluid-Template like this (which enables all your self-written ViewHelpers from your_extension/Classes/ViewHelpers/ in the template: {namespace vh=Vendor\YourExtension\ViewHelpers} and call it in your template like this: <vh:extendedIf condition="{logoIterator.isFirst}" or="{logoIterator.cycle} % 4"> <f:then>Do something</f:then> <f:else>Do something else</f:else> </vh:extendedIf> Edit: updated.
Fluid
19,731,150
16
Below you see the debug for an object of type FileReference in fluid. In fluid the debug looks like this: <f:debug>{fileReference}</f:debug> The question is how do I access the properties highlighted in green, being width, height, and hovertext. The original file is an image, so width & height are default T3 properties, hovertext has been added by my extension with it's own getter/setter. I tried the following: {fileReference.width} {fileReference.mergedProperties.width} {fileReference.originalResource.width} No luck so far, what is the right way to access the values in mergedProperties? Many Thanks Florian
The f:debug ViewHelper shows something similar to the var_dump function, i.e. the properties of an object. In Fluid you can only access getter functions, or, if it is an array, the values of the array. So if you write something like {fileReference.mergedProperties}, the method getMergedProperties() is called if it is present. Knowing that, you can look inside the sysext/core/Classes/Resource/FileReference.php file and see what getters it has. We quickly find the public function getProperties(), which returns the merged properties you marked, so the right solution should be this: {fileReference.properties.width}
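So, assuming hovertext shows up under mergedProperties in your debug output as described, all three values can be read the same way:
{fileReference.properties.width}
{fileReference.properties.height}
{fileReference.properties.hovertext}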
Fluid
40,135,241
15
Just need help as I have been trying sort this out for ages now. What I need: I've got a 2 column layout, where the left column has a fixed width 220px and the right column has a fluid width. Code is: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Fluid</title> <style type="text/css" media="screen"> html, body { background: #ccc; } .wrap { margin: 20px; padding: 20px; background: #fff; } .main { margin-left: 220px; width: auto } .sidebar { width: 200px; float: left; } .main, .sidebar { background: #eee; min-height: 100px; } </style> </head> <body> <div class="wrap"> <div class="sidebar">This is the static sidebar</div> <div class="main">This is the main, and fluid div</div> </div> </body> </html> There's no problem at all. When I use a css syntax clear: both in the right column, all content after gets moved under the left column. This is a right behaviour and nothing against it. But I relly need to use clear: both in the way, that it stays just in context of the right column (doesn't get affected by the left column at all, and doesn't move underneath) Is there any simple get around with retaining a basic float concept of page design? UPDATE: Please see this link to know what I'm on about as it may be a bit confusing from my description. Link: http://jsfiddle.net/k4L5K/1/
Here's your altered CSS:
html, body {
    background: #ccc;
}
.wrap {
    margin: 20px;
    padding: 20px;
    padding-right: 240px;
    background: #fff;
    overflow: hidden;
}
.main {
    margin: 0 -220px 0 auto;
    width: 100%;
    float: right;
}
.sidebar {
    width: 200px;
    float: left;
    height: 200px;
}
.main, .sidebar {
    background: #eee;
    min-height: 100px;
}
.clear {
    clear: both;
}
span {
    background: yellow;
}
Basically what I've done is change the way your layout is done, so that the .main div is floated on the right. To do this, we had to add 2 things: a padding of 240px on the .wrap div, and a right margin of -220px on the .main div to properly align the fluid part of the page. Because we've floated the .main div on the right, the clear: both; now only affects content inside the .main div, as you want. You can see a demonstration here: http://jsfiddle.net/6d2qF/1/
Fluid
6,797,172
12
Is it possible to get the current language key (or code) in a TYPO3 Fluid template? In the meantime I've found another solution using a view helper found here: <?php class Tx_AboUnitReservation_ViewHelpers_LanguageViewHelper extends Tx_Fluid_Core_ViewHelper_AbstractViewHelper { /** * Get the current language */ protected function getLanguage() { if (TYPO3_MODE === 'FE') { if (isset($GLOBALS['TSFE']->config['config']['language'])) { return $GLOBALS['TSFE']->config['config']['language']; } } elseif (strlen($GLOBALS['BE_USER']->uc['lang']) > 0) { return $GLOBALS['BE_USER']->uc['lang']; } return 'en'; //default } /** * Return current language * @return string */ public function render() { return $this->getLanguage(); } } Which I use in the fluid template as follows. <f:alias map="{isGerman: 'de'}"> <f:if condition="{aboUnitReservation:language()} == {isGerman}"> <script type="text/javascript" src="{f:uri.resource(path:'js/jquery.ui.datepicker-de-CH.js')}"></script> </f:if> </f:alias>
Another solution using TypoScript object in Fluid template: # German language temp.language = TEXT temp.language.value = at # English language [globalVar = GP:L = 1] temp.language.value = en [global] lib.language < temp.language And Fluid code: <f:if condition="{f:cObject(typoscriptObjectPath: 'lib.language')} == 'at'"> <f:then> ... </f:then> <f:else> ... </f:else> </f:if> Object temp.language can contain any values, of course.
Fluid
10,446,432
12
For a project I'm using Typo3 v6.0. I'm looking to create nested content elements, or a content element container. I want to be able to create an inline two-column layout without using a specific template for it. I'm looking to do this without the use of templavoila. Extensions I have tried are gridelements, kb_nescefe, bs_fce, multicolumn but these do not work because they are not compatible with Typo3 V6. I'm aiming for an end result like the attached image. Where the inline two-column content can be ommitted, used once or used multiple times, containing any other content element. I'm looking for the most simple solution here. I prefer not having to invest a lot of learning time in a solution like flux and whatnot (http://fedext.net/ - looks cool, but also too timeconsuming for now) Any ideas?
I'm the author of the Fluid extension suite (flux, fluidcontent, fluidpages etc.) and would of course like to help you learn about using FluidContent to make FCEs. It's really not as advanced as one might fear. At the very least, it's much more compact than the example above. The following achieves the same result as your example, in FluidContent: TypoScript (static loaded: css_styled_content, fluid_content) plugin.tx_fed.fce.yourname { templateRootPath = fileadmin/Templates # if you don't want to use an extension (1) # partial and layout root paths not defined (2) } Regarding (1) you really, really should. Using an extension separates your user uploaded media etc. from your site content. If you do that instead, simply use an EXT:... path to the Private resources folder. And regarding (2) these paths are only necessary if you actually wish to use partials. Then, the template file itself (auto-detected when path where file is located is added in TS): {namespace flux=Tx_Flux_ViewHelpers} <f:layout name="Content" /> <f:section name="Configuration"> <flux:flexform id="columns" label="Columns" icon="path/to/iconfile.jpg"> <flux:flexform.grid> <flux:flexform.grid.row> <flux:flexform.grid.column> <flux:flexform.content name="left" label="Left content" /> </flux:flexform.grid.column> <flux:flexform.grid.column> <flux:flexform.content name="right" label="Right content" /> </flux:flexform.grid.column> </flux:flexform.grid.row> </flux:flexform.grid> </flux:flexform> </f:section> <f:section name="Preview"> <flux:widget.grid /> </f:section> <f:section name="Main"> <div class="row"> <div class="span6"> <flux:flexform.renderContent area="left" /> </div> <div class="span6"> <flux:flexform.renderContent area="right" /> </div> </div> </f:section> As you can see, you are entirely free to add any HTML you wish, use any ViewHelpers (even render TS objects if that's your thing). To add additional content elements, simply add new template files - they will automatically be recognised. But it will work differently from IRRE (which you can also achieve using Flux fields - let me know if you wish to see a demo of that): it will allos you to use the native drag-n-drop in TYPO3 to place your child content elements into actual content containers - like you used to do with TV. As such, Fluid Content is probably the closest you will come to TV. Regarding Flux being overkill, I'd like to give you an idea of what it actually performs: Cached reading of TS to know paths Cached lists of detected templates Fluid caches to native PHP, Flux only uses Fluid to store config (which means it's native PHP all the way through) Flux itself does register a hook subscriber which reacts to content being saved, this does slow the backend (unnoticeably) Flux itself doesn't create load on the FE with one exception: when in uncached plugins (FluidContent is cached!) Flux may call upon the native PHP cached code to read configurations. FluidContent consists of an extremely simple controller; the output is fully cached. You may want to add the VHS ViewHelper collection - it by itself creates absolutely zero load: it only uses resources where you use its ViewHelpers. It contains a heap of ViewHelpers I'm sure you will find useful. It may look overwhelming at first but I guarantee you there's less to know and to remember than in pibase, FlexForm XML, TS or native Extbase plugins. If you want even more of a safety net I highly recommend using XSD schemas in your editor - this gets you auto-completion of the special <flux:....> tags and others. 
However: it will require you to learn a small bit about Fluid's logic: what Layouts and Partials are (you will most likely want to use those at some point) and how to use the special tags and refer to variables (which will be required in other use cases - but not the one at hand; it only requires simple ViewHelper tags). I hope this helps. And that I've reduced your fear that Flux is overkill and too much to learn ;) Cheers, Claus aka. NamelessCoder
Fluid
15,156,751
11
In TYPO3 6.x, what is an easy way to quickly create custom content elements? A typical example (maybe for a collection of testimonials):

In the backend (with adequate labels):
- An image
- An input field
- A textarea

When rendering:
- Image resized to x/y
- Input wrapped in h2
- Textarea passed through parseFunc and wrapped in more markup

Ideally, these would be available in the page module as a CType, but at least in the list module. And they should use Fluid templates.

My questions:
- From another CMS I am used to content item templates being applied to the BE and the FE at the same time (you write the template for what it should do, and then there's a backend item just for that type of content element) - but that's not how Fluid works - or can it be done?
- Is there an extension that would handle such custom content elements (other than TemplaVoila)? Or do I have to create a custom Extbase/Fluid extension for each such field type?
- And, by the way: is there a recommendable tutorial for the new Extbase kickstarter? I got scared away by all that domain modelling stuff.
That scary domain modelling stuff is probably the best option for you :) Create an extension with a FE plugin which holds and displays the data as you want, so you can place it as an "Insert plugin". It's also possible to add this plugin as a custom CType - a sample follows below. Note that you don't need to create additional models, as you can store the required data in a FlexForm, for example.

From FE plugin to CType

Let's assume you have an extension with key hello which contains a News controller with list and show actions. In your ext_tables.php you have registered a FE plugin:

\TYPO3\CMS\Extbase\Utility\ExtensionUtility::registerPlugin($_EXTKEY, 'News', 'Scared Hello News');

When that's working fine, you can add it to the list of content types (available in TCA) just by adding a fifth parameter to the configurePlugin method in your ext_localconf.php:

\TYPO3\CMS\Extbase\Utility\ExtensionUtility::configurePlugin(
    'TYPO3.' . $_EXTKEY,
    'News',
    array('News' => 'list, show'),
    array('News' => ''),
    \TYPO3\CMS\Extbase\Utility\ExtensionUtility::PLUGIN_TYPE_CONTENT_ELEMENT // <- this one
);

The next part (based on this site) is adding your plugin to the New Content Element Wizard. As noted in the TYPO3 Wiki, this changed a little as of TYPO3 6.0.0, so the easiest way is to add something like this to your ext_tables.php:

\TYPO3\CMS\Core\Utility\ExtensionManagementUtility::addPageTSConfig('<INCLUDE_TYPOSCRIPT: source="FILE:EXT:hello/Configuration/TypoScript/pageTsConfig.ts">');

and in the /typo3conf/ext/hello/Configuration/TypoScript/pageTsConfig.ts file add this:

mod.wizards.newContentElement.wizardItems.plugins.elements.tx_hello_news {
    icon = gfx/c_wiz/regular_text.gif
    title = Scared Hello News
    description = Displays Scared News
    tt_content_defValues.CType = hello_news
}

# Below the same for TemplaVoila
templavoila.wizards.newContentElement.wizardItems.plugins.elements.tx_hello_news {
    icon = gfx/c_wiz/regular_text.gif
    title = Scared Hello News
    description = Displays Scared News
    tt_content_defValues.CType = hello_news
}

Note that the proper key tx_hello_news is the combination of lowercased tx_, $_EXTKEY and the plugin name used in the registerPlugin method. You can stop here if you are bored ;)

Bring tt_content's fields back into your CType

The steps above mean that none of the typical fields will be available in the TCA for your element, so you need to copy an existing set or create your own. To see how it works, in the backend's left menu choose ADMIN TOOLS > Configuration > TCA > tt_content > types. There you'll find all types in the system; choose the one that fits best and copy its [showitem] node into your own. Again in ext_tables.php, add this PHP array:

$TCA['tt_content']['types']['hello_news']['showitem'] = $TCA['tt_content']['types']['textpic']['showitem'];

Again: hello_news is the combination of lowercased $_EXTKEY and the FE plugin name. Of course, if required, you can compose your own set of fields, one by one, with a custom string:

$TCA['tt_content']['types']['hello_news']['showitem'] = '--palette--;LLL:EXT:cms/locallang_ttc.xml:palette.general;general, --palette--;LLL:EXT:cms/locallang_ttc.xml:palette.header;header';

Access the fields in an Extbase controller

Fortunately this is the easiest part, as you can just access the current tt_content record as an array:

$currentTtContent = $this->configurationManager->getContentObject()->data;
$header = $currentTtContent['header'];
debug($currentTtContent);
debug($header);
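A minimal sketch of how that data could then be rendered (the view variable name data here is just an example): the controller can hand the array to the view with $this->view->assign('data', $currentTtContent); and the Fluid template of the list action can output the usual fields:

<!-- Assumes the controller assigned the tt_content row as {data} -->
<h2>{data.header}</h2>
<f:format.html>{data.bodytext}</f:format.html>

f:format.html runs the bodytext through lib.parseFunc_RTE by default, so RTE content such as links and paragraphs comes out as expected.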
Fluid
18,464,356
11