merge upstream

This commit is contained in:
Scott Nowicki 2017-04-27 11:44:57 -05:00
commit 64b52ee25f
218 changed files with 14930 additions and 2179 deletions


@ -1,3 +1,7 @@
####################################################################################
## NOT FOR UPSTREAM PROPOSAL; INTENDED FOR CI OF AZURE EXAMPLES IN THIS REPO ONLY ##
####################################################################################
sudo: required
services:


@ -1,96 +1,136 @@
## 0.9.5 (Unreleased)
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/aws: Users of `aws_cloudfront_distribution` with custom origins were broken by a change in the AWS API that requires `OriginReadTimeout` to be set on updates. This has been fixed and will show as a change in `terraform plan` / `apply` [GH-13367]
FEATURES:
* **New Provider:** `gitlab` [GH-13898]
IMPROVEMENTS:
* provider/aws: Add support for CustomOrigin timeouts to aws_cloudfront_distribution [GH-13367]
* provider/dnsimple: Add support for import for dnsimple_records [GH-9130]
* provider/triton: Add support for reading provider configuration from `TRITON_*` environment variables in addition to `SDC_*` [GH-14000]
BUG FIXES:
* provider/aws: Update aws_ebs_volume when attached [GH-14005]
* provider/aws: Set aws_instance volume_tags to be Computed [GH-14007]
* provider/heroku: Configure buildpacks correctly for both Org Apps and non-org Apps [GH-13990]
## 0.9.4 (April 26, 2017)
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/template: Fix invalid MIME formatting in `template_cloudinit_config`.
While the change itself is not breaking the data source it may be referenced
e.g. in `aws_launch_configuration` and similar resources which are immutable
and the formatting change will therefore trigger recreation ([#13752](https://github.com/hashicorp/terraform/issues/13752))
FEATURES:
* **New Provider:** `opc` - Oracle Public Cloud ([#13468](https://github.com/hashicorp/terraform/issues/13468))
* **New Provider:** `oneandone` ([#13633](https://github.com/hashicorp/terraform/issues/13633))
* **New Data Source:** `aws_ami_ids` ([#13844](https://github.com/hashicorp/terraform/issues/13844), [#13866](https://github.com/hashicorp/terraform/issues/13866))
* **New Data Source:** `aws_ebs_snapshot_ids` ([#13844](https://github.com/hashicorp/terraform/issues/13844), [#13866](https://github.com/hashicorp/terraform/issues/13866))
* **New Data Source:** `aws_kms_alias` ([#13669](https://github.com/hashicorp/terraform/issues/13669))
* **New Data Source:** `aws_kinesis_stream` ([#13562](https://github.com/hashicorp/terraform/issues/13562))
* **New Data Source:** `digitalocean_image` ([#13787](https://github.com/hashicorp/terraform/issues/13787))
* **New Data Source:** `google_compute_network` ([#12442](https://github.com/hashicorp/terraform/issues/12442))
* **New Data Source:** `google_compute_subnetwork` ([#12442](https://github.com/hashicorp/terraform/issues/12442))
* **New Resource:** `local_file` for creating local files (please see the docs for caveats) ([#12757](https://github.com/hashicorp/terraform/issues/12757))
* **New Resource:** `alicloud_ess_scalinggroup` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_scalingconfiguration` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_scalingrule` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_ess_schedule` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_snat_entry` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `alicloud_forward_entry` ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* **New Resource:** `aws_cognito_identity_pool` ([#13783](https://github.com/hashicorp/terraform/issues/13783))
* **New Resource:**  `aws_network_interface_attachment` ([#13861](https://github.com/hashicorp/terraform/issues/13861))
* **New Resource:** `github_branch_protection` ([#10476](https://github.com/hashicorp/terraform/issues/10476))
* **New Resource:** `google_bigquery_dataset` ([#13436](https://github.com/hashicorp/terraform/issues/13436))
* **New Resource:** `heroku_space` ([#13921](https://github.com/hashicorp/terraform/issues/13921))
* **New Resource:** `template_dir` for producing a directory from templates ([#13652](https://github.com/hashicorp/terraform/issues/13652))
* **New Interpolation Function:** `coalescelist()` ([#12537](https://github.com/hashicorp/terraform/issues/12537))
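As a quick illustration of the new `coalescelist()` interpolation function, the following minimal sketch (variable names are illustrative, using 0.9-era interpolation syntax) falls back to a default list when an override list is empty:

```hcl
# Illustrative sketch: coalescelist() returns the first non-empty list argument.
variable "custom_subnets" {
  type    = "list"
  default = []
}

variable "default_subnets" {
  type    = "list"
  default = ["subnet-aaaa", "subnet-bbbb"]
}

output "effective_subnets" {
  # custom_subnets is empty, so this falls through to default_subnets.
  value = "${coalescelist(var.custom_subnets, var.default_subnets)}"
}
```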
IMPROVEMENTS:
* core: Add a `-reconfigure` flag to the `init` command, to configure a backend while ignoring any saved configuration ([#13825](https://github.com/hashicorp/terraform/issues/13825))
* helper/schema: Disallow validation+diff suppression on computed fields ([#13878](https://github.com/hashicorp/terraform/issues/13878))
* config: The interpolation function `cidrhost` now accepts a negative host number to count backwards from the end of the range ([#13765](https://github.com/hashicorp/terraform/issues/13765))
* config: New interpolation function `matchkeys` for using values from one list to filter corresponding values from another list using a matching set. ([#13847](https://github.com/hashicorp/terraform/issues/13847))
* state/remote/swift: Support OpenStack request logging ([#13583](https://github.com/hashicorp/terraform/issues/13583))
* provider/aws: Add an option to skip getting the supported EC2 platforms ([#13672](https://github.com/hashicorp/terraform/issues/13672))
* provider/aws: Add `name_prefix` support to `aws_cloudwatch_log_group` ([#13273](https://github.com/hashicorp/terraform/issues/13273))
* provider/aws: Add `bucket_prefix` to `aws_s3_bucket` ([#13274](https://github.com/hashicorp/terraform/issues/13274))
* provider/aws: Add replica_source_db to the aws_db_instance datasource ([#13842](https://github.com/hashicorp/terraform/issues/13842))
* provider/aws: Add IPv6 outputs to aws_subnet datasource ([#13841](https://github.com/hashicorp/terraform/issues/13841))
* provider/aws: Exercise SecondaryPrivateIpAddressCount for network interface ([#10590](https://github.com/hashicorp/terraform/issues/10590))
* provider/aws: Expose execution ARN + invoke URL for APIG deployment ([#13889](https://github.com/hashicorp/terraform/issues/13889))
* provider/aws: Expose invoke ARN from Lambda function (for API Gateway) ([#13890](https://github.com/hashicorp/terraform/issues/13890))
* provider/aws: Add tagging support to the 'aws_lambda_function' resource ([#13873](https://github.com/hashicorp/terraform/issues/13873))
* provider/aws: Validate WAF metric names ([#13885](https://github.com/hashicorp/terraform/issues/13885))
* provider/aws: Allow AWS Subnet to change IPv6 CIDR Block without ForceNew ([#13909](https://github.com/hashicorp/terraform/issues/13909))
* provider/aws: Allow filtering of aws_subnet_ids by tags ([#13937](https://github.com/hashicorp/terraform/issues/13937))
* provider/aws: Support aws_instance and volume tagging on creation ([#13945](https://github.com/hashicorp/terraform/issues/13945))
* provider/aws: Add network_interface to aws_instance ([#12933](https://github.com/hashicorp/terraform/issues/12933))
* provider/azurerm: VM Scale Sets - import support ([#13464](https://github.com/hashicorp/terraform/issues/13464))
* provider/azurerm: Allow Azure China region support ([#13767](https://github.com/hashicorp/terraform/issues/13767))
* provider/digitalocean: Export droplet prices ([#13720](https://github.com/hashicorp/terraform/issues/13720))
* provider/fastly: Add support for GCS logging ([#13553](https://github.com/hashicorp/terraform/issues/13553))
* provider/google: `google_compute_address` and `google_compute_global_address` are now importable ([#13270](https://github.com/hashicorp/terraform/issues/13270))
* provider/google: `google_compute_network` is now importable ([#13834](https://github.com/hashicorp/terraform/issues/13834))
* provider/google: add attached_disk field to google_compute_instance ([#13443](https://github.com/hashicorp/terraform/issues/13443))
* provider/heroku: Set App buildpacks from config ([#13910](https://github.com/hashicorp/terraform/issues/13910))
* provider/heroku: Create Heroku app in a private space ([#13862](https://github.com/hashicorp/terraform/issues/13862))
* provider/vault: `vault_generic_secret` resource can now optionally detect drift if it has appropriate access ([#11776](https://github.com/hashicorp/terraform/issues/11776))
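The negative-host-number extension to `cidrhost` can be sketched as follows (a minimal example; output names are illustrative):

```hcl
# cidrhost() computes a host address within a CIDR prefix; a negative host
# number now counts backwards from the end of the range.
output "first_host" {
  value = "${cidrhost("10.0.0.0/24", 1)}"   # 10.0.0.1
}

output "last_host" {
  value = "${cidrhost("10.0.0.0/24", -1)}"  # 10.0.0.255
}
```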
BUG FIXES:
* core: Prevent resource.Retry from adding untracked resources after the timeout: ([#13778](https://github.com/hashicorp/terraform/issues/13778))
* core: Allow a schema.TypeList to be ForceNew and computed ([#13863](https://github.com/hashicorp/terraform/issues/13863))
* core: Fix crash when refresh or apply build an invalid graph ([#13665](https://github.com/hashicorp/terraform/issues/13665))
* core: Add the close provider/provisioner transformers back ([#13102](https://github.com/hashicorp/terraform/issues/13102))
* core: Fix a crash condition by improving the flatmap.Expand() logic ([#13541](https://github.com/hashicorp/terraform/issues/13541))
* provider/alicloud: Fix create PrePaid instance ([#13662](https://github.com/hashicorp/terraform/issues/13662))
* provider/alicloud: Fix allocate public ip error ([#13268](https://github.com/hashicorp/terraform/issues/13268))
* provider/alicloud: alicloud_security_group_rule: check pointer before using it ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* provider/alicloud: alicloud_instance: fix bug where ecs internet_max_bandwidth_out could not be set to zero ([#13731](https://github.com/hashicorp/terraform/issues/13731))
* provider/aws: Allow force-destroying `aws_route53_zone` which has trailing dot ([#12421](https://github.com/hashicorp/terraform/issues/12421))
* provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes ([#13699](https://github.com/hashicorp/terraform/issues/13699))
* provider/aws: Changing aws_opsworks_instance should ForceNew ([#13839](https://github.com/hashicorp/terraform/issues/13839))
* provider/aws: Fix DB Parameter Group Name ([#13279](https://github.com/hashicorp/terraform/issues/13279))
* provider/aws: Fix issue importing some Security Groups and Rules based on rule structure ([#13630](https://github.com/hashicorp/terraform/issues/13630))
* provider/aws: Fix issue for cross account IAM role with `aws_lambda_permission` ([#13865](https://github.com/hashicorp/terraform/issues/13865))
* provider/aws: Fix WAF IPSet descriptors removal on update ([#13766](https://github.com/hashicorp/terraform/issues/13766))
* provider/aws: Increase default number of retries from 11 to 25 ([#13673](https://github.com/hashicorp/terraform/issues/13673))
* provider/aws: Remove aws_vpc_dhcp_options if not found ([#13610](https://github.com/hashicorp/terraform/issues/13610))
* provider/aws: Remove aws_network_acl_rule if not found ([#13608](https://github.com/hashicorp/terraform/issues/13608))
* provider/aws: Use mutex & retry for WAF change operations ([#13656](https://github.com/hashicorp/terraform/issues/13656))
* provider/aws: Adding support for ipv6 to aws_subnets needs migration ([#13876](https://github.com/hashicorp/terraform/issues/13876))
* provider/aws: Fix validation of the `name_prefix` parameter of the `aws_alb` resource ([#13441](https://github.com/hashicorp/terraform/issues/13441))
* provider/azurerm: azurerm_redis_cache resource missing hostname ([#13650](https://github.com/hashicorp/terraform/issues/13650))
* provider/azurerm: Locking around Network Security Group / Subnets ([#13637](https://github.com/hashicorp/terraform/issues/13637))
* provider/azurerm: Locking route table on subnet create/delete ([#13791](https://github.com/hashicorp/terraform/issues/13791))
* provider/azurerm: VM's - fixes a bug where ssh_keys could contain a null entry ([#13755](https://github.com/hashicorp/terraform/issues/13755))
* provider/azurerm: VM's - ignoring the case on the `create_option` field during Diff's ([#13933](https://github.com/hashicorp/terraform/issues/13933))
* provider/azurerm: fixing a bug refreshing the `azurerm_redis_cache` ([#13899](https://github.com/hashicorp/terraform/issues/13899))
* provider/fastly: Fix issue with using 0 for `default_ttl` ([#13648](https://github.com/hashicorp/terraform/issues/13648))
* provider/google: Fix panic in GKE provisioning with addons ([#13954](https://github.com/hashicorp/terraform/issues/13954))
* provider/fastly: Add ability to associate a healthcheck to a backend ([#13539](https://github.com/hashicorp/terraform/issues/13539))
* provider/google: Stop setting the id when project creation fails ([#13644](https://github.com/hashicorp/terraform/issues/13644))
* provider/google: Make ports in resource_compute_forwarding_rule ForceNew ([#13833](https://github.com/hashicorp/terraform/issues/13833))
* provider/google: Validation fixes for forwarding rules ([#13952](https://github.com/hashicorp/terraform/issues/13952))
* provider/ignition: Internal cache moved to global, instead per provider instance ([#13919](https://github.com/hashicorp/terraform/issues/13919))
* provider/logentries: Refresh from state when resources not found ([#13810](https://github.com/hashicorp/terraform/issues/13810))
* provider/newrelic: newrelic_alert_condition - `condition_scope` must be `application` or `instance` ([#12972](https://github.com/hashicorp/terraform/issues/12972))
* provider/opc: fixed issue with unqualifying nats ([#13826](https://github.com/hashicorp/terraform/issues/13826))
* provider/opc: Fix instance label if unset ([#13846](https://github.com/hashicorp/terraform/issues/13846))
* provider/openstack: Fix updating Ports ([#13604](https://github.com/hashicorp/terraform/issues/13604))
* provider/rabbitmq: Allow users without tags ([#13798](https://github.com/hashicorp/terraform/issues/13798))
## 0.9.3 (April 12, 2017)
@ -98,111 +138,111 @@ BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/aws: Fix a critical bug in `aws_emr_cluster` in order to preserve the ordering
of any arguments in `bootstrap_action`. Terraform will now enforce the ordering
from the configuration. As a result, `aws_emr_cluster` resources may need to be
recreated, as there is no API to update them in-place ([#13580](https://github.com/hashicorp/terraform/issues/13580))
FEATURES:
* **New Resource:** `aws_api_gateway_method_settings` ([#13542](https://github.com/hashicorp/terraform/issues/13542))
* **New Resource:** `aws_api_gateway_stage` ([#13540](https://github.com/hashicorp/terraform/issues/13540))
* **New Resource:** `aws_iam_openid_connect_provider` ([#13456](https://github.com/hashicorp/terraform/issues/13456))
* **New Resource:** `aws_lightsail_static_ip` ([#13175](https://github.com/hashicorp/terraform/issues/13175))
* **New Resource:** `aws_lightsail_static_ip_attachment` ([#13207](https://github.com/hashicorp/terraform/issues/13207))
* **New Resource:** `aws_ses_domain_identity` ([#13098](https://github.com/hashicorp/terraform/issues/13098))
* **New Resource:** `azurerm_managed_disk` ([#12455](https://github.com/hashicorp/terraform/issues/12455))
* **New Resource:** `kubernetes_persistent_volume` ([#13277](https://github.com/hashicorp/terraform/issues/13277))
* **New Resource:** `kubernetes_persistent_volume_claim` ([#13527](https://github.com/hashicorp/terraform/issues/13527))
* **New Resource:** `kubernetes_secret` ([#12960](https://github.com/hashicorp/terraform/issues/12960))
* **New Data Source:** `aws_iam_role` ([#13213](https://github.com/hashicorp/terraform/issues/13213))
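A minimal sketch of the new `kubernetes_secret` resource (the names and values here are illustrative, not from the source):

```hcl
# Illustrative sketch: store a pair of credentials as a Kubernetes Secret.
resource "kubernetes_secret" "example" {
  metadata {
    name = "db-credentials"
  }

  # Values are base64-encoded by the provider before being sent to the API.
  data {
    username = "admin"
    password = "s3cr3t"
  }
}
```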
IMPROVEMENTS:
* core: add `-lock-timeout` option, which will block and retry locks for the given duration ([#13262](https://github.com/hashicorp/terraform/issues/13262))
* core: new `chomp` interpolation function which returns the given string with any trailing newline characters removed ([#13419](https://github.com/hashicorp/terraform/issues/13419))
* backend/remote-state: Add support for assume role extensions to s3 backend ([#13236](https://github.com/hashicorp/terraform/issues/13236))
* backend/remote-state: Filter extra entries from s3 environment listings ([#13596](https://github.com/hashicorp/terraform/issues/13596))
* config: New interpolation functions `basename` and `dirname`, for file path manipulation ([#13080](https://github.com/hashicorp/terraform/issues/13080))
* helper/resource: Allow unknown "pending" states ([#13099](https://github.com/hashicorp/terraform/issues/13099))
* command/hook_ui: Increase max length of state IDs from 20 to 80 ([#13317](https://github.com/hashicorp/terraform/issues/13317))
* provider/aws: Add support to set iam_role_arn on cloudformation Stack ([#12547](https://github.com/hashicorp/terraform/issues/12547))
* provider/aws: Support priority and listener_arn update of alb_listener_rule ([#13125](https://github.com/hashicorp/terraform/issues/13125))
* provider/aws: Deprecate roles in favour of role in iam_instance_profile ([#13130](https://github.com/hashicorp/terraform/issues/13130))
* provider/aws: Make alb_target_group_attachment port optional ([#13139](https://github.com/hashicorp/terraform/issues/13139))
* provider/aws: `aws_api_gateway_domain_name` `certificate_private_key` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_directory_service_directory` `password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_kinesis_firehose_delivery_stream` `password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_opsworks_application` `app_source.0.password` & `ssl_configuration.0.private_key` fields marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: `aws_opsworks_stack` `custom_cookbooks_source.0.password` field marked as sensitive ([#13147](https://github.com/hashicorp/terraform/issues/13147))
* provider/aws: Support the ability to enable / disable ipv6 support in VPC ([#12527](https://github.com/hashicorp/terraform/issues/12527))
* provider/aws: Added API Gateway integration update ([#13249](https://github.com/hashicorp/terraform/issues/13249))
* provider/aws: Add `identifier` | `name_prefix` to RDS resources ([#13232](https://github.com/hashicorp/terraform/issues/13232))
* provider/aws: Validate `aws_ecs_task_definition.container_definitions` ([#12161](https://github.com/hashicorp/terraform/issues/12161))
* provider/aws: Update caller_identity data source ([#13092](https://github.com/hashicorp/terraform/issues/13092))
* provider/aws: `aws_subnet_ids` data source for getting a list of subnet ids matching certain criteria ([#13188](https://github.com/hashicorp/terraform/issues/13188))
* provider/aws: Support ip_address_type for aws_alb ([#13227](https://github.com/hashicorp/terraform/issues/13227))
* provider/aws: Migrate `aws_dms_*` resources away from AWS waiters ([#13291](https://github.com/hashicorp/terraform/issues/13291))
* provider/aws: Add support for treat_missing_data to cloudwatch_metric_alarm ([#13358](https://github.com/hashicorp/terraform/issues/13358))
* provider/aws: Add support for evaluate_low_sample_count_percentiles to cloudwatch_metric_alarm ([#13371](https://github.com/hashicorp/terraform/issues/13371))
* provider/aws: Add `name_prefix` to `aws_alb_target_group` ([#13442](https://github.com/hashicorp/terraform/issues/13442))
* provider/aws: Add support for EMR clusters to aws_appautoscaling_target ([#13368](https://github.com/hashicorp/terraform/issues/13368))
* provider/aws: Add import capabilities to codecommit_repository ([#13577](https://github.com/hashicorp/terraform/issues/13577))
* provider/bitbucket: Improved error handling ([#13390](https://github.com/hashicorp/terraform/issues/13390))
* provider/cloudstack: Do not force a new resource when updating `cloudstack_loadbalancer_rule` members ([#11786](https://github.com/hashicorp/terraform/issues/11786))
* provider/fastly: Add support for Sumologic logging ([#12541](https://github.com/hashicorp/terraform/issues/12541))
* provider/github: Handle the case when issue labels already exist ([#13182](https://github.com/hashicorp/terraform/issues/13182))
* provider/google: Mark `google_container_cluster`'s `client_key` & `password` inside `master_auth` as sensitive ([#13148](https://github.com/hashicorp/terraform/issues/13148))
* provider/google: Add node_pool field in resource_container_cluster ([#13402](https://github.com/hashicorp/terraform/issues/13402))
* provider/kubernetes: Allow defining custom config context ([#12958](https://github.com/hashicorp/terraform/issues/12958))
* provider/openstack: Add support for 'value_specs' options to `openstack_compute_servergroup_v2` ([#13380](https://github.com/hashicorp/terraform/issues/13380))
* provider/statuscake: Add support for StatusCake TriggerRate field ([#13340](https://github.com/hashicorp/terraform/issues/13340))
* provider/triton: Move to joyent/triton-go ([#13225](https://github.com/hashicorp/terraform/issues/13225))
* provisioner/chef: Make sure we add new Chef-Vault clients as clients ([#13525](https://github.com/hashicorp/terraform/issues/13525))
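The new string and path interpolation functions mentioned above can be sketched together (a minimal example; output names are illustrative):

```hcl
# chomp() strips trailing newline characters from a string;
# basename() and dirname() split a file path into its components.
output "clean" {
  value = "${chomp("hello\n")}"                    # "hello"
}

output "file_name" {
  value = "${basename("/etc/terraform/main.tf")}"  # "main.tf"
}

output "dir_name" {
  value = "${dirname("/etc/terraform/main.tf")}"   # "/etc/terraform"
}
```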
BUG FIXES:
* core: Escaped interpolation-like sequences (like `$${foo}`) now permitted in variable defaults ([#13137](https://github.com/hashicorp/terraform/issues/13137))
* core: Fix strange issues with computed values in provider configuration that were worked around with `-input=false` ([#11264](https://github.com/hashicorp/terraform/issues/11264), [#13264](https://github.com/hashicorp/terraform/issues/13264))
* core: Fix crash when providing nested maps as variable values in a `module` block ([#13343](https://github.com/hashicorp/terraform/issues/13343))
* core: `connection` block attributes are now subject to basic validation of attribute names during validate walk ([#13400](https://github.com/hashicorp/terraform/issues/13400))
* provider/aws: Add Support for maintenance_window and back_window to rds_cluster_instance ([#13134](https://github.com/hashicorp/terraform/issues/13134))
* provider/aws: Increase timeout for AMI registration ([#13159](https://github.com/hashicorp/terraform/issues/13159))
* provider/aws: Increase timeouts for ELB ([#13161](https://github.com/hashicorp/terraform/issues/13161))
* provider/aws: `volume_type` of `aws_elasticsearch_domain.0.ebs_options` marked as `Computed` which prevents spurious diffs ([#13160](https://github.com/hashicorp/terraform/issues/13160))
* provider/aws: Don't set DBName on `aws_db_instance` from snapshot ([#13140](https://github.com/hashicorp/terraform/issues/13140))
* provider/aws: Add DiffSuppression to aws_ecs_service placement_strategies ([#13220](https://github.com/hashicorp/terraform/issues/13220))
* provider/aws: Refresh aws_alb_target_group stickiness on manual updates ([#13199](https://github.com/hashicorp/terraform/issues/13199))
* provider/aws: Preserve default retain_on_delete in cloudfront import ([#13209](https://github.com/hashicorp/terraform/issues/13209))
* provider/aws: Refresh aws_alb_target_group tags ([#13200](https://github.com/hashicorp/terraform/issues/13200))
* provider/aws: Set aws_vpn_connection to recreate when in deleted state ([#13204](https://github.com/hashicorp/terraform/issues/13204))
* provider/aws: Wait for aws_opsworks_instance to be running when it's specified ([#13218](https://github.com/hashicorp/terraform/issues/13218))
* provider/aws: Handle `aws_lambda_function` missing s3 key error ([#10960](https://github.com/hashicorp/terraform/issues/10960))
* provider/aws: Set stickiness to computed in alb_target_group ([#13278](https://github.com/hashicorp/terraform/issues/13278))
* provider/aws: Increase timeout for deploying `cloudfront_distribution` from 40 to 70 mins ([#13319](https://github.com/hashicorp/terraform/issues/13319))
* provider/aws: Increase AMI retry timeouts ([#13324](https://github.com/hashicorp/terraform/issues/13324))
* provider/aws: Increase subnet deletion timeout ([#13356](https://github.com/hashicorp/terraform/issues/13356))
* provider/aws: Increase launch_configuration creation timeout ([#13357](https://github.com/hashicorp/terraform/issues/13357))
* provider/aws: Increase Beanstalk env 'ready' timeout ([#13359](https://github.com/hashicorp/terraform/issues/13359))
* provider/aws: Raise timeout for deleting APIG REST API ([#13414](https://github.com/hashicorp/terraform/issues/13414))
* provider/aws: Raise timeout for attaching/detaching VPN Gateway ([#13457](https://github.com/hashicorp/terraform/issues/13457))
* provider/aws: Recreate opsworks_stack on change of service_role_arn ([#13325](https://github.com/hashicorp/terraform/issues/13325))
* provider/aws: Fix KMS Key reading with Exists method ([#13348](https://github.com/hashicorp/terraform/issues/13348))
* provider/aws: Fix DynamoDB issues about GSIs indexes ([#13256](https://github.com/hashicorp/terraform/issues/13256))
* provider/aws: Fix `aws_s3_bucket` drift detection of logging options ([#13281](https://github.com/hashicorp/terraform/issues/13281))
* provider/aws: Update ElasticTranscoderPreset to have default for MaxFrameRate ([#13422](https://github.com/hashicorp/terraform/issues/13422))
* provider/aws: Fix aws_ami_launch_permission refresh when AMI disappears ([#13469](https://github.com/hashicorp/terraform/issues/13469))
* provider/aws: Add support for updating SSM documents ([#13491](https://github.com/hashicorp/terraform/issues/13491))
* provider/aws: Fix panic on nil route configs ([#13548](https://github.com/hashicorp/terraform/issues/13548))
* provider/azurerm: Network Security Group - ignoring protocol casing at Import time ([#13153](https://github.com/hashicorp/terraform/issues/13153))
* provider/azurerm: Fix crash when importing Local Network Gateways ([#13261](https://github.com/hashicorp/terraform/issues/13261))
* provider/azurerm: Defaulting the value of `duplicate_detection_history_time_window` for `azurerm_servicebus_topic` ([#13223](https://github.com/hashicorp/terraform/issues/13223))
* provider/azurerm: Event Hubs making the Location field idempotent ([#13570](https://github.com/hashicorp/terraform/issues/13570))
* provider/bitbucket: Fixed issue where provider would fail with an "EOF" error on some operations ([#13390](https://github.com/hashicorp/terraform/issues/13390))
* provider/dnsimple: Handle 404 on DNSimple records ([#13131](https://github.com/hashicorp/terraform/issues/13131))
* provider/kubernetes: Use PATCH to update namespace ([#13114](https://github.com/hashicorp/terraform/issues/13114))
* provider/ns1: Don't split answers on SPF records ([#13260](https://github.com/hashicorp/terraform/issues/13260))
* provider/openstack: Refresh volume_attachment from state if NotFound ([#13342](https://github.com/hashicorp/terraform/issues/13342))
* provider/openstack: Add SOFT_DELETED to delete status ([#13444](https://github.com/hashicorp/terraform/issues/13444))
* provider/profitbricks: Changed output type of ips variable of ip_block ProfitBricks resource ([#13290](https://github.com/hashicorp/terraform/issues/13290))
* provider/template: Fix panic in cloudinit config ([#13581](https://github.com/hashicorp/terraform/issues/13581))
## 0.9.2 (March 28, 2017)

Vagrantfile vendored

@ -5,7 +5,7 @@
VAGRANTFILE_API_VERSION = "2"
# Software version variables
GOVERSION = "1.8"
GOVERSION = "1.8.1"
UBUNTUVERSION = "16.04"
# CPU and RAM can be adjusted depending on your system


@ -170,9 +170,30 @@ func (b *Local) DeleteState(name string) error {
}
func (b *Local) State(name string) (state.State, error) {
statePath, stateOutPath, backupPath := b.StatePaths(name)
// If we have a backend handling state, defer to that.
if b.Backend != nil {
return b.Backend.State(name)
s, err := b.Backend.State(name)
if err != nil {
return nil, err
}
// make sure we always have a backup state, unless it's disabled
if backupPath == "" {
return s, nil
}
// see if the delegated backend returned a BackupState of its own
if s, ok := s.(*state.BackupState); ok {
return s, nil
}
s = &state.BackupState{
Real: s,
Path: backupPath,
}
return s, nil
}
if s, ok := b.states[name]; ok {
@ -183,8 +204,6 @@ func (b *Local) State(name string) (state.State, error) {
return nil, err
}
statePath, stateOutPath, backupPath := b.StatePaths(name)
// Otherwise, we need to load the state.
var s state.State = &state.LocalState{
Path: statePath,
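The new logic above is a wrap-on-return decorator: whenever the delegated backend hands back a state, it gets wrapped in a backup writer unless backups are disabled (empty backup path) or the backend already returned one. A minimal sketch of that pattern follows; `State`, `LocalState`, and `BackupState` here are simplified stand-ins for the real `state` package types, not the actual implementation:

```go
package main

import "fmt"

// State is a simplified stand-in for the real state.State interface.
type State interface{ Name() string }

// LocalState stands in for a state stored on local disk.
type LocalState struct{ path string }

func (s *LocalState) Name() string { return "local:" + s.path }

// BackupState wraps another State and mirrors writes to a backup path.
type BackupState struct {
	Real State
	Path string
}

func (s *BackupState) Name() string { return "backup(" + s.Real.Name() + ")" }

// wrapWithBackup mirrors the logic from the diff: skip wrapping when backups
// are disabled (empty path) or when the state is already a BackupState.
func wrapWithBackup(s State, backupPath string) State {
	if backupPath == "" {
		return s
	}
	if _, ok := s.(*BackupState); ok {
		return s
	}
	return &BackupState{Real: s, Path: backupPath}
}

func main() {
	s := wrapWithBackup(&LocalState{path: "terraform.tfstate"}, "terraform.tfstate.backup")
	fmt.Println(s.Name()) // prints "backup(local:terraform.tfstate)"
}
```

The accompanying `TestLocal_remoteStateBackup` test further down exercises exactly this guarantee: a delegated backend's state always comes back wrapped.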


@ -121,7 +121,7 @@ func (b *Local) opApply(
select {
case <-ctx.Done():
if b.CLI != nil {
b.CLI.Output("Interrupt received. Gracefully shutting down...")
b.CLI.Output("stopping apply operation...")
}
// Stop execution


@ -169,6 +169,11 @@ func TestLocal_addAndRemoveStates(t *testing.T) {
// verify it's being called.
type testDelegateBackend struct {
*Local
// return a sentinel error on these calls
stateErr bool
statesErr bool
deleteErr bool
}
var errTestDelegateState = errors.New("State called")
@ -176,22 +181,39 @@ var errTestDelegateStates = errors.New("States called")
var errTestDelegateDeleteState = errors.New("Delete called")
func (b *testDelegateBackend) State(name string) (state.State, error) {
return nil, errTestDelegateState
if b.stateErr {
return nil, errTestDelegateState
}
s := &state.LocalState{
Path: "terraform.tfstate",
PathOut: "terraform.tfstate",
}
return s, nil
}
func (b *testDelegateBackend) States() ([]string, error) {
return nil, errTestDelegateStates
if b.statesErr {
return nil, errTestDelegateStates
}
return []string{"default"}, nil
}
func (b *testDelegateBackend) DeleteState(name string) error {
return errTestDelegateDeleteState
if b.deleteErr {
return errTestDelegateDeleteState
}
return nil
}
// verify that the MultiState methods are dispatched to the correct Backend.
func TestLocal_multiStateBackend(t *testing.T) {
// assign a separate backend where we can read the state
b := &Local{
Backend: &testDelegateBackend{},
Backend: &testDelegateBackend{
stateErr: true,
statesErr: true,
deleteErr: true,
},
}
if _, err := b.State("test"); err != errTestDelegateState {
@ -205,7 +227,43 @@ func TestLocal_multiStateBackend(t *testing.T) {
if err := b.DeleteState("test"); err != errTestDelegateDeleteState {
t.Fatal("expected errTestDelegateDeleteState, got:", err)
}
}
// verify that a remote state backend is always wrapped in a BackupState
func TestLocal_remoteStateBackup(t *testing.T) {
// assign a separate backend to mock a remote state backend
b := &Local{
Backend: &testDelegateBackend{},
}
s, err := b.State("default")
if err != nil {
t.Fatal(err)
}
bs, ok := s.(*state.BackupState)
if !ok {
t.Fatal("remote state is not backed up")
}
if bs.Path != DefaultStateFilename+DefaultBackupExtension {
t.Fatal("bad backup location:", bs.Path)
}
// do the same with a named state, which should use the local env directories
s, err = b.State("test")
if err != nil {
t.Fatal(err)
}
bs, ok = s.(*state.BackupState)
if !ok {
t.Fatal("remote state is not backed up")
}
if bs.Path != filepath.Join(DefaultEnvDir, "test", DefaultStateFilename+DefaultBackupExtension) {
t.Fatal("bad backup location:", bs.Path)
}
}
// change into a tmp dir and return a deferable func to change back and cleanup


@ -0,0 +1,12 @@
package main
import (
"github.com/hashicorp/terraform/builtin/providers/gitlab"
"github.com/hashicorp/terraform/plugin"
)
func main() {
plugin.Serve(&plugin.ServeOpts{
ProviderFunc: gitlab.Provider,
})
}


@ -1,12 +1,12 @@
package main
import (
"github.com/hashicorp/terraform/builtin/providers/localfile"
"github.com/hashicorp/terraform/builtin/providers/local"
"github.com/hashicorp/terraform/plugin"
)
func main() {
plugin.Serve(&plugin.ServeOpts{
ProviderFunc: localfile.Provider,
ProviderFunc: local.Provider,
})
}


@ -773,21 +773,31 @@ func originCustomHeaderHash(v interface{}) int {
}
func expandCustomOriginConfig(m map[string]interface{}) *cloudfront.CustomOriginConfig {
return &cloudfront.CustomOriginConfig{
OriginProtocolPolicy: aws.String(m["origin_protocol_policy"].(string)),
HTTPPort: aws.Int64(int64(m["http_port"].(int))),
HTTPSPort: aws.Int64(int64(m["https_port"].(int))),
OriginSslProtocols: expandCustomOriginConfigSSL(m["origin_ssl_protocols"].([]interface{})),
customOrigin := &cloudfront.CustomOriginConfig{
OriginProtocolPolicy: aws.String(m["origin_protocol_policy"].(string)),
HTTPPort: aws.Int64(int64(m["http_port"].(int))),
HTTPSPort: aws.Int64(int64(m["https_port"].(int))),
OriginSslProtocols: expandCustomOriginConfigSSL(m["origin_ssl_protocols"].([]interface{})),
OriginReadTimeout: aws.Int64(int64(m["origin_read_timeout"].(int))),
OriginKeepaliveTimeout: aws.Int64(int64(m["origin_keepalive_timeout"].(int))),
}
return customOrigin
}
func flattenCustomOriginConfig(cor *cloudfront.CustomOriginConfig) map[string]interface{} {
return map[string]interface{}{
"origin_protocol_policy": *cor.OriginProtocolPolicy,
"http_port": int(*cor.HTTPPort),
"https_port": int(*cor.HTTPSPort),
"origin_ssl_protocols": flattenCustomOriginConfigSSL(cor.OriginSslProtocols),
customOrigin := map[string]interface{}{
"origin_protocol_policy": *cor.OriginProtocolPolicy,
"http_port": int(*cor.HTTPPort),
"https_port": int(*cor.HTTPSPort),
"origin_ssl_protocols": flattenCustomOriginConfigSSL(cor.OriginSslProtocols),
"origin_read_timeout": int(*cor.OriginReadTimeout),
"origin_keepalive_timeout": int(*cor.OriginKeepaliveTimeout),
}
return customOrigin
}
// Assemble the hash for the aws_cloudfront_distribution custom_origin_config
@ -801,6 +811,9 @@ func customOriginConfigHash(v interface{}) int {
for _, v := range sortInterfaceSlice(m["origin_ssl_protocols"].([]interface{})) {
buf.WriteString(fmt.Sprintf("%s-", v.(string)))
}
buf.WriteString(fmt.Sprintf("%d-", m["origin_keepalive_timeout"].(int)))
buf.WriteString(fmt.Sprintf("%d-", m["origin_read_timeout"].(int)))
return hashcode.String(buf.String())
}


@ -117,10 +117,12 @@ func originCustomHeaderConf2() map[string]interface{} {
func customOriginConf() map[string]interface{} {
return map[string]interface{}{
"origin_protocol_policy": "http-only",
"http_port": 80,
"https_port": 443,
"origin_ssl_protocols": customOriginSslProtocolsConf(),
"origin_protocol_policy": "http-only",
"http_port": 80,
"https_port": 443,
"origin_ssl_protocols": customOriginSslProtocolsConf(),
"origin_read_timeout": 30,
"origin_keepalive_timeout": 5,
}
}
@ -785,6 +787,12 @@ func TestCloudFrontStructure_expandCustomOriginConfig(t *testing.T) {
if *co.HTTPSPort != 443 {
t.Fatalf("Expected HTTPSPort to be 443, got %v", *co.HTTPSPort)
}
if *co.OriginReadTimeout != 30 {
t.Fatalf("Expected Origin Read Timeout to be 30, got %v", *co.OriginReadTimeout)
}
if *co.OriginKeepaliveTimeout != 5 {
t.Fatalf("Expected Origin Keepalive Timeout to be 5, got %v", *co.OriginKeepaliveTimeout)
}
}
func TestCloudFrontStructure_flattenCustomOriginConfig(t *testing.T) {


@ -5,8 +5,6 @@ import (
"fmt"
"log"
"regexp"
"sort"
"time"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
@ -249,21 +247,9 @@ func dataSourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error {
return amiDescriptionAttributes(d, image)
}
type imageSort []*ec2.Image
func (a imageSort) Len() int { return len(a) }
func (a imageSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a imageSort) Less(i, j int) bool {
itime, _ := time.Parse(time.RFC3339, *a[i].CreationDate)
jtime, _ := time.Parse(time.RFC3339, *a[j].CreationDate)
return itime.Unix() < jtime.Unix()
}
// Returns the most recent AMI out of a slice of images.
func mostRecentAmi(images []*ec2.Image) *ec2.Image {
sortedImages := images
sort.Sort(imageSort(sortedImages))
return sortedImages[len(sortedImages)-1]
return sortImages(images)[0]
}
// populate the numerous fields that the image description returns.
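The refactor above replaces the inline `imageSort` with a shared `sortImages` helper, and `mostRecentAmi` now takes element 0, which implies the shared helper orders newest-first. A standalone sketch of that idea, using a simplified `image` type as a stand-in for `*ec2.Image` (only the field we sort on):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// image is a simplified stand-in for *ec2.Image.
type image struct {
	ID           string
	CreationDate string // RFC3339, as the EC2 API returns it
}

// sortImages returns a copy of the images ordered newest-first, so that
// sortImages(images)[0] is the most recent AMI.
func sortImages(images []image) []image {
	sorted := make([]image, len(images))
	copy(sorted, images)
	sort.Slice(sorted, func(i, j int) bool {
		ti, _ := time.Parse(time.RFC3339, sorted[i].CreationDate)
		tj, _ := time.Parse(time.RFC3339, sorted[j].CreationDate)
		return ti.After(tj) // descending: newest first
	})
	return sorted
}

func main() {
	images := []image{
		{ID: "ami-old", CreationDate: "2016-01-01T00:00:00Z"},
		{ID: "ami-new", CreationDate: "2017-04-01T00:00:00Z"},
	}
	fmt.Println(sortImages(images)[0].ID) // prints "ami-new"
}
```

The same newest-first ordering is what the `aws_ami_ids` and `aws_ebs_snapshot_ids` data sources below rely on when they switch their `ids` attribute from a set to a list.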


@ -36,10 +36,9 @@ func dataSourceAwsAmiIds() *schema.Resource {
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
@ -101,7 +100,7 @@ func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error {
filteredImages = resp.Images[:]
}
for _, image := range filteredImages {
for _, image := range sortImages(filteredImages) {
imageIds = append(imageIds, *image.ImageId)
}


@ -1,9 +1,11 @@
package aws
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/satori/uuid"
)
func TestAccDataSourceAwsAmiIds_basic(t *testing.T) {
@ -21,6 +23,37 @@ func TestAccDataSourceAwsAmiIds_basic(t *testing.T) {
})
}
func TestAccDataSourceAwsAmiIds_sorted(t *testing.T) {
uuid := uuid.NewV4().String()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsAmiIdsConfig_sorted1(uuid),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet("aws_ami_from_instance.a", "id"),
resource.TestCheckResourceAttrSet("aws_ami_from_instance.b", "id"),
),
},
{
Config: testAccDataSourceAwsAmiIdsConfig_sorted2(uuid),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ami_ids.test"),
resource.TestCheckResourceAttr("data.aws_ami_ids.test", "ids.#", "2"),
resource.TestCheckResourceAttrPair(
"data.aws_ami_ids.test", "ids.0",
"aws_ami_from_instance.b", "id"),
resource.TestCheckResourceAttrPair(
"data.aws_ami_ids.test", "ids.1",
"aws_ami_from_instance.a", "id"),
),
},
},
})
}
func TestAccDataSourceAwsAmiIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -39,15 +72,52 @@ func TestAccDataSourceAwsAmiIds_empty(t *testing.T) {
const testAccDataSourceAwsAmiIdsConfig_basic = `
data "aws_ami_ids" "ubuntu" {
owners = ["099720109477"]
owners = ["099720109477"]
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
}
`
func testAccDataSourceAwsAmiIdsConfig_sorted1(uuid string) string {
return fmt.Sprintf(`
resource "aws_instance" "test" {
ami = "ami-efd0428f"
instance_type = "m3.medium"
count = 2
}
resource "aws_ami_from_instance" "a" {
name = "tf-test-%s-a"
source_instance_id = "${aws_instance.test.*.id[0]}"
snapshot_without_reboot = true
}
resource "aws_ami_from_instance" "b" {
name = "tf-test-%s-b"
source_instance_id = "${aws_instance.test.*.id[1]}"
snapshot_without_reboot = true
// We want to ensure that 'aws_ami_from_instance.a.creation_date' is less
// than 'aws_ami_from_instance.b.creation_date' so that we can ensure that
// the images are being sorted correctly.
depends_on = ["aws_ami_from_instance.a"]
}
`, uuid, uuid)
}
func testAccDataSourceAwsAmiIdsConfig_sorted2(uuid string) string {
return testAccDataSourceAwsAmiIdsConfig_sorted1(uuid) + fmt.Sprintf(`
data "aws_ami_ids" "test" {
owners = ["self"]
name_regex = "^tf-test-%s-"
}
`, uuid)
}
const testAccDataSourceAwsAmiIdsConfig_empty = `
data "aws_ami_ids" "empty" {
filter {


@ -3,7 +3,6 @@ package aws
import (
"fmt"
"log"
"sort"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/schema"
@ -138,20 +137,8 @@ func dataSourceAwsEbsSnapshotRead(d *schema.ResourceData, meta interface{}) erro
return snapshotDescriptionAttributes(d, snapshot)
}
type snapshotSort []*ec2.Snapshot
func (a snapshotSort) Len() int { return len(a) }
func (a snapshotSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a snapshotSort) Less(i, j int) bool {
itime := *a[i].StartTime
jtime := *a[j].StartTime
return itime.Unix() < jtime.Unix()
}
func mostRecentSnapshot(snapshots []*ec2.Snapshot) *ec2.Snapshot {
sortedSnapshots := snapshots
sort.Sort(snapshotSort(sortedSnapshots))
return sortedSnapshots[len(sortedSnapshots)-1]
return sortSnapshots(snapshots)[0]
}
func snapshotDescriptionAttributes(d *schema.ResourceData, snapshot *ec2.Snapshot) error {


@ -28,10 +28,9 @@ func dataSourceAwsEbsSnapshotIds() *schema.Resource {
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
@ -67,7 +66,7 @@ func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) e
snapshotIds := make([]string, 0)
for _, snapshot := range resp.Snapshots {
for _, snapshot := range sortSnapshots(resp.Snapshots) {
snapshotIds = append(snapshotIds, *snapshot.SnapshotId)
}


@ -1,9 +1,11 @@
package aws
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/satori/uuid"
)
func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) {
@ -21,6 +23,37 @@ func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) {
})
}
func TestAccDataSourceAwsEbsSnapshotIds_sorted(t *testing.T) {
uuid := uuid.NewV4().String()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttrSet("aws_ebs_snapshot.a", "id"),
resource.TestCheckResourceAttrSet("aws_ebs_snapshot.b", "id"),
),
},
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"),
resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.test", "ids.#", "2"),
resource.TestCheckResourceAttrPair(
"data.aws_ebs_snapshot_ids.test", "ids.0",
"aws_ebs_snapshot.b", "id"),
resource.TestCheckResourceAttrPair(
"data.aws_ebs_snapshot_ids.test", "ids.1",
"aws_ebs_snapshot.a", "id"),
),
},
},
})
}
func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -40,7 +73,7 @@ func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) {
const testAccDataSourceAwsEbsSnapshotIdsConfig_basic = `
resource "aws_ebs_volume" "test" {
availability_zone = "us-west-2a"
size = 40
size = 1
}
resource "aws_ebs_snapshot" "test" {
@ -52,6 +85,45 @@ data "aws_ebs_snapshot_ids" "test" {
}
`
func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid string) string {
return fmt.Sprintf(`
resource "aws_ebs_volume" "test" {
availability_zone = "us-west-2a"
size = 1
count = 2
}
resource "aws_ebs_snapshot" "a" {
volume_id = "${aws_ebs_volume.test.*.id[0]}"
description = "tf-test-%s"
}
resource "aws_ebs_snapshot" "b" {
volume_id = "${aws_ebs_volume.test.*.id[1]}"
description = "tf-test-%s"
// We want to ensure that 'aws_ebs_snapshot.a.creation_date' is less than
// 'aws_ebs_snapshot.b.creation_date' so that we can ensure that the
// snapshots are being sorted correctly.
depends_on = ["aws_ebs_snapshot.a"]
}
`, uuid, uuid)
}
func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid string) string {
return testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid) + fmt.Sprintf(`
data "aws_ebs_snapshot_ids" "test" {
owners = ["self"]
filter {
name = "description"
values = ["tf-test-%s"]
}
}
`, uuid)
}
const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = `
data "aws_ebs_snapshot_ids" "empty" {
owners = ["000000000000"]


@ -12,10 +12,14 @@ func dataSourceAwsSubnetIDs() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsSubnetIDsRead,
Schema: map[string]*schema.Schema{
"tags": tagsSchemaComputed(),
"vpc_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"ids": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
@ -37,6 +41,10 @@ func dataSourceAwsSubnetIDsRead(d *schema.ResourceData, meta interface{}) error
},
)
req.Filters = append(req.Filters, buildEC2TagFilterList(
tagsFromMap(d.Get("tags").(map[string]interface{})),
)...)
log.Printf("[DEBUG] DescribeSubnets %s\n", req)
resp, err := conn.DescribeSubnets(req)
if err != nil {


@ -21,7 +21,8 @@ func TestAccDataSourceAwsSubnetIDs(t *testing.T) {
{
Config: testAccDataSourceAwsSubnetIDsConfigWithDataSource(rInt),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttr("data.aws_subnet_ids.selected", "ids.#", "1"),
resource.TestCheckResourceAttr("data.aws_subnet_ids.selected", "ids.#", "3"),
resource.TestCheckResourceAttr("data.aws_subnet_ids.private", "ids.#", "2"),
),
},
},
@ -39,20 +40,50 @@ func testAccDataSourceAwsSubnetIDsConfigWithDataSource(rInt int) string {
}
}
resource "aws_subnet" "test" {
resource "aws_subnet" "test_public_a" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
tags {
Name = "terraform-testacc-subnet-ids-data-source"
Name = "terraform-testacc-subnet-ids-data-source-public-a"
Tier = "Public"
}
}
resource "aws_subnet" "test_private_a" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.125.0/24"
availability_zone = "us-west-2a"
tags {
Name = "terraform-testacc-subnet-ids-data-source-private-a"
Tier = "Private"
}
}
resource "aws_subnet" "test_private_b" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.126.0/24"
availability_zone = "us-west-2b"
tags {
Name = "terraform-testacc-subnet-ids-data-source-private-b"
Tier = "Private"
}
}
data "aws_subnet_ids" "selected" {
vpc_id = "${aws_vpc.test.id}"
}
`, rInt, rInt)
data "aws_subnet_ids" "private" {
vpc_id = "${aws_vpc.test.id}"
tags {
Tier = "Private"
}
}
`, rInt, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetIDsConfig(rInt int) string {
@ -65,14 +96,37 @@ func testAccDataSourceAwsSubnetIDsConfig(rInt int) string {
}
}
resource "aws_subnet" "test" {
resource "aws_subnet" "test_public_a" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
tags {
Name = "terraform-testacc-subnet-ids-data-source"
Name = "terraform-testacc-subnet-ids-data-source-public-a"
Tier = "Public"
}
}
`, rInt, rInt)
resource "aws_subnet" "test_private_a" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.125.0/24"
availability_zone = "us-west-2a"
tags {
Name = "terraform-testacc-subnet-ids-data-source-private-a"
Tier = "Private"
}
}
resource "aws_subnet" "test_private_b" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.126.0/24"
availability_zone = "us-west-2b"
tags {
Name = "terraform-testacc-subnet-ids-data-source-private-b"
Tier = "Private"
}
}
`, rInt, rInt, rInt, rInt)
}


@ -3,11 +3,13 @@ package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSGlacierVault_importBasic(t *testing.T) {
resourceName := "aws_glacier_vault.full"
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -15,7 +17,7 @@ func TestAccAWSGlacierVault_importBasic(t *testing.T) {
CheckDestroy: testAccCheckGlacierVaultDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccGlacierVault_full,
Config: testAccGlacierVault_full(rInt),
},
resource.TestStep{


@ -48,7 +48,7 @@ func resourceAwsAlb() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ValidateFunc: validateElbName,
ValidateFunc: validateElbNamePrefix,
},
"internal": {


@ -113,7 +113,7 @@ func TestAccAWSALB_namePrefix(t *testing.T) {
testAccCheckAWSALBExists("aws_alb.alb_test", &conf),
resource.TestCheckResourceAttrSet("aws_alb.alb_test", "name"),
resource.TestMatchResourceAttr("aws_alb.alb_test", "name",
regexp.MustCompile("^tf-lb")),
regexp.MustCompile("^tf-lb-")),
),
},
},
@ -851,7 +851,7 @@ resource "aws_security_group" "alb_test" {
func testAccAWSALBConfig_namePrefix() string {
return fmt.Sprintf(`
resource "aws_alb" "alb_test" {
name_prefix = "tf-lb"
name_prefix = "tf-lb-"
internal = true
security_groups = ["${aws_security_group.alb_test.id}"]
subnets = ["${aws_subnet.alb_test.*.id}"]


@ -11,6 +11,7 @@ import (
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
)
func resourceAwsCloudFrontDistribution() *schema.Resource {
@ -356,6 +357,18 @@ func resourceAwsCloudFrontDistribution() *schema.Resource {
Type: schema.TypeInt,
Required: true,
},
"origin_keepalive_timeout": {
Type: schema.TypeInt,
Optional: true,
Default: 5,
ValidateFunc: validation.IntBetween(1, 60),
},
"origin_read_timeout": {
Type: schema.TypeInt,
Optional: true,
Default: 30,
ValidateFunc: validation.IntBetween(4, 60),
},
"origin_protocol_policy": {
Type: schema.TypeString,
Required: true,
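The two schema fields added above surface as optional attributes of `custom_origin_config`, each with a default and a validated range. A hypothetical configuration fragment (origin names and values are illustrative):

```hcl
resource "aws_cloudfront_distribution" "example" {
  origin {
    domain_name = "origin.example.com"
    origin_id   = "custom-example"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1"]

      # New in this change; both optional.
      origin_read_timeout      = 30 # default 30, validated to 4-60
      origin_keepalive_timeout = 5  # default 5, validated to 1-60
    }
  }

  # ... remaining required distribution arguments elided ...
}
```

Because both fields also feed into `customOriginConfigHash`, existing distributions show a one-time diff on the next plan, as the changelog entry warns.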


@ -27,7 +27,7 @@ func TestAccAWSCloudFrontDistribution_S3Origin(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -95,7 +95,7 @@ func TestAccAWSCloudFrontDistribution_customOrigin(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionCustomConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -118,7 +118,7 @@ func TestAccAWSCloudFrontDistribution_multiOrigin(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionMultiOriginConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -141,7 +141,7 @@ func TestAccAWSCloudFrontDistribution_noOptionalItemsConfig(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionNoOptionalItemsConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -165,7 +165,7 @@ func TestAccAWSCloudFrontDistribution_HTTP11Config(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionHTTP11Config,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -183,7 +183,7 @@ func TestAccAWSCloudFrontDistribution_IsIPV6EnabledConfig(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionIsIPV6EnabledConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -203,7 +203,7 @@ func TestAccAWSCloudFrontDistribution_noCustomErrorResponseConfig(t *testing.T)
Providers: testAccProviders,
CheckDestroy: testAccCheckCloudFrontDistributionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSCloudFrontDistributionNoCustomErroResponseInfo,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudFrontDistributionExistence(
@ -477,6 +477,8 @@ resource "aws_cloudfront_distribution" "custom_distribution" {
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = [ "SSLv3", "TLSv1" ]
origin_read_timeout = 30
origin_keepalive_timeout = 5
}
}
enabled = true
@ -542,6 +544,7 @@ resource "aws_cloudfront_distribution" "multi_origin_distribution" {
https_port = 443
origin_protocol_policy = "http-only"
origin_ssl_protocols = [ "SSLv3", "TLSv1" ]
origin_keepalive_timeout = 45
}
}
enabled = true


@ -5,15 +5,41 @@ import (
"log"
"testing"
"regexp"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/cloudtrail"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"regexp"
)
func TestAccAWSCloudTrail_basic(t *testing.T) {
func TestAccAWSCloudTrail(t *testing.T) {
testCases := map[string]map[string]func(t *testing.T){
"Trail": {
"basic": testAccAWSCloudTrail_basic,
"enableLogging": testAccAWSCloudTrail_enable_logging,
"isMultiRegion": testAccAWSCloudTrail_is_multi_region,
"logValidation": testAccAWSCloudTrail_logValidation,
"kmsKey": testAccAWSCloudTrail_kmsKey,
"tags": testAccAWSCloudTrail_tags,
},
}
for group, m := range testCases {
m := m
t.Run(group, func(t *testing.T) {
for name, tc := range m {
tc := tc
t.Run(name, func(t *testing.T) {
tc(t)
})
}
})
}
}
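The refactor above folds the individual `TestAccAWSCloudTrail_*` functions into one grouped entry point so the CloudTrail cases run sequentially via `t.Run` instead of in parallel, which keeps them from colliding on shared AWS resources. A minimal sketch of that nested-map dispatch pattern (the names here are illustrative, not the Terraform test helpers):

```go
package main

import (
	"fmt"
	"sort"
)

// runGrouped mirrors the grouped-subtest layout: cases live in a nested map
// keyed by group and name, and every case runs one at a time, so tests that
// share resource names no longer race each other.
func runGrouped(groups map[string]map[string]func() string) []string {
	var results []string
	for group, m := range groups {
		for name, tc := range m {
			results = append(results, group+"/"+name+"="+tc())
		}
	}
	sort.Strings(results) // map iteration order is random; sort for stable output
	return results
}

func main() {
	groups := map[string]map[string]func() string{
		"Trail": {
			"basic":         func() string { return "pass" },
			"logValidation": func() string { return "pass" },
		},
	}
	fmt.Println(runGrouped(groups))
}
```

In the real test file the inner closures are the lowercased `testAccAWSCloudTrail_*` functions, and `t.Run` provides the grouping that this sketch fakes with string keys.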
func testAccAWSCloudTrail_basic(t *testing.T) {
var trail cloudtrail.Trail
cloudTrailRandInt := acctest.RandInt()
@ -45,7 +71,7 @@ func TestAccAWSCloudTrail_basic(t *testing.T) {
})
}
func TestAccAWSCloudTrail_enable_logging(t *testing.T) {
func testAccAWSCloudTrail_enable_logging(t *testing.T) {
var trail cloudtrail.Trail
cloudTrailRandInt := acctest.RandInt()
@ -87,7 +113,7 @@ func TestAccAWSCloudTrail_enable_logging(t *testing.T) {
})
}
func TestAccAWSCloudTrail_is_multi_region(t *testing.T) {
func testAccAWSCloudTrail_is_multi_region(t *testing.T) {
var trail cloudtrail.Trail
cloudTrailRandInt := acctest.RandInt()
@ -127,7 +153,7 @@ func TestAccAWSCloudTrail_is_multi_region(t *testing.T) {
})
}
func TestAccAWSCloudTrail_logValidation(t *testing.T) {
func testAccAWSCloudTrail_logValidation(t *testing.T) {
var trail cloudtrail.Trail
cloudTrailRandInt := acctest.RandInt()
@ -160,7 +186,7 @@ func TestAccAWSCloudTrail_logValidation(t *testing.T) {
})
}
func TestAccAWSCloudTrail_kmsKey(t *testing.T) {
func testAccAWSCloudTrail_kmsKey(t *testing.T) {
var trail cloudtrail.Trail
cloudTrailRandInt := acctest.RandInt()
keyRegex := regexp.MustCompile("^arn:aws:([a-zA-Z0-9\\-])+:([a-z]{2}-[a-z]+-\\d{1})?:(\\d{12})?:(.*)$")
@ -184,7 +210,7 @@ func TestAccAWSCloudTrail_kmsKey(t *testing.T) {
})
}
func TestAccAWSCloudTrail_tags(t *testing.T) {
func testAccAWSCloudTrail_tags(t *testing.T) {
var trail cloudtrail.Trail
var trailTags []*cloudtrail.Tag
var trailTagsModified []*cloudtrail.Tag


@ -179,7 +179,7 @@ func resourceAWSEbsVolumeUpdate(d *schema.ResourceData, meta interface{}) error
stateConf := &resource.StateChangeConf{
Pending: []string{"creating", "modifying"},
Target: []string{"available"},
Target: []string{"available", "in-use"},
Refresh: volumeStateRefreshFunc(conn, *result.VolumeModification.VolumeId),
Timeout: 5 * time.Minute,
Delay: 10 * time.Second,


@ -30,6 +30,31 @@ func TestAccAWSEBSVolume_basic(t *testing.T) {
})
}
func TestAccAWSEBSVolume_updateAttachedEbsVolume(t *testing.T) {
var v ec2.Volume
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_ebs_volume.test",
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccAwsEbsAttachedVolumeConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckVolumeExists("aws_ebs_volume.test", &v),
resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "10"),
),
},
{
Config: testAccAwsEbsAttachedVolumeConfigUpdateSize,
Check: resource.ComposeTestCheckFunc(
testAccCheckVolumeExists("aws_ebs_volume.test", &v),
resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "20"),
),
},
},
})
}
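The test above exercises the `StateChangeConf` fix from the resource diff: a volume that remains attached reports `in-use` after a modification, never `available`, so the updated `Target` list must accept both states or the resize times out. A minimal sketch of that polling behavior (the function and parameter names are illustrative, not the Terraform helper API):

```go
package main

import (
	"errors"
	"fmt"
)

// waitForState polls a refresh function until the reported state matches one
// of the accepted targets. Accepting both "available" and "in-use" is what
// lets an attached EBS volume finish its modification successfully.
func waitForState(refresh func() string, targets []string, maxPolls int) (string, error) {
	for i := 0; i < maxPolls; i++ {
		state := refresh()
		for _, t := range targets {
			if state == t {
				return state, nil
			}
		}
	}
	return "", errors.New("timeout waiting for target state")
}

func main() {
	states := []string{"modifying", "modifying", "in-use"}
	i := 0
	refresh := func() string { s := states[i]; i++; return s }
	got, err := waitForState(refresh, []string{"available", "in-use"}, 5)
	fmt.Println(got, err)
}
```

With only `"available"` in the target list, the attached volume in this scenario would never satisfy the loop, which is exactly the bug the diff fixes.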
func TestAccAWSEBSVolume_updateSize(t *testing.T) {
var v ec2.Volume
resource.Test(t, resource.TestCase{
@ -200,6 +225,124 @@ resource "aws_ebs_volume" "test" {
}
`
const testAccAwsEbsAttachedVolumeConfig = `
data "aws_ami" "debian_jessie_latest" {
most_recent = true
filter {
name = "name"
values = ["debian-jessie-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
owners = ["379101102735"] # Debian
}
resource "aws_instance" "test" {
ami = "${data.aws_ami.debian_jessie_latest.id}"
associate_public_ip_address = true
count = 1
instance_type = "t2.medium"
root_block_device {
volume_size = "10"
volume_type = "standard"
delete_on_termination = true
}
tags {
Name = "test-terraform"
}
}
resource "aws_ebs_volume" "test" {
depends_on = ["aws_instance.test"]
availability_zone = "${aws_instance.test.availability_zone}"
type = "gp2"
size = "10"
}
resource "aws_volume_attachment" "test" {
depends_on = ["aws_ebs_volume.test"]
device_name = "/dev/xvdg"
volume_id = "${aws_ebs_volume.test.id}"
instance_id = "${aws_instance.test.id}"
}
`
const testAccAwsEbsAttachedVolumeConfigUpdateSize = `
data "aws_ami" "debian_jessie_latest" {
most_recent = true
filter {
name = "name"
values = ["debian-jessie-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
owners = ["379101102735"] # Debian
}
resource "aws_instance" "test" {
ami = "${data.aws_ami.debian_jessie_latest.id}"
associate_public_ip_address = true
count = 1
instance_type = "t2.medium"
root_block_device {
volume_size = "10"
volume_type = "standard"
delete_on_termination = true
}
tags {
Name = "test-terraform"
}
}
resource "aws_ebs_volume" "test" {
depends_on = ["aws_instance.test"]
availability_zone = "${aws_instance.test.availability_zone}"
type = "gp2"
size = "20"
}
resource "aws_volume_attachment" "test" {
depends_on = ["aws_ebs_volume.test"]
device_name = "/dev/xvdg"
volume_id = "${aws_ebs_volume.test.id}"
instance_id = "${aws_instance.test.id}"
}
`
const testAccAwsEbsVolumeConfigUpdateSize = `
resource "aws_ebs_volume" "test" {
availability_zone = "us-west-2a"


@ -85,20 +85,21 @@ func TestParseTaskDefinition(t *testing.T) {
}
func TestAccAWSEcsServiceWithARN(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsService,
Config: testAccAWSEcsService(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
),
},
{
Config: testAccAWSEcsServiceModified,
Config: testAccAWSEcsServiceModified(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
),
@ -181,13 +182,14 @@ func TestAccAWSEcsService_withIamRole(t *testing.T) {
}
func TestAccAWSEcsService_withDeploymentValues(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithDeploymentValues,
Config: testAccAWSEcsServiceWithDeploymentValues(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr(
@ -262,20 +264,21 @@ func TestAccAWSEcsService_withAlb(t *testing.T) {
}
func TestAccAWSEcsServiceWithPlacementStrategy(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsService,
Config: testAccAWSEcsService(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "0"),
),
},
{
Config: testAccAWSEcsServiceWithPlacementStrategy,
Config: testAccAWSEcsServiceWithPlacementStrategy(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "1"),
@ -286,13 +289,14 @@ func TestAccAWSEcsServiceWithPlacementStrategy(t *testing.T) {
}
func TestAccAWSEcsServiceWithPlacementConstraints(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithPlacementConstraint,
Config: testAccAWSEcsServiceWithPlacementConstraint(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_constraints.#", "1"),
@ -303,13 +307,14 @@ func TestAccAWSEcsServiceWithPlacementConstraints(t *testing.T) {
}
func TestAccAWSEcsServiceWithPlacementConstraints_emptyExpression(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression,
Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_constraints.#", "1"),
@ -366,9 +371,10 @@ func testAccCheckAWSEcsServiceExists(name string) resource.TestCheckFunc {
}
}
var testAccAWSEcsService = `
func testAccAWSEcsService(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest1"
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
@ -387,16 +393,18 @@ DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
}
`
`, rInt, rInt)
}
var testAccAWSEcsServiceModified = `
func testAccAWSEcsServiceModified(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest1"
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
@ -415,16 +423,18 @@ DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 2
}
`
`, rInt, rInt)
}
var testAccAWSEcsServiceWithPlacementStrategy = `
func testAccAWSEcsServiceWithPlacementStrategy(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest1"
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
@ -443,7 +453,7 @@ DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
@ -452,43 +462,47 @@ resource "aws_ecs_service" "mongo" {
field = "memory"
}
}
`
`, rInt, rInt)
}
var testAccAWSEcsServiceWithPlacementConstraint = `
func testAccAWSEcsServiceWithPlacementConstraint(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
family = "mongodb"
container_definitions = <<DEFINITION
[
{
"cpu": 128,
"essential": true,
"image": "mongo:latest",
"memory": 128,
"name": "mongodb"
}
]
DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
`, rInt, rInt)
}
func testAccAWSEcsServiceWithPlacementConstraintEmptyExpression(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest21"
}
resource "aws_ecs_task_definition" "mongo" {
family = "mongodb"
container_definitions = <<DEFINITION
[
{
"cpu": 128,
"essential": true,
"image": "mongo:latest",
"memory": 128,
"name": "mongodb"
}
]
DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
`
var testAccAWSEcsServiceWithPlacementConstraintEmptyExpression = `
resource "aws_ecs_cluster" "default" {
name = "terraformecstest212"
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
family = "mongodb"
@ -505,7 +519,7 @@ resource "aws_ecs_task_definition" "mongo" {
DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
@ -513,7 +527,8 @@ resource "aws_ecs_service" "mongo" {
type = "distinctInstance"
}
}
`
`, rInt, rInt)
}
var testAccAWSEcsService_withIamRole = `
resource "aws_ecs_cluster" "main" {
@ -609,9 +624,10 @@ resource "aws_ecs_service" "ghost" {
}
`
var testAccAWSEcsServiceWithDeploymentValues = `
func testAccAWSEcsServiceWithDeploymentValues(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest1"
name = "terraformecstest-%d"
}
resource "aws_ecs_task_definition" "mongo" {
@ -630,12 +646,13 @@ DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
}
`
`, rInt, rInt)
}
var tpl_testAccAWSEcsService_withLbChanges = `
resource "aws_ecs_cluster" "main" {


@ -9,18 +9,20 @@ import (
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/glacier"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSGlacierVault_basic(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckGlacierVaultDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccGlacierVault_basic,
Config: testAccGlacierVault_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGlacierVaultExists("aws_glacier_vault.test"),
),
@ -30,13 +32,14 @@ func TestAccAWSGlacierVault_basic(t *testing.T) {
}
func TestAccAWSGlacierVault_full(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckGlacierVaultDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccGlacierVault_full,
Config: testAccGlacierVault_full(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGlacierVaultExists("aws_glacier_vault.full"),
),
@ -46,19 +49,20 @@ func TestAccAWSGlacierVault_full(t *testing.T) {
}
func TestAccAWSGlacierVault_RemoveNotifications(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckGlacierVaultDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccGlacierVault_full,
Config: testAccGlacierVault_full(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGlacierVaultExists("aws_glacier_vault.full"),
),
},
resource.TestStep{
Config: testAccGlacierVault_withoutNotification,
Config: testAccGlacierVault_withoutNotification(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGlacierVaultExists("aws_glacier_vault.full"),
testAccCheckVaultNotificationsMissing("aws_glacier_vault.full"),
@ -205,19 +209,22 @@ func testAccCheckGlacierVaultDestroy(s *terraform.State) error {
return nil
}
const testAccGlacierVault_basic = `
func testAccGlacierVault_basic(rInt int) string {
return fmt.Sprintf(`
resource "aws_glacier_vault" "test" {
name = "my_test_vault"
name = "my_test_vault_%d"
}
`, rInt)
}
`
const testAccGlacierVault_full = `
func testAccGlacierVault_full(rInt int) string {
return fmt.Sprintf(`
resource "aws_sns_topic" "aws_sns_topic" {
name = "glacier-sns-topic"
name = "glacier-sns-topic-%d"
}
resource "aws_glacier_vault" "full" {
name = "my_test_vault"
name = "my_test_vault_%d"
notification {
sns_topic = "${aws_sns_topic.aws_sns_topic.arn}"
events = ["ArchiveRetrievalCompleted","InventoryRetrievalCompleted"]
@ -226,17 +233,20 @@ resource "aws_glacier_vault" "full" {
Test="Test1"
}
}
`
`, rInt, rInt)
}
const testAccGlacierVault_withoutNotification = `
func testAccGlacierVault_withoutNotification(rInt int) string {
return fmt.Sprintf(`
resource "aws_sns_topic" "aws_sns_topic" {
name = "glacier-sns-topic"
name = "glacier-sns-topic-%d"
}
resource "aws_glacier_vault" "full" {
name = "my_test_vault"
name = "my_test_vault_%d"
tags {
Test="Test1"
}
}
`
`, rInt, rInt)
}


@ -128,11 +128,45 @@ func resourceAwsInstance() *schema.Resource {
Computed: true,
},
// TODO: Deprecate me v0.10.0
"network_interface_id": {
Type: schema.TypeString,
Computed: true,
Deprecated: "Please use `primary_network_interface_id` instead",
},
"primary_network_interface_id": {
Type: schema.TypeString,
Computed: true,
},
"network_interface": {
ConflictsWith: []string{"associate_public_ip_address", "subnet_id", "private_ip", "vpc_security_group_ids", "security_groups", "ipv6_addresses", "ipv6_address_count", "source_dest_check"},
Type: schema.TypeSet,
Optional: true,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"delete_on_termination": {
Type: schema.TypeBool,
Default: false,
Optional: true,
ForceNew: true,
},
"network_interface_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"device_index": {
Type: schema.TypeInt,
Required: true,
ForceNew: true,
},
},
},
},
"public_ip": {
Type: schema.TypeString,
Computed: true,
@ -200,6 +234,8 @@ func resourceAwsInstance() *schema.Resource {
"tags": tagsSchema(),
"volume_tags": tagsSchemaComputed(),
"block_device": {
Type: schema.TypeMap,
Optional: true,
@ -396,6 +432,34 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error {
runOpts.Ipv6Addresses = ipv6Addresses
}
tagsSpec := make([]*ec2.TagSpecification, 0)
if v, ok := d.GetOk("tags"); ok {
tags := tagsFromMap(v.(map[string]interface{}))
spec := &ec2.TagSpecification{
ResourceType: aws.String("instance"),
Tags: tags,
}
tagsSpec = append(tagsSpec, spec)
}
if v, ok := d.GetOk("volume_tags"); ok {
tags := tagsFromMap(v.(map[string]interface{}))
spec := &ec2.TagSpecification{
ResourceType: aws.String("volume"),
Tags: tags,
}
tagsSpec = append(tagsSpec, spec)
}
if len(tagsSpec) > 0 {
runOpts.TagSpecifications = tagsSpec
}
// Create the instance
log.Printf("[DEBUG] Run configuration: %s", runOpts)
@ -528,23 +592,62 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error {
d.Set("private_ip", instance.PrivateIpAddress)
d.Set("iam_instance_profile", iamInstanceProfileArnToName(instance.IamInstanceProfile))
// Set configured Network Interface Device Index Slice
// We only want to read and populate state for the configured network_interface attachments. Otherwise, other
// resources have the potential to attach network interfaces to the instance and cause a perpetual create/destroy
// diff. For that reason, we only read state for the interfaces configured on this specific resource.
var configuredDeviceIndexes []int
if v, ok := d.GetOk("network_interface"); ok {
vL := v.(*schema.Set).List()
for _, vi := range vL {
mVi := vi.(map[string]interface{})
configuredDeviceIndexes = append(configuredDeviceIndexes, mVi["device_index"].(int))
}
}
var ipv6Addresses []string
if len(instance.NetworkInterfaces) > 0 {
for _, ni := range instance.NetworkInterfaces {
if *ni.Attachment.DeviceIndex == 0 {
d.Set("subnet_id", ni.SubnetId)
d.Set("network_interface_id", ni.NetworkInterfaceId)
d.Set("associate_public_ip_address", ni.Association != nil)
d.Set("ipv6_address_count", len(ni.Ipv6Addresses))
for _, address := range ni.Ipv6Addresses {
ipv6Addresses = append(ipv6Addresses, *address.Ipv6Address)
var primaryNetworkInterface ec2.InstanceNetworkInterface
var networkInterfaces []map[string]interface{}
for _, iNi := range instance.NetworkInterfaces {
ni := make(map[string]interface{})
if *iNi.Attachment.DeviceIndex == 0 {
primaryNetworkInterface = *iNi
}
// If the attached network device is inside our configuration, refresh state with values found.
// Otherwise, assume the network device was attached via an outside resource.
for _, index := range configuredDeviceIndexes {
if index == int(*iNi.Attachment.DeviceIndex) {
ni["device_index"] = *iNi.Attachment.DeviceIndex
ni["network_interface_id"] = *iNi.NetworkInterfaceId
ni["delete_on_termination"] = *iNi.Attachment.DeleteOnTermination
}
}
// Don't add empty network interfaces to schema
if len(ni) == 0 {
continue
}
networkInterfaces = append(networkInterfaces, ni)
}
if err := d.Set("network_interface", networkInterfaces); err != nil {
return fmt.Errorf("Error setting network_interfaces: %v", err)
}
// Set primary network interface details
d.Set("subnet_id", primaryNetworkInterface.SubnetId)
d.Set("network_interface_id", primaryNetworkInterface.NetworkInterfaceId) // TODO: Deprecate me v0.10.0
d.Set("primary_network_interface_id", primaryNetworkInterface.NetworkInterfaceId)
d.Set("associate_public_ip_address", primaryNetworkInterface.Association != nil)
d.Set("ipv6_address_count", len(primaryNetworkInterface.Ipv6Addresses))
for _, address := range primaryNetworkInterface.Ipv6Addresses {
ipv6Addresses = append(ipv6Addresses, *address.Ipv6Address)
}
} else {
d.Set("subnet_id", instance.SubnetId)
d.Set("network_interface_id", "")
d.Set("network_interface_id", "") // TODO: Deprecate me v0.10.0
d.Set("primary_network_interface_id", "")
}
if err := d.Set("ipv6_addresses", ipv6Addresses); err != nil {
@ -563,6 +666,10 @@ func resourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error {
d.Set("tags", tagsToMap(instance.Tags))
if err := readVolumeTags(conn, d); err != nil {
return err
}
if err := readSecurityGroups(d, instance); err != nil {
return err
}
@ -605,16 +712,27 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
d.Partial(true)
if err := setTags(conn, d); err != nil {
return err
} else {
d.SetPartial("tags")
if d.HasChange("tags") && !d.IsNewResource() {
if err := setTags(conn, d); err != nil {
return err
} else {
d.SetPartial("tags")
}
}
if d.HasChange("volume_tags") && !d.IsNewResource() {
if err := setVolumeTags(conn, d); err != nil {
return err
} else {
d.SetPartial("volume_tags")
}
}
if d.HasChange("iam_instance_profile") && !d.IsNewResource() {
request := &ec2.DescribeIamInstanceProfileAssociationsInput{
Filters: []*ec2.Filter{
&ec2.Filter{
{
Name: aws.String("instance-id"),
Values: []*string{aws.String(d.Id())},
},
@ -671,24 +789,28 @@ func resourceAwsInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
if d.HasChange("source_dest_check") || d.IsNewResource() {
// SourceDestCheck can only be set on VPC instances.
// AWS will return an error of InvalidParameterCombination if we attempt
// to modify the source_dest_check of an instance in EC2 Classic
log.Printf("[INFO] Modifying `source_dest_check` on Instance %s", d.Id())
_, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{
InstanceId: aws.String(d.Id()),
SourceDestCheck: &ec2.AttributeBooleanValue{
Value: aws.Bool(d.Get("source_dest_check").(bool)),
},
})
if err != nil {
if ec2err, ok := err.(awserr.Error); ok {
// Toloerate InvalidParameterCombination error in Classic, otherwise
// return the error
if "InvalidParameterCombination" != ec2err.Code() {
return err
// SourceDestCheck can only be modified on an instance without manually specified network interfaces.
// SourceDestCheck, in that case, is configured at the network interface level
if _, ok := d.GetOk("network_interface"); !ok {
if d.HasChange("source_dest_check") || d.IsNewResource() {
// SourceDestCheck can only be set on VPC instances.
// AWS will return an error of InvalidParameterCombination if we attempt
// to modify the source_dest_check of an instance in EC2 Classic
log.Printf("[INFO] Modifying `source_dest_check` on Instance %s", d.Id())
_, err := conn.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{
InstanceId: aws.String(d.Id()),
SourceDestCheck: &ec2.AttributeBooleanValue{
Value: aws.Bool(d.Get("source_dest_check").(bool)),
},
})
if err != nil {
if ec2err, ok := err.(awserr.Error); ok {
// Tolerate InvalidParameterCombination error in Classic, otherwise
// return the error
if "InvalidParameterCombination" != ec2err.Code() {
return err
}
log.Printf("[WARN] Attempted to modify SourceDestCheck on non VPC instance: %s", ec2err.Message())
}
log.Printf("[WARN] Attempted to modify SourceDestCheck on non VPC instance: %s", ec2err.Message())
}
}
}
@ -1008,6 +1130,55 @@ func fetchRootDeviceName(ami string, conn *ec2.EC2) (*string, error) {
return rootDeviceName, nil
}
func buildNetworkInterfaceOpts(d *schema.ResourceData, groups []*string, nInterfaces interface{}) []*ec2.InstanceNetworkInterfaceSpecification {
networkInterfaces := []*ec2.InstanceNetworkInterfaceSpecification{}
// Get necessary items
associatePublicIPAddress := d.Get("associate_public_ip_address").(bool)
subnet, hasSubnet := d.GetOk("subnet_id")
if hasSubnet && associatePublicIPAddress {
// If we have a non-default VPC / Subnet specified, we can flag
// AssociatePublicIpAddress to get a Public IP assigned. By default these are not provided.
// You cannot specify both SubnetId and the NetworkInterface.0.* parameters though, otherwise
// you get: Network interfaces and an instance-level subnet ID may not be specified on the same request
// You also need to attach Security Groups to the NetworkInterface instead of the instance,
// to avoid: Network interfaces and an instance-level security groups may not be specified on
// the same request
ni := &ec2.InstanceNetworkInterfaceSpecification{
AssociatePublicIpAddress: aws.Bool(associatePublicIPAddress),
DeviceIndex: aws.Int64(int64(0)),
SubnetId: aws.String(subnet.(string)),
Groups: groups,
}
if v, ok := d.GetOk("private_ip"); ok {
ni.PrivateIpAddress = aws.String(v.(string))
}
if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 {
for _, v := range v.List() {
ni.Groups = append(ni.Groups, aws.String(v.(string)))
}
}
networkInterfaces = append(networkInterfaces, ni)
} else {
// If we have manually specified network interfaces, build and attach those here.
vL := nInterfaces.(*schema.Set).List()
for _, v := range vL {
ini := v.(map[string]interface{})
ni := &ec2.InstanceNetworkInterfaceSpecification{
DeviceIndex: aws.Int64(int64(ini["device_index"].(int))),
NetworkInterfaceId: aws.String(ini["network_interface_id"].(string)),
DeleteOnTermination: aws.Bool(ini["delete_on_termination"].(bool)),
}
networkInterfaces = append(networkInterfaces, ni)
}
}
return networkInterfaces
}
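`buildNetworkInterfaceOpts` above branches on exactly two inputs: a subnet with `associate_public_ip_address` wins and produces a single device-0 interface, while anything else attaches the manually configured `network_interface` set as-is. A minimal sketch of that branching with plain stand-in types (the `nic` struct is illustrative, not the AWS SDK type):

```go
package main

import "fmt"

// nic is a stand-in for ec2.InstanceNetworkInterfaceSpecification.
type nic struct {
	DeviceIndex int
	SubnetID    string
	NetworkID   string
}

// buildNICs mirrors the branching in buildNetworkInterfaceOpts: a subnet plus
// a public-IP request yields one implicit device-0 interface; otherwise every
// manually configured interface is passed through for attachment.
func buildNICs(subnetID string, assocPublicIP bool, configured []nic) []nic {
	if subnetID != "" && assocPublicIP {
		return []nic{{DeviceIndex: 0, SubnetID: subnetID}}
	}
	return configured
}

func main() {
	fmt.Println(buildNICs("subnet-123", true, nil))
	fmt.Println(buildNICs("", false, []nic{{DeviceIndex: 0, NetworkID: "eni-abc"}}))
}
```

This split is also why the real function carries the long comment about AWS rejecting instance-level subnet or security-group parameters once explicit network interfaces are in play: the two shapes are mutually exclusive in the RunInstances request.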
func readBlockDeviceMappingsFromConfig(
d *schema.ResourceData, conn *ec2.EC2) ([]*ec2.BlockDeviceMapping, error) {
blockDevices := make([]*ec2.BlockDeviceMapping, 0)
@ -1125,6 +1296,39 @@ func readBlockDeviceMappingsFromConfig(
return blockDevices, nil
}
func readVolumeTags(conn *ec2.EC2, d *schema.ResourceData) error {
volumeIds, err := getAwsInstanceVolumeIds(conn, d)
if err != nil {
return err
}
tagsResp, err := conn.DescribeTags(&ec2.DescribeTagsInput{
Filters: []*ec2.Filter{
{
Name: aws.String("resource-id"),
Values: volumeIds,
},
},
})
if err != nil {
return err
}
var tags []*ec2.Tag
for _, t := range tagsResp.Tags {
tag := &ec2.Tag{
Key: t.Key,
Value: t.Value,
}
tags = append(tags, tag)
}
d.Set("volume_tags", tagsToMap(tags))
return nil
}
// Determine whether we're referring to security groups with
// IDs or names. We use a heuristic to figure this out. By default,
// we use IDs if we're in a VPC. However, if we previously had an
@ -1260,33 +1464,14 @@ func buildAwsInstanceOpts(
}
}
if hasSubnet && associatePublicIPAddress {
// If we have a non-default VPC / Subnet specified, we can flag
// AssociatePublicIpAddress to get a Public IP assigned. By default these are not provided.
// You cannot specify both SubnetId and the NetworkInterface.0.* parameters though, otherwise
// you get: Network interfaces and an instance-level subnet ID may not be specified on the same request
// You also need to attach Security Groups to the NetworkInterface instead of the instance,
// to avoid: Network interfaces and an instance-level security groups may not be specified on
// the same request
ni := &ec2.InstanceNetworkInterfaceSpecification{
AssociatePublicIpAddress: aws.Bool(associatePublicIPAddress),
DeviceIndex: aws.Int64(int64(0)),
SubnetId: aws.String(subnetID),
Groups: groups,
}
networkInterfaces, interfacesOk := d.GetOk("network_interface")
if v, ok := d.GetOk("private_ip"); ok {
ni.PrivateIpAddress = aws.String(v.(string))
}
if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 {
for _, v := range v.List() {
ni.Groups = append(ni.Groups, aws.String(v.(string)))
}
}
opts.NetworkInterfaces = []*ec2.InstanceNetworkInterfaceSpecification{ni}
// If setting subnet and public address, OR manual network interfaces, populate those now.
if hasSubnet && associatePublicIPAddress || interfacesOk {
// Otherwise we're attaching (a) network interface(s)
opts.NetworkInterfaces = buildNetworkInterfaceOpts(d, groups, networkInterfaces)
} else {
// If simply specifying a subnetID, privateIP, Security Groups, or VPC Security Groups, build these now
if subnetID != "" {
opts.SubnetID = aws.String(subnetID)
}
@ -1319,7 +1504,6 @@ func buildAwsInstanceOpts(
if len(blockDevices) > 0 {
opts.BlockDeviceMappings = blockDevices
}
return opts, nil
}
@ -1372,3 +1556,27 @@ func userDataHashSum(user_data string) string {
hash := sha1.Sum(v)
return hex.EncodeToString(hash[:])
}
func getAwsInstanceVolumeIds(conn *ec2.EC2, d *schema.ResourceData) ([]*string, error) {
volumeIds := make([]*string, 0)
opts := &ec2.DescribeVolumesInput{
Filters: []*ec2.Filter{
{
Name: aws.String("attachment.instance-id"),
Values: []*string{aws.String(d.Id())},
},
},
}
resp, err := conn.DescribeVolumes(opts)
if err != nil {
return nil, err
}
for _, v := range resp.Volumes {
volumeIds = append(volumeIds, v.VolumeId)
}
return volumeIds, nil
}
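Together, `getAwsInstanceVolumeIds` and `readVolumeTags` above compose a two-step read: list the volume IDs attached to the instance (DescribeVolumes filtered on `attachment.instance-id`), then gather the tags on exactly those IDs (DescribeTags filtered on `resource-id`). A minimal sketch of that composition with in-memory maps standing in for the two AWS calls (all names here are illustrative):

```go
package main

import "fmt"

// tag is a stand-in for ec2.Tag.
type tag struct{ Key, Value string }

// volumeTags sketches the two-step flow: volumesByInstance plays the role of
// DescribeVolumes filtered on attachment.instance-id, and tagsByVolume plays
// the role of DescribeTags filtered on resource-id.
func volumeTags(instanceID string, volumesByInstance map[string][]string, tagsByVolume map[string][]tag) []tag {
	var tags []tag
	for _, volID := range volumesByInstance[instanceID] {
		tags = append(tags, tagsByVolume[volID]...)
	}
	return tags
}

func main() {
	vols := map[string][]string{"i-123": {"vol-1", "vol-2"}}
	byVol := map[string][]tag{
		"vol-1": {{Key: "Name", Value: "data"}},
		"vol-2": {{Key: "Env", Value: "dev"}},
	}
	fmt.Println(volumeTags("i-123", vols, byVol))
}
```

The real code then flattens the result through `tagsToMap` into the `volume_tags` attribute, which is why the schema marks it Computed.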


@ -15,13 +15,13 @@ func resourceAwsInstanceMigrateState(
switch v {
case 0:
log.Println("[INFO] Found AWS Instance State v0; migrating to v1")
return migrateStateV0toV1(is)
return migrateAwsInstanceStateV0toV1(is)
default:
return is, fmt.Errorf("Unexpected schema version: %d", v)
}
}
-func migrateStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) {
+func migrateAwsInstanceStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) {
if is.Empty() || is.Attributes == nil {
log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
return is, nil


@ -616,7 +616,6 @@ func TestAccAWSInstance_tags(t *testing.T) {
testAccCheckTags(&v.Tags, "#", ""),
),
},
{
Config: testAccCheckInstanceConfigTagsUpdate,
Check: resource.ComposeTestCheckFunc(
@ -629,6 +628,75 @@ func TestAccAWSInstance_tags(t *testing.T) {
})
}
func TestAccAWSInstance_volumeTags(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckInstanceConfigNoVolumeTags,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &v),
resource.TestCheckNoResourceAttr(
"aws_instance.foo", "volume_tags"),
),
},
{
Config: testAccCheckInstanceConfigWithVolumeTags,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &v),
resource.TestCheckResourceAttr(
"aws_instance.foo", "volume_tags.%", "1"),
resource.TestCheckResourceAttr(
"aws_instance.foo", "volume_tags.Name", "acceptance-test-volume-tag"),
),
},
{
Config: testAccCheckInstanceConfigWithVolumeTagsUpdate,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &v),
resource.TestCheckResourceAttr(
"aws_instance.foo", "volume_tags.%", "2"),
resource.TestCheckResourceAttr(
"aws_instance.foo", "volume_tags.Name", "acceptance-test-volume-tag"),
resource.TestCheckResourceAttr(
"aws_instance.foo", "volume_tags.Environment", "dev"),
),
},
{
Config: testAccCheckInstanceConfigNoVolumeTags,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &v),
resource.TestCheckNoResourceAttr(
"aws_instance.foo", "volume_tags"),
),
},
},
})
}
func TestAccAWSInstance_volumeTagsComputed(t *testing.T) {
var v ec2.Instance
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckInstanceConfigWithAttachedVolume,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &v),
),
ExpectNonEmptyPlan: false,
},
},
})
}
func TestAccAWSInstance_instanceProfileChange(t *testing.T) {
var v ec2.Instance
rName := acctest.RandString(5)
@ -877,6 +945,58 @@ func TestAccAWSInstance_changeInstanceType(t *testing.T) {
})
}
func TestAccAWSInstance_primaryNetworkInterface(t *testing.T) {
var instance ec2.Instance
var ini ec2.NetworkInterface
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccInstanceConfigPrimaryNetworkInterface,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &instance),
testAccCheckAWSENIExists("aws_network_interface.bar", &ini),
resource.TestCheckResourceAttr("aws_instance.foo", "network_interface.#", "1"),
),
},
},
})
}
func TestAccAWSInstance_addSecondaryInterface(t *testing.T) {
var before ec2.Instance
var after ec2.Instance
var iniPrimary ec2.NetworkInterface
var iniSecondary ec2.NetworkInterface
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccInstanceConfigAddSecondaryNetworkInterfaceBefore,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &before),
testAccCheckAWSENIExists("aws_network_interface.primary", &iniPrimary),
resource.TestCheckResourceAttr("aws_instance.foo", "network_interface.#", "1"),
),
},
{
Config: testAccInstanceConfigAddSecondaryNetworkInterfaceAfter,
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("aws_instance.foo", &after),
testAccCheckAWSENIExists("aws_network_interface.secondary", &iniSecondary),
resource.TestCheckResourceAttr("aws_instance.foo", "network_interface.#", "1"),
),
},
},
})
}
func testAccCheckInstanceNotRecreated(t *testing.T,
before, after *ec2.Instance) resource.TestCheckFunc {
return func(s *terraform.State) error {
@ -1281,6 +1401,180 @@ resource "aws_instance" "foo" {
}
`
const testAccCheckInstanceConfigWithAttachedVolume = `
data "aws_ami" "debian_jessie_latest" {
most_recent = true
filter {
name = "name"
values = ["debian-jessie-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
owners = ["379101102735"] # Debian
}
resource "aws_instance" "foo" {
ami = "${data.aws_ami.debian_jessie_latest.id}"
associate_public_ip_address = true
count = 1
instance_type = "t2.medium"
root_block_device {
volume_size = "10"
volume_type = "standard"
delete_on_termination = true
}
tags {
Name = "test-terraform"
}
}
resource "aws_ebs_volume" "test" {
depends_on = ["aws_instance.foo"]
availability_zone = "${aws_instance.foo.availability_zone}"
type = "gp2"
size = "10"
tags {
Name = "test-terraform"
}
}
resource "aws_volume_attachment" "test" {
depends_on = ["aws_ebs_volume.test"]
device_name = "/dev/xvdg"
volume_id = "${aws_ebs_volume.test.id}"
instance_id = "${aws_instance.foo.id}"
}
`
const testAccCheckInstanceConfigNoVolumeTags = `
resource "aws_instance" "foo" {
ami = "ami-55a7ea65"
instance_type = "m3.medium"
root_block_device {
volume_type = "gp2"
volume_size = 11
}
ebs_block_device {
device_name = "/dev/sdb"
volume_size = 9
}
ebs_block_device {
device_name = "/dev/sdc"
volume_size = 10
volume_type = "io1"
iops = 100
}
ebs_block_device {
device_name = "/dev/sdd"
volume_size = 12
encrypted = true
}
ephemeral_block_device {
device_name = "/dev/sde"
virtual_name = "ephemeral0"
}
}
`
const testAccCheckInstanceConfigWithVolumeTags = `
resource "aws_instance" "foo" {
ami = "ami-55a7ea65"
instance_type = "m3.medium"
root_block_device {
volume_type = "gp2"
volume_size = 11
}
ebs_block_device {
device_name = "/dev/sdb"
volume_size = 9
}
ebs_block_device {
device_name = "/dev/sdc"
volume_size = 10
volume_type = "io1"
iops = 100
}
ebs_block_device {
device_name = "/dev/sdd"
volume_size = 12
encrypted = true
}
ephemeral_block_device {
device_name = "/dev/sde"
virtual_name = "ephemeral0"
}
volume_tags {
Name = "acceptance-test-volume-tag"
}
}
`
const testAccCheckInstanceConfigWithVolumeTagsUpdate = `
resource "aws_instance" "foo" {
ami = "ami-55a7ea65"
instance_type = "m3.medium"
root_block_device {
volume_type = "gp2"
volume_size = 11
}
ebs_block_device {
device_name = "/dev/sdb"
volume_size = 9
}
ebs_block_device {
device_name = "/dev/sdc"
volume_size = 10
volume_type = "io1"
iops = 100
}
ebs_block_device {
device_name = "/dev/sdd"
volume_size = 12
encrypted = true
}
ephemeral_block_device {
device_name = "/dev/sde"
virtual_name = "ephemeral0"
}
volume_tags {
Name = "acceptance-test-volume-tag"
Environment = "dev"
}
}
`
const testAccCheckInstanceConfigTagsUpdate = `
resource "aws_instance" "foo" {
ami = "ami-4fccb37f"
@ -1536,3 +1830,129 @@ resource "aws_instance" "foo" {
subnet_id = "${aws_subnet.foo.id}"
}
`
const testAccInstanceConfigPrimaryNetworkInterface = `
resource "aws_vpc" "foo" {
cidr_block = "172.16.0.0/16"
tags {
Name = "tf-instance-test"
}
}
resource "aws_subnet" "foo" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
tags {
Name = "tf-instance-test"
}
}
resource "aws_network_interface" "bar" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.100"]
tags {
Name = "primary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-22b9a343"
instance_type = "t2.micro"
network_interface {
network_interface_id = "${aws_network_interface.bar.id}"
device_index = 0
}
}
`
const testAccInstanceConfigAddSecondaryNetworkInterfaceBefore = `
resource "aws_vpc" "foo" {
cidr_block = "172.16.0.0/16"
tags {
Name = "tf-instance-test"
}
}
resource "aws_subnet" "foo" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
tags {
Name = "tf-instance-test"
}
}
resource "aws_network_interface" "primary" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.100"]
tags {
Name = "primary_network_interface"
}
}
resource "aws_network_interface" "secondary" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.101"]
tags {
Name = "secondary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-22b9a343"
instance_type = "t2.micro"
network_interface {
network_interface_id = "${aws_network_interface.primary.id}"
device_index = 0
}
}
`
const testAccInstanceConfigAddSecondaryNetworkInterfaceAfter = `
resource "aws_vpc" "foo" {
cidr_block = "172.16.0.0/16"
tags {
Name = "tf-instance-test"
}
}
resource "aws_subnet" "foo" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
tags {
Name = "tf-instance-test"
}
}
resource "aws_network_interface" "primary" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.100"]
tags {
Name = "primary_network_interface"
}
}
// Attach previously created network interface, observe no state diff on instance resource
resource "aws_network_interface" "secondary" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.101"]
tags {
Name = "secondary_network_interface"
}
attachment {
instance = "${aws_instance.foo.id}"
device_index = 1
}
}
resource "aws_instance" "foo" {
ami = "ami-22b9a343"
instance_type = "t2.micro"
network_interface {
network_interface_id = "${aws_network_interface.primary.id}"
device_index = 0
}
}
`


@ -31,6 +31,25 @@ func TestAccAWSSNSTopicSubscription_basic(t *testing.T) {
})
}
func TestAccAWSSNSTopicSubscription_autoConfirmingEndpoint(t *testing.T) {
ri := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSSNSTopicSubscriptionConfig_autoConfirmingEndpoint(ri),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic"),
testAccCheckAWSSNSTopicSubscriptionExists("aws_sns_topic_subscription.test_subscription"),
),
},
},
})
}
func testAccCheckAWSSNSTopicSubscriptionDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).snsconn
@ -103,3 +122,126 @@ resource "aws_sns_topic_subscription" "test_subscription" {
}
`, i)
}
func testAccAWSSNSTopicSubscriptionConfig_autoConfirmingEndpoint(i int) string {
return fmt.Sprintf(`
resource "aws_sns_topic" "test_topic" {
name = "tf-acc-test-sns-%d"
}
resource "aws_api_gateway_rest_api" "test" {
name = "tf-acc-test-sns-%d"
description = "Terraform Acceptance test for SNS subscription"
}
resource "aws_api_gateway_method" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
http_method = "POST"
authorization = "NONE"
}
resource "aws_api_gateway_method_response" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
http_method = "${aws_api_gateway_method.test.http_method}"
status_code = "200"
response_parameters {
"method.response.header.Access-Control-Allow-Origin" = true
}
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
http_method = "${aws_api_gateway_method.test.http_method}"
integration_http_method = "POST"
type = "AWS"
uri = "${aws_lambda_function.lambda.invoke_arn}"
}
resource "aws_api_gateway_integration_response" "test" {
depends_on = ["aws_api_gateway_integration.test"]
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
http_method = "${aws_api_gateway_method.test.http_method}"
status_code = "${aws_api_gateway_method_response.test.status_code}"
response_parameters {
"method.response.header.Access-Control-Allow-Origin" = "'*'"
}
}
resource "aws_iam_role" "iam_for_lambda" {
name = "tf-acc-test-sns-%d"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "policy" {
name = "tf-acc-test-sns-%d"
role = "${aws_iam_role.iam_for_lambda.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.lambda.arn}"
principal = "apigateway.amazonaws.com"
source_arn = "${aws_api_gateway_deployment.test.execution_arn}/*"
}
resource "aws_lambda_function" "lambda" {
filename = "test-fixtures/lambda_confirm_sns.zip"
function_name = "tf-acc-test-sns-%d"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "main.confirm_subscription"
source_code_hash = "${base64sha256(file("test-fixtures/lambda_confirm_sns.zip"))}"
runtime = "python3.6"
}
resource "aws_api_gateway_deployment" "test" {
depends_on = ["aws_api_gateway_integration_response.test"]
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "acctest"
}
resource "aws_sns_topic_subscription" "test_subscription" {
depends_on = ["aws_lambda_permission.apigw_lambda"]
topic_arn = "${aws_sns_topic.test_topic.arn}"
protocol = "https"
endpoint = "${aws_api_gateway_deployment.test.invoke_url}"
endpoint_auto_confirms = true
}
`, i, i, i, i, i)
}


@ -0,0 +1,53 @@
package aws
import (
"sort"
"time"
"github.com/aws/aws-sdk-go/service/ec2"
)
type imageSort []*ec2.Image
type snapshotSort []*ec2.Snapshot
func (a imageSort) Len() int {
return len(a)
}
func (a imageSort) Swap(i, j int) {
a[i], a[j] = a[j], a[i]
}
func (a imageSort) Less(i, j int) bool {
itime, _ := time.Parse(time.RFC3339, *a[i].CreationDate)
jtime, _ := time.Parse(time.RFC3339, *a[j].CreationDate)
return itime.Unix() < jtime.Unix()
}
// Sort images by creation date, in descending order.
func sortImages(images []*ec2.Image) []*ec2.Image {
sortedImages := images
sort.Sort(sort.Reverse(imageSort(sortedImages)))
return sortedImages
}
func (a snapshotSort) Len() int {
return len(a)
}
func (a snapshotSort) Swap(i, j int) {
a[i], a[j] = a[j], a[i]
}
func (a snapshotSort) Less(i, j int) bool {
itime := *a[i].StartTime
jtime := *a[j].StartTime
return itime.Unix() < jtime.Unix()
}
// Sort snapshots by creation date, in descending order.
func sortSnapshots(snapshots []*ec2.Snapshot) []*ec2.Snapshot {
sortedSnapshots := snapshots
sort.Sort(sort.Reverse(snapshotSort(sortedSnapshots)))
return sortedSnapshots
}


@ -69,6 +69,63 @@ func setElbV2Tags(conn *elbv2.ELBV2, d *schema.ResourceData) error {
return nil
}
func setVolumeTags(conn *ec2.EC2, d *schema.ResourceData) error {
if d.HasChange("volume_tags") {
oraw, nraw := d.GetChange("volume_tags")
o := oraw.(map[string]interface{})
n := nraw.(map[string]interface{})
create, remove := diffTags(tagsFromMap(o), tagsFromMap(n))
volumeIds, err := getAwsInstanceVolumeIds(conn, d)
if err != nil {
return err
}
if len(remove) > 0 {
err := resource.Retry(2*time.Minute, func() *resource.RetryError {
log.Printf("[DEBUG] Removing volume tags: %#v from %s", remove, d.Id())
_, err := conn.DeleteTags(&ec2.DeleteTagsInput{
Resources: volumeIds,
Tags: remove,
})
if err != nil {
ec2err, ok := err.(awserr.Error)
if ok && strings.Contains(ec2err.Code(), ".NotFound") {
return resource.RetryableError(err) // retry
}
return resource.NonRetryableError(err)
}
return nil
})
if err != nil {
return err
}
}
if len(create) > 0 {
err := resource.Retry(2*time.Minute, func() *resource.RetryError {
log.Printf("[DEBUG] Creating vol tags: %s for %s", create, d.Id())
_, err := conn.CreateTags(&ec2.CreateTagsInput{
Resources: volumeIds,
Tags: create,
})
if err != nil {
ec2err, ok := err.(awserr.Error)
if ok && strings.Contains(ec2err.Code(), ".NotFound") {
return resource.RetryableError(err) // retry
}
return resource.NonRetryableError(err)
}
return nil
})
if err != nil {
return err
}
}
}
return nil
}
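`setVolumeTags` leans on `diffTags` and `tagsFromMap` (defined elsewhere in this package) to split the change into a create set and a remove set. The underlying computation can be sketched with plain string maps (a simplified stand-in; the real helpers operate on `[]*ec2.Tag`):

```go
package main

import "fmt"

// diffStringMaps is a simplified stand-in for the provider's diffTags helper:
// tags that are new or changed in the desired map are created, and tags from
// the old map that are not carried over unchanged are removed.
func diffStringMaps(oldTags, newTags map[string]string) (create, remove map[string]string) {
	create = map[string]string{}
	remove = map[string]string{}
	for k, v := range newTags {
		if old, ok := oldTags[k]; !ok || old != v {
			create[k] = v
		}
	}
	for k, v := range oldTags {
		if desired, ok := newTags[k]; !ok || desired != v {
			remove[k] = v
		}
	}
	return create, remove
}

func main() {
	create, remove := diffStringMaps(
		map[string]string{"Name": "acceptance-test-volume-tag", "Environment": "dev"},
		map[string]string{"Name": "acceptance-test-volume-tag"},
	)
	// Dropping volume_tags.Environment yields nothing to create, one tag to remove.
	fmt.Println(len(create), len(remove)) // 0 1
}
```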
// setTags is a helper to set the tags for a resource. It expects the
// tags field to be named "tags"
func setTags(conn *ec2.EC2, d *schema.ResourceData) error {


@ -177,8 +177,9 @@ func resourceArmVirtualMachine() *schema.Resource {
},
"create_option": {
-Type:     schema.TypeString,
-Required: true,
+Type:             schema.TypeString,
+Required:         true,
+DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
},
"disk_size_gb": {
@ -232,8 +233,9 @@ func resourceArmVirtualMachine() *schema.Resource {
},
"create_option": {
-Type:     schema.TypeString,
-Required: true,
+Type:             schema.TypeString,
+Required:         true,
+DiffSuppressFunc: ignoreCaseDiffSuppressFunc,
},
"caching": {


@ -0,0 +1,41 @@
package dnsimple
import (
"fmt"
"os"
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDnsimpleRecord_import(t *testing.T) {
resourceName := "dnsimple_record.foobar"
domain := os.Getenv("DNSIMPLE_DOMAIN")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDNSimpleRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckDNSimpleRecordConfig_import, domain),
},
resource.TestStep{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
ImportStateIdPrefix: fmt.Sprintf("%s_", domain),
},
},
})
}
const testAccCheckDNSimpleRecordConfig_import = `
resource "dnsimple_record" "foobar" {
domain = "%s"
name = "terraform"
value = "192.168.0.10"
type = "A"
ttl = 3600
}`


@ -16,6 +16,9 @@ func resourceDNSimpleRecord() *schema.Resource {
Read: resourceDNSimpleRecordRead,
Update: resourceDNSimpleRecordUpdate,
Delete: resourceDNSimpleRecordDelete,
Importer: &schema.ResourceImporter{
State: resourceDNSimpleRecordImport,
},
Schema: map[string]*schema.Schema{
"domain": {
@ -184,3 +187,19 @@ func resourceDNSimpleRecordDelete(d *schema.ResourceData, meta interface{}) erro
return nil
}
func resourceDNSimpleRecordImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
parts := strings.Split(d.Id(), "_")
if len(parts) != 2 {
return nil, fmt.Errorf("Error importing dnsimple_record. Please make sure the record ID is in the form DOMAIN_RECORDID (e.g. example.com_1234)")
}
d.SetId(parts[1])
d.Set("domain", parts[0])
if err := resourceDNSimpleRecordRead(d, meta); err != nil {
return nil, err
}
return []*schema.ResourceData{d}, nil
}
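With this importer in place, an existing record can be brought under management by joining the domain and the numeric record ID with an underscore, matching the `ImportStateIdPrefix` used in the acceptance test above (the record ID below is illustrative):

```shell
terraform import dnsimple_record.foobar example.com_1234
```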


@ -2010,7 +2010,7 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error {
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up GCS for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up GCS for (%s), version (%v): %s", d.Id(), s.ActiveVersion.Number, err)
}
gcsl := flattenGCS(GCSList)


@ -75,7 +75,7 @@ func testAccCheckFastlyServiceV1CacheSettingsAttributes(service *gofastly.Servic
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Request Setting for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Request Setting for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(rqList) != len(rqs) {


@ -57,7 +57,7 @@ func testAccCheckFastlyServiceV1ConditionalAttributes(service *gofastly.ServiceD
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Conditions for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Conditions for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(conditionList) != len(conditions) {


@ -85,7 +85,7 @@ func testAccCheckFastlyServiceV1Attributes_gcs(service *gofastly.ServiceDetail,
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up GCSs for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up GCSs for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(gcsList) != 1 {


@ -169,7 +169,7 @@ func testAccCheckFastlyServiceV1GzipsAttributes(service *gofastly.ServiceDetail,
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Gzips for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Gzips for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(gzipsList) != len(gzips) {


@ -163,7 +163,7 @@ func testAccCheckFastlyServiceV1HeaderAttributes(service *gofastly.ServiceDetail
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Headers for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Headers for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(headersList) != len(headers) {


@ -88,7 +88,7 @@ func testAccCheckFastlyServiceV1HealthCheckAttributes(service *gofastly.ServiceD
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Healthcheck for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Healthcheck for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(healthcheckList) != len(healthchecks) {


@ -75,7 +75,7 @@ func testAccCheckFastlyServiceV1PapertrailAttributes(service *gofastly.ServiceDe
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Papertrail for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Papertrail for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(papertrailList) != len(papertrails) {


@ -56,7 +56,7 @@ func testAccCheckFastlyServiceV1RequestSettingsAttributes(service *gofastly.Serv
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Request Setting for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Request Setting for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(rqList) != len(rqs) {


@ -80,7 +80,7 @@ func testAccCheckFastlyServiceV1ResponseObjectAttributes(service *gofastly.Servi
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Response Object for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Response Object for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(responseObjectList) != len(responseObjects) {


@ -171,7 +171,7 @@ func testAccCheckFastlyServiceV1S3LoggingAttributes(service *gofastly.ServiceDet
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up S3 Logging for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up S3 Logging for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(s3List) != len(s3s) {


@ -83,7 +83,7 @@ func testAccCheckFastlyServiceV1Attributes_sumologic(service *gofastly.ServiceDe
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Sumologics for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Sumologics for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(sumologicList) != 1 {


@ -305,7 +305,7 @@ func testAccCheckFastlyServiceV1Attributes(service *gofastly.ServiceDetail, name
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Domains for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Domains for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
expected := len(domains)
@ -339,7 +339,7 @@ func testAccCheckFastlyServiceV1Attributes_backends(service *gofastly.ServiceDet
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up Backends for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up Backends for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
expected := len(backendList)


@ -61,7 +61,7 @@ func testAccCheckFastlyServiceV1VCLAttributes(service *gofastly.ServiceDetail, n
})
if err != nil {
-return fmt.Errorf("[ERR] Error looking up VCL for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
+return fmt.Errorf("[ERR] Error looking up VCL for (%s), version (%v): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(vclList) != vclCount {


@ -0,0 +1,31 @@
package gitlab
import (
"github.com/xanzy/go-gitlab"
)
// Config is per-provider, specifies where to connect to gitlab
type Config struct {
Token string
BaseURL string
}
// Client returns a *gitlab.Client to interact with the configured gitlab instance
func (c *Config) Client() (interface{}, error) {
client := gitlab.NewClient(nil, c.Token)
if c.BaseURL != "" {
err := client.SetBaseURL(c.BaseURL)
if err != nil {
// The BaseURL supplied wasn't valid, bail.
return nil, err
}
}
// Test the credentials by checking we can get information about the authenticated user.
_, _, err := client.Users.CurrentUser()
if err != nil {
return nil, err
}
return client, nil
}


@ -0,0 +1,52 @@
package gitlab
import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
// Provider returns a terraform.ResourceProvider.
func Provider() terraform.ResourceProvider {
// The actual provider
return &schema.Provider{
Schema: map[string]*schema.Schema{
"token": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("GITLAB_TOKEN", nil),
Description: descriptions["token"],
},
"base_url": {
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("GITLAB_BASE_URL", ""),
Description: descriptions["base_url"],
},
},
ResourcesMap: map[string]*schema.Resource{
"gitlab_project": resourceGitlabProject(),
},
ConfigureFunc: providerConfigure,
}
}
var descriptions map[string]string
func init() {
descriptions = map[string]string{
"token": "The OAuth token used to connect to GitLab.",
"base_url": "The GitLab Base API URL",
}
}
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
config := Config{
Token: d.Get("token").(string),
BaseURL: d.Get("base_url").(string),
}
return config.Client()
}


@ -0,0 +1,35 @@
package gitlab
import (
"os"
"testing"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
var testAccProviders map[string]terraform.ResourceProvider
var testAccProvider *schema.Provider
func init() {
testAccProvider = Provider().(*schema.Provider)
testAccProviders = map[string]terraform.ResourceProvider{
"gitlab": testAccProvider,
}
}
func TestProvider(t *testing.T) {
if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvider_impl(t *testing.T) {
var _ terraform.ResourceProvider = Provider()
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("GITLAB_TOKEN"); v == "" {
t.Fatal("GITLAB_TOKEN must be set for acceptance tests")
}
}


@ -0,0 +1,192 @@
package gitlab
import (
"fmt"
"log"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
gitlab "github.com/xanzy/go-gitlab"
)
func resourceGitlabProject() *schema.Resource {
return &schema.Resource{
Create: resourceGitlabProjectCreate,
Read: resourceGitlabProjectRead,
Update: resourceGitlabProjectUpdate,
Delete: resourceGitlabProjectDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"default_branch": {
Type: schema.TypeString,
Optional: true,
},
"issues_enabled": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"merge_requests_enabled": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"wiki_enabled": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"snippets_enabled": {
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"visibility_level": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{"private", "internal", "public"}, true),
Default: "private",
},
"ssh_url_to_repo": {
Type: schema.TypeString,
Computed: true,
},
"http_url_to_repo": {
Type: schema.TypeString,
Computed: true,
},
"web_url": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func resourceGitlabProjectSetToState(d *schema.ResourceData, project *gitlab.Project) {
d.Set("name", project.Name)
d.Set("description", project.Description)
d.Set("default_branch", project.DefaultBranch)
d.Set("issues_enabled", project.IssuesEnabled)
d.Set("merge_requests_enabled", project.MergeRequestsEnabled)
d.Set("wiki_enabled", project.WikiEnabled)
d.Set("snippets_enabled", project.SnippetsEnabled)
d.Set("visibility_level", visibilityLevelToString(project.VisibilityLevel))
d.Set("ssh_url_to_repo", project.SSHURLToRepo)
d.Set("http_url_to_repo", project.HTTPURLToRepo)
d.Set("web_url", project.WebURL)
}
func resourceGitlabProjectCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
options := &gitlab.CreateProjectOptions{
Name: gitlab.String(d.Get("name").(string)),
IssuesEnabled: gitlab.Bool(d.Get("issues_enabled").(bool)),
MergeRequestsEnabled: gitlab.Bool(d.Get("merge_requests_enabled").(bool)),
WikiEnabled: gitlab.Bool(d.Get("wiki_enabled").(bool)),
SnippetsEnabled: gitlab.Bool(d.Get("snippets_enabled").(bool)),
}
if v, ok := d.GetOk("description"); ok {
options.Description = gitlab.String(v.(string))
}
if v, ok := d.GetOk("visibility_level"); ok {
options.VisibilityLevel = stringToVisibilityLevel(v.(string))
}
log.Printf("[DEBUG] create gitlab project %q", options.Name)
project, _, err := client.Projects.CreateProject(options)
if err != nil {
return err
}
d.SetId(fmt.Sprintf("%d", project.ID))
return resourceGitlabProjectRead(d, meta)
}
func resourceGitlabProjectRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
log.Printf("[DEBUG] read gitlab project %s", d.Id())
project, response, err := client.Projects.GetProject(d.Id())
if err != nil {
if response.StatusCode == 404 {
log.Printf("[WARN] removing project %s from state because it no longer exists in gitlab", d.Id())
d.SetId("")
return nil
}
return err
}
resourceGitlabProjectSetToState(d, project)
return nil
}
func resourceGitlabProjectUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
options := &gitlab.EditProjectOptions{}
if d.HasChange("name") {
options.Name = gitlab.String(d.Get("name").(string))
}
if d.HasChange("description") {
options.Description = gitlab.String(d.Get("description").(string))
}
if d.HasChange("default_branch") {
options.DefaultBranch = gitlab.String(d.Get("default_branch").(string))
}
if d.HasChange("visibility_level") {
options.VisibilityLevel = stringToVisibilityLevel(d.Get("visibility_level").(string))
}
if d.HasChange("issues_enabled") {
options.IssuesEnabled = gitlab.Bool(d.Get("issues_enabled").(bool))
}
if d.HasChange("merge_requests_enabled") {
options.MergeRequestsEnabled = gitlab.Bool(d.Get("merge_requests_enabled").(bool))
}
if d.HasChange("wiki_enabled") {
options.WikiEnabled = gitlab.Bool(d.Get("wiki_enabled").(bool))
}
if d.HasChange("snippets_enabled") {
options.SnippetsEnabled = gitlab.Bool(d.Get("snippets_enabled").(bool))
}
log.Printf("[DEBUG] update gitlab project %s", d.Id())
_, _, err := client.Projects.EditProject(d.Id(), options)
if err != nil {
return err
}
return resourceGitlabProjectRead(d, meta)
}
func resourceGitlabProjectDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*gitlab.Client)
log.Printf("[DEBUG] Delete gitlab project %s", d.Id())
_, err := client.Projects.DeleteProject(d.Id())
return err
}


@ -0,0 +1,191 @@
package gitlab
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"github.com/xanzy/go-gitlab"
)
func TestAccGitlabProject_basic(t *testing.T) {
var project gitlab.Project
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckGitlabProjectDestroy,
Steps: []resource.TestStep{
// Create a project with all the features on
{
Config: testAccGitlabProjectConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectExists("gitlab_project.foo", &project),
testAccCheckGitlabProjectAttributes(&project, &testAccGitlabProjectExpectedAttributes{
Name: fmt.Sprintf("foo-%d", rInt),
Description: "Terraform acceptance tests",
IssuesEnabled: true,
MergeRequestsEnabled: true,
WikiEnabled: true,
SnippetsEnabled: true,
VisibilityLevel: 20,
}),
),
},
// Update the project to turn the features off
{
Config: testAccGitlabProjectUpdateConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectExists("gitlab_project.foo", &project),
testAccCheckGitlabProjectAttributes(&project, &testAccGitlabProjectExpectedAttributes{
Name: fmt.Sprintf("foo-%d", rInt),
Description: "Terraform acceptance tests!",
VisibilityLevel: 20,
}),
),
},
// Update the project to turn the features on again
{
Config: testAccGitlabProjectConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckGitlabProjectExists("gitlab_project.foo", &project),
testAccCheckGitlabProjectAttributes(&project, &testAccGitlabProjectExpectedAttributes{
Name: fmt.Sprintf("foo-%d", rInt),
Description: "Terraform acceptance tests",
IssuesEnabled: true,
MergeRequestsEnabled: true,
WikiEnabled: true,
SnippetsEnabled: true,
VisibilityLevel: 20,
}),
),
},
},
})
}
func testAccCheckGitlabProjectExists(n string, project *gitlab.Project) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not Found: %s", n)
}
repoName := rs.Primary.ID
if repoName == "" {
return fmt.Errorf("No project ID is set")
}
conn := testAccProvider.Meta().(*gitlab.Client)
gotProject, _, err := conn.Projects.GetProject(repoName)
if err != nil {
return err
}
*project = *gotProject
return nil
}
}
type testAccGitlabProjectExpectedAttributes struct {
Name string
Description string
DefaultBranch string
IssuesEnabled bool
MergeRequestsEnabled bool
WikiEnabled bool
SnippetsEnabled bool
VisibilityLevel gitlab.VisibilityLevelValue
}
func testAccCheckGitlabProjectAttributes(project *gitlab.Project, want *testAccGitlabProjectExpectedAttributes) resource.TestCheckFunc {
return func(s *terraform.State) error {
if project.Name != want.Name {
return fmt.Errorf("got repo %q; want %q", project.Name, want.Name)
}
if project.Description != want.Description {
return fmt.Errorf("got description %q; want %q", project.Description, want.Description)
}
if project.DefaultBranch != want.DefaultBranch {
return fmt.Errorf("got default_branch %q; want %q", project.DefaultBranch, want.DefaultBranch)
}
if project.IssuesEnabled != want.IssuesEnabled {
return fmt.Errorf("got issues_enabled %t; want %t", project.IssuesEnabled, want.IssuesEnabled)
}
if project.MergeRequestsEnabled != want.MergeRequestsEnabled {
return fmt.Errorf("got merge_requests_enabled %t; want %t", project.MergeRequestsEnabled, want.MergeRequestsEnabled)
}
if project.WikiEnabled != want.WikiEnabled {
return fmt.Errorf("got wiki_enabled %t; want %t", project.WikiEnabled, want.WikiEnabled)
}
if project.SnippetsEnabled != want.SnippetsEnabled {
return fmt.Errorf("got snippets_enabled %t; want %t", project.SnippetsEnabled, want.SnippetsEnabled)
}
if project.VisibilityLevel != want.VisibilityLevel {
return fmt.Errorf("got visibility_level %v; want %v", project.VisibilityLevel, want.VisibilityLevel)
}
return nil
}
}
func testAccCheckGitlabProjectDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*gitlab.Client)
for _, rs := range s.RootModule().Resources {
if rs.Type != "gitlab_project" {
continue
}
gotRepo, resp, err := conn.Projects.GetProject(rs.Primary.ID)
if err == nil {
if gotRepo != nil && fmt.Sprintf("%d", gotRepo.ID) == rs.Primary.ID {
return fmt.Errorf("Repository still exists")
}
}
if resp == nil || resp.StatusCode != 404 {
return err
}
}
return nil
}
func testAccGitlabProjectConfig(rInt int) string {
return fmt.Sprintf(`
resource "gitlab_project" "foo" {
name = "foo-%d"
description = "Terraform acceptance tests"
# So that acceptance tests can be run in a gitlab organization
# with no billing
visibility_level = "public"
}
`, rInt)
}
func testAccGitlabProjectUpdateConfig(rInt int) string {
return fmt.Sprintf(`
resource "gitlab_project" "foo" {
name = "foo-%d"
description = "Terraform acceptance tests!"
# So that acceptance tests can be run in a gitlab organization
# with no billing
visibility_level = "public"
issues_enabled = false
merge_requests_enabled = false
wiki_enabled = false
snippets_enabled = false
}
`, rInt)
}


@ -0,0 +1,54 @@
package gitlab
import (
"fmt"
"github.com/hashicorp/terraform/helper/schema"
gitlab "github.com/xanzy/go-gitlab"
)
// copied from ../github/util.go
func validateValueFunc(values []string) schema.SchemaValidateFunc {
return func(v interface{}, k string) (we []string, errors []error) {
value := v.(string)
valid := false
for _, role := range values {
if value == role {
valid = true
break
}
}
if !valid {
errors = append(errors, fmt.Errorf("%s is an invalid value for argument %s", value, k))
}
return
}
}
func stringToVisibilityLevel(s string) *gitlab.VisibilityLevelValue {
lookup := map[string]gitlab.VisibilityLevelValue{
"private": gitlab.PrivateVisibility,
"internal": gitlab.InternalVisibility,
"public": gitlab.PublicVisibility,
}
value, ok := lookup[s]
if !ok {
return nil
}
return &value
}
func visibilityLevelToString(v gitlab.VisibilityLevelValue) *string {
lookup := map[gitlab.VisibilityLevelValue]string{
gitlab.PrivateVisibility: "private",
gitlab.InternalVisibility: "internal",
gitlab.PublicVisibility: "public",
}
value, ok := lookup[v]
if !ok {
return nil
}
return &value
}
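
The two helpers above are inverse lookup tables that signal an unknown value by returning nil. The same pattern can be sketched standalone, independent of go-gitlab (the type and names below are hypothetical, not part of the provider):

```go
package main

import "fmt"

// Level stands in for gitlab.VisibilityLevelValue in this sketch.
type Level int

const (
	Private  Level = 0
	Internal Level = 10
	Public   Level = 20
)

// levelFromString mirrors stringToVisibilityLevel: nil means "unknown input".
func levelFromString(s string) *Level {
	lookup := map[string]Level{"private": Private, "internal": Internal, "public": Public}
	if v, ok := lookup[s]; ok {
		return &v // v is a fresh local, so taking its address is safe
	}
	return nil
}

// levelToString is the inverse lookup, mirroring visibilityLevelToString.
func levelToString(l Level) *string {
	lookup := map[Level]string{Private: "private", Internal: "internal", Public: "public"}
	if s, ok := lookup[l]; ok {
		return &s
	}
	return nil
}

func main() {
	if l := levelFromString("internal"); l != nil {
		fmt.Println(*levelToString(*l)) // round-trips back to "internal"
	}
	fmt.Println(levelFromString("bogus") == nil) // unknown values yield nil
}
```

Returning a pointer rather than a zero value lets callers distinguish "not found" from a legitimate zero level, which is why the resource code checks `ok` before dereferencing.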


@ -0,0 +1,65 @@
package gitlab
import (
"testing"
"github.com/xanzy/go-gitlab"
)
func TestGitlab_validation(t *testing.T) {
cases := []struct {
Value string
ErrCount int
}{
{
Value: "invalid",
ErrCount: 1,
},
{
Value: "valid_one",
ErrCount: 0,
},
{
Value: "valid_two",
ErrCount: 0,
},
}
validationFunc := validateValueFunc([]string{"valid_one", "valid_two"})
for _, tc := range cases {
_, errors := validationFunc(tc.Value, "test_arg")
if len(errors) != tc.ErrCount {
t.Fatalf("expected %d validation errors for %q, got %d", tc.ErrCount, tc.Value, len(errors))
}
}
}
func TestGitlab_visibilityHelpers(t *testing.T) {
cases := []struct {
String string
Level gitlab.VisibilityLevelValue
}{
{
String: "private",
Level: gitlab.PrivateVisibility,
},
{
String: "public",
Level: gitlab.PublicVisibility,
},
}
for _, tc := range cases {
level := stringToVisibilityLevel(tc.String)
if level == nil || *level != tc.Level {
t.Fatalf("got %v expected %v", level, tc.Level)
}
sv := visibilityLevelToString(tc.Level)
if sv == nil || *sv != tc.String {
t.Fatalf("got %v expected %v", sv, tc.String)
}
}
}


@ -90,6 +90,7 @@ func resourceComputeForwardingRule() *schema.Resource {
Optional: true,
ForceNew: true,
Set: schema.HashString,
MaxItems: 5,
},
"project": &schema.Schema{


@ -28,7 +28,7 @@ func resourceComputeInstance() *schema.Resource {
Schema: map[string]*schema.Schema{
"disk": &schema.Schema{
Type: schema.TypeList,
Required: true,
Optional: true,
ForceNew: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
@ -91,6 +91,40 @@ func resourceComputeInstance() *schema.Resource {
},
},
// Preferred way of adding persistent disks to an instance.
// Use this instead of `disk` when possible.
"attached_disk": &schema.Schema{
Type: schema.TypeList,
Optional: true,
ForceNew: true, // TODO(danawillow): Remove this, support attaching/detaching
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"source": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"device_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"disk_encryption_key_raw": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Sensitive: true,
ForceNew: true,
},
"disk_encryption_key_sha256": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
},
},
},
"machine_type": &schema.Schema{
Type: schema.TypeString,
Required: true,
@ -371,7 +405,11 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err
// Build up the list of disks
disksCount := d.Get("disk.#").(int)
disks := make([]*compute.AttachedDisk, 0, disksCount)
attachedDisksCount := d.Get("attached_disk.#").(int)
if disksCount+attachedDisksCount == 0 {
return fmt.Errorf("At least one disk or attached_disk must be set")
}
disks := make([]*compute.AttachedDisk, 0, disksCount+attachedDisksCount)
for i := 0; i < disksCount; i++ {
prefix := fmt.Sprintf("disk.%d", i)
@ -457,6 +495,28 @@ func resourceComputeInstanceCreate(d *schema.ResourceData, meta interface{}) err
disks = append(disks, &disk)
}
for i := 0; i < attachedDisksCount; i++ {
prefix := fmt.Sprintf("attached_disk.%d", i)
disk := compute.AttachedDisk{
Source: d.Get(prefix + ".source").(string),
AutoDelete: false, // Don't allow autodelete; let terraform handle disk deletion
}
disk.Boot = i == 0 && disksCount == 0 // TODO(danawillow): This is super hacky, let's just add a boot field.
if v, ok := d.GetOk(prefix + ".device_name"); ok {
disk.DeviceName = v.(string)
}
if v, ok := d.GetOk(prefix + ".disk_encryption_key_raw"); ok {
disk.DiskEncryptionKey = &compute.CustomerEncryptionKey{
RawKey: v.(string),
}
}
disks = append(disks, &disk)
}
networksCount := d.Get("network.#").(int)
networkInterfacesCount := d.Get("network_interface.#").(int)
@ -791,24 +851,54 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error
d.Set("tags_fingerprint", instance.Tags.Fingerprint)
}
disks := make([]map[string]interface{}, 0, 1)
for i, disk := range instance.Disks {
di := map[string]interface{}{
"disk": d.Get(fmt.Sprintf("disk.%d.disk", i)),
"image": d.Get(fmt.Sprintf("disk.%d.image", i)),
"type": d.Get(fmt.Sprintf("disk.%d.type", i)),
"scratch": d.Get(fmt.Sprintf("disk.%d.scratch", i)),
"auto_delete": d.Get(fmt.Sprintf("disk.%d.auto_delete", i)),
"size": d.Get(fmt.Sprintf("disk.%d.size", i)),
"device_name": d.Get(fmt.Sprintf("disk.%d.device_name", i)),
"disk_encryption_key_raw": d.Get(fmt.Sprintf("disk.%d.disk_encryption_key_raw", i)),
disksCount := d.Get("disk.#").(int)
attachedDisksCount := d.Get("attached_disk.#").(int)
disks := make([]map[string]interface{}, 0, disksCount)
attachedDisks := make([]map[string]interface{}, 0, attachedDisksCount)
if expectedDisks := disksCount + attachedDisksCount; len(instance.Disks) != expectedDisks {
return fmt.Errorf("Expected %d disks, API returned %d", expectedDisks, len(instance.Disks))
}
attachedDiskSources := make(map[string]struct{}, attachedDisksCount)
for i := 0; i < attachedDisksCount; i++ {
attachedDiskSources[d.Get(fmt.Sprintf("attached_disk.%d.source", i)).(string)] = struct{}{}
}
dIndex := 0
adIndex := 0
for _, disk := range instance.Disks {
if _, ok := attachedDiskSources[disk.Source]; !ok {
di := map[string]interface{}{
"disk": d.Get(fmt.Sprintf("disk.%d.disk", dIndex)),
"image": d.Get(fmt.Sprintf("disk.%d.image", dIndex)),
"type": d.Get(fmt.Sprintf("disk.%d.type", dIndex)),
"scratch": d.Get(fmt.Sprintf("disk.%d.scratch", dIndex)),
"auto_delete": d.Get(fmt.Sprintf("disk.%d.auto_delete", dIndex)),
"size": d.Get(fmt.Sprintf("disk.%d.size", dIndex)),
"device_name": d.Get(fmt.Sprintf("disk.%d.device_name", dIndex)),
"disk_encryption_key_raw": d.Get(fmt.Sprintf("disk.%d.disk_encryption_key_raw", dIndex)),
}
if disk.DiskEncryptionKey != nil && disk.DiskEncryptionKey.Sha256 != "" {
di["disk_encryption_key_sha256"] = disk.DiskEncryptionKey.Sha256
}
disks = append(disks, di)
dIndex++
} else {
di := map[string]interface{}{
"source": disk.Source,
"device_name": disk.DeviceName,
"disk_encryption_key_raw": d.Get(fmt.Sprintf("attached_disk.%d.disk_encryption_key_raw", adIndex)),
}
if disk.DiskEncryptionKey != nil && disk.DiskEncryptionKey.Sha256 != "" {
di["disk_encryption_key_sha256"] = disk.DiskEncryptionKey.Sha256
}
attachedDisks = append(attachedDisks, di)
adIndex++
}
if disk.DiskEncryptionKey != nil && disk.DiskEncryptionKey.Sha256 != "" {
di["disk_encryption_key_sha256"] = disk.DiskEncryptionKey.Sha256
}
disks = append(disks, di)
}
d.Set("disk", disks)
d.Set("attached_disk", attachedDisks)
d.Set("self_link", instance.SelfLink)
d.SetId(instance.Name)
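
The read logic above partitions the API's disk list by membership in a set of configured `attached_disk` sources, keeping two independent indexes. The core pattern can be sketched standalone (function and variable names here are hypothetical):

```go
package main

import "fmt"

// partitionBySource splits disks into (managed, attached) based on whether a
// disk's source appears in the attached set — the same shape as the
// resourceComputeInstanceRead loop above.
func partitionBySource(all []string, attachedSources []string) (disks, attached []string) {
	// Build a set for O(1) membership tests, as the read code does
	// with attachedDiskSources.
	set := make(map[string]struct{}, len(attachedSources))
	for _, s := range attachedSources {
		set[s] = struct{}{}
	}
	for _, d := range all {
		if _, ok := set[d]; ok {
			attached = append(attached, d)
		} else {
			disks = append(disks, d)
		}
	}
	return disks, attached
}

func main() {
	disks, attached := partitionBySource(
		[]string{"boot-disk", "data-disk"},
		[]string{"data-disk"},
	)
	fmt.Println(disks, attached) // [boot-disk] [data-disk]
}
```

Because both output slices preserve the API's ordering, the real code can walk `dIndex` and `adIndex` in step with the state's `disk.N` and `attached_disk.N` entries.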


@ -27,7 +27,7 @@ func TestAccComputeInstanceTemplate_basic(t *testing.T) {
"google_compute_instance_template.foobar", &instanceTemplate),
testAccCheckComputeInstanceTemplateTag(&instanceTemplate, "foo"),
testAccCheckComputeInstanceTemplateMetadata(&instanceTemplate, "foo", "bar"),
testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20160803", true, true),
testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "projects/debian-cloud/global/images/debian-8-jessie-v20160803", true, true),
),
},
},
@ -67,7 +67,7 @@ func TestAccComputeInstanceTemplate_disks(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeInstanceTemplateExists(
"google_compute_instance_template.foobar", &instanceTemplate),
testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20160803", true, true),
testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "projects/debian-cloud/global/images/debian-8-jessie-v20160803", true, true),
testAccCheckComputeInstanceTemplateDisk(&instanceTemplate, "terraform-test-foobar", false, false),
),
},


@ -244,6 +244,44 @@ func TestAccComputeInstance_diskEncryption(t *testing.T) {
})
}
func TestAccComputeInstance_attachedDisk(t *testing.T) {
var instance compute.Instance
var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10))
var diskName = fmt.Sprintf("instance-testd-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeInstance_attachedDisk(diskName, instanceName),
Check: resource.ComposeTestCheckFunc(
testAccCheckComputeInstanceExists(
"google_compute_instance.foobar", &instance),
testAccCheckComputeInstanceDisk(&instance, diskName, false, true),
),
},
},
})
}
func TestAccComputeInstance_noDisk(t *testing.T) {
var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeInstanceDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccComputeInstance_noDisk(instanceName),
ExpectError: regexp.MustCompile("At least one disk or attached_disk must be set"),
},
},
})
}
func TestAccComputeInstance_local_ssd(t *testing.T) {
var instance compute.Instance
var instanceName = fmt.Sprintf("instance-test-%s", acctest.RandString(10))
@ -1121,6 +1159,51 @@ func testAccComputeInstance_disks_encryption(disk, instance string) string {
}`, disk, instance)
}
func testAccComputeInstance_attachedDisk(disk, instance string) string {
return fmt.Sprintf(`
resource "google_compute_disk" "foobar" {
name = "%s"
size = 10
type = "pd-ssd"
zone = "us-central1-a"
}
resource "google_compute_instance" "foobar" {
name = "%s"
machine_type = "n1-standard-1"
zone = "us-central1-a"
attached_disk {
source = "${google_compute_disk.foobar.self_link}"
}
network_interface {
network = "default"
}
metadata {
foo = "bar"
}
}`, disk, instance)
}
func testAccComputeInstance_noDisk(instance string) string {
return fmt.Sprintf(`
resource "google_compute_instance" "foobar" {
name = "%s"
machine_type = "n1-standard-1"
zone = "us-central1-a"
network_interface {
network = "default"
}
metadata {
foo = "bar"
}
}`, instance)
}
func testAccComputeInstance_local_ssd(instance string) string {
return fmt.Sprintf(`
resource "google_compute_instance" "local-ssd" {


@ -408,14 +408,14 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er
addonsConfig := v.([]interface{})[0].(map[string]interface{})
cluster.AddonsConfig = &container.AddonsConfig{}
if v, ok := addonsConfig["http_load_balancing"]; ok {
if v, ok := addonsConfig["http_load_balancing"]; ok && len(v.([]interface{})) > 0 {
addon := v.([]interface{})[0].(map[string]interface{})
cluster.AddonsConfig.HttpLoadBalancing = &container.HttpLoadBalancing{
Disabled: addon["disabled"].(bool),
}
}
if v, ok := addonsConfig["horizontal_pod_autoscaling"]; ok {
if v, ok := addonsConfig["horizontal_pod_autoscaling"]; ok && len(v.([]interface{})) > 0 {
addon := v.([]interface{})[0].(map[string]interface{})
cluster.AddonsConfig.HorizontalPodAutoscaling = &container.HorizontalPodAutoscaling{
Disabled: addon["disabled"].(bool),


@ -403,7 +403,7 @@ var testAccContainerCluster_withVersion = fmt.Sprintf(`
resource "google_container_cluster" "with_version" {
name = "cluster-test-%s"
zone = "us-central1-a"
node_version = "1.6.0"
node_version = "1.6.1"
initial_node_count = 1
master_auth {


@ -25,11 +25,12 @@ func Provider() terraform.ResourceProvider {
},
ResourcesMap: map[string]*schema.Resource{
"heroku_app": resourceHerokuApp(),
"heroku_addon": resourceHerokuAddon(),
"heroku_app": resourceHerokuApp(),
"heroku_cert": resourceHerokuCert(),
"heroku_domain": resourceHerokuDomain(),
"heroku_drain": resourceHerokuDrain(),
"heroku_cert": resourceHerokuCert(),
"heroku_space": resourceHerokuSpace(),
},
ConfigureFunc: providerConfigure,


@ -16,6 +16,7 @@ import (
type herokuApplication struct {
Name string
Region string
Space string
Stack string
GitURL string
WebURL string
@ -30,6 +31,7 @@ type application struct {
App *herokuApplication // The heroku application
Client *heroku.Service // Client to interact with the heroku API
Vars map[string]string // The vars on the application
Buildpacks []string // The application's buildpack names or URLs
Organization bool // is the application organization app
}
@ -62,6 +64,9 @@ func (a *application) Update() error {
a.App.Stack = app.Stack.Name
a.App.GitURL = app.GitURL
a.App.WebURL = app.WebURL
if app.Space != nil {
a.App.Space = app.Space.Name
}
if app.Organization != nil {
a.App.OrganizationName = app.Organization.Name
} else {
@ -71,6 +76,11 @@ func (a *application) Update() error {
}
}
a.Buildpacks, err = retrieveBuildpacks(a.Id, a.Client)
if err != nil {
errs = append(errs, err)
}
a.Vars, err = retrieveConfigVars(a.Id, a.Client)
if err != nil {
errs = append(errs, err)
@ -96,6 +106,12 @@ func resourceHerokuApp() *schema.Resource {
Required: true,
},
"space": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"region": {
Type: schema.TypeString,
Required: true,
@ -109,6 +125,14 @@ func resourceHerokuApp() *schema.Resource {
ForceNew: true,
},
"buildpacks": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"config_vars": {
Type: schema.TypeList,
Optional: true,
@ -208,11 +232,8 @@ func resourceHerokuAppCreate(d *schema.ResourceData, meta interface{}) error {
d.SetId(a.Name)
log.Printf("[INFO] App ID: %s", d.Id())
if v, ok := d.GetOk("config_vars"); ok {
err = updateConfigVars(d.Id(), client, nil, v.([]interface{}))
if err != nil {
return err
}
if err := performAppPostCreateTasks(d, client); err != nil {
return err
}
return resourceHerokuAppRead(d, meta)
@ -257,6 +278,11 @@ func resourceHerokuOrgAppCreate(d *schema.ResourceData, meta interface{}) error
log.Printf("[DEBUG] App region: %s", vs)
opts.Region = &vs
}
if v, ok := d.GetOk("space"); ok {
vs := v.(string)
log.Printf("[DEBUG] App space: %s", vs)
opts.Space = &vs
}
if v, ok := d.GetOk("stack"); ok {
vs := v.(string)
log.Printf("[DEBUG] App stack: %s", vs)
@ -272,11 +298,8 @@ func resourceHerokuOrgAppCreate(d *schema.ResourceData, meta interface{}) error
d.SetId(a.Name)
log.Printf("[INFO] App ID: %s", d.Id())
if v, ok := d.GetOk("config_vars"); ok {
err = updateConfigVars(d.Id(), client, nil, v.([]interface{}))
if err != nil {
return err
}
if err := performAppPostCreateTasks(d, client); err != nil {
return err
}
return resourceHerokuAppRead(d, meta)
@ -293,6 +316,9 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error {
}
}
// Only track buildpacks when set in the configuration.
_, buildpacksConfigured := d.GetOk("buildpacks")
organizationApp := isOrganizationApp(d)
// Only set the config_vars that we have set in the configuration.
@ -317,9 +343,14 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error {
d.Set("region", app.App.Region)
d.Set("git_url", app.App.GitURL)
d.Set("web_url", app.App.WebURL)
if buildpacksConfigured {
d.Set("buildpacks", app.Buildpacks)
}
d.Set("config_vars", configVarsValue)
d.Set("all_config_vars", app.Vars)
if organizationApp {
d.Set("space", app.App.Space)
orgDetails := map[string]interface{}{
"name": app.App.OrganizationName,
"locked": app.App.Locked,
@ -374,6 +405,13 @@ func resourceHerokuAppUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
if d.HasChange("buildpacks") {
err := updateBuildpacks(d.Id(), client, d.Get("buildpacks").([]interface{}))
if err != nil {
return err
}
}
return resourceHerokuAppRead(d, meta)
}
@ -402,6 +440,21 @@ func resourceHerokuAppRetrieve(id string, organization bool, client *heroku.Serv
return &app, nil
}
func retrieveBuildpacks(id string, client *heroku.Service) ([]string, error) {
results, err := client.BuildpackInstallationList(context.TODO(), id, nil)
if err != nil {
return nil, err
}
buildpacks := []string{}
for _, installation := range results {
buildpacks = append(buildpacks, installation.Buildpack.Name)
}
return buildpacks, nil
}
func retrieveConfigVars(id string, client *heroku.Service) (map[string]string, error) {
vars, err := client.ConfigVarInfoForApp(context.TODO(), id)
@ -450,3 +503,41 @@ func updateConfigVars(
return nil
}
func updateBuildpacks(id string, client *heroku.Service, v []interface{}) error {
opts := heroku.BuildpackInstallationUpdateOpts{
Updates: []struct {
Buildpack string `json:"buildpack" url:"buildpack,key"`
}{}}
for _, buildpack := range v {
opts.Updates = append(opts.Updates, struct {
Buildpack string `json:"buildpack" url:"buildpack,key"`
}{
Buildpack: buildpack.(string),
})
}
if _, err := client.BuildpackInstallationUpdate(context.TODO(), id, opts); err != nil {
return fmt.Errorf("Error updating buildpacks: %s", err)
}
return nil
}
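
`updateBuildpacks` above replaces the app's full buildpack list in one call, preserving the order of the Terraform list — order matters, since Heroku runs buildpacks in sequence. The payload-building step can be sketched standalone (the local struct below stands in for the anonymous `Updates` element; names are hypothetical):

```go
package main

import "fmt"

// buildpackUpdate mirrors the anonymous element of
// heroku.BuildpackInstallationUpdateOpts.Updates for this sketch.
type buildpackUpdate struct {
	Buildpack string `json:"buildpack"`
}

// toUpdates converts Terraform's []interface{} attribute value into the
// ordered payload, one entry per configured buildpack.
func toUpdates(v []interface{}) []buildpackUpdate {
	updates := make([]buildpackUpdate, 0, len(v))
	for _, b := range v {
		updates = append(updates, buildpackUpdate{Buildpack: b.(string)})
	}
	return updates
}

func main() {
	raw := []interface{}{
		"https://github.com/heroku/heroku-buildpack-multi-procfile",
		"heroku/go",
	}
	for _, u := range toUpdates(raw) {
		fmt.Println(u.Buildpack)
	}
}
```

Passing an empty list produces an empty `Updates` slice, which is how the resource clears all buildpacks when the attribute is removed.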
// performAppPostCreateTasks performs post-create tasks common to both org and non-org apps.
func performAppPostCreateTasks(d *schema.ResourceData, client *heroku.Service) error {
if v, ok := d.GetOk("config_vars"); ok {
if err := updateConfigVars(d.Id(), client, nil, v.([]interface{})); err != nil {
return err
}
}
if v, ok := d.GetOk("buildpacks"); ok {
if err := updateBuildpacks(d.Id(), client, v.([]interface{})); err != nil {
return err
}
}
return nil
}


@ -109,6 +109,75 @@ func TestAccHerokuApp_NukeVars(t *testing.T) {
})
}
func TestAccHerokuApp_Buildpacks(t *testing.T) {
var app heroku.AppInfoResult
appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckHerokuAppDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckHerokuAppConfig_go(appName),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExists("heroku_app.foobar", &app),
testAccCheckHerokuAppBuildpacks(appName, false),
resource.TestCheckResourceAttr("heroku_app.foobar", "buildpacks.0", "heroku/go"),
),
},
{
Config: testAccCheckHerokuAppConfig_multi(appName),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExists("heroku_app.foobar", &app),
testAccCheckHerokuAppBuildpacks(appName, true),
resource.TestCheckResourceAttr(
"heroku_app.foobar", "buildpacks.0", "https://github.com/heroku/heroku-buildpack-multi-procfile"),
resource.TestCheckResourceAttr("heroku_app.foobar", "buildpacks.1", "heroku/go"),
),
},
{
Config: testAccCheckHerokuAppConfig_no_vars(appName),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExists("heroku_app.foobar", &app),
testAccCheckHerokuAppNoBuildpacks(appName),
resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"),
),
},
},
})
}
func TestAccHerokuApp_ExternallySetBuildpacks(t *testing.T) {
var app heroku.AppInfoResult
appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckHerokuAppDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckHerokuAppConfig_no_vars(appName),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExists("heroku_app.foobar", &app),
testAccCheckHerokuAppNoBuildpacks(appName),
resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"),
),
},
{
PreConfig: testAccInstallUnconfiguredBuildpack(t, appName),
Config: testAccCheckHerokuAppConfig_no_vars(appName),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExists("heroku_app.foobar", &app),
testAccCheckHerokuAppBuildpacks(appName, false),
resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"),
),
},
},
})
}
func TestAccHerokuApp_Organization(t *testing.T) {
var app heroku.OrganizationApp
appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
@ -128,7 +197,37 @@ func TestAccHerokuApp_Organization(t *testing.T) {
Config: testAccCheckHerokuAppConfig_organization(appName, org),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExistsOrg("heroku_app.foobar", &app),
testAccCheckHerokuAppAttributesOrg(&app, appName, org),
testAccCheckHerokuAppAttributesOrg(&app, appName, "", org),
),
},
},
})
}
func TestAccHerokuApp_Space(t *testing.T) {
var app heroku.OrganizationApp
appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
org := os.Getenv("HEROKU_ORGANIZATION")
space := os.Getenv("HEROKU_SPACE")
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
if org == "" {
t.Skip("HEROKU_ORGANIZATION is not set; skipping test.")
}
if space == "" {
t.Skip("HEROKU_SPACE is not set; skipping test.")
}
},
Providers: testAccProviders,
CheckDestroy: testAccCheckHerokuAppDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckHerokuAppConfig_space(appName, space, org),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuAppExistsOrg("heroku_app.foobar", &app),
testAccCheckHerokuAppAttributesOrg(&app, appName, space, org),
),
},
},
@ -230,14 +329,76 @@ func testAccCheckHerokuAppAttributesNoVars(app *heroku.AppInfoResult, appName st
}
}
func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName string, org string) resource.TestCheckFunc {
func testAccCheckHerokuAppBuildpacks(appName string, multi bool) resource.TestCheckFunc {
return func(s *terraform.State) error {
client := testAccProvider.Meta().(*heroku.Service)
if app.Region.Name != "us" {
results, err := client.BuildpackInstallationList(context.TODO(), appName, nil)
if err != nil {
return err
}
buildpacks := []string{}
for _, installation := range results {
buildpacks = append(buildpacks, installation.Buildpack.Name)
}
if multi {
herokuMulti := "https://github.com/heroku/heroku-buildpack-multi-procfile"
if len(buildpacks) != 2 || buildpacks[0] != herokuMulti || buildpacks[1] != "heroku/go" {
return fmt.Errorf("Bad buildpacks: %v", buildpacks)
}
return nil
}
if len(buildpacks) != 1 || buildpacks[0] != "heroku/go" {
return fmt.Errorf("Bad buildpacks: %v", buildpacks)
}
return nil
}
}
func testAccCheckHerokuAppNoBuildpacks(appName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
client := testAccProvider.Meta().(*heroku.Service)
results, err := client.BuildpackInstallationList(context.TODO(), appName, nil)
if err != nil {
return err
}
buildpacks := []string{}
for _, installation := range results {
buildpacks = append(buildpacks, installation.Buildpack.Name)
}
if len(buildpacks) != 0 {
return fmt.Errorf("Bad buildpacks: %v", buildpacks)
}
return nil
}
}
func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName, space, org string) resource.TestCheckFunc {
return func(s *terraform.State) error {
client := testAccProvider.Meta().(*heroku.Service)
if app.Region.Name != "us" && app.Region.Name != "virginia" {
return fmt.Errorf("Bad region: %s", app.Region.Name)
}
var appSpace string
if app.Space != nil {
appSpace = app.Space.Name
}
if appSpace != space {
return fmt.Errorf("Bad space: %s", appSpace)
}
if app.Stack.Name != "cedar-14" {
return fmt.Errorf("Bad stack: %s", app.Stack.Name)
}
@ -323,6 +484,25 @@ func testAccCheckHerokuAppExistsOrg(n string, app *heroku.OrganizationApp) resou
}
}
func testAccInstallUnconfiguredBuildpack(t *testing.T, appName string) func() {
return func() {
client := testAccProvider.Meta().(*heroku.Service)
opts := heroku.BuildpackInstallationUpdateOpts{
Updates: []struct {
Buildpack string `json:"buildpack" url:"buildpack,key"`
}{
{Buildpack: "heroku/go"},
},
}
_, err := client.BuildpackInstallationUpdate(context.TODO(), appName, opts)
if err != nil {
t.Fatalf("Error updating buildpacks: %s", err)
}
}
}
func testAccCheckHerokuAppConfig_basic(appName string) string {
return fmt.Sprintf(`
resource "heroku_app" "foobar" {
@ -335,6 +515,29 @@ resource "heroku_app" "foobar" {
}`, appName)
}
func testAccCheckHerokuAppConfig_go(appName string) string {
return fmt.Sprintf(`
resource "heroku_app" "foobar" {
name = "%s"
region = "us"
buildpacks = ["heroku/go"]
}`, appName)
}
func testAccCheckHerokuAppConfig_multi(appName string) string {
return fmt.Sprintf(`
resource "heroku_app" "foobar" {
name = "%s"
region = "us"
buildpacks = [
"https://github.com/heroku/heroku-buildpack-multi-procfile",
"heroku/go"
]
}`, appName)
}
func testAccCheckHerokuAppConfig_updated(appName string) string {
return fmt.Sprintf(`
resource "heroku_app" "foobar" {
@ -371,3 +574,20 @@ resource "heroku_app" "foobar" {
}
}`, appName, org)
}
func testAccCheckHerokuAppConfig_space(appName, space, org string) string {
return fmt.Sprintf(`
resource "heroku_app" "foobar" {
name = "%s"
space = "%s"
region = "virginia"
organization {
name = "%s"
}
config_vars {
FOO = "bar"
}
}`, appName, space, org)
}


@ -0,0 +1,117 @@
package heroku
import (
"context"
"log"
heroku "github.com/cyberdelia/heroku-go/v3"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceHerokuSpace() *schema.Resource {
return &schema.Resource{
Create: resourceHerokuSpaceCreate,
Read: resourceHerokuSpaceRead,
Update: resourceHerokuSpaceUpdate,
Delete: resourceHerokuSpaceDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"organization": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"region": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
},
}
}
func resourceHerokuSpaceCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*heroku.Service)
opts := heroku.SpaceCreateOpts{}
opts.Name = d.Get("name").(string)
opts.Organization = d.Get("organization").(string)
if v, ok := d.GetOk("region"); ok {
vs := v.(string)
opts.Region = &vs
}
space, err := client.SpaceCreate(context.TODO(), opts)
if err != nil {
return err
}
d.SetId(space.ID)
log.Printf("[INFO] Space ID: %s", d.Id())
// The type conversion here can be dropped when the vendored version of
// heroku-go is updated.
setSpaceAttributes(d, (*heroku.Space)(space))
return nil
}
func resourceHerokuSpaceRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*heroku.Service)
space, err := client.SpaceInfo(context.TODO(), d.Id())
if err != nil {
return err
}
// The type conversion here can be dropped when the vendored version of
// heroku-go is updated.
setSpaceAttributes(d, (*heroku.Space)(space))
return nil
}
func resourceHerokuSpaceUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*heroku.Service)
if !d.HasChange("name") {
return nil
}
name := d.Get("name").(string)
opts := heroku.SpaceUpdateOpts{Name: &name}
space, err := client.SpaceUpdate(context.TODO(), d.Id(), opts)
if err != nil {
return err
}
// The type conversion here can be dropped when the vendored version of
// heroku-go is updated.
setSpaceAttributes(d, (*heroku.Space)(space))
return nil
}
func setSpaceAttributes(d *schema.ResourceData, space *heroku.Space) {
d.Set("name", space.Name)
d.Set("organization", space.Organization.Name)
d.Set("region", space.Region.Name)
}
func resourceHerokuSpaceDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*heroku.Service)
log.Printf("[INFO] Deleting space: %s", d.Id())
_, err := client.SpaceDelete(context.TODO(), d.Id())
if err != nil {
return err
}
d.SetId("")
return nil
}


@ -0,0 +1,114 @@
package heroku
import (
"context"
"fmt"
"os"
"testing"
heroku "github.com/cyberdelia/heroku-go/v3"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccHerokuSpace_Basic(t *testing.T) {
var space heroku.SpaceInfoResult
spaceName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
spaceName2 := fmt.Sprintf("tftest-%s", acctest.RandString(10))
org := os.Getenv("HEROKU_ORGANIZATION")
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
if org == "" {
t.Skip("HEROKU_ORGANIZATION is not set; skipping test.")
}
},
Providers: testAccProviders,
CheckDestroy: testAccCheckHerokuSpaceDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckHerokuSpaceConfig_basic(spaceName, org),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuSpaceExists("heroku_space.foobar", &space),
testAccCheckHerokuSpaceAttributes(&space, spaceName),
),
},
{
Config: testAccCheckHerokuSpaceConfig_basic(spaceName2, org),
Check: resource.ComposeTestCheckFunc(
testAccCheckHerokuSpaceExists("heroku_space.foobar", &space),
testAccCheckHerokuSpaceAttributes(&space, spaceName2),
),
},
},
})
}
func testAccCheckHerokuSpaceConfig_basic(spaceName, orgName string) string {
return fmt.Sprintf(`
resource "heroku_space" "foobar" {
name = "%s"
organization = "%s"
region = "virginia"
}
`, spaceName, orgName)
}
func testAccCheckHerokuSpaceExists(n string, space *heroku.SpaceInfoResult) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No space ID set")
}
client := testAccProvider.Meta().(*heroku.Service)
foundSpace, err := client.SpaceInfo(context.TODO(), rs.Primary.ID)
if err != nil {
return err
}
if foundSpace.ID != rs.Primary.ID {
return fmt.Errorf("Space not found")
}
*space = *foundSpace
return nil
}
}
func testAccCheckHerokuSpaceAttributes(space *heroku.SpaceInfoResult, spaceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
if space.Name != spaceName {
return fmt.Errorf("Bad name: %s", space.Name)
}
return nil
}
}
func testAccCheckHerokuSpaceDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*heroku.Service)
for _, rs := range s.RootModule().Resources {
if rs.Type != "heroku_space" {
continue
}
_, err := client.SpaceInfo(context.TODO(), rs.Primary.ID)
if err == nil {
return fmt.Errorf("Space still exists")
}
}
return nil
}


@ -15,6 +15,21 @@ import (
"github.com/hashicorp/terraform/terraform"
)
// globalCache keeps the instances of the internal ignition types generated
// by the different data resources so that they can be reused by the
// ignition_config data resource. Each map key is a hash of the type,
// calculated from its JSON serialization.
var globalCache = &cache{
disks: make(map[string]*types.Disk, 0),
arrays: make(map[string]*types.Raid, 0),
filesystems: make(map[string]*types.Filesystem, 0),
files: make(map[string]*types.File, 0),
systemdUnits: make(map[string]*types.SystemdUnit, 0),
networkdUnits: make(map[string]*types.NetworkdUnit, 0),
users: make(map[string]*types.User, 0),
groups: make(map[string]*types.Group, 0),
}
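The key-derivation idea the comment describes can be sketched as follows. This is a minimal illustration, not the provider's actual code: `Disk` here is a stand-in struct and `cacheKey` is a hypothetical helper, assuming keys are the hex SHA-1 of a value's JSON serialization.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Disk stands in for one of the ignition internal types.
type Disk struct {
	Device string `json:"device"`
}

// cacheKey serializes a value to JSON and returns the hex SHA-1 of the
// bytes, mirroring how a cache like globalCache could key its maps.
func cacheKey(v interface{}) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	sum := sha1.Sum(b)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	k1, _ := cacheKey(Disk{Device: "/dev/sda"})
	k2, _ := cacheKey(Disk{Device: "/dev/sda"})
	fmt.Println(k1 == k2) // equal values map to the same cache entry
}
```

Because JSON marshaling of the same value is deterministic, identical resources collapse onto one cache entry, which is what lets ignition_config find the objects the other data sources produced.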
func Provider() terraform.ResourceProvider {
return &schema.Provider{
DataSourcesMap: map[string]*schema.Resource{
@ -66,18 +81,6 @@ func Provider() terraform.ResourceProvider {
resourceGroup(),
),
},
ConfigureFunc: func(*schema.ResourceData) (interface{}, error) {
return &cache{
disks: make(map[string]*types.Disk, 0),
arrays: make(map[string]*types.Raid, 0),
filesystems: make(map[string]*types.Filesystem, 0),
files: make(map[string]*types.File, 0),
systemdUnits: make(map[string]*types.SystemdUnit, 0),
networkdUnits: make(map[string]*types.NetworkdUnit, 0),
users: make(map[string]*types.User, 0),
groups: make(map[string]*types.Group, 0),
}, nil
},
}
}


@ -91,7 +91,7 @@ func resourceConfig() *schema.Resource {
}
func resourceIgnitionFileRead(d *schema.ResourceData, meta interface{}) error {
rendered, err := renderConfig(d, meta.(*cache))
rendered, err := renderConfig(d, globalCache)
if err != nil {
return err
}
@ -105,7 +105,7 @@ func resourceIgnitionFileRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceIgnitionFileExists(d *schema.ResourceData, meta interface{}) (bool, error) {
rendered, err := renderConfig(d, meta.(*cache))
rendered, err := renderConfig(d, globalCache)
if err != nil {
return false, err
}


@ -59,7 +59,7 @@ func resourceDisk() *schema.Resource {
}
func resourceDiskRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildDisk(d, meta.(*cache))
id, err := buildDisk(d, globalCache)
if err != nil {
return err
}
@ -69,7 +69,7 @@ func resourceDiskRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceDiskExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildDisk(d, meta.(*cache))
id, err := buildDisk(d, globalCache)
if err != nil {
return false, err
}


@ -90,7 +90,7 @@ func resourceFile() *schema.Resource {
}
func resourceFileRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildFile(d, meta.(*cache))
id, err := buildFile(d, globalCache)
if err != nil {
return err
}
@ -100,7 +100,7 @@ func resourceFileRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceFileExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildFile(d, meta.(*cache))
id, err := buildFile(d, globalCache)
if err != nil {
return false, err
}


@ -63,7 +63,7 @@ func resourceFilesystem() *schema.Resource {
}
func resourceFilesystemRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildFilesystem(d, meta.(*cache))
id, err := buildFilesystem(d, globalCache)
if err != nil {
return err
}
@ -73,7 +73,7 @@ func resourceFilesystemRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceFilesystemExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildFilesystem(d, meta.(*cache))
id, err := buildFilesystem(d, globalCache)
if err != nil {
return false, err
}


@ -30,7 +30,7 @@ func resourceGroup() *schema.Resource {
}
func resourceGroupRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildGroup(d, meta.(*cache))
id, err := buildGroup(d, globalCache)
if err != nil {
return err
}
@ -40,7 +40,7 @@ func resourceGroupRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceGroupExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildGroup(d, meta.(*cache))
id, err := buildGroup(d, globalCache)
if err != nil {
return false, err
}


@ -25,7 +25,7 @@ func resourceNetworkdUnit() *schema.Resource {
}
func resourceNetworkdUnitRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildNetworkdUnit(d, meta.(*cache))
id, err := buildNetworkdUnit(d, globalCache)
if err != nil {
return err
}
@ -40,7 +40,7 @@ func resourceNetworkdUnitDelete(d *schema.ResourceData, meta interface{}) error
}
func resourceNetworkdUnitExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildNetworkdUnit(d, meta.(*cache))
id, err := buildNetworkdUnit(d, globalCache)
if err != nil {
return false, err
}


@ -36,7 +36,7 @@ func resourceRaid() *schema.Resource {
}
func resourceRaidRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildRaid(d, meta.(*cache))
id, err := buildRaid(d, globalCache)
if err != nil {
return err
}
@ -46,7 +46,7 @@ func resourceRaidRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceRaidExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildRaid(d, meta.(*cache))
id, err := buildRaid(d, globalCache)
if err != nil {
return false, err
}


@ -55,7 +55,7 @@ func resourceSystemdUnit() *schema.Resource {
}
func resourceSystemdUnitRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildSystemdUnit(d, meta.(*cache))
id, err := buildSystemdUnit(d, globalCache)
if err != nil {
return err
}
@ -65,7 +65,7 @@ func resourceSystemdUnitRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceSystemdUnitExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildSystemdUnit(d, meta.(*cache))
id, err := buildSystemdUnit(d, globalCache)
if err != nil {
return false, err
}


@ -79,7 +79,7 @@ func resourceUser() *schema.Resource {
}
func resourceUserRead(d *schema.ResourceData, meta interface{}) error {
id, err := buildUser(d, meta.(*cache))
id, err := buildUser(d, globalCache)
if err != nil {
return err
}
@ -89,7 +89,7 @@ func resourceUserRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceUserExists(d *schema.ResourceData, meta interface{}) (bool, error) {
id, err := buildUser(d, meta.(*cache))
id, err := buildUser(d, globalCache)
if err != nil {
return false, err
}


@ -82,7 +82,7 @@ func resourceOPCIPAddressAssociationRead(d *schema.ResourceData, meta interface{
name := d.Id()
getInput := compute.GetIPAddressAssociationInput{
name,
Name: name,
}
result, err := client.GetIPAddressAssociation(&getInput)
if err != nil {


@ -20,6 +20,7 @@ func Provider() terraform.ResourceProvider {
"template_cloudinit_config",
dataSourceCloudinitConfig(),
),
"template_dir": resourceDir(),
},
}
}


@ -0,0 +1,234 @@
package template
import (
"archive/tar"
"bytes"
"crypto/sha1"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"github.com/hashicorp/terraform/helper/pathorcontents"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceDir() *schema.Resource {
return &schema.Resource{
Create: resourceTemplateDirCreate,
Read: resourceTemplateDirRead,
Delete: resourceTemplateDirDelete,
Schema: map[string]*schema.Schema{
"source_dir": {
Type: schema.TypeString,
Description: "Path to the directory where the files to template reside",
Required: true,
ForceNew: true,
},
"vars": {
Type: schema.TypeMap,
Optional: true,
Default: make(map[string]interface{}),
Description: "Variables to substitute",
ValidateFunc: validateVarsAttribute,
ForceNew: true,
},
"destination_dir": {
Type: schema.TypeString,
Description: "Path to the directory where the templated files will be written",
Required: true,
ForceNew: true,
},
},
}
}
func resourceTemplateDirRead(d *schema.ResourceData, meta interface{}) error {
sourceDir := d.Get("source_dir").(string)
destinationDir := d.Get("destination_dir").(string)
// If the output doesn't exist, mark the resource for creation.
if _, err := os.Stat(destinationDir); os.IsNotExist(err) {
d.SetId("")
return nil
}
// If the combined hash of the input and output directories is different from
// the stored one, mark the resource for re-creation.
//
// The output directory is technically enough for the general case, but by
// hashing the input directory as well, we make development much easier: when
// a developer modifies one of the input files, the generation is
// re-triggered.
hash, err := generateID(sourceDir, destinationDir)
if err != nil {
return err
}
if hash != d.Id() {
d.SetId("")
return nil
}
return nil
}
func resourceTemplateDirCreate(d *schema.ResourceData, meta interface{}) error {
sourceDir := d.Get("source_dir").(string)
destinationDir := d.Get("destination_dir").(string)
vars := d.Get("vars").(map[string]interface{})
// Always delete the output first, otherwise files that got deleted from the
// input directory might still be present in the output afterwards.
if err := resourceTemplateDirDelete(d, meta); err != nil {
return err
}
// Create the destination directory and any other intermediate directories
// leading to it.
if _, err := os.Stat(destinationDir); err != nil {
if err := os.MkdirAll(destinationDir, 0777); err != nil {
return err
}
}
// Recursively crawl the input files/directories and generate the output ones.
err := filepath.Walk(sourceDir, func(p string, f os.FileInfo, err error) error {
if err != nil {
return err
}
if f.IsDir() {
return nil
}
relPath, _ := filepath.Rel(sourceDir, p)
return generateDirFile(p, path.Join(destinationDir, relPath), f, vars)
})
if err != nil {
return err
}
// Compute ID.
hash, err := generateID(sourceDir, destinationDir)
if err != nil {
return err
}
d.SetId(hash)
return nil
}
func resourceTemplateDirDelete(d *schema.ResourceData, _ interface{}) error {
d.SetId("")
destinationDir := d.Get("destination_dir").(string)
if _, err := os.Stat(destinationDir); os.IsNotExist(err) {
return nil
}
if err := os.RemoveAll(destinationDir); err != nil {
return fmt.Errorf("could not delete directory %q: %s", destinationDir, err)
}
return nil
}
func generateDirFile(sourceDir, destinationDir string, f os.FileInfo, vars map[string]interface{}) error {
inputContent, _, err := pathorcontents.Read(sourceDir)
if err != nil {
return err
}
outputContent, err := execute(inputContent, vars)
if err != nil {
return templateRenderError(fmt.Errorf("failed to render %v: %v", sourceDir, err))
}
outputDir := path.Dir(destinationDir)
if _, err := os.Stat(outputDir); err != nil {
if err := os.MkdirAll(outputDir, 0777); err != nil {
return err
}
}
err = ioutil.WriteFile(destinationDir, []byte(outputContent), f.Mode())
if err != nil {
return err
}
return nil
}
func generateID(sourceDir, destinationDir string) (string, error) {
inputHash, err := generateDirHash(sourceDir)
if err != nil {
return "", err
}
outputHash, err := generateDirHash(destinationDir)
if err != nil {
return "", err
}
checksum := sha1.Sum([]byte(inputHash + outputHash))
return hex.EncodeToString(checksum[:]), nil
}
func generateDirHash(directoryPath string) (string, error) {
tarData, err := tarDir(directoryPath)
if err != nil {
return "", fmt.Errorf("could not generate output checksum: %s", err)
}
checksum := sha1.Sum(tarData)
return hex.EncodeToString(checksum[:]), nil
}
func tarDir(directoryPath string) ([]byte, error) {
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
writeFile := func(p string, f os.FileInfo, err error) error {
if err != nil {
return err
}
var header *tar.Header
var file *os.File
header, err = tar.FileInfoHeader(f, f.Name())
if err != nil {
return err
}
relPath, _ := filepath.Rel(directoryPath, p)
header.Name = relPath
if err := tw.WriteHeader(header); err != nil {
return err
}
if f.IsDir() {
return nil
}
file, err = os.Open(p)
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(tw, file)
return err
}
if err := filepath.Walk(directoryPath, writeFile); err != nil {
return []byte{}, err
}
if err := tw.Flush(); err != nil {
return []byte{}, err
}
return buf.Bytes(), nil
}
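The resource ID produced by generateID above reduces to a SHA-1 over the two concatenated directory digests. A small sketch of just that combination step (the hypothetical `combineHashes` stands in for generateID, with the tar-based generateDirHash elided):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// combineHashes mirrors the generateID combination step: the resource ID
// is the hex SHA-1 of the input-directory hash concatenated with the
// output-directory hash.
func combineHashes(inputHash, outputHash string) string {
	sum := sha1.Sum([]byte(inputHash + outputHash))
	return hex.EncodeToString(sum[:])
}

func main() {
	before := combineHashes("aaa", "bbb")
	// Editing a source template changes the input-directory hash, so the
	// combined ID changes and Read marks the resource for re-creation.
	after := combineHashes("ccc", "bbb")
	fmt.Println(before != after)
}
```

This is why hashing the input directory as well as the output pays off during development: either side changing produces a new ID, which Read compares against the stored one.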


@ -0,0 +1,104 @@
package template
import (
"fmt"
"testing"
"errors"
r "github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"io/ioutil"
"os"
"path/filepath"
)
const templateDirRenderingConfig = `
resource "template_dir" "dir" {
source_dir = "%s"
destination_dir = "%s"
vars = %s
}`
type testTemplate struct {
template string
want string
}
func testTemplateDirWriteFiles(files map[string]testTemplate) (in, out string, err error) {
in, err = ioutil.TempDir(os.TempDir(), "terraform_template_dir")
if err != nil {
return
}
for name, file := range files {
path := filepath.Join(in, name)
err = os.MkdirAll(filepath.Dir(path), 0777)
if err != nil {
return
}
err = ioutil.WriteFile(path, []byte(file.template), 0777)
if err != nil {
return
}
}
out = fmt.Sprintf("%s.out", in)
return
}
func TestTemplateDirRendering(t *testing.T) {
var cases = []struct {
vars string
files map[string]testTemplate
}{
{
files: map[string]testTemplate{
"foo.txt": {"${bar}", "bar"},
"nested/monkey.txt": {"ooh-ooh-ooh-eee-eee", "ooh-ooh-ooh-eee-eee"},
"maths.txt": {"${1+2+3}", "6"},
},
vars: `{bar = "bar"}`,
},
}
for _, tt := range cases {
// Write the desired templates in a temporary directory.
in, out, err := testTemplateDirWriteFiles(tt.files)
if err != nil {
t.Skipf("could not write templates to temporary directory: %s", err)
continue
}
defer os.RemoveAll(in)
defer os.RemoveAll(out)
// Run test case.
r.UnitTest(t, r.TestCase{
Providers: testProviders,
Steps: []r.TestStep{
{
Config: fmt.Sprintf(templateDirRenderingConfig, in, out, tt.vars),
Check: func(s *terraform.State) error {
for name, file := range tt.files {
content, err := ioutil.ReadFile(filepath.Join(out, name))
if err != nil {
return fmt.Errorf("template:\n%s\nvars:\n%s\ngot:\n%s\nwant:\n%s\n", file.template, tt.vars, err, file.want)
}
if string(content) != file.want {
return fmt.Errorf("template:\n%s\nvars:\n%s\ngot:\n%s\nwant:\n%s\n", file.template, tt.vars, content, file.want)
}
}
return nil
},
},
},
CheckDestroy: func(*terraform.State) error {
if _, err := os.Stat(out); os.IsNotExist(err) {
return nil
}
return errors.New("template_dir did not get destroyed")
},
})
}
}


@ -22,25 +22,25 @@ func Provider() terraform.ResourceProvider {
"account": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("SDC_ACCOUNT", ""),
DefaultFunc: schema.MultiEnvDefaultFunc([]string{"TRITON_ACCOUNT", "SDC_ACCOUNT"}, ""),
},
"url": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("SDC_URL", "https://us-west-1.api.joyentcloud.com"),
DefaultFunc: schema.MultiEnvDefaultFunc([]string{"TRITON_URL", "SDC_URL"}, "https://us-west-1.api.joyentcloud.com"),
},
"key_material": {
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("SDC_KEY_MATERIAL", ""),
DefaultFunc: schema.MultiEnvDefaultFunc([]string{"TRITON_KEY_MATERIAL", "SDC_KEY_MATERIAL"}, ""),
},
"key_id": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("SDC_KEY_ID", ""),
DefaultFunc: schema.MultiEnvDefaultFunc([]string{"TRITON_KEY_ID", "SDC_KEY_ID"}, ""),
},
},


@ -7,6 +7,7 @@ import (
"github.com/Ensighten/udnssdk"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
)
func resourceUltradnsRdpool() *schema.Resource {
@ -28,12 +29,6 @@ func resourceUltradnsRdpool() *schema.Resource {
Required: true,
ForceNew: true,
},
"order": &schema.Schema{
Type: schema.TypeString,
Required: true,
// 0-255 char
// FIXED | RANDOM | ROUND_ROBIN
},
"rdata": &schema.Schema{
Type: schema.TypeSet,
Set: schema.HashString,
@ -41,10 +36,20 @@ func resourceUltradnsRdpool() *schema.Resource {
Elem: &schema.Schema{Type: schema.TypeString},
},
// Optional
"description": &schema.Schema{
"order": &schema.Schema{
Type: schema.TypeString,
Optional: true,
// 0-255 char
Default: "ROUND_ROBIN",
ValidateFunc: validation.StringInSlice([]string{
"ROUND_ROBIN",
"FIXED",
"RANDOM",
}, false),
},
"description": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringLenBetween(0, 255),
},
"ttl": &schema.Schema{
Type: schema.TypeInt,
@ -111,8 +116,7 @@ func resourceUltradnsRdpoolRead(d *schema.ResourceData, meta interface{}) error
r := rrsets[0]
zone := d.Get("zone")
// ttl
d.Set("ttl", r.TTL)
// hostname
if r.OwnerName == "" {
d.Set("hostname", zone)
@ -134,11 +138,11 @@ func resourceUltradnsRdpoolRead(d *schema.ResourceData, meta interface{}) error
}
// Set simple values
d.Set("ttl", r.TTL)
d.Set("description", p.Description)
d.Set("order", p.Order)
err = d.Set("rdata", makeSetFromStrings(r.RData))
//err = d.Set("rdata", makeSetFromRdataAlone(r.RData))
if err != nil {
return fmt.Errorf("rdata set failed: %#v", err)
}
@ -186,13 +190,12 @@ func resourceUltradnsRdpoolDelete(d *schema.ResourceData, meta interface{}) erro
func newRRSetResourceFromRdpool(d *schema.ResourceData) (rRSetResource, error) {
//rDataRaw := d.Get("rdata").(*schema.Set).List()
r := rRSetResource{
// "The only valid rrtype value for SiteBacker or Traffic Controller pools is A"
// "The only valid rrtype value for RDpools is A"
// per https://portal.ultradns.com/static/docs/REST-API_User_Guide.pdf
RRType: "A",
Zone: d.Get("zone").(string),
OwnerName: d.Get("name").(string),
TTL: d.Get("ttl").(int),
//RData: unzipRdataHosts(rDataRaw),
}
if attr, ok := d.GetOk("rdata"); ok {
rdata := attr.(*schema.Set).List()
@ -213,27 +216,3 @@ func newRRSetResourceFromRdpool(d *schema.ResourceData) (rRSetResource, error) {
return r, nil
}
// zip RData into []map[string]interface{}
func zipRDataAlone(rds []string) []map[string]interface{} {
result := make([]map[string]interface{}, 0, len(rds))
for _, rd := range rds {
r := map[string]interface{}{
// "host": rds[i],
"host": rd,
}
result = append(result, r)
}
return result
}
// makeSetFromRdatas encodes an array of Rdata into a
// *schema.Set in the appropriate structure for the schema
func makeSetFromRdataAlone(rds []string) *schema.Set {
s := &schema.Set{F: hashRdatas}
rs := zipRDataAlone(rds)
for _, r := range rs {
s.Add(r)
}
return s
}
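The validators adopted above, validation.StringInSlice and validation.StringLenBetween, are simple predicates over the configured value. Their behavior can be sketched with hypothetical stand-ins (these are illustrations, not the helper/validation implementations):

```go
package main

import "fmt"

// stringInSlice reports whether s is one of the allowed values,
// as StringInSlice does for the "order" attribute.
func stringInSlice(s string, allowed []string) bool {
	for _, a := range allowed {
		if s == a {
			return true
		}
	}
	return false
}

// stringLenBetween reports whether len(s) falls within [min, max],
// as StringLenBetween(0, 255) does for "description".
func stringLenBetween(s string, min, max int) bool {
	return len(s) >= min && len(s) <= max
}

func main() {
	fmt.Println(stringInSlice("ROUND_ROBIN", []string{"ROUND_ROBIN", "FIXED", "RANDOM"}))
	fmt.Println(stringLenBetween("pool of A records", 0, 255))
}
```

Moving these checks into the schema means an invalid "order" or an over-long "description" is rejected at plan time instead of surfacing as an API error during apply.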


@ -201,7 +201,7 @@ func (c *ApplyCommand) Run(args []string) int {
ctxCancel()
// Notify the user
c.Ui.Output("Interrupt received. Gracefully shutting down...")
c.Ui.Output(outputInterrupt)
// Still get the result, since there is still one
select {
@ -418,3 +418,7 @@ func outputsAsString(state *terraform.State, modPath []string, schema []*config.
return strings.TrimSpace(outputBuf.String())
}
const outputInterrupt = `Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down...`


@ -841,7 +841,7 @@ func TestApply_planVars(t *testing.T) {
// we should be able to apply a plan file with no other file dependencies
func TestApply_planNoModuleFiles(t *testing.T) {
// temprary data directory which we can remove between commands
// temporary data directory which we can remove between commands
td, err := ioutil.TempDir("", "tf")
if err != nil {
t.Fatal(err)


@ -194,7 +194,7 @@ Usage: terraform init [options] [SOURCE] [PATH]
This is the first command that should be run for any new or existing
Terraform configuration per machine. This sets up all the local data
necessary to run Terraform that is typically not comitted to version
necessary to run Terraform that is typically not committed to version
control.
This command is always safe to run multiple times. Though subsequent runs


@ -31,6 +31,7 @@ import (
externalprovider "github.com/hashicorp/terraform/builtin/providers/external"
fastlyprovider "github.com/hashicorp/terraform/builtin/providers/fastly"
githubprovider "github.com/hashicorp/terraform/builtin/providers/github"
gitlabprovider "github.com/hashicorp/terraform/builtin/providers/gitlab"
googleprovider "github.com/hashicorp/terraform/builtin/providers/google"
grafanaprovider "github.com/hashicorp/terraform/builtin/providers/grafana"
herokuprovider "github.com/hashicorp/terraform/builtin/providers/heroku"
@ -110,6 +111,7 @@ var InternalProviders = map[string]plugin.ProviderFunc{
"external": externalprovider.Provider,
"fastly": fastlyprovider.Provider,
"github": githubprovider.Provider,
"gitlab": gitlabprovider.Provider,
"google": googleprovider.Provider,
"grafana": grafanaprovider.Provider,
"heroku": herokuprovider.Provider,


@ -1200,7 +1200,7 @@ func (m *Meta) backend_C_r_S_unchanged(
s := sMgr.State()
// it's possible for a backend to be unchanged, and the config itself to
// have changed by moving a paramter from the config to `-backend-config`
// have changed by moving a parameter from the config to `-backend-config`
// In this case we only need to update the Hash.
if c != nil && s.Backend.Hash != c.Hash {
s.Backend.Hash = c.Hash


@ -133,7 +133,7 @@ func (c *StateMvCommand) Run(args []string) int {
}
// addableResult takes the result from a filter operation and returns what to
// call State.Add with. The reason we do this is beacuse in the module case
// call State.Add with. The reason we do this is because in the module case
// we must add the list of all modules returned versus just the root module.
func (c *StateMvCommand) addableResult(results []*terraform.StateFilterResult) interface{} {
switch v := results[0].Value.(type) {


@ -352,7 +352,7 @@ func (w *Walker) walkVertex(v Vertex, info *walkerVertex) {
}
// Check if we have updated dependencies. This can happen if the
// dependencies were satisfied exactly prior to an Update occuring.
// dependencies were satisfied exactly prior to an Update occurring.
// In that case, we'd like to take into account new dependencies
// if possible.
info.DepsLock.Lock()


@ -1,6 +1,6 @@
# ####
# Current Availiable Datacenter Regions
# Current Available Datacenter Regions
# As of 05-07-2016


@ -5,7 +5,7 @@ provider "google" {
region = "${var.region1}"
}
# Create the two networks we want to join. They must have seperate, internal
# Create the two networks we want to join. They must have separate, internal
# ranges.
resource "google_compute_network" "network1" {
name = "network1"


@ -115,7 +115,7 @@ func expandArray(m map[string]string, prefix string) []interface{} {
func expandMap(m map[string]string, prefix string) map[string]interface{} {
// Submaps may not have a '%' key, so we can't count on this value being
// here. If we don't have a count, just procede as if we have have a map.
// here. If we don't have a count, just proceed as if we have a map.
if count, ok := m[prefix+"%"]; ok && count == "0" {
return map[string]interface{}{}
}


@ -7,7 +7,7 @@ import (
)
// The main version number that is being run at the moment.
const Version = "0.9.4"
const Version = "0.9.5"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release


@ -1,3 +1,66 @@
Release v1.8.16 (2017-04-21)
===
### Service Client Updates
* `service/appstream`: Updates service API, documentation, and paginators
* The new feature named "Default Internet Access" will enable Internet access from AppStream 2.0 instances - image builders and fleet instances. Admins will check a flag either through AWS management console for AppStream 2.0 or through API while creating an image builder or while creating/updating a fleet.
* `service/kinesis`: Updates service API, documentation, waiters, and paginators
* Adds a new waiter, StreamNotExists, to Kinesis.
### SDK Enhancements
* `aws/endpoints`: Add utilities improving endpoints lookup (#1218)
* Adds several utilities to the endpoints packages to make looking up partitions, regions, and services easier.
* Fixes #994
### SDK Bugs
* `private/protocol/xml/xmlutil`: Fix unmarshaling dropping errors (#1219)
* The XML unmarshaler would silently drop any serialization or body read error that occurred, effectively hiding any errors that would occur.
* Fixes #1205
Release v1.8.15 (2017-04-20)
===
### Service Client Updates
* `service/devicefarm`: Updates service API and documentation
* API Update for AWS Device Farm: Support for Deals and Promotions
* `service/directconnect`: Updates service documentation
* Documentation updates for AWS Direct Connect.
* `service/elbv2`: Updates service waiters
* `service/kms`: Updates service documentation and examples
* Doc-only update for Key Management Service (KMS): Update docs for GrantConstraints and GenerateRandom
* `service/route53`: Updates service documentation
* Release notes: SDK documentation now includes examples for ChangeResourceRecordSets for all types of resource record set, such as weighted, alias, and failover.
* `service/route53domains`: Updates service API, documentation, and paginators
* Adding examples and other documentation updates.
### SDK Enhancements
* `service/s3`: Add utilities to make getting a bucket's region easier (#1207)
* Adds two features which make it easier to get a bucket's region, `s3.NormalizeBucketLocation` and `s3manager.GetBucketRegion`.
### SDK Bugs
* `service/s3`: Fix HeadObject's incorrect documented error codes (#1213)
* The HeadObject's model incorrectly states that the operation can return the NoSuchKey error code.
* Fixes #1208
Release v1.8.14 (2017-04-19)
===
### Service Client Updates
* `service/apigateway`: Updates service API and documentation
* Add support for "embed" property.
* `service/codestar`: Adds new service
* AWS CodeStar is a cloud-based service for creating, managing, and working with software development projects on AWS. An AWS CodeStar project creates and integrates AWS services for your project development toolchain. AWS CodeStar also manages the permissions required for project users.
* `service/ec2`: Updates service API and documentation
* Adds support for creating an Amazon FPGA Image (AFI) from a specified design checkpoint (DCP).
* `service/iam`: Updates service API and documentation
* This change introduces a new IAM role type, Service Linked Role, which works like a normal role but must be managed via services' control.
* `service/lambda`: Updates service API and documentation
* Lambda integration with CloudDebugger service to enable customers to enable tracing for the Lambda functions and send trace information to the CloudDebugger service.
* `service/lexmodelbuildingservice`: Adds new service
* `service/polly`: Updates service API, documentation, and paginators
* API Update for Amazon Polly: Add support for speech marks
* `service/rekognition`: Updates service API and documentation
* Given an image, the API detects explicit or suggestive adult content in the image and returns a list of corresponding labels with confidence scores, as well as a taxonomy (parent-child relation) for each label.
Release v1.8.13 (2017-04-18)
===


@ -64,6 +64,9 @@ integration: get-deps-tests integ-custom smoke-tests performance
integ-custom:
go test -tags "integration" ./awstesting/integration/customizations/...
cleanup-integ:
go run -tags "integration" ./awstesting/cmd/bucket_cleanup/main.go "aws-sdk-go-integration"
smoke-tests: get-deps-tests
gucumber -go-tags "integration" ./awstesting/integration/smoke


@ -5,6 +5,7 @@ import (
"net/http/httputil"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/client/metadata"
"github.com/aws/aws-sdk-go/aws/request"
)
@ -105,6 +106,7 @@ func logRequest(r *request.Request) {
dumpedBody, err := httputil.DumpRequestOut(r.HTTPRequest, logBody)
if err != nil {
r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, r.ClientInfo.ServiceName, r.Operation.Name, err))
r.Error = awserr.New(request.ErrCodeRead, "an error occurred during request body reading", err)
return
}
@ -135,6 +137,7 @@ func logResponse(r *request.Request) {
dumpedBody, err := httputil.DumpResponse(r.HTTPResponse, logBody)
if err != nil {
r.Config.Logger.Log(fmt.Sprintf(logRespErrMsg, r.ClientInfo.ServiceName, r.Operation.Name, err))
r.Error = awserr.New(request.ErrCodeRead, "an error occurred during response body reading", err)
return
}

Some files were not shown because too many files have changed in this diff.