merge upstream

Scott Nowicki 2017-04-24 19:40:17 -05:00
commit 3a77aafe49
1205 changed files with 72280 additions and 4695 deletions

View File

@ -201,6 +201,9 @@ Implementing a new resource is a good way to learn more about how Terraform
interacts with upstream APIs. There are plenty of examples to draw from in the
existing resources, but you still get to implement something completely new.
- [ ] __Minimal LOC__: It can be inefficient for both the reviewer
and author to go through long feedback cycles on a big PR with many
resources. We therefore encourage you to only submit **1 resource at a time**.
- [ ] __Acceptance tests__: New resources should include acceptance tests
covering their behavior. See [Writing Acceptance
Tests](#writing-acceptance-tests) below for a detailed guide on how to
@ -223,6 +226,11 @@ Implementing a new provider gives Terraform the ability to manage resources in
a whole new API. It's a larger undertaking, but brings major new functionality
into Terraform.
- [ ] __Minimal initial LOC__: Some providers may be big and it can be
inefficient for both reviewer & author to go through long feedback cycles
on a big PR with many resources. We encourage you to only submit
the necessary minimum in a single PR, ideally **just the first resource**
of the provider.
- [ ] __Acceptance tests__: Each provider should include an acceptance test
suite, with tests for each resource covering its behavior. See [Writing
Acceptance Tests](#writing-acceptance-tests) below (a minimal sketch also follows this diff)
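For orientation, here is a minimal sketch of what such an acceptance test tends to look like, using the `helper/resource` harness that the provider tests later in this commit also use. The provider, resource name, config, and the `testAcc*` helpers referenced below are illustrative stand-ins for a provider's own test scaffolding, not real identifiers from this change:

```go
package example

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

// Hypothetical acceptance test for a made-up "example_thing" resource.
// Like all acceptance tests, it only runs when TF_ACC is set (e.g. via `make testacc`).
func TestAccExampleThing_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		// testAccPreCheck, testAccProviders and the destroy check are the
		// provider's usual test helpers (see the alicloud tests in this commit).
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleThingDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccExampleThingConfig,
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr(
						"example_thing.foo", "name", "foo"),
				),
			},
		},
	})
}

const testAccExampleThingConfig = `
resource "example_thing" "foo" {
  name = "foo"
}
`
```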

View File

@ -12,7 +12,6 @@ env:
branches:
only:
- master
- /^(?i:topic)-.*$/
# install terraform
before_deploy:
@ -25,8 +24,8 @@ deploy:
skip_cleanup: true
script: cd $TRAVIS_BUILD_DIR/$TEST_DIR && ./deploy.sh
on:
repo: 10thmagnitude/terraform
branch: topic-101-vm-simple-linux
repo: harijayms/terraform
branch: master
# destroy resources with Azure CLI
after_deploy: cd $TRAVIS_BUILD_DIR/$TEST_DIR && ./after_deploy.sh
# handle cleanup of resources
after_deploy: cd $TRAVIS_BUILD_DIR/$TEST_DIR && ./after_deploy.sh

View File

@ -1,13 +1,116 @@
## 0.9.3 (unreleased)
## 0.9.4 (Unreleased)
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/template: Fix invalid MIME formatting in `template_cloudinit_config`.
While the change itself is not breaking, the data source may be referenced
e.g. in `aws_launch_configuration` and similar resources, which are immutable,
so the formatting change will trigger their recreation [GH-13752]
FEATURES:
* **New Provider:** `opc` - Oracle Public Cloud [GH-13468]
* **New Provider:** `oneandone` [GH-13633]
* **New Data Source:** `aws_ami_ids` [GH-13844]
* **New Data Source:** `aws_ebs_snapshot_ids` [GH-13844]
* **New Data Source:** `aws_kms_alias` [GH-13669]
* **New Data Source:** `aws_kinesis_stream` [GH-13562]
* **New Data Source:** `digitalocean_image` [GH-13787]
* **New Data Source:** `google_compute_network` [GH-12442]
* **New Data Source:** `google_compute_subnetwork` [GH-12442]
* **New Resource:** `local_file` for creating local files (please see the docs for caveats) [GH-12757]
* **New Resource:** `alicloud_ess_scalinggroup` [GH-13731]
* **New Resource:** `alicloud_ess_scalingconfiguration` [GH-13731]
* **New Resource:** `alicloud_ess_scalingrule` [GH-13731]
* **New Resource:** `alicloud_ess_schedule` [GH-13731]
* **New Resource:** `alicloud_snat_entry` [GH-13731]
* **New Resource:** `alicloud_forward_entry` [GH-13731]
* **New Resource:** `aws_cognito_identity_pool` [GH-13783]
* **New Resource:** `aws_network_interface_attachment` [GH-13861]
* **New Resource:** `github_branch_protection` [GH-10476]
* **New Resource:** `google_bigquery_dataset` [GH-13436]
* **New Interpolation Function:** `coalescelist()` [GH-12537]
IMPROVEMENTS:
* helper/schema: Disallow validation+diff suppression on computed fields [GH-13878]
* config: The interpolation function `cidrhost` now accepts a negative host number to count backwards from the end of the range [GH-13765]
* config: New interpolation function `matchkeys` for using values from one list to filter corresponding values from another list using a matching set. [GH-13847]
* state/remote/swift: Support Openstack request logging [GH-13583]
* provider/aws: Add an option to skip getting the supported EC2 platforms [GH-13672]
* provider/aws: Add `name_prefix` support to `aws_cloudwatch_log_group` [GH-13273]
* provider/aws: Add `bucket_prefix` to `aws_s3_bucket` [GH-13274]
* provider/aws: Add replica_source_db to the aws_db_instance datasource [GH-13842]
* provider/aws: Add IPv6 outputs to aws_subnet datasource [GH-13841]
* provider/aws: Exercise SecondaryPrivateIpAddressCount for network interface [GH-10590]
* provider/aws: Expose execution ARN + invoke URL for APIG deployment [GH-13889]
* provider/aws: Expose invoke ARN from Lambda function (for API Gateway) [GH-13890]
* provider/aws: Add tagging support to the 'aws_lambda_function' resource [GH-13873]
* provider/aws: Validate WAF metric names [GH-13885]
* provider/aws: Allow AWS Subnet to change IPv6 CIDR Block without ForceNew [GH-13909]
* provider/azurerm: VM Scale Sets - import support [GH-13464]
* provider/azurerm: Allow Azure China region support [GH-13767]
* provider/digitalocean: Export droplet prices [GH-13720]
* provider/fastly: Add support for GCS logging [GH-13553]
* provider/google: `google_compute_address` and `google_compute_global_address` are now importable [GH-13270]
* provider/google: `google_compute_network` is now importable [GH-13834]
* provider/vault: `vault_generic_secret` resource can now optionally detect drift if it has appropriate access [GH-11776]
BUG FIXES:
* core: Add the close provider/provisioner transformers back [GH-13102]
* core: Fix a crash condition by improving the flatmap.Expand() logic [GH-13541]
* provider/alicloud: Fix create PrePaid instance [GH-13662]
* provider/alicloud: Fix allocate public ip error [GH-13268]
* provider/alicloud: alicloud_security_group_rule: check pointer before using it [GH-13731]
* provider/alicloud: alicloud_instance: fix bug where ecs `internet_max_bandwidth_out` could not be set to zero [GH-13731]
* provider/aws: Allow force-destroying `aws_route53_zone` which has trailing dot [GH-12421]
* provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes [GH-13699]
* provider/aws: Changing aws_opsworks_instance should ForceNew [GH-13839]
* provider/aws: Fix DB Parameter Group Name [GH-13279]
* provider/aws: Fix issue importing some Security Groups and Rules based on rule structure [GH-13630]
* provider/aws: Fix issue for cross account IAM role with `aws_lambda_permission` [GH-13865]
* provider/aws: Fix WAF IPSet descriptors removal on update [GH-13766]
* provider/aws: Increase default number of retries from 11 to 25 [GH-13673]
* provider/aws: Remove aws_vpc_dhcp_options if not found [GH-13610]
* provider/aws: Remove aws_network_acl_rule if not found [GH-13608]
* provider/aws: Use mutex & retry for WAF change operations [GH-13656]
* provider/aws: Adding support for ipv6 to aws_subnets needs migration [GH-13876]
* provider/azurerm: azurerm_redis_cache resource missing hostname [GH-13650]
* provider/azurerm: Locking around Network Security Group / Subnets [GH-13637]
* provider/azurerm: Locking route table on subnet create/delete [GH-13791]
* provider/azurerm: VMs - fixes a bug where ssh_keys could contain a null entry [GH-13755]
* provider/azurerm: fixing a bug refreshing the `azurerm_redis_cache` [GH-13899]
* provider/fastly: Fix issue with using 0 for `default_ttl` [GH-13648]
* provider/fastly: Add ability to associate a healthcheck to a backend [GH-13539]
* provider/google: Stop setting the id when project creation fails [GH-13644]
* provider/google: Make ports in resource_compute_forwarding_rule ForceNew [GH-13833]
* provider/logentries: Refresh from state when resources not found [GH-13810]
* provider/newrelic: newrelic_alert_condition - `condition_scope` must be `application` or `instance` [GH-12972]
* provider/opc: fixed issue with unqualifying nats [GH-13826]
* provider/opc: Fix instance label if unset [GH-13846]
* provider/openstack: Fix updating Ports [GH-13604]
* provider/rabbitmq: Allow users without tags [GH-13798]
## 0.9.3 (April 12, 2017)
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/aws: Fix a critical bug in `aws_emr_cluster` in order to preserve the ordering
of any arguments in `bootstrap_action`. Terraform will now enforce the ordering
from the configuration. As a result, `aws_emr_cluster` resources may need to be
recreated, as there is no API to update them in-place [GH-13580]
FEATURES:
* **New Resource:** `aws_api_gateway_method_settings` [GH-13542]
* **New Resource:** `aws_api_gateway_stage` [GH-13540]
* **New Resource:** `aws_iam_openid_connect_provider` [GH-13456]
* **New Resource:** `aws_lightsail_static_ip` [GH-13175]
* **New Resource:** `aws_lightsail_static_ip_attachment` [GH-13207]
* **New Resource:** `aws_ses_domain_identity` [GH-13098]
* **New Resource:** `azurerm_managed_disk` [GH-12455]
* **New Resource:** `kubernetes_persistent_volume` [GH-13277]
* **New Resource:** `kubernetes_persistent_volume_claim` [GH-13527]
* **New Resource:** `kubernetes_secret` [GH-12960]
* **New Data Source:** `aws_iam_role` [GH-13213]
@ -16,6 +119,7 @@ IMPROVEMENTS:
* core: add `-lock-timeout` option, which will block and retry locks for the given duration [GH-13262]
* core: new `chomp` interpolation function which returns the given string with any trailing newline characters removed [GH-13419]
* backend/remote-state: Add support for assume role extensions to s3 backend [GH-13236]
* backend/remote-state: Filter extra entries from s3 environment listings [GH-13596]
* config: New interpolation functions `basename` and `dirname`, for file path manipulation [GH-13080]
* helper/resource: Allow unknown "pending" states [GH-13099]
* command/hook_ui: Increase max length of state IDs from 20 to 80 [GH-13317]
@ -40,11 +144,13 @@ IMPROVEMENTS:
* provider/aws: Add support for evaluate_low_sample_count_percentiles to cloudwatch_metric_alarm [GH-13371]
* provider/aws: Add `name_prefix` to `aws_alb_target_group` [GH-13442]
* provider/aws: Add support for EMR clusters to aws_appautoscaling_target [GH-13368]
* provider/aws: Add import capabilities to codecommit_repository [GH-13577]
* provider/bitbucket: Improved error handling [GH-13390]
* provider/cloudstack: Do not force a new resource when updating `cloudstack_loadbalancer_rule` members [GH-11786]
* provider/fastly: Add support for Sumologic logging [GH-12541]
* provider/github: Handle the case when issue labels already exist [GH-13182]
* provider/google: Mark `google_container_cluster`'s `client_key` & `password` inside `master_auth` as sensitive [GH-13148]
* provider/google: Add node_pool field in resource_container_cluster [GH-13402]
* provider/kubernetes: Allow defining custom config context [GH-12958]
* provider/openstack: Add support for 'value_specs' options to `openstack_compute_servergroup_v2` [GH-13380]
* provider/statuscake: Add support for StatusCake TriggerRate field [GH-13340]
@ -84,19 +190,23 @@ BUG FIXES:
* provider/aws: Update ElasticTranscoderPreset to have default for MaxFrameRate [GH-13422]
* provider/aws: Fix aws_ami_launch_permission refresh when AMI disappears [GH-13469]
* provider/aws: Add support for updating SSM documents [GH-13491]
* provider/aws: Fix panic on nil route configs [GH-13548]
* provider/azurerm: Network Security Group - ignoring protocol casing at Import time [GH-13153]
* provider/azurerm: Fix crash when importing Local Network Gateways [GH-13261]
* provider/azurerm: Defaulting the value of `duplicate_detection_history_time_window` for `azurerm_servicebus_topic` [GH-13223]
* provider/azurerm: Event Hubs making the Location field idempotent [GH-13570]
* provider/bitbucket: Fixed issue where provider would fail with an "EOF" error on some operations [GH-13390]
* provider/dnsimple: Handle 404 on DNSimple records [GH-13131]
* provider/kubernetes: Use PATCH to update namespace [GH-13114]
* provider/ns1: No splitting answer on SPF records. [GH-13260]
* provider/openstack: Refresh volume_attachment from state if NotFound [GH-13342]
* provider/openstack: Add SOFT_DELETED to delete status [GH-13444]
* provider/profitbricks: Changed output type of ips variable of ip_block ProfitBricks resource [GH-13290]
* provider/template: Fix panic in cloudinit config [GH-13581]
## 0.9.2 (March 28, 2017)
BACKWARDS IMCOMPATIBILITIES / NOTES:
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/openstack: Port Fixed IPs are able to be read again using the original numerical notation. However, Fixed IP configurations which are obtaining addresses via DHCP must now use the `all_fixed_ips` attribute to reference the returned IP address.
* Environment names must be safe to use as a URL path segment without escaping, and is enforced by the CLI.
@ -177,7 +287,7 @@ BUG FIXES:
## 0.9.1 (March 17, 2017)
BACKWARDS IMCOMPATIBILITIES / NOTES:
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/pagerduty: the deprecated `name_regex` field has been removed from vendor data source ([#12396](https://github.com/hashicorp/terraform/issues/12396))

View File

@ -102,7 +102,9 @@ func (b *Local) opApply(
doneCh := make(chan struct{})
go func() {
defer close(doneCh)
applyState, applyErr = tfCtx.Apply()
_, applyErr = tfCtx.Apply()
// we always want the state, even if apply failed
applyState = tfCtx.State()
/*
// Record any shadow errors for later
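The hunk above changes `opApply` so that the (possibly partial) state is always captured from the context, even when `Apply` returns an error. A self-contained sketch of that pattern with made-up types (the names below are illustrative, not the real backend API):

```go
package main

import (
	"errors"
	"fmt"
)

// fakeContext stands in for the real Terraform context: Apply may fail
// partway through, but State still returns whatever was built so far.
type fakeContext struct{ state string }

func (c *fakeContext) Apply() (string, error) {
	c.state = "3 resources created, 1 failed"
	return "", errors.New("apply failed")
}

func (c *fakeContext) State() string { return c.state }

func main() {
	ctx := &fakeContext{}

	_, applyErr := ctx.Apply()
	// we always want the state, even if apply failed,
	// so it can still be persisted for the user afterwards
	applyState := ctx.State()

	fmt.Printf("state: %q, err: %v\n", applyState, applyErr)
}
```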

View File

@ -1,6 +1,7 @@
package s3
import (
"errors"
"fmt"
"sort"
"strings"
@ -30,29 +31,34 @@ func (b *Backend) States() ([]string, error) {
return nil, err
}
var envs []string
envs := []string{backend.DefaultStateName}
for _, obj := range resp.Contents {
env := keyEnv(*obj.Key)
env := b.keyEnv(*obj.Key)
if env != "" {
envs = append(envs, env)
}
}
sort.Strings(envs)
envs = append([]string{backend.DefaultStateName}, envs...)
sort.Strings(envs[1:])
return envs, nil
}
// extract the env name from the S3 key
func keyEnv(key string) string {
parts := strings.Split(key, "/")
func (b *Backend) keyEnv(key string) string {
// we have 3 parts, the prefix, the env name, and the key name
parts := strings.SplitN(key, "/", 3)
if len(parts) < 3 {
// no env here
return ""
}
// shouldn't happen since we listed by prefix
if parts[0] != keyEnvPrefix {
// not our key, so ignore
return ""
}
// not our key, so don't include it in our listing
if parts[2] != b.keyName {
return ""
}
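To make the per-environment key layout concrete, here is a standalone sketch of the parsing above. It assumes the environment prefix constant is `env:` and borrows the `test/state/tfstate` key name from the tests later in this commit; both values are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// keyEnv mirrors the logic above: an object key only names an environment
// when it looks like "<prefix>/<env>/<configured key name>".
func keyEnv(key, prefix, keyName string) string {
	parts := strings.SplitN(key, "/", 3)
	if len(parts) < 3 {
		return "" // no env component at all
	}
	if parts[0] != prefix {
		return "" // not under the env prefix
	}
	if parts[2] != keyName {
		return "" // some other object, not our state file
	}
	return parts[1]
}

func main() {
	const prefix = "env:"                // assumed value of keyEnvPrefix
	const keyName = "test/state/tfstate" // key name used in the backend tests

	fmt.Println(keyEnv("env:/staging/test/state/tfstate", prefix, keyName)) // "staging"
	fmt.Println(keyEnv("env:/staging/other/key", prefix, keyName))          // "" (wrong key name)
	fmt.Println(keyEnv("test/state/tfstate", prefix, keyName))              // "" (default state object)
}
```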
@ -78,6 +84,10 @@ func (b *Backend) DeleteState(name string) error {
}
func (b *Backend) State(name string) (state.State, error) {
if name == "" {
return nil, errors.New("missing state name")
}
client := &RemoteClient{
s3Client: b.s3Client,
dynClient: b.dynClient,

View File

@ -3,6 +3,7 @@ package s3
import (
"fmt"
"os"
"reflect"
"testing"
"time"
@ -10,6 +11,8 @@ import (
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/state/remote"
"github.com/hashicorp/terraform/terraform"
)
// verify that we are doing ACC tests or the S3 tests specifically
@ -84,7 +87,7 @@ func TestBackendLocked(t *testing.T) {
testACC(t)
bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix())
keyName := "testState"
keyName := "test/state"
b1 := backend.TestBackendConfig(t, New(), map[string]interface{}{
"bucket": bucketName,
@ -108,6 +111,133 @@ func TestBackendLocked(t *testing.T) {
backend.TestBackend(t, b1, b2)
}
// add some extra junk in S3 to try and confuse the env listing.
func TestBackendExtraPaths(t *testing.T) {
testACC(t)
bucketName := fmt.Sprintf("terraform-remote-s3-test-%x", time.Now().Unix())
keyName := "test/state/tfstate"
b := backend.TestBackendConfig(t, New(), map[string]interface{}{
"bucket": bucketName,
"key": keyName,
"encrypt": true,
}).(*Backend)
createS3Bucket(t, b.s3Client, bucketName)
defer deleteS3Bucket(t, b.s3Client, bucketName)
// put multiple states in old env paths.
s1 := terraform.NewState()
s2 := terraform.NewState()
// RemoteClient to Put things in various paths
client := &RemoteClient{
s3Client: b.s3Client,
dynClient: b.dynClient,
bucketName: b.bucketName,
path: b.path("s1"),
serverSideEncryption: b.serverSideEncryption,
acl: b.acl,
kmsKeyID: b.kmsKeyID,
lockTable: b.lockTable,
}
stateMgr := &remote.State{Client: client}
stateMgr.WriteState(s1)
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
client.path = b.path("s2")
stateMgr.WriteState(s2)
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
t.Fatal(err)
}
// put a state in an env directory name
client.path = keyEnvPrefix + "/error"
stateMgr.WriteState(terraform.NewState())
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
t.Fatal(err)
}
// add state with the wrong key for an existing env
client.path = keyEnvPrefix + "/s2/notTestState"
stateMgr.WriteState(terraform.NewState())
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
t.Fatal(err)
}
// remove the state with extra subkey
if err := b.DeleteState("s2"); err != nil {
t.Fatal(err)
}
if err := checkStateList(b, []string{"default", "s1"}); err != nil {
t.Fatal(err)
}
// fetch that state again, which should produce a new lineage
s2Mgr, err := b.State("s2")
if err != nil {
t.Fatal(err)
}
if err := s2Mgr.RefreshState(); err != nil {
t.Fatal(err)
}
if s2Mgr.State().Lineage == s2.Lineage {
t.Fatal("state s2 was not deleted")
}
s2 = s2Mgr.State()
// add a state with a key that matches an existing environment dir name
client.path = keyEnvPrefix + "/s2/"
stateMgr.WriteState(terraform.NewState())
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
// make sure s2 is OK
s2Mgr, err = b.State("s2")
if err != nil {
t.Fatal(err)
}
if err := s2Mgr.RefreshState(); err != nil {
t.Fatal(err)
}
if s2Mgr.State().Lineage != s2.Lineage {
t.Fatal("we got the wrong state for s2")
}
if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
t.Fatal(err)
}
}
func checkStateList(b backend.Backend, expected []string) error {
states, err := b.States()
if err != nil {
return err
}
if !reflect.DeepEqual(states, expected) {
return fmt.Errorf("incorrect states listed: %q", states)
}
return nil
}
func createS3Bucket(t *testing.T, s3Client *s3.S3, bucketName string) {
createBucketReq := &s3.CreateBucketInput{
Bucket: &bucketName,

View File

@ -0,0 +1,12 @@
package main
import (
"github.com/hashicorp/terraform/builtin/providers/localfile"
"github.com/hashicorp/terraform/plugin"
)
func main() {
plugin.Serve(&plugin.ServeOpts{
ProviderFunc: localfile.Provider,
})
}

View File

@ -0,0 +1,12 @@
package main
import (
"github.com/hashicorp/terraform/builtin/providers/opc"
"github.com/hashicorp/terraform/plugin"
)
func main() {
plugin.Serve(&plugin.ServeOpts{
ProviderFunc: opc.Provider,
})
}

View File

@ -17,38 +17,39 @@ const (
const defaultTimeout = 120
// timeout for products with long-running operations, e.g. RDS
const defaultLongTimeout = 800
const defaultLongTimeout = 1000
func getRegion(d *schema.ResourceData, meta interface{}) common.Region {
return meta.(*AliyunClient).Region
}
func notFoundError(err error) bool {
if e, ok := err.(*common.Error); ok && (e.StatusCode == 404 || e.ErrorResponse.Message == "Not found") {
if e, ok := err.(*common.Error); ok &&
(e.StatusCode == 404 || e.ErrorResponse.Message == "Not found" || e.Code == InstanceNotfound) {
return true
}
return false
}
// Protocal represents network protocal
type Protocal string
// Protocol represents network protocol
type Protocol string
// Constants of protocal definition
// Constants of protocol definition
const (
Http = Protocal("http")
Https = Protocal("https")
Tcp = Protocal("tcp")
Udp = Protocal("udp")
Http = Protocol("http")
Https = Protocol("https")
Tcp = Protocol("tcp")
Udp = Protocol("udp")
)
// ValidProtocals network protocal list
var ValidProtocals = []Protocal{Http, Https, Tcp, Udp}
// ValidProtocols network protocol list
var ValidProtocols = []Protocol{Http, Https, Tcp, Udp}
// simple array value check method, support string type only
func isProtocalValid(value string) bool {
func isProtocolValid(value string) bool {
res := false
for _, v := range ValidProtocals {
for _, v := range ValidProtocols {
if string(v) == value {
res = true
}
@ -77,4 +78,16 @@ const DB_DEFAULT_CONNECT_PORT = "3306"
const COMMA_SEPARATED = ","
const COLON_SEPARATED = ":"
const LOCAL_HOST_IP = "127.0.0.1"
// Takes the result of flatmap.Expand for an array of strings
// and returns a []string
func expandStringList(configured []interface{}) []string {
vs := make([]string, 0, len(configured))
for _, v := range configured {
vs = append(vs, v.(string))
}
return vs
}
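A quick usage sketch for the helper added above; the raw slice mimics what `d.Get` returns for a `schema.TypeList` of strings, and the field name is illustrative:

```go
package main

import "fmt"

// Copy of the helper above: flattens a []interface{} of strings
// (as produced for a schema.TypeList of strings) into a []string.
func expandStringList(configured []interface{}) []string {
	vs := make([]string, 0, len(configured))
	for _, v := range configured {
		vs = append(vs, v.(string))
	}
	return vs
}

func main() {
	// e.g. raw := d.Get("instance_ids").([]interface{})
	raw := []interface{}{"i-abc123", "i-def456"}
	fmt.Println(expandStringList(raw)) // [i-abc123 i-def456]
}
```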

View File

@ -5,6 +5,7 @@ import (
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/denverdino/aliyungo/ess"
"github.com/denverdino/aliyungo/rds"
"github.com/denverdino/aliyungo/slb"
)
@ -20,6 +21,7 @@ type Config struct {
type AliyunClient struct {
Region common.Region
ecsconn *ecs.Client
essconn *ess.Client
rdsconn *rds.Client
// use new version
ecsNewconn *ecs.Client
@ -60,6 +62,11 @@ func (c *Config) Client() (*AliyunClient, error) {
return nil, err
}
essconn, err := c.essConn()
if err != nil {
return nil, err
}
return &AliyunClient{
Region: c.Region,
ecsconn: ecsconn,
@ -67,6 +74,7 @@ func (c *Config) Client() (*AliyunClient, error) {
vpcconn: vpcconn,
slbconn: slbconn,
rdsconn: rdsconn,
essconn: essconn,
}, nil
}
@ -123,3 +131,8 @@ func (c *Config) vpcConn() (*ecs.Client, error) {
return client, nil
}
func (c *Config) essConn() (*ess.Client, error) {
client := ess.NewESSClient(c.AccessKey, c.SecretKey, c.Region)
client.SetBusinessInfo(BusinessInfoKey)
return client, nil
}

View File

@ -1,5 +1,7 @@
package alicloud
import "github.com/denverdino/aliyungo/common"
const (
// common
Notfound = "Not found"
@ -25,7 +27,23 @@ const (
//Nat gateway
NatGatewayInvalidRegionId = "Invalid.RegionId"
DependencyViolationBandwidthPackages = "DependencyViolation.BandwidthPackages"
NotFindSnatEntryBySnatId = "NotFindSnatEntryBySnatId"
NotFindForwardEntryByForwardId = "NotFindForwardEntryByForwardId"
// vswitch
VswitcInvalidRegionId = "InvalidRegionId.NotFound"
// ess
InvalidScalingGroupIdNotFound = "InvalidScalingGroupId.NotFound"
IncorrectScalingConfigurationLifecycleState = "IncorrectScalingConfigurationLifecycleState"
)
func GetNotFoundErrorFromString(str string) error {
return &common.Error{
ErrorResponse: common.ErrorResponse{
Code: InstanceNotfound,
Message: str,
},
StatusCode: -1,
}
}

View File

@ -38,18 +38,24 @@ func Provider() terraform.ResourceProvider {
"alicloud_instance_types": dataSourceAlicloudInstanceTypes(),
},
ResourcesMap: map[string]*schema.Resource{
"alicloud_instance": resourceAliyunInstance(),
"alicloud_disk": resourceAliyunDisk(),
"alicloud_disk_attachment": resourceAliyunDiskAttachment(),
"alicloud_security_group": resourceAliyunSecurityGroup(),
"alicloud_security_group_rule": resourceAliyunSecurityGroupRule(),
"alicloud_db_instance": resourceAlicloudDBInstance(),
"alicloud_vpc": resourceAliyunVpc(),
"alicloud_nat_gateway": resourceAliyunNatGateway(),
"alicloud_instance": resourceAliyunInstance(),
"alicloud_disk": resourceAliyunDisk(),
"alicloud_disk_attachment": resourceAliyunDiskAttachment(),
"alicloud_security_group": resourceAliyunSecurityGroup(),
"alicloud_security_group_rule": resourceAliyunSecurityGroupRule(),
"alicloud_db_instance": resourceAlicloudDBInstance(),
"alicloud_ess_scaling_group": resourceAlicloudEssScalingGroup(),
"alicloud_ess_scaling_configuration": resourceAlicloudEssScalingConfiguration(),
"alicloud_ess_scaling_rule": resourceAlicloudEssScalingRule(),
"alicloud_ess_schedule": resourceAlicloudEssSchedule(),
"alicloud_vpc": resourceAliyunVpc(),
"alicloud_nat_gateway": resourceAliyunNatGateway(),
// both subnet and vswitch exist to stay compatible with older versions and with AWS conventions
"alicloud_subnet": resourceAliyunSubnet(),
"alicloud_vswitch": resourceAliyunSubnet(),
"alicloud_route_entry": resourceAliyunRouteEntry(),
"alicloud_snat_entry": resourceAliyunSnatEntry(),
"alicloud_forward_entry": resourceAliyunForwardEntry(),
"alicloud_eip": resourceAliyunEip(),
"alicloud_eip_association": resourceAliyunEipAssociation(),
"alicloud_slb": resourceAliyunSlb(),

View File

@ -218,7 +218,7 @@ func resourceAlicloudDBInstanceCreate(d *schema.ResourceData, meta interface{})
// wait instance status change from Creating to running
if err := conn.WaitForInstance(d.Id(), rds.Running, defaultLongTimeout); err != nil {
log.Printf("[DEBUG] WaitForInstance %s got error: %#v", rds.Running, err)
return fmt.Errorf("WaitForInstance %s got error: %#v", rds.Running, err)
}
if err := modifySecurityIps(d.Id(), d.Get("security_ips"), meta); err != nil {
@ -386,6 +386,11 @@ func resourceAlicloudDBInstanceRead(d *schema.ResourceData, meta interface{}) er
if err != nil {
return err
}
if resp.Databases.Database == nil {
d.SetId("")
return nil
}
d.Set("db_mappings", flattenDatabaseMappings(resp.Databases.Database))
argn := rds.DescribeDBInstanceNetInfoArgs{

View File

@ -535,7 +535,7 @@ func testAccCheckDBInstanceDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_db_instance.foo" {
if rs.Type != "alicloud_db_instance" {
continue
}

View File

@ -78,7 +78,14 @@ func resourceAliyunEipRead(d *schema.ResourceData, meta interface{}) error {
d.SetId("")
return nil
}
return err
return fmt.Errorf("Error Describe Eip Attribute: %#v", err)
}
if eip.InstanceId != "" {
d.Set("instance", eip.InstanceId)
} else {
d.Set("instance", "")
return nil
}
bandwidth, _ := strconv.Atoi(eip.Bandwidth)
@ -87,12 +94,6 @@ func resourceAliyunEipRead(d *schema.ResourceData, meta interface{}) error {
d.Set("ip_address", eip.IpAddress)
d.Set("status", eip.Status)
if eip.InstanceId != "" {
d.Set("instance", eip.InstanceId)
} else {
d.Set("instance", "")
}
return nil
}

View File

@ -66,7 +66,7 @@ func resourceAliyunEipAssociationRead(d *schema.ResourceData, meta interface{})
d.SetId("")
return nil
}
return err
return fmt.Errorf("Error Describe Eip Attribute: %#v", err)
}
if eip.InstanceId != instanceId {

View File

@ -0,0 +1,320 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"strings"
"time"
)
func resourceAlicloudEssScalingConfiguration() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunEssScalingConfigurationCreate,
Read: resourceAliyunEssScalingConfigurationRead,
Update: resourceAliyunEssScalingConfigurationUpdate,
Delete: resourceAliyunEssScalingConfigurationDelete,
Schema: map[string]*schema.Schema{
"active": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Computed: true,
},
"enable": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
},
"scaling_group_id": &schema.Schema{
Type: schema.TypeString,
ForceNew: true,
Required: true,
},
"image_id": &schema.Schema{
Type: schema.TypeString,
ForceNew: true,
Required: true,
},
"instance_type": &schema.Schema{
Type: schema.TypeString,
ForceNew: true,
Required: true,
},
"io_optimized": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateIoOptimized,
},
"security_group_id": &schema.Schema{
Type: schema.TypeString,
ForceNew: true,
Required: true,
},
"scaling_configuration_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"internet_charge_type": &schema.Schema{
Type: schema.TypeString,
ForceNew: true,
Optional: true,
Computed: true,
ValidateFunc: validateInternetChargeType,
},
"internet_max_bandwidth_in": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
ForceNew: true,
Computed: true,
},
"internet_max_bandwidth_out": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
ForceNew: true,
ValidateFunc: validateInternetMaxBandWidthOut,
},
"system_disk_category": &schema.Schema{
Type: schema.TypeString,
Optional: true,
ForceNew: true,
Computed: true,
ValidateFunc: validateAllowedStringValue([]string{
string(ecs.DiskCategoryCloud),
string(ecs.DiskCategoryCloudSSD),
string(ecs.DiskCategoryCloudEfficiency),
string(ecs.DiskCategoryEphemeralSSD),
}),
},
"data_disk": &schema.Schema{
Optional: true,
ForceNew: true,
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"size": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
},
"category": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"snapshot_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"device": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
},
},
},
"instance_ids": &schema.Schema{
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
MaxItems: 20,
},
},
}
}
func resourceAliyunEssScalingConfigurationCreate(d *schema.ResourceData, meta interface{}) error {
args, err := buildAlicloudEssScalingConfigurationArgs(d, meta)
if err != nil {
return err
}
essconn := meta.(*AliyunClient).essconn
scaling, err := essconn.CreateScalingConfiguration(args)
if err != nil {
return err
}
d.SetId(d.Get("scaling_group_id").(string) + COLON_SEPARATED + scaling.ScalingConfigurationId)
return resourceAliyunEssScalingConfigurationUpdate(d, meta)
}
func resourceAliyunEssScalingConfigurationUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
if d.HasChange("active") {
active := d.Get("active").(bool)
if !active {
return fmt.Errorf("Please active the scaling configuration directly.")
}
ids := strings.Split(d.Id(), COLON_SEPARATED)
err := client.ActiveScalingConfigurationById(ids[0], ids[1])
if err != nil {
return fmt.Errorf("Active scaling configuration %s err: %#v", ids[1], err)
}
}
if err := enableEssScalingConfiguration(d, meta); err != nil {
return err
}
return resourceAliyunEssScalingConfigurationRead(d, meta)
}
func enableEssScalingConfiguration(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
ids := strings.Split(d.Id(), COLON_SEPARATED)
if d.HasChange("enable") {
d.SetPartial("enable")
enable := d.Get("enable").(bool)
if !enable {
err := client.DisableScalingConfigurationById(ids[0])
if err != nil {
return fmt.Errorf("Disable scaling group %s err: %#v", ids[0], err)
}
}
instance_ids := []string{}
if d.HasChange("instance_ids") {
d.SetPartial("instance_ids")
instances := d.Get("instance_ids").([]interface{})
instance_ids = expandStringList(instances)
}
err := client.EnableScalingConfigurationById(ids[0], ids[1], instance_ids)
if err != nil {
return fmt.Errorf("Enable scaling configuration %s err: %#v", ids[1], err)
}
}
return nil
}
func resourceAliyunEssScalingConfigurationRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
ids := strings.Split(d.Id(), COLON_SEPARATED)
c, err := client.DescribeScalingConfigurationById(ids[0], ids[1])
if err != nil {
if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
d.SetId("")
return nil
}
return fmt.Errorf("Error Describe ESS scaling configuration Attribute: %#v", err)
}
d.Set("scaling_group_id", c.ScalingGroupId)
d.Set("active", c.LifecycleState == ess.Active)
d.Set("image_id", c.ImageId)
d.Set("instance_type", c.InstanceType)
d.Set("io_optimized", c.IoOptimized)
d.Set("security_group_id", c.SecurityGroupId)
d.Set("scaling_configuration_name", c.ScalingConfigurationName)
d.Set("internet_charge_type", c.InternetChargeType)
d.Set("internet_max_bandwidth_in", c.InternetMaxBandwidthIn)
d.Set("internet_max_bandwidth_out", c.InternetMaxBandwidthOut)
d.Set("system_disk_category", c.SystemDiskCategory)
d.Set("data_disk", flattenDataDiskMappings(c.DataDisks.DataDisk))
return nil
}
func resourceAliyunEssScalingConfigurationDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
return resource.Retry(5*time.Minute, func() *resource.RetryError {
ids := strings.Split(d.Id(), COLON_SEPARATED)
err := client.DeleteScalingConfigurationById(ids[0], ids[1])
if err != nil {
e, _ := err.(*common.Error)
if e.ErrorResponse.Code == IncorrectScalingConfigurationLifecycleState {
return resource.NonRetryableError(
fmt.Errorf("Scaling configuration is active - please active another one and trying again."))
}
if e.ErrorResponse.Code != InvalidScalingGroupIdNotFound {
return resource.RetryableError(
fmt.Errorf("Scaling configuration in use - trying again while it is deleted."))
}
}
_, err = client.DescribeScalingConfigurationById(ids[0], ids[1])
if err != nil {
if notFoundError(err) {
return nil
}
return resource.NonRetryableError(err)
}
return resource.RetryableError(
fmt.Errorf("Scaling configuration in use - trying again while it is deleted."))
})
}
func buildAlicloudEssScalingConfigurationArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingConfigurationArgs, error) {
args := &ess.CreateScalingConfigurationArgs{
ScalingGroupId: d.Get("scaling_group_id").(string),
ImageId: d.Get("image_id").(string),
InstanceType: d.Get("instance_type").(string),
IoOptimized: ecs.IoOptimized(d.Get("io_optimized").(string)),
SecurityGroupId: d.Get("security_group_id").(string),
}
if v := d.Get("scaling_configuration_name").(string); v != "" {
args.ScalingConfigurationName = v
}
if v := d.Get("internet_charge_type").(string); v != "" {
args.InternetChargeType = common.InternetChargeType(v)
}
if v := d.Get("internet_max_bandwidth_in").(int); v != 0 {
args.InternetMaxBandwidthIn = v
}
if v := d.Get("internet_max_bandwidth_out").(int); v != 0 {
args.InternetMaxBandwidthOut = v
}
if v := d.Get("system_disk_category").(string); v != "" {
args.SystemDisk_Category = common.UnderlineString(v)
}
dds, ok := d.GetOk("data_disk")
if ok {
disks := dds.([]interface{})
diskTypes := []ess.DataDiskType{}
for _, e := range disks {
pack := e.(map[string]interface{})
disk := ess.DataDiskType{
Size: pack["size"].(int),
Category: pack["category"].(string),
SnapshotId: pack["snapshot_id"].(string),
Device: pack["device"].(string),
}
if v := pack["size"].(int); v != 0 {
disk.Size = v
}
if v := pack["category"].(string); v != "" {
disk.Category = v
}
if v := pack["snapshot_id"].(string); v != "" {
disk.SnapshotId = v
}
if v := pack["device"].(string); v != "" {
disk.Device = v
}
diskTypes = append(diskTypes, disk)
}
args.DataDisk = diskTypes
}
return args, nil
}

View File

@ -0,0 +1,495 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"log"
"regexp"
"strings"
"testing"
)
func TestAccAlicloudEssScalingConfiguration_basic(t *testing.T) {
var sc ess.ScalingConfigurationItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_configuration.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingConfigurationConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
},
})
}
func TestAccAlicloudEssScalingConfiguration_multiConfig(t *testing.T) {
var sc ess.ScalingConfigurationItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_configuration.bar",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingConfiguration_multiConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.bar", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"active",
"false"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
},
})
}
func SkipTestAccAlicloudEssScalingConfiguration_active(t *testing.T) {
var sc ess.ScalingConfigurationItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_configuration.bar",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingConfiguration_active,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.bar", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"active",
"true"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
resource.TestStep{
Config: testAccEssScalingConfiguration_inActive,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.bar", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"active",
"false"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.bar",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
},
})
}
func SkipTestAccAlicloudEssScalingConfiguration_enable(t *testing.T) {
var sc ess.ScalingConfigurationItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_configuration.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingConfigurationDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingConfiguration_enable,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"enable",
"true"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
resource.TestStep{
Config: testAccEssScalingConfiguration_disable,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingConfigurationExists(
"alicloud_ess_scaling_configuration.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"enable",
"false"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"instance_type",
"ecs.s2.large"),
resource.TestMatchResourceAttr(
"alicloud_ess_scaling_configuration.foo",
"image_id",
regexp.MustCompile("^centos_6")),
),
},
},
})
}
func testAccCheckEssScalingConfigurationExists(n string, d *ess.ScalingConfigurationItemType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ESS Scaling Configuration ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
attr, err := client.DescribeScalingConfigurationById(ids[0], ids[1])
log.Printf("[DEBUG] check scaling configuration %s attribute %#v", rs.Primary.ID, attr)
if err != nil {
return err
}
if attr == nil {
return fmt.Errorf("Scaling Configuration not found")
}
*d = *attr
return nil
}
}
func testAccCheckEssScalingConfigurationDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_ess_scaling_configuration" {
continue
}
ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
ins, err := client.DescribeScalingConfigurationById(ids[0], ids[1])
if ins != nil {
return fmt.Errorf("Error ESS scaling configuration still exist")
}
// Verify the error is what we want
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if e.ErrorResponse.Code == InstanceNotfound {
continue
}
return err
}
}
return nil
}
const testAccEssScalingConfigurationConfig = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
const testAccEssScalingConfiguration_multiConfig = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
resource "alicloud_ess_scaling_configuration" "bar" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
const testAccEssScalingConfiguration_active = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
active = true
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
const testAccEssScalingConfiguration_inActive = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
active = false
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
const testAccEssScalingConfiguration_enable = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
enable = true
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`
const testAccEssScalingConfiguration_disable = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
enable = false
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

View File

@ -0,0 +1,209 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"strings"
"time"
)
func resourceAlicloudEssScalingGroup() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunEssScalingGroupCreate,
Read: resourceAliyunEssScalingGroupRead,
Update: resourceAliyunEssScalingGroupUpdate,
Delete: resourceAliyunEssScalingGroupDelete,
Schema: map[string]*schema.Schema{
"min_size": &schema.Schema{
Type: schema.TypeInt,
Required: true,
ValidateFunc: validateIntegerInRange(0, 100),
},
"max_size": &schema.Schema{
Type: schema.TypeInt,
Required: true,
ValidateFunc: validateIntegerInRange(0, 100),
},
"scaling_group_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"default_cooldown": &schema.Schema{
Type: schema.TypeInt,
Default: 300,
Optional: true,
ValidateFunc: validateIntegerInRange(0, 86400),
},
"vswitch_id": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"removal_policies": &schema.Schema{
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
MaxItems: 2,
},
"db_instance_ids": &schema.Schema{
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
MaxItems: 3,
},
"loadbalancer_ids": &schema.Schema{
Type: schema.TypeList,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
},
},
}
}
func resourceAliyunEssScalingGroupCreate(d *schema.ResourceData, meta interface{}) error {
args, err := buildAlicloudEssScalingGroupArgs(d, meta)
if err != nil {
return err
}
essconn := meta.(*AliyunClient).essconn
scaling, err := essconn.CreateScalingGroup(args)
if err != nil {
return err
}
d.SetId(scaling.ScalingGroupId)
return resourceAliyunEssScalingGroupUpdate(d, meta)
}
func resourceAliyunEssScalingGroupRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
scaling, err := client.DescribeScalingGroupById(d.Id())
if err != nil {
if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
d.SetId("")
return nil
}
return fmt.Errorf("Error Describe ESS scaling group Attribute: %#v", err)
}
d.Set("min_size", scaling.MinSize)
d.Set("max_size", scaling.MaxSize)
d.Set("scaling_group_name", scaling.ScalingGroupName)
d.Set("default_cooldown", scaling.DefaultCooldown)
d.Set("removal_policies", scaling.RemovalPolicies)
d.Set("db_instance_ids", scaling.DBInstanceIds)
d.Set("loadbalancer_ids", scaling.LoadBalancerId)
return nil
}
func resourceAliyunEssScalingGroupUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).essconn
args := &ess.ModifyScalingGroupArgs{
ScalingGroupId: d.Id(),
}
if d.HasChange("scaling_group_name") {
args.ScalingGroupName = d.Get("scaling_group_name").(string)
}
if d.HasChange("min_size") {
args.MinSize = d.Get("min_size").(int)
}
if d.HasChange("max_size") {
args.MaxSize = d.Get("max_size").(int)
}
if d.HasChange("default_cooldown") {
args.DefaultCooldown = d.Get("default_cooldown").(int)
}
if d.HasChange("removal_policies") {
policyStrings := d.Get("removal_policies").([]interface{})
args.RemovalPolicy = expandStringList(policyStrings)
}
if _, err := conn.ModifyScalingGroup(args); err != nil {
return err
}
return resourceAliyunEssScalingGroupRead(d, meta)
}
func resourceAliyunEssScalingGroupDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
return resource.Retry(2*time.Minute, func() *resource.RetryError {
err := client.DeleteScalingGroupById(d.Id())
if err != nil {
e, _ := err.(*common.Error)
if e.ErrorResponse.Code != InvalidScalingGroupIdNotFound {
return resource.RetryableError(fmt.Errorf("Scaling group in use - trying again while it is deleted."))
}
}
_, err = client.DescribeScalingGroupById(d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
return resource.NonRetryableError(err)
}
return resource.RetryableError(fmt.Errorf("Scaling group in use - trying again while it is deleted."))
})
}
func buildAlicloudEssScalingGroupArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingGroupArgs, error) {
client := meta.(*AliyunClient)
args := &ess.CreateScalingGroupArgs{
RegionId: getRegion(d, meta),
MinSize: d.Get("min_size").(int),
MaxSize: d.Get("max_size").(int),
DefaultCooldown: d.Get("default_cooldown").(int),
}
if v := d.Get("scaling_group_name").(string); v != "" {
args.ScalingGroupName = v
}
if v := d.Get("vswitch_id").(string); v != "" {
args.VSwitchId = v
// get vpcId
vpcId, err := client.GetVpcIdByVSwitchId(v)
if err != nil {
return nil, fmt.Errorf("VswitchId %s is not valid of current region", v)
}
// fill vpcId by vswitchId
args.VpcId = vpcId
}
dbs, ok := d.GetOk("db_instance_ids")
if ok {
dbsStrings := dbs.([]interface{})
args.DBInstanceId = expandStringList(dbsStrings)
}
lbs, ok := d.GetOk("loadbalancer_ids")
if ok {
lbsStrings := lbs.([]interface{})
args.LoadBalancerId = strings.Join(expandStringList(lbsStrings), COMMA_SEPARATED)
}
return args, nil
}

View File

@ -0,0 +1,297 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"log"
"testing"
)
func TestAccAlicloudEssScalingGroup_basic(t *testing.T) {
var sg ess.ScalingGroupItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_group.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingGroupConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingGroupExists(
"alicloud_ess_scaling_group.foo", &sg),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"min_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"max_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"scaling_group_name",
"foo"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"removal_policies.#",
"2",
),
),
},
},
})
}
func TestAccAlicloudEssScalingGroup_update(t *testing.T) {
var sg ess.ScalingGroupItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_group.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingGroup,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingGroupExists(
"alicloud_ess_scaling_group.foo", &sg),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"min_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"max_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"scaling_group_name",
"foo"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"removal_policies.#",
"2",
),
),
},
resource.TestStep{
Config: testAccEssScalingGroup_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingGroupExists(
"alicloud_ess_scaling_group.foo", &sg),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"min_size",
"2"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"max_size",
"2"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"scaling_group_name",
"update"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"removal_policies.#",
"1",
),
),
},
},
})
}
func SkipTestAccAlicloudEssScalingGroup_vpc(t *testing.T) {
var sg ess.ScalingGroupItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_group.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingGroupDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingGroup_vpc,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingGroupExists(
"alicloud_ess_scaling_group.foo", &sg),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"min_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"max_size",
"1"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"scaling_group_name",
"foo"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_group.foo",
"removal_policies.#",
"2",
),
),
},
},
})
}
func testAccCheckEssScalingGroupExists(n string, d *ess.ScalingGroupItemType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ESS Scaling Group ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
attr, err := client.DescribeScalingGroupById(rs.Primary.ID)
log.Printf("[DEBUG] check scaling group %s attribute %#v", rs.Primary.ID, attr)
if err != nil {
return err
}
if attr == nil {
return fmt.Errorf("Scaling Group not found")
}
*d = *attr
return nil
}
}
func testAccCheckEssScalingGroupDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_ess_scaling_group" {
continue
}
ins, err := client.DescribeScalingGroupById(rs.Primary.ID)
if ins != nil {
return fmt.Errorf("Error ESS scaling group still exist")
}
// Verify the error is what we want
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if e.ErrorResponse.Code == InstanceNotfound {
continue
}
return err
}
}
return nil
}
const testAccEssScalingGroupConfig = `
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
`
const testAccEssScalingGroup = `
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
removal_policies = ["OldestInstance", "NewestInstance"]
}
`
const testAccEssScalingGroup_update = `
resource "alicloud_ess_scaling_group" "foo" {
min_size = 2
max_size = 2
scaling_group_name = "update"
removal_policies = ["OldestInstance"]
}
`
const testAccEssScalingGroup_vpc = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
data "alicloud_zones" "default" {
"available_disk_category"= "cloud_efficiency"
"available_resource_creation"= "VSwitch"
}
resource "alicloud_vpc" "foo" {
name = "tf_test_foo"
cidr_block = "172.16.0.0/12"
}
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
vpc_id = "${alicloud_vpc.foo.id}"
}
resource "alicloud_ess_scaling_group" "foo" {
min_size = 1
max_size = 1
scaling_group_name = "foo"
default_cooldown = 20
vswitch_id = "${alicloud_vswitch.foo.id}"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.foo.id}"
enable = true
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.n1.medium"
io_optimized = "optimized"
system_disk_category = "cloud_efficiency"
internet_charge_type = "PayByTraffic"
internet_max_bandwidth_out = 10
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
`

View File

@ -0,0 +1,168 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"strings"
"time"
)
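// resourceAlicloudEssScalingRule returns the schema and CRUD handlers for the alicloud_ess_scaling_rule resource.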
func resourceAlicloudEssScalingRule() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunEssScalingRuleCreate,
Read: resourceAliyunEssScalingRuleRead,
Update: resourceAliyunEssScalingRuleUpdate,
Delete: resourceAliyunEssScalingRuleDelete,
Schema: map[string]*schema.Schema{
"scaling_group_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"adjustment_type": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateAllowedStringValue([]string{string(ess.QuantityChangeInCapacity),
string(ess.PercentChangeInCapacity), string(ess.TotalCapacity)}),
},
"adjustment_value": &schema.Schema{
Type: schema.TypeInt,
Required: true,
},
"scaling_rule_name": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Optional: true,
},
"ari": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"cooldown": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validateIntegerInRange(0, 86400),
},
},
}
}
func resourceAliyunEssScalingRuleCreate(d *schema.ResourceData, meta interface{}) error {
args, err := buildAlicloudEssScalingRuleArgs(d, meta)
if err != nil {
return err
}
essconn := meta.(*AliyunClient).essconn
rule, err := essconn.CreateScalingRule(args)
if err != nil {
return err
}
d.SetId(d.Get("scaling_group_id").(string) + COLON_SEPARATED + rule.ScalingRuleId)
return resourceAliyunEssScalingRuleUpdate(d, meta)
}
func resourceAliyunEssScalingRuleRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
ids := strings.Split(d.Id(), COLON_SEPARATED)
rule, err := client.DescribeScalingRuleById(ids[0], ids[1])
if err != nil {
if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
d.SetId("")
return nil
}
return fmt.Errorf("Error Describe ESS scaling rule Attribute: %#v", err)
}
d.Set("scaling_group_id", rule.ScalingGroupId)
d.Set("ari", rule.ScalingRuleAri)
d.Set("adjustment_type", rule.AdjustmentType)
d.Set("adjustment_value", rule.AdjustmentValue)
d.Set("scaling_rule_name", rule.ScalingRuleName)
d.Set("cooldown", rule.Cooldown)
return nil
}
func resourceAliyunEssScalingRuleDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
ids := strings.Split(d.Id(), COLON_SEPARATED)
return resource.Retry(2*time.Minute, func() *resource.RetryError {
err := client.DeleteScalingRuleById(ids[1])
if err != nil {
return resource.RetryableError(fmt.Errorf("Scaling rule in use - trying again while it is deleted."))
}
_, err = client.DescribeScalingRuleById(ids[0], ids[1])
if err != nil {
if notFoundError(err) {
return nil
}
return resource.NonRetryableError(err)
}
return resource.RetryableError(fmt.Errorf("Scaling rule in use - trying again while it is deleted."))
})
}
func resourceAliyunEssScalingRuleUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).essconn
ids := strings.Split(d.Id(), COLON_SEPARATED)
args := &ess.ModifyScalingRuleArgs{
ScalingRuleId: ids[1],
}
if d.HasChange("adjustment_type") {
args.AdjustmentType = ess.AdjustmentType(d.Get("adjustment_type").(string))
}
if d.HasChange("adjustment_value") {
args.AdjustmentValue = d.Get("adjustment_value").(int)
}
if d.HasChange("scaling_rule_name") {
args.ScalingRuleName = d.Get("scaling_rule_name").(string)
}
if d.HasChange("cooldown") {
args.Cooldown = d.Get("cooldown").(int)
}
if _, err := conn.ModifyScalingRule(args); err != nil {
return err
}
return resourceAliyunEssScalingRuleRead(d, meta)
}
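// buildAlicloudEssScalingRuleArgs assembles a CreateScalingRuleArgs request from the resource data, setting optional fields only when they are non-empty.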
func buildAlicloudEssScalingRuleArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScalingRuleArgs, error) {
args := &ess.CreateScalingRuleArgs{
RegionId: getRegion(d, meta),
ScalingGroupId: d.Get("scaling_group_id").(string),
AdjustmentType: ess.AdjustmentType(d.Get("adjustment_type").(string)),
AdjustmentValue: d.Get("adjustment_value").(int),
}
if v := d.Get("scaling_rule_name").(string); v != "" {
args.ScalingRuleName = v
}
if v := d.Get("cooldown").(int); v != 0 {
args.Cooldown = v
}
return args, nil
}

View File

@ -0,0 +1,290 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"log"
"strings"
"testing"
)
func TestAccAlicloudEssScalingRule_basic(t *testing.T) {
var sc ess.ScalingRuleItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_rule.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingRuleDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingRuleConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingRuleExists(
"alicloud_ess_scaling_rule.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_type",
"TotalCapacity"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_value",
"1"),
),
},
},
})
}
func TestAccAlicloudEssScalingRule_update(t *testing.T) {
var sc ess.ScalingRuleItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_scaling_rule.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScalingRuleDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScalingRule,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingRuleExists(
"alicloud_ess_scaling_rule.foo", &sc),
testAccCheckEssScalingRuleExists(
"alicloud_ess_scaling_rule.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_type",
"TotalCapacity"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_value",
"1"),
),
},
resource.TestStep{
Config: testAccEssScalingRule_update,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScalingRuleExists(
"alicloud_ess_scaling_rule.foo", &sc),
testAccCheckEssScalingRuleExists(
"alicloud_ess_scaling_rule.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_type",
"TotalCapacity"),
resource.TestCheckResourceAttr(
"alicloud_ess_scaling_rule.foo",
"adjustment_value",
"2"),
),
},
},
})
}
func testAccCheckEssScalingRuleExists(n string, d *ess.ScalingRuleItemType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ESS Scaling Rule ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
attr, err := client.DescribeScalingRuleById(ids[0], ids[1])
log.Printf("[DEBUG] check scaling rule %s attribute %#v", rs.Primary.ID, attr)
if err != nil {
return err
}
if attr == nil {
return fmt.Errorf("Scaling rule not found")
}
*d = *attr
return nil
}
}
func testAccCheckEssScalingRuleDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_ess_scaling_rule" {
continue
}
ids := strings.Split(rs.Primary.ID, COLON_SEPARATED)
ins, err := client.DescribeScalingRuleById(ids[0], ids[1])
if ins != nil {
return fmt.Errorf("Error ESS scaling rule still exist")
}
// Verify the error is what we want
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if e.ErrorResponse.Code == InstanceNotfound {
continue
}
return err
}
}
return nil
}
const testAccEssScalingRuleConfig = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "bar" {
min_size = 1
max_size = 1
scaling_group_name = "bar"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
resource "alicloud_ess_scaling_rule" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
adjustment_type = "TotalCapacity"
adjustment_value = 1
cooldown = 120
}
`
const testAccEssScalingRule = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "bar" {
min_size = 1
max_size = 1
scaling_group_name = "bar"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
resource "alicloud_ess_scaling_rule" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
adjustment_type = "TotalCapacity"
adjustment_value = 1
cooldown = 120
}
`
const testAccEssScalingRule_update = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "bar" {
min_size = 1
max_size = 1
scaling_group_name = "bar"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
resource "alicloud_ess_scaling_rule" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
adjustment_type = "TotalCapacity"
adjustment_value = 2
cooldown = 60
}
`

View File

@ -0,0 +1,220 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"time"
)
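// resourceAlicloudEssSchedule returns the schema and CRUD handlers for the alicloud_ess_schedule scheduled-task resource.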
func resourceAlicloudEssSchedule() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunEssScheduleCreate,
Read: resourceAliyunEssScheduleRead,
Update: resourceAliyunEssScheduleUpdate,
Delete: resourceAliyunEssScheduleDelete,
Schema: map[string]*schema.Schema{
"scheduled_action": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"launch_time": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"scheduled_task_name": &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
"description": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Optional: true,
},
"launch_expiration_time": &schema.Schema{
Type: schema.TypeInt,
Default: 600,
Optional: true,
ValidateFunc: validateIntegerInRange(0, 21600),
},
"recurrence_type": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Optional: true,
ValidateFunc: validateAllowedStringValue([]string{string(ess.Daily),
string(ess.Weekly), string(ess.Monthly)}),
},
"recurrence_value": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Optional: true,
},
"recurrence_end_time": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Optional: true,
},
"task_enabled": &schema.Schema{
Type: schema.TypeBool,
Default: true,
Optional: true,
},
},
}
}
func resourceAliyunEssScheduleCreate(d *schema.ResourceData, meta interface{}) error {
args, err := buildAlicloudEssScheduleArgs(d, meta)
if err != nil {
return err
}
essconn := meta.(*AliyunClient).essconn
rule, err := essconn.CreateScheduledTask(args)
if err != nil {
return err
}
d.SetId(rule.ScheduledTaskId)
return resourceAliyunEssScheduleUpdate(d, meta)
}
func resourceAliyunEssScheduleRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
rule, err := client.DescribeScheduleById(d.Id())
if err != nil {
if e, ok := err.(*common.Error); ok && e.Code == InstanceNotfound {
d.SetId("")
return nil
}
return fmt.Errorf("Error Describe ESS schedule Attribute: %#v", err)
}
d.Set("scheduled_action", rule.ScheduledAction)
d.Set("launch_time", rule.LaunchTime)
d.Set("scheduled_task_name", rule.ScheduledTaskName)
d.Set("description", rule.Description)
d.Set("launch_expiration_time", rule.LaunchExpirationTime)
d.Set("recurrence_type", rule.RecurrenceType)
d.Set("recurrence_value", rule.RecurrenceValue)
d.Set("recurrence_end_time", rule.RecurrenceEndTime)
d.Set("task_enabled", rule.TaskEnabled)
return nil
}
func resourceAliyunEssScheduleUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).essconn
args := &ess.ModifyScheduledTaskArgs{
ScheduledTaskId: d.Id(),
}
if d.HasChange("scheduled_task_name") {
args.ScheduledTaskName = d.Get("scheduled_task_name").(string)
}
if d.HasChange("description") {
args.Description = d.Get("description").(string)
}
if d.HasChange("scheduled_action") {
args.ScheduledAction = d.Get("scheduled_action").(string)
}
if d.HasChange("launch_time") {
args.LaunchTime = d.Get("launch_time").(string)
}
if d.HasChange("launch_expiration_time") {
args.LaunchExpirationTime = d.Get("launch_expiration_time").(int)
}
if d.HasChange("recurrence_type") {
args.RecurrenceType = ess.RecurrenceType(d.Get("recurrence_type").(string))
}
if d.HasChange("recurrence_value") {
args.RecurrenceValue = d.Get("recurrence_value").(string)
}
if d.HasChange("recurrence_end_time") {
args.RecurrenceEndTime = d.Get("recurrence_end_time").(string)
}
if d.HasChange("task_enabled") {
args.TaskEnabled = d.Get("task_enabled").(bool)
}
if _, err := conn.ModifyScheduledTask(args); err != nil {
return err
}
return resourceAliyunEssScheduleRead(d, meta)
}
func resourceAliyunEssScheduleDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
return resource.Retry(2*time.Minute, func() *resource.RetryError {
err := client.DeleteScheduleById(d.Id())
if err != nil {
return resource.RetryableError(fmt.Errorf("Scaling schedule in use - trying again while it is deleted."))
}
_, err = client.DescribeScheduleById(d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
return resource.NonRetryableError(err)
}
return resource.RetryableError(fmt.Errorf("Scaling schedule in use - trying again while it is deleted."))
})
}
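// buildAlicloudEssScheduleArgs assembles a CreateScheduledTaskArgs request from the resource data, setting optional fields only when they are non-empty.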
func buildAlicloudEssScheduleArgs(d *schema.ResourceData, meta interface{}) (*ess.CreateScheduledTaskArgs, error) {
args := &ess.CreateScheduledTaskArgs{
RegionId: getRegion(d, meta),
ScheduledAction: d.Get("scheduled_action").(string),
LaunchTime: d.Get("launch_time").(string),
TaskEnabled: d.Get("task_enabled").(bool),
}
if v := d.Get("scheduled_task_name").(string); v != "" {
args.ScheduledTaskName = v
}
if v := d.Get("description").(string); v != "" {
args.Description = v
}
if v := d.Get("recurrence_type").(string); v != "" {
args.RecurrenceType = ess.RecurrenceType(v)
}
if v := d.Get("recurrence_value").(string); v != "" {
args.RecurrenceValue = v
}
if v := d.Get("recurrence_end_time").(string); v != "" {
args.RecurrenceEndTime = v
}
if v := d.Get("launch_expiration_time").(int); v != 0 {
args.LaunchExpirationTime = v
}
return args, nil
}

View File

@ -0,0 +1,151 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ess"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"log"
"testing"
)
func TestAccAlicloudEssSchedule_basic(t *testing.T) {
var sc ess.ScheduledTaskItemType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_ess_schedule.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckEssScheduleDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccEssScheduleConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckEssScheduleExists(
"alicloud_ess_schedule.foo", &sc),
resource.TestCheckResourceAttr(
"alicloud_ess_schedule.foo",
"launch_time",
"2017-04-29T07:30Z"),
resource.TestCheckResourceAttr(
"alicloud_ess_schedule.foo",
"task_enabled",
"true"),
),
},
},
})
}
func testAccCheckEssScheduleExists(n string, d *ess.ScheduledTaskItemType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ESS Schedule ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
attr, err := client.DescribeScheduleById(rs.Primary.ID)
log.Printf("[DEBUG] check schedule %s attribute %#v", rs.Primary.ID, attr)
if err != nil {
return err
}
if attr == nil {
return fmt.Errorf("Ess schedule not found")
}
*d = *attr
return nil
}
}
func testAccCheckEssScheduleDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_ess_schedule" {
continue
}
ins, err := client.DescribeScheduleById(rs.Primary.ID)
if ins != nil {
return fmt.Errorf("Error ESS schedule still exist")
}
// Verify the error is what we want
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if e.ErrorResponse.Code == InstanceNotfound {
continue
}
return err
}
}
return nil
}
const testAccEssScheduleConfig = `
data "alicloud_images" "ecs_image" {
most_recent = true
name_regex = "^centos_6\\w{1,5}[64].*"
}
resource "alicloud_security_group" "tf_test_foo" {
name = "tf_test_foo"
description = "foo"
}
resource "alicloud_security_group_rule" "ssh-in" {
type = "ingress"
ip_protocol = "tcp"
nic_type = "internet"
policy = "accept"
port_range = "22/22"
priority = 1
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
cidr_ip = "0.0.0.0/0"
}
resource "alicloud_ess_scaling_group" "bar" {
min_size = 1
max_size = 1
scaling_group_name = "bar"
removal_policies = ["OldestInstance", "NewestInstance"]
}
resource "alicloud_ess_scaling_configuration" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
image_id = "${data.alicloud_images.ecs_image.images.0.id}"
instance_type = "ecs.s2.large"
io_optimized = "optimized"
security_group_id = "${alicloud_security_group.tf_test_foo.id}"
}
resource "alicloud_ess_scaling_rule" "foo" {
scaling_group_id = "${alicloud_ess_scaling_group.bar.id}"
adjustment_type = "TotalCapacity"
adjustment_value = 2
cooldown = 60
}
resource "alicloud_ess_schedule" "foo" {
scheduled_action = "${alicloud_ess_scaling_rule.foo.ari}"
launch_time = "2017-04-29T07:30Z"
scheduled_task_name = "tf-foo"
}
`

View File

@ -0,0 +1,165 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/schema"
)
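// resourceAliyunForwardEntry returns the schema and CRUD handlers for the alicloud_forward_entry resource (NAT gateway port-forwarding entries).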
func resourceAliyunForwardEntry() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunForwardEntryCreate,
Read: resourceAliyunForwardEntryRead,
Update: resourceAliyunForwardEntryUpdate,
Delete: resourceAliyunForwardEntryDelete,
Schema: map[string]*schema.Schema{
"forward_table_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"external_ip": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"external_port": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateForwardPort,
},
"ip_protocol": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateAllowedStringValue([]string{"tcp", "udp", "any"}),
},
"internal_ip": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"internal_port": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateForwardPort,
},
},
}
}
func resourceAliyunForwardEntryCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).vpcconn
args := &ecs.CreateForwardEntryArgs{
RegionId: getRegion(d, meta),
ForwardTableId: d.Get("forward_table_id").(string),
ExternalIp: d.Get("external_ip").(string),
ExternalPort: d.Get("external_port").(string),
IpProtocol: d.Get("ip_protocol").(string),
InternalIp: d.Get("internal_ip").(string),
InternalPort: d.Get("internal_port").(string),
}
resp, err := conn.CreateForwardEntry(args)
if err != nil {
return fmt.Errorf("CreateForwardEntry got error: %#v", err)
}
d.SetId(resp.ForwardEntryId)
d.Set("forward_table_id", d.Get("forward_table_id").(string))
return resourceAliyunForwardEntryRead(d, meta)
}
func resourceAliyunForwardEntryRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
forwardEntry, err := client.DescribeForwardEntry(d.Get("forward_table_id").(string), d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
return err
}
d.Set("forward_table_id", forwardEntry.ForwardTableId)
d.Set("external_ip", forwardEntry.ExternalIp)
d.Set("external_port", forwardEntry.ExternalPort)
d.Set("ip_protocol", forwardEntry.IpProtocol)
d.Set("internal_ip", forwardEntry.InternalIp)
d.Set("internal_port", forwardEntry.InternalPort)
return nil
}
func resourceAliyunForwardEntryUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
conn := client.vpcconn
forwardEntry, err := client.DescribeForwardEntry(d.Get("forward_table_id").(string), d.Id())
if err != nil {
return err
}
d.Partial(true)
attributeUpdate := false
args := &ecs.ModifyForwardEntryArgs{
RegionId: getRegion(d, meta),
ForwardTableId: forwardEntry.ForwardTableId,
ForwardEntryId: forwardEntry.ForwardEntryId,
ExternalIp: forwardEntry.ExternalIp,
IpProtocol: forwardEntry.IpProtocol,
ExternalPort: forwardEntry.ExternalPort,
InternalIp: forwardEntry.InternalIp,
InternalPort: forwardEntry.InternalPort,
}
if d.HasChange("external_port") {
d.SetPartial("external_port")
args.ExternalPort = d.Get("external_port").(string)
attributeUpdate = true
}
if d.HasChange("ip_protocol") {
d.SetPartial("ip_protocol")
args.IpProtocol = d.Get("ip_protocol").(string)
attributeUpdate = true
}
if d.HasChange("internal_port") {
d.SetPartial("internal_port")
args.InternalPort = d.Get("internal_port").(string)
attributeUpdate = true
}
if attributeUpdate {
if err := conn.ModifyForwardEntry(args); err != nil {
return err
}
}
d.Partial(false)
return resourceAliyunForwardEntryRead(d, meta)
}
func resourceAliyunForwardEntryDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
conn := client.vpcconn
forwardEntryId := d.Id()
forwardTableId := d.Get("forward_table_id").(string)
args := &ecs.DeleteForwardEntryArgs{
RegionId: getRegion(d, meta),
ForwardTableId: forwardTableId,
ForwardEntryId: forwardEntryId,
}
if err := conn.DeleteForwardEntry(args); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,216 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"testing"
)
func TestAccAlicloudForward_basic(t *testing.T) {
var forward ecs.ForwardTableEntrySetType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_forward_entry.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckForwardEntryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccForwardEntryConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckForwardEntryExists(
"alicloud_forward_entry.foo", &forward),
),
},
resource.TestStep{
Config: testAccForwardEntryUpdate,
Check: resource.ComposeTestCheckFunc(
testAccCheckForwardEntryExists(
"alicloud_forward_entry.foo", &forward),
),
},
},
})
}
func testAccCheckForwardEntryDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_snat_entry" {
continue
}
// Try to find the Snat entry
instance, err := client.DescribeForwardEntry(rs.Primary.Attributes["forward_table_id"], rs.Primary.ID)
//this special deal cause the DescribeSnatEntry can't find the records would be throw "cant find the snatTable error"
if instance.ForwardEntryId == "" {
return nil
}
if instance.ForwardEntryId != "" {
return fmt.Errorf("Forward entry still exist")
}
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if !notFoundError(e) {
return err
}
}
}
return nil
}
func testAccCheckForwardEntryExists(n string, snat *ecs.ForwardTableEntrySetType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No ForwardEntry ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
instance, err := client.DescribeForwardEntry(rs.Primary.Attributes["forward_table_id"], rs.Primary.ID)
if err != nil {
return err
}
if instance.ForwardEntryId == "" {
return fmt.Errorf("ForwardEntry not found")
}
*snat = instance
return nil
}
}
const testAccForwardEntryConfig = `
provider "alicloud"{
region = "cn-hangzhou"
}
data "alicloud_zones" "default" {
"available_resource_creation"= "VSwitch"
}
resource "alicloud_vpc" "foo" {
name = "tf_test_foo"
cidr_block = "172.16.0.0/12"
}
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}
resource "alicloud_nat_gateway" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
spec = "Small"
name = "test_foo"
bandwidth_packages = [{
ip_count = 1
bandwidth = 5
zone = "${data.alicloud_zones.default.zones.0.id}"
},{
ip_count = 1
bandwidth = 6
zone = "${data.alicloud_zones.default.zones.0.id}"
}]
depends_on = [
"alicloud_vswitch.foo"]
}
resource "alicloud_forward_entry" "foo"{
forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
external_port = "80"
ip_protocol = "tcp"
internal_ip = "172.16.0.3"
internal_port = "8080"
}
resource "alicloud_forward_entry" "foo1"{
forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
external_port = "443"
ip_protocol = "udp"
internal_ip = "172.16.0.4"
internal_port = "8080"
}
`
const testAccForwardEntryUpdate = `
provider "alicloud"{
region = "cn-hangzhou"
}
data "alicloud_zones" "default" {
"available_resource_creation"= "VSwitch"
}
resource "alicloud_vpc" "foo" {
name = "tf_test_foo"
cidr_block = "172.16.0.0/12"
}
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.0.id}"
}
resource "alicloud_nat_gateway" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
spec = "Small"
name = "test_foo"
bandwidth_packages = [{
ip_count = 1
bandwidth = 5
zone = "${data.alicloud_zones.default.zones.0.id}"
},{
ip_count = 1
bandwidth = 6
zone = "${data.alicloud_zones.default.zones.0.id}"
}]
depends_on = [
"alicloud_vswitch.foo"]
}
resource "alicloud_forward_entry" "foo"{
forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
external_port = "80"
ip_protocol = "tcp"
internal_ip = "172.16.0.3"
internal_port = "8081"
}
resource "alicloud_forward_entry" "foo1"{
forward_table_id = "${alicloud_nat_gateway.foo.forward_table_ids}"
external_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
external_port = "22"
ip_protocol = "udp"
internal_ip = "172.16.0.4"
internal_port = "8080"
}
`

View File

@ -8,8 +8,10 @@ import (
"encoding/json"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"strings"
"time"
)
func resourceAliyunInstance() *schema.Resource {
@ -193,11 +195,8 @@ func resourceAliyunInstanceCreate(d *schema.ResourceData, meta interface{}) erro
//d.Set("system_disk_category", d.Get("system_disk_category"))
//d.Set("system_disk_size", d.Get("system_disk_size"))
if d.Get("allocate_public_ip").(bool) {
_, err := conn.AllocatePublicIpAddress(d.Id())
if err != nil {
log.Printf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
}
if err := allocateIpAndBandWidthRelative(d, meta); err != nil {
return fmt.Errorf("allocateIpAndBandWidthRelative err: %#v", err)
}
// after instance created, its status is pending,
@ -226,6 +225,12 @@ func resourceAliyunRunInstance(d *schema.ResourceData, meta interface{}) error {
return err
}
if args.IoOptimized == "optimized" {
args.IoOptimized = ecs.IoOptimized("true")
} else {
args.IoOptimized = ecs.IoOptimized("false")
}
runArgs, err := buildAliyunRunInstancesArgs(d, meta)
if err != nil {
return err
@ -246,14 +251,15 @@ func resourceAliyunRunInstance(d *schema.ResourceData, meta interface{}) error {
d.Set("system_disk_category", d.Get("system_disk_category"))
d.Set("system_disk_size", d.Get("system_disk_size"))
if d.Get("allocate_public_ip").(bool) {
_, err := conn.AllocatePublicIpAddress(d.Id())
if err != nil {
log.Printf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
}
// after instance created, its status change from pending, starting to running
if err := conn.WaitForInstanceAsyn(d.Id(), ecs.Running, defaultTimeout); err != nil {
log.Printf("[DEBUG] WaitForInstance %s got error: %#v", ecs.Running, err)
}
if err := allocateIpAndBandWidthRelative(d, meta); err != nil {
return fmt.Errorf("allocateIpAndBandWidthRelative err: %#v", err)
}
// after instance created, its status change from pending, starting to running
if err := conn.WaitForInstanceAsyn(d.Id(), ecs.Running, defaultTimeout); err != nil {
log.Printf("[DEBUG] WaitForInstance %s got error: %#v", ecs.Running, err)
}
@ -451,30 +457,47 @@ func resourceAliyunInstanceDelete(d *schema.ResourceData, meta interface{}) erro
client := meta.(*AliyunClient)
conn := client.ecsconn
instance, err := client.QueryInstancesById(d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
return fmt.Errorf("Error DescribeInstanceAttribute: %#v", err)
}
if instance.Status != ecs.Stopped {
if err := conn.StopInstance(d.Id(), true); err != nil {
return err
return resource.Retry(5*time.Minute, func() *resource.RetryError {
instance, err := client.QueryInstancesById(d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
}
if err := conn.WaitForInstance(d.Id(), ecs.Stopped, defaultTimeout); err != nil {
return err
if instance.Status != ecs.Stopped {
if err := conn.StopInstance(d.Id(), true); err != nil {
return resource.RetryableError(fmt.Errorf("ECS stop error - trying again."))
}
if err := conn.WaitForInstance(d.Id(), ecs.Stopped, defaultTimeout); err != nil {
return resource.RetryableError(fmt.Errorf("Waiting for ecs stopped timeout - trying again."))
}
}
if err := conn.DeleteInstance(d.Id()); err != nil {
return resource.RetryableError(fmt.Errorf("ECS Instance in use - trying again while it is deleted."))
}
return nil
})
}
func allocateIpAndBandWidthRelative(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).ecsconn
if d.Get("allocate_public_ip").(bool) {
if d.Get("internet_max_bandwidth_out") == 0 {
return fmt.Errorf("Error: if allocate_public_ip is true than the internet_max_bandwidth_out cannot equal zero.")
}
_, err := conn.AllocatePublicIpAddress(d.Id())
if err != nil {
return fmt.Errorf("[DEBUG] AllocatePublicIpAddress for instance got error: %#v", err)
}
}
if err := conn.DeleteInstance(d.Id()); err != nil {
return err
}
return nil
}
func buildAliyunRunInstancesArgs(d *schema.ResourceData, meta interface{}) (*ecs.RunInstanceArgs, error) {
args := &ecs.RunInstanceArgs{
MaxAmount: DEFAULT_INSTANCE_COUNT,
@ -560,7 +583,6 @@ func buildAliyunInstanceArgs(d *schema.ResourceData, meta interface{}) (*ecs.Cre
args.Description = v
}
log.Printf("[DEBUG] SystemDisk is %d", systemDiskSize)
if v := d.Get("internet_charge_type").(string); v != "" {
args.InternetChargeType = common.InternetChargeType(v)
}
@ -578,11 +600,7 @@ func buildAliyunInstanceArgs(d *schema.ResourceData, meta interface{}) (*ecs.Cre
}
if v := d.Get("io_optimized").(string); v != "" {
if v == "optimized" {
args.IoOptimized = ecs.IoOptimized("true")
} else {
args.IoOptimized = ecs.IoOptimized("false")
}
args.IoOptimized = ecs.IoOptimized(v)
}
vswitchValue := d.Get("subnet_id").(string)

View File

@ -4,12 +4,13 @@ import (
"fmt"
"testing"
"log"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
"log"
)
func TestAccAlicloudInstance_basic(t *testing.T) {
@ -456,6 +457,17 @@ func TestAccAlicloudInstance_associatePublicIP(t *testing.T) {
}
}
testCheckPublicIP := func() resource.TestCheckFunc {
return func(*terraform.State) error {
publicIP := instance.PublicIpAddress.IpAddress[0]
if publicIP == "" {
return fmt.Errorf("can't get public IP")
}
return nil
}
}
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
@ -469,6 +481,7 @@ func TestAccAlicloudInstance_associatePublicIP(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
testAccCheckInstanceExists("alicloud_instance.foo", &instance),
testCheckPrivateIP(),
testCheckPublicIP(),
),
},
},

View File

@ -8,6 +8,7 @@ import (
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"log"
"strconv"
"strings"
"time"
)
@ -44,6 +45,16 @@ func resourceAliyunNatGateway() *schema.Resource {
Computed: true,
},
"snat_table_ids": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"forward_table_ids": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"bandwidth_packages": &schema.Schema{
Type: schema.TypeList,
Elem: &schema.Resource{
@ -60,6 +71,10 @@ func resourceAliyunNatGateway() *schema.Resource {
Type: schema.TypeString,
Optional: true,
},
"public_ip_addresses": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
},
},
Required: true,
@ -133,8 +148,16 @@ func resourceAliyunNatGatewayRead(d *schema.ResourceData, meta interface{}) erro
d.Set("name", natGateway.Name)
d.Set("spec", natGateway.Spec)
d.Set("bandwidth_package_ids", strings.Join(natGateway.BandwidthPackageIds.BandwidthPackageId, ","))
d.Set("snat_table_ids", strings.Join(natGateway.SnatTableIds.SnatTableId, ","))
d.Set("forward_table_ids", strings.Join(natGateway.ForwardTableIds.ForwardTableId, ","))
d.Set("description", natGateway.Description)
d.Set("vpc_id", natGateway.VpcId)
bindWidthPackages, err := flattenBandWidthPackages(natGateway.BandwidthPackageIds.BandwidthPackageId, meta, d)
if err != nil {
log.Printf("[ERROR] bindWidthPackages flattenBandWidthPackages failed. natgateway id is %#v", d.Id())
} else {
d.Set("bandwidth_packages", bindWidthPackages)
}
return nil
}
@ -254,7 +277,7 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
}
args := &ecs.DeleteNatGatewayArgs{
RegionId: client.Region,
RegionId: getRegion(d, meta),
NatGatewayId: d.Id(),
}
@ -267,7 +290,7 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
}
describeArgs := &ecs.DescribeNatGatewaysArgs{
RegionId: client.Region,
RegionId: getRegion(d, meta),
NatGatewayId: d.Id(),
}
gw, _, gwErr := conn.DescribeNatGateways(describeArgs)
@ -282,3 +305,69 @@ func resourceAliyunNatGatewayDelete(d *schema.ResourceData, meta interface{}) er
return resource.RetryableError(fmt.Errorf("NatGateway in use - trying again while it is deleted."))
})
}
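// flattenBandWidthPackages resolves each bandwidth package ID into the map shape expected by the bandwidth_packages schema attribute.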
func flattenBandWidthPackages(bandWidthPackageIds []string, meta interface{}, d *schema.ResourceData) ([]map[string]interface{}, error) {
packageLen := len(bandWidthPackageIds)
result := make([]map[string]interface{}, 0, packageLen)
for i := packageLen - 1; i >= 0; i-- {
packageId := bandWidthPackageIds[i]
packages, err := getPackages(packageId, meta, d)
if err != nil {
log.Printf("[ERROR] NatGateways getPackages failed. packageId is %#v", packageId)
return result, err
}
ipAddress := flattenPackPublicIp(packages.PublicIpAddresses.PublicIpAddresse)
ipCont, ipContErr := strconv.Atoi(packages.IpCount)
bandWidth, bandWidthErr := strconv.Atoi(packages.Bandwidth)
if ipContErr != nil {
log.Printf("[ERROR] NatGateways getPackages failed: ipCont convert error. packageId is %#v", packageId)
return result, ipContErr
}
if bandWidthErr != nil {
log.Printf("[ERROR] NatGateways getPackages failed: bandWidthErr convert error. packageId is %#v", packageId)
return result, bandWidthErr
}
l := map[string]interface{}{
"ip_count": ipCont,
"bandwidth": bandWidth,
"zone": packages.ZoneId,
"public_ip_addresses": ipAddress,
}
result = append(result, l)
}
return result, nil
}
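// getPackages looks up a single bandwidth package by ID and returns an InstanceNotfound error when it no longer exists.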
func getPackages(packageId string, meta interface{}, d *schema.ResourceData) (*ecs.DescribeBandwidthPackageType, error) {
client := meta.(*AliyunClient)
conn := client.vpcconn
packages, err := conn.DescribeBandwidthPackages(&ecs.DescribeBandwidthPackagesArgs{
RegionId: getRegion(d, meta),
BandwidthPackageId: packageId,
})
if err != nil {
log.Printf("[ERROR] Describe bandwidth package is failed, BandwidthPackageId Id: %s", packageId)
return nil, err
}
if len(packages) == 0 {
return nil, common.GetClientErrorFromString(InstanceNotfound)
}
return &packages[0], nil
}
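// flattenPackPublicIp joins a package's public IP addresses into a single comma-separated string.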
func flattenPackPublicIp(publicIpAddressList []ecs.PublicIpAddresseType) string {
var result []string
for _, publicIpAddresses := range publicIpAddressList {
ipAddress := publicIpAddresses.IpAddress
result = append(result, ipAddress)
}
return strings.Join(result, ",")
}

View File

@ -48,6 +48,7 @@ func TestAccAlicloudNatGateway_basic(t *testing.T) {
"alicloud_nat_gateway.foo",
"name",
"test_foo"),
testAccCheckNatgatewayIpAddress("alicloud_nat_gateway.foo", &nat),
),
},
},
@ -96,6 +97,31 @@ func TestAccAlicloudNatGateway_spec(t *testing.T) {
}
func testAccCheckNatgatewayIpAddress(n string, nat *ecs.NatGatewaySetType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No NatGateway ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
natGateway, err := client.DescribeNatGateway(rs.Primary.ID)
if err != nil {
return err
}
if natGateway == nil {
return fmt.Errorf("Natgateway not found")
}
return nil
}
}
func testAccCheckNatGatewayExists(n string, nat *ecs.NatGatewaySetType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
@ -164,7 +190,7 @@ resource "alicloud_vpc" "foo" {
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.0.id}"
availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}
resource "alicloud_nat_gateway" "foo" {
@ -174,11 +200,19 @@ resource "alicloud_nat_gateway" "foo" {
bandwidth_packages = [{
ip_count = 1
bandwidth = 5
zone = "${data.alicloud_zones.default.zones.0.id}"
zone = "${data.alicloud_zones.default.zones.2.id}"
}, {
ip_count = 2
bandwidth = 10
zone = "${data.alicloud_zones.default.zones.0.id}"
bandwidth = 6
zone = "${data.alicloud_zones.default.zones.2.id}"
}, {
ip_count = 3
bandwidth = 7
zone = "${data.alicloud_zones.default.zones.2.id}"
}, {
ip_count = 1
bandwidth = 8
zone = "${data.alicloud_zones.default.zones.2.id}"
}]
depends_on = [
"alicloud_vswitch.foo"]

View File

@ -74,6 +74,11 @@ func resourceAliyunSecurityGroupRead(d *schema.ResourceData, meta interface{}) e
return fmt.Errorf("Error DescribeSecurityGroupAttribute: %#v", err)
}
if sg == nil {
d.SetId("")
return nil
}
d.Set("name", sg.SecurityGroupName)
d.Set("description", sg.Description)

View File

@ -3,9 +3,10 @@ package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"log"
"strings"
"time"
)
func resourceAliyunSecurityGroupRule() *schema.Resource {
@ -141,7 +142,7 @@ func resourceAliyunSecurityGroupRuleRead(d *schema.ResourceData, meta interface{
}
return fmt.Errorf("Error SecurityGroup rule: %#v", err)
}
log.Printf("[WARN]sg %s, type %s, protocol %s, port %s, rule %#v", sgId, direction, ip_protocol, port_range, rule)
d.Set("type", rule.Direction)
d.Set("ip_protocol", strings.ToLower(string(rule.IpProtocol)))
d.Set("nic_type", rule.NicType)
@ -163,7 +164,7 @@ func resourceAliyunSecurityGroupRuleRead(d *schema.ResourceData, meta interface{
return nil
}
func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{}) error {
func deleteSecurityGroupRule(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
ruleType := d.Get("type").(string)
@ -187,6 +188,30 @@ func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interfac
AuthorizeSecurityGroupEgressArgs: *args,
}
return client.RevokeSecurityGroupEgress(revokeArgs)
}
func resourceAliyunSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
parts := strings.Split(d.Id(), ":")
sgId, direction, ip_protocol, port_range, nic_type := parts[0], parts[1], parts[2], parts[3], parts[4]
return resource.Retry(5*time.Minute, func() *resource.RetryError {
err := deleteSecurityGroupRule(d, meta)
if err != nil {
resource.RetryableError(fmt.Errorf("Security group rule in use - trying again while it is deleted."))
}
_, err = client.DescribeSecurityGroupRule(sgId, direction, nic_type, ip_protocol, port_range)
if err != nil {
if notFoundError(err) {
return nil
}
return resource.NonRetryableError(err)
}
return resource.RetryableError(fmt.Errorf("Security group rule in use - trying again while it is deleted."))
})
}

View File

@ -281,6 +281,11 @@ func resourceAliyunSlbRead(d *schema.ResourceData, meta interface{}) error {
return err
}
if loadBalancer == nil {
d.SetId("")
return nil
}
d.Set("name", loadBalancer.LoadBalancerName)
if loadBalancer.AddressType == slb.InternetAddressType {

View File

@ -64,10 +64,14 @@ func resourceAliyunSlbAttachmentRead(d *schema.ResourceData, meta interface{}) e
if err != nil {
if notFoundError(err) {
d.SetId("")
return fmt.Errorf("Read special SLB Id not found: %#v", err)
return nil
}
return fmt.Errorf("Read special SLB Id not found: %#v", err)
}
return err
if loadBalancer == nil {
d.SetId("")
return nil
}
backendServerType := loadBalancer.BackendServers

View File

@ -0,0 +1,134 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/schema"
)
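// resourceAliyunSnatEntry returns the schema and CRUD handlers for the alicloud_snat_entry resource.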
func resourceAliyunSnatEntry() *schema.Resource {
return &schema.Resource{
Create: resourceAliyunSnatEntryCreate,
Read: resourceAliyunSnatEntryRead,
Update: resourceAliyunSnatEntryUpdate,
Delete: resourceAliyunSnatEntryDelete,
Schema: map[string]*schema.Schema{
"snat_table_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"source_vswitch_id": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"snat_ip": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
},
}
}
func resourceAliyunSnatEntryCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AliyunClient).vpcconn
args := &ecs.CreateSnatEntryArgs{
RegionId: getRegion(d, meta),
SnatTableId: d.Get("snat_table_id").(string),
SourceVSwitchId: d.Get("source_vswitch_id").(string),
SnatIp: d.Get("snat_ip").(string),
}
resp, err := conn.CreateSnatEntry(args)
if err != nil {
return fmt.Errorf("CreateSnatEntry got error: %#v", err)
}
d.SetId(resp.SnatEntryId)
d.Set("snat_table_id", d.Get("snat_table_id").(string))
return resourceAliyunSnatEntryRead(d, meta)
}
func resourceAliyunSnatEntryRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
snatEntry, err := client.DescribeSnatEntry(d.Get("snat_table_id").(string), d.Id())
if err != nil {
if notFoundError(err) {
return nil
}
return err
}
d.Set("snat_table_id", snatEntry.SnatTableId)
d.Set("source_vswitch_id", snatEntry.SourceVSwitchId)
d.Set("snat_ip", snatEntry.SnatIp)
return nil
}
func resourceAliyunSnatEntryUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
conn := client.vpcconn
snatEntry, err := client.DescribeSnatEntry(d.Get("snat_table_id").(string), d.Id())
if err != nil {
return err
}
d.Partial(true)
attributeUpdate := false
args := &ecs.ModifySnatEntryArgs{
RegionId: getRegion(d, meta),
SnatTableId: snatEntry.SnatTableId,
SnatEntryId: snatEntry.SnatEntryId,
}
if d.HasChange("snat_ip") {
d.SetPartial("snat_ip")
var snat_ip string
if v, ok := d.GetOk("snat_ip"); ok {
snat_ip = v.(string)
} else {
return fmt.Errorf("cann't change snap_ip to empty string")
}
args.SnatIp = snat_ip
attributeUpdate = true
}
if attributeUpdate {
if err := conn.ModifySnatEntry(args); err != nil {
return err
}
}
d.Partial(false)
return resourceAliyunSnatEntryRead(d, meta)
}
func resourceAliyunSnatEntryDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*AliyunClient)
conn := client.vpcconn
snatEntryId := d.Id()
snatTableId := d.Get("snat_table_id").(string)
args := &ecs.DeleteSnatEntryArgs{
RegionId: getRegion(d, meta),
SnatTableId: snatTableId,
SnatEntryId: snatEntryId,
}
if err := conn.DeleteSnatEntry(args); err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,180 @@
package alicloud
import (
"fmt"
"github.com/denverdino/aliyungo/common"
"github.com/denverdino/aliyungo/ecs"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"testing"
)
func TestAccAlicloudSnat_basic(t *testing.T) {
var snat ecs.SnatEntrySetType
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
// module name
IDRefreshName: "alicloud_snat_entry.foo",
Providers: testAccProviders,
CheckDestroy: testAccCheckSnatEntryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccSnatEntryConfig,
Check: resource.ComposeTestCheckFunc(
testAccCheckSnatEntryExists(
"alicloud_snat_entry.foo", &snat),
),
},
resource.TestStep{
Config: testAccSnatEntryUpdate,
Check: resource.ComposeTestCheckFunc(
testAccCheckSnatEntryExists(
"alicloud_snat_entry.foo", &snat),
),
},
},
})
}
func testAccCheckSnatEntryDestroy(s *terraform.State) error {
client := testAccProvider.Meta().(*AliyunClient)
for _, rs := range s.RootModule().Resources {
if rs.Type != "alicloud_snat_entry" {
continue
}
// Try to find the Snat entry
instance, err := client.DescribeSnatEntry(rs.Primary.Attributes["snat_table_id"], rs.Primary.ID)
// Check the ID first: when no entries exist, DescribeSnatEntry returns a "cannot find the snat table" style error instead of an empty result
if instance.SnatEntryId == "" {
return nil
}
if instance.SnatEntryId != "" {
return fmt.Errorf("Snat entry still exist")
}
if err != nil {
// Verify the error is what we want
e, _ := err.(*common.Error)
if !notFoundError(e) {
return err
}
}
}
return nil
}
func testAccCheckSnatEntryExists(n string, snat *ecs.SnatEntrySetType) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No SnatEntry ID is set")
}
client := testAccProvider.Meta().(*AliyunClient)
instance, err := client.DescribeSnatEntry(rs.Primary.Attributes["snat_table_id"], rs.Primary.ID)
if err != nil {
return err
}
if instance.SnatEntryId == "" {
return fmt.Errorf("SnatEntry not found")
}
*snat = instance
return nil
}
}
const testAccSnatEntryConfig = `
data "alicloud_zones" "default" {
"available_resource_creation"= "VSwitch"
}
resource "alicloud_vpc" "foo" {
name = "tf_test_foo"
cidr_block = "172.16.0.0/12"
}
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}
resource "alicloud_nat_gateway" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
spec = "Small"
name = "test_foo"
bandwidth_packages = [{
ip_count = 2
bandwidth = 5
zone = "${data.alicloud_zones.default.zones.2.id}"
},{
ip_count = 1
bandwidth = 6
zone = "${data.alicloud_zones.default.zones.2.id}"
}]
depends_on = [
"alicloud_vswitch.foo"]
}
resource "alicloud_snat_entry" "foo"{
snat_table_id = "${alicloud_nat_gateway.foo.snat_table_ids}"
source_vswitch_id = "${alicloud_vswitch.foo.id}"
snat_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.0.public_ip_addresses}"
}
`
const testAccSnatEntryUpdate = `
data "alicloud_zones" "default" {
"available_resource_creation"= "VSwitch"
}
resource "alicloud_vpc" "foo" {
name = "tf_test_foo"
cidr_block = "172.16.0.0/12"
}
resource "alicloud_vswitch" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
cidr_block = "172.16.0.0/21"
availability_zone = "${data.alicloud_zones.default.zones.2.id}"
}
resource "alicloud_nat_gateway" "foo" {
vpc_id = "${alicloud_vpc.foo.id}"
spec = "Small"
name = "test_foo"
bandwidth_packages = [{
ip_count = 2
bandwidth = 5
zone = "${data.alicloud_zones.default.zones.2.id}"
},{
ip_count = 1
bandwidth = 6
zone = "${data.alicloud_zones.default.zones.2.id}"
}]
depends_on = [
"alicloud_vswitch.foo"]
}
resource "alicloud_snat_entry" "foo"{
snat_table_id = "${alicloud_nat_gateway.foo.snat_table_ids}"
source_vswitch_id = "${alicloud_vswitch.foo.id}"
snat_ip = "${alicloud_nat_gateway.foo.bandwidth_packages.1.public_ip_addresses}"
}
`

View File

@ -86,7 +86,7 @@ func resourceAliyunVpcCreate(d *schema.ResourceData, meta interface{}) error {
return fmt.Errorf("Timeout when WaitForVpcAvailable")
}
return resourceAliyunVpcRead(d, meta)
return resourceAliyunVpcUpdate(d, meta)
}
func resourceAliyunVpcRead(d *schema.ResourceData, meta interface{}) error {
@ -144,7 +144,7 @@ func resourceAliyunVpcUpdate(d *schema.ResourceData, meta interface{}) error {
d.Partial(false)
return nil
return resourceAliyunVpcRead(d, meta)
}
func resourceAliyunVpcDelete(d *schema.ResourceData, meta interface{}) error {

View File

@ -68,7 +68,7 @@ func resourceAliyunSwitchCreate(d *schema.ResourceData, meta interface{}) error
return fmt.Errorf("WaitForVSwitchAvailable got a error: %s", err)
}
return resourceAliyunSwitchRead(d, meta)
return resourceAliyunSwitchUpdate(d, meta)
}
func resourceAliyunSwitchRead(d *schema.ResourceData, meta interface{}) error {
@ -139,7 +139,7 @@ func resourceAliyunSwitchUpdate(d *schema.ResourceData, meta interface{}) error
d.Partial(false)
return nil
return resourceAliyunSwitchRead(d, meta)
}
func resourceAliyunSwitchDelete(d *schema.ResourceData, meta interface{}) error {

View File

@ -131,7 +131,7 @@ func (client *AliyunClient) QueryInstancesById(id string) (instance *ecs.Instanc
}
if len(instances) == 0 {
return nil, common.GetClientErrorFromString(InstanceNotfound)
return nil, GetNotFoundErrorFromString(InstanceNotfound)
}
return &instances[0], nil
@ -244,7 +244,7 @@ func (client *AliyunClient) DescribeSecurityGroupRule(securityGroupId, direction
return &p, nil
}
}
return nil, nil
return nil, GetNotFoundErrorFromString("Security group rule not found")
}

View File

@ -0,0 +1,167 @@
package alicloud
import (
"github.com/denverdino/aliyungo/ess"
)
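// DescribeScalingGroupById returns the scaling group with the given ID, or a "not found" error when it does not exist.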
func (client *AliyunClient) DescribeScalingGroupById(sgId string) (*ess.ScalingGroupItemType, error) {
args := ess.DescribeScalingGroupsArgs{
RegionId: client.Region,
ScalingGroupId: []string{sgId},
}
sgs, _, err := client.essconn.DescribeScalingGroups(&args)
if err != nil {
return nil, err
}
if len(sgs) == 0 {
return nil, GetNotFoundErrorFromString("Scaling group not found")
}
return &sgs[0], nil
}
func (client *AliyunClient) DeleteScalingGroupById(sgId string) error {
args := ess.DeleteScalingGroupArgs{
ScalingGroupId: sgId,
ForceDelete: true,
}
_, err := client.essconn.DeleteScalingGroup(&args)
return err
}
func (client *AliyunClient) DescribeScalingConfigurationById(sgId, configId string) (*ess.ScalingConfigurationItemType, error) {
args := ess.DescribeScalingConfigurationsArgs{
RegionId: client.Region,
ScalingGroupId: sgId,
ScalingConfigurationId: []string{configId},
}
cs, _, err := client.essconn.DescribeScalingConfigurations(&args)
if err != nil {
return nil, err
}
if len(cs) == 0 {
return nil, GetNotFoundErrorFromString("Scaling configuration not found")
}
return &cs[0], nil
}
func (client *AliyunClient) ActiveScalingConfigurationById(sgId, configId string) error {
args := ess.ModifyScalingGroupArgs{
ScalingGroupId: sgId,
ActiveScalingConfigurationId: configId,
}
_, err := client.essconn.ModifyScalingGroup(&args)
return err
}
func (client *AliyunClient) EnableScalingConfigurationById(sgId, configId string, ids []string) error {
args := ess.EnableScalingGroupArgs{
ScalingGroupId: sgId,
ActiveScalingConfigurationId: configId,
}
if len(ids) > 0 {
args.InstanceId = ids
}
_, err := client.essconn.EnableScalingGroup(&args)
return err
}
func (client *AliyunClient) DisableScalingConfigurationById(sgId string) error {
args := ess.DisableScalingGroupArgs{
ScalingGroupId: sgId,
}
_, err := client.essconn.DisableScalingGroup(&args)
return err
}
func (client *AliyunClient) DeleteScalingConfigurationById(sgId, configId string) error {
args := ess.DeleteScalingConfigurationArgs{
ScalingGroupId: sgId,
ScalingConfigurationId: configId,
}
_, err := client.essconn.DeleteScalingConfiguration(&args)
return err
}
// Flattens an array of datadisk into a []map[string]interface{}
func flattenDataDiskMappings(list []ess.DataDiskItemType) []map[string]interface{} {
result := make([]map[string]interface{}, 0, len(list))
for _, i := range list {
l := map[string]interface{}{
"size": i.Size,
"category": i.Category,
"snapshot_id": i.SnapshotId,
"device": i.Device,
}
result = append(result, l)
}
return result
}
func (client *AliyunClient) DescribeScalingRuleById(sgId, ruleId string) (*ess.ScalingRuleItemType, error) {
args := ess.DescribeScalingRulesArgs{
RegionId: client.Region,
ScalingGroupId: sgId,
ScalingRuleId: []string{ruleId},
}
cs, _, err := client.essconn.DescribeScalingRules(&args)
if err != nil {
return nil, err
}
if len(cs) == 0 {
return nil, GetNotFoundErrorFromString("Scaling rule not found")
}
return &cs[0], nil
}
func (client *AliyunClient) DeleteScalingRuleById(ruleId string) error {
args := ess.DeleteScalingRuleArgs{
RegionId: client.Region,
ScalingRuleId: ruleId,
}
_, err := client.essconn.DeleteScalingRule(&args)
return err
}
func (client *AliyunClient) DescribeScheduleById(scheduleId string) (*ess.ScheduledTaskItemType, error) {
args := ess.DescribeScheduledTasksArgs{
RegionId: client.Region,
ScheduledTaskId: []string{scheduleId},
}
cs, _, err := client.essconn.DescribeScheduledTasks(&args)
if err != nil {
return nil, err
}
if len(cs) == 0 {
return nil, GetNotFoundErrorFromString("Schedule not found")
}
return &cs[0], nil
}
func (client *AliyunClient) DeleteScheduleById(scheduleId string) error {
args := ess.DeleteScheduledTaskArgs{
RegionId: client.Region,
ScheduledTaskId: scheduleId,
}
_, err := client.essconn.DeleteScheduledTask(&args)
return err
}

View File

@ -6,7 +6,20 @@ import (
"strings"
)
// when getInstance is empty, then throw InstanceNotfound error
//
// _______________ _______________ _______________
// | | ______param______\ | | _____request_____\ | |
// | Business | | Service | | SDK/API |
// | | __________________ | | __________________ | |
// |______________| \ (obj, err) |______________| \ (status, cont) |______________|
// | |
// |A. {instance, nil} |a. {200, content}
// |B. {nil, error} |b. {200, nil}
// |c. {4xx, nil}
//
// The API returns 200 even when the resource is not found.
// When the returned instance list is empty, an InstanceNotfound error is thrown,
// so the business layer only needs to check the error.
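// A minimal sketch of the caller side this contract implies (a hypothetical
// resource Read function; the NotFoundError helper and the field used below
// are assumptions for illustration only):
//
//    instance, err := client.DescribeDBInstanceById(d.Id())
//    if err != nil {
//        if NotFoundError(err) {
//            d.SetId("") // the instance is gone, drop it from state
//            return nil
//        }
//        return err // any other error is surfaced unchanged
//    }
//    d.Set("engine", instance.Engine)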
func (client *AliyunClient) DescribeDBInstanceById(id string) (instance *rds.DBInstanceAttribute, err error) {
arrtArgs := rds.DescribeDBInstancesArgs{
DBInstanceId: id,
@ -19,7 +32,7 @@ func (client *AliyunClient) DescribeDBInstanceById(id string) (instance *rds.DBI
attr := resp.Items.DBInstanceAttribute
if len(attr) <= 0 {
return nil, common.GetClientErrorFromString(InstanceNotfound)
return nil, GetNotFoundErrorFromString("DB instance not found")
}
return &attr[0], nil
@ -164,13 +177,10 @@ func (client *AliyunClient) GetSecurityIps(instanceId string) ([]string, error)
if err != nil {
return nil, err
}
ips := ""
for i, ip := range arr {
if i == 0 {
ips += ip.SecurityIPList
} else {
ips += COMMA_SEPARATED + ip.SecurityIPList
}
var ips, separator string
for _, ip := range arr {
ips += separator + ip.SecurityIPList
separator = COMMA_SEPARATED
}
return strings.Split(ips, COMMA_SEPARATED), nil
}

View File

@ -32,6 +32,7 @@ func (client *AliyunClient) DescribeNatGateway(natGatewayId string) (*ecs.NatGat
}
natGateways, _, err := client.vpcconn.DescribeNatGateways(args)
//fmt.Println("natGateways %#v", natGateways)
if err != nil {
return nil, err
}
@ -64,6 +65,78 @@ func (client *AliyunClient) DescribeVpc(vpcId string) (*ecs.VpcSetType, error) {
return &vpcs[0], nil
}
func (client *AliyunClient) DescribeSnatEntry(snatTableId string, snatEntryId string) (ecs.SnatEntrySetType, error) {
var resultSnat ecs.SnatEntrySetType
args := &ecs.DescribeSnatTableEntriesArgs{
RegionId: client.Region,
SnatTableId: snatTableId,
}
snatEntries, _, err := client.vpcconn.DescribeSnatTableEntries(args)
// Special case: when DescribeSnatTableEntries finds no records it may return a "can't find the snatTable" error,
// so check the length of snatEntries before checking err.
if len(snatEntries) == 0 {
return resultSnat, common.GetClientErrorFromString(InstanceNotfound)
}
if err != nil {
return resultSnat, err
}
findSnat := false
for _, snat := range snatEntries {
if snat.SnatEntryId == snatEntryId {
resultSnat = snat
findSnat = true
}
}
if !findSnat {
return resultSnat, common.GetClientErrorFromString(NotFindSnatEntryBySnatId)
}
return resultSnat, nil
}
func (client *AliyunClient) DescribeForwardEntry(forwardTableId string, forwardEntryId string) (ecs.ForwardTableEntrySetType, error) {
var resultFoward ecs.ForwardTableEntrySetType
args := &ecs.DescribeForwardTableEntriesArgs{
RegionId: client.Region,
ForwardTableId: forwardTableId,
}
forwardEntries, _, err := client.vpcconn.DescribeForwardTableEntries(args)
// Special case: when DescribeForwardTableEntries finds no records it may return a "can't find the forwardTable" error,
// so check the length of forwardEntries before checking err.
if len(forwardEntries) == 0 {
return resultFoward, common.GetClientErrorFromString(InstanceNotfound)
}
findForward := false
for _, forward := range forwardEntries {
if forward.ForwardEntryId == forwardEntryId {
resultFoward = forward
findForward = true
}
}
if !findForward {
return resultFoward, common.GetClientErrorFromString(NotFindForwardEntryByForwardId)
}
if err != nil {
return resultFoward, err
}
return resultFoward, nil
}
// QueryVswitches describes vswitches matching the given filter args.
func (client *AliyunClient) QueryVswitches(args *ecs.DescribeVSwitchesArgs) (vswitches []ecs.VSwitchSetType, err error) {
vsws, _, err := client.ecsconn.DescribeVSwitches(args)
@ -130,7 +203,7 @@ func (client *AliyunClient) QueryRouteEntry(routeTableId, cidrBlock, nextHopType
return &e, nil
}
}
return nil, nil
return nil, GetNotFoundErrorFromString("Vpc router entry not found")
}
func (client *AliyunClient) GetVpcIdByVSwitchId(vswitchId string) (vpcId string, err error) {

View File

@ -1,11 +0,0 @@
package alicloud
// Takes the result of flatmap.Expand for an array of strings
// and returns a []string
func expandStringList(configured []interface{}) []string {
vs := make([]string, 0, len(configured))
for _, v := range configured {
vs = append(vs, v.(string))
}
return vs
}

View File

@ -18,7 +18,7 @@ func validateInstancePort(v interface{}, k string) (ws []string, errors []error)
value := v.(int)
if value < 1 || value > 65535 {
errors = append(errors, fmt.Errorf(
"%q must be a valid instance port between 1 and 65535",
"%q must be a valid port between 1 and 65535",
k))
return
}
@ -26,8 +26,8 @@ func validateInstancePort(v interface{}, k string) (ws []string, errors []error)
}
func validateInstanceProtocol(v interface{}, k string) (ws []string, errors []error) {
protocal := v.(string)
if !isProtocalValid(protocal) {
protocol := v.(string)
if !isProtocolValid(protocol) {
errors = append(errors, fmt.Errorf(
"%q is an invalid value. Valid values are either http, https, tcp or udp",
k))
@ -282,9 +282,9 @@ func validateInternetChargeType(v interface{}, k string) (ws []string, errors []
func validateInternetMaxBandWidthOut(v interface{}, k string) (ws []string, errors []error) {
value := v.(int)
if value < 1 || value > 100 {
if value < 0 || value > 100 {
errors = append(errors, fmt.Errorf(
"%q must be a valid internet bandwidth out between 1 and 1000",
"%q must be a valid internet bandwidth out between 0 and 100",
k))
return
}
@ -565,3 +565,14 @@ func validateRegion(v interface{}, k string) (ws []string, errors []error) {
}
return
}
func validateForwardPort(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if value != "any" {
valueConv, err := strconv.Atoi(value)
if err != nil || valueConv < 1 || valueConv > 65535 {
errors = append(errors, fmt.Errorf("%q must be a valid port between 1 and 65535 or \"any\"", k))
}
}
return
}

View File

@ -21,17 +21,17 @@ func TestValidateInstancePort(t *testing.T) {
}
func TestValidateInstanceProtocol(t *testing.T) {
validProtocals := []string{"http", "tcp", "https", "udp"}
for _, v := range validProtocals {
_, errors := validateInstanceProtocol(v, "instance_protocal")
validProtocols := []string{"http", "tcp", "https", "udp"}
for _, v := range validProtocols {
_, errors := validateInstanceProtocol(v, "instance_protocol")
if len(errors) != 0 {
t.Fatalf("%q should be a valid instance protocol: %q", v, errors)
}
}
invalidProtocals := []string{"HTTP", "abc", "ecmp", "dubbo"}
for _, v := range invalidProtocals {
_, errors := validateInstanceProtocol(v, "instance_protocal")
invalidProtocols := []string{"HTTP", "abc", "ecmp", "dubbo"}
for _, v := range invalidProtocols {
_, errors := validateInstanceProtocol(v, "instance_protocol")
if len(errors) == 0 {
t.Fatalf("%q should be an invalid instance protocol", v)
}
@ -353,7 +353,7 @@ func TestValidateInternetMaxBandWidthOut(t *testing.T) {
}
}
invalidInternetMaxBandWidthOut := []int{-2, 0, 101, 123}
invalidInternetMaxBandWidthOut := []int{-2, 101, 123}
for _, v := range invalidInternetMaxBandWidthOut {
_, errors := validateInternetMaxBandWidthOut(v, "internet_max_bandwidth_out")
if len(errors) == 0 {

View File

@ -28,6 +28,7 @@ import (
"github.com/aws/aws-sdk-go/service/codecommit"
"github.com/aws/aws-sdk-go/service/codedeploy"
"github.com/aws/aws-sdk-go/service/codepipeline"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/aws/aws-sdk-go/service/configservice"
"github.com/aws/aws-sdk-go/service/databasemigrationservice"
"github.com/aws/aws-sdk-go/service/directoryservice"
@ -97,6 +98,7 @@ type Config struct {
Insecure bool
SkipCredsValidation bool
SkipGetEC2Platforms bool
SkipRegionValidation bool
SkipRequestingAccountId bool
SkipMetadataApiCheck bool
@ -110,6 +112,7 @@ type AWSClient struct {
cloudwatchconn *cloudwatch.CloudWatch
cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs
cloudwatcheventsconn *cloudwatchevents.CloudWatchEvents
cognitoconn *cognitoidentity.CognitoIdentity
configconn *configservice.ConfigService
dmsconn *databasemigrationservice.DatabaseMigrationService
dsconn *directoryservice.DirectoryService
@ -280,13 +283,15 @@ func (c *Config) Client() (interface{}, error) {
client.ec2conn = ec2.New(awsEc2Sess)
supportedPlatforms, err := GetSupportedEC2Platforms(client.ec2conn)
if err != nil {
// We intentionally fail *silently* because there's a chance
// user just doesn't have ec2:DescribeAccountAttributes permissions
log.Printf("[WARN] Unable to get supported EC2 platforms: %s", err)
} else {
client.supportedplatforms = supportedPlatforms
if !c.SkipGetEC2Platforms {
supportedPlatforms, err := GetSupportedEC2Platforms(client.ec2conn)
if err != nil {
// We intentionally fail *silently* because there's a chance
// user just doesn't have ec2:DescribeAccountAttributes permissions
log.Printf("[WARN] Unable to get supported EC2 platforms: %s", err)
} else {
client.supportedplatforms = supportedPlatforms
}
}
client.acmconn = acm.New(sess)
@ -303,6 +308,7 @@ func (c *Config) Client() (interface{}, error) {
client.codebuildconn = codebuild.New(sess)
client.codedeployconn = codedeploy.New(sess)
client.configconn = configservice.New(sess)
client.cognitoconn = cognitoidentity.New(sess)
client.dmsconn = databasemigrationservice.New(sess)
client.codepipelineconn = codepipeline.New(sess)
client.dsconn = directoryservice.New(sess)

View File

@ -0,0 +1,112 @@
package aws
import (
"fmt"
"log"
"regexp"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsAmiIds() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsAmiIdsRead,
Schema: map[string]*schema.Schema{
"filter": dataSourceFiltersSchema(),
"executable_users": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"name_regex": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ValidateFunc: validateNameRegex,
},
"owners": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
}
func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
executableUsers, executableUsersOk := d.GetOk("executable_users")
filters, filtersOk := d.GetOk("filter")
nameRegex, nameRegexOk := d.GetOk("name_regex")
owners, ownersOk := d.GetOk("owners")
if !executableUsersOk && !filtersOk && !nameRegexOk && !ownersOk {
return fmt.Errorf("One of executable_users, filters, name_regex, or owners must be assigned")
}
params := &ec2.DescribeImagesInput{}
if executableUsersOk {
params.ExecutableUsers = expandStringList(executableUsers.([]interface{}))
}
if filtersOk {
params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
}
if ownersOk {
o := expandStringList(owners.([]interface{}))
if len(o) > 0 {
params.Owners = o
}
}
resp, err := conn.DescribeImages(params)
if err != nil {
return err
}
var filteredImages []*ec2.Image
imageIds := make([]string, 0)
if nameRegexOk {
r := regexp.MustCompile(nameRegex.(string))
for _, image := range resp.Images {
// Check for a very rare case where the response would include no
// image name. No name means nothing to attempt a match against,
// therefore we skip such images.
if image.Name == nil || *image.Name == "" {
log.Printf("[WARN] Unable to find AMI name to match against "+
"for image ID %q owned by %q, nothing to do.",
*image.ImageId, *image.OwnerId)
continue
}
if r.MatchString(*image.Name) {
filteredImages = append(filteredImages, image)
}
}
} else {
filteredImages = resp.Images[:]
}
for _, image := range filteredImages {
imageIds = append(imageIds, *image.ImageId)
}
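// The data source ID is a hash of the request parameters, so a different
// query naturally produces a different ID.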
d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
d.Set("ids", imageIds)
return nil
}

View File

@ -0,0 +1,58 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDataSourceAwsAmiIds_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsAmiIdsConfig_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.ubuntu"),
),
},
},
})
}
func TestAccDataSourceAwsAmiIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsAmiIdsConfig_empty,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.empty"),
resource.TestCheckResourceAttr("data.aws_ami_ids.empty", "ids.#", "0"),
),
},
},
})
}
const testAccDataSourceAwsAmiIdsConfig_basic = `
data "aws_ami_ids" "ubuntu" {
owners = ["099720109477"]
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
}
`
const testAccDataSourceAwsAmiIdsConfig_empty = `
data "aws_ami_ids" "empty" {
filter {
name = "name"
values = []
}
}
`

View File

@ -188,6 +188,11 @@ func dataSourceAwsDbInstance() *schema.Resource {
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"replicate_source_db": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@ -271,6 +276,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error
d.Set("storage_encrypted", dbInstance.StorageEncrypted)
d.Set("storage_type", dbInstance.StorageType)
d.Set("timezone", dbInstance.Timezone)
d.Set("replicate_source_db", dbInstance.ReadReplicaSourceDBInstanceIdentifier)
var vpcSecurityGroups []string
for _, v := range dbInstance.VpcSecurityGroups {

View File

@ -0,0 +1,78 @@
package aws
import (
"fmt"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsEbsSnapshotIds() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsEbsSnapshotIdsRead,
Schema: map[string]*schema.Schema{
"filter": dataSourceFiltersSchema(),
"owners": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"restorable_by_user_ids": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
}
func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
restorableUsers, restorableUsersOk := d.GetOk("restorable_by_user_ids")
filters, filtersOk := d.GetOk("filter")
owners, ownersOk := d.GetOk("owners")
if !restorableUsersOk && !filtersOk && !ownersOk {
return fmt.Errorf("One of filters, restorable_by_user_ids, or owners must be assigned")
}
params := &ec2.DescribeSnapshotsInput{}
if restorableUsersOk {
params.RestorableByUserIds = expandStringList(restorableUsers.([]interface{}))
}
if filtersOk {
params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
}
if ownersOk {
params.OwnerIds = expandStringList(owners.([]interface{}))
}
resp, err := conn.DescribeSnapshots(params)
if err != nil {
return err
}
snapshotIds := make([]string, 0)
for _, snapshot := range resp.Snapshots {
snapshotIds = append(snapshotIds, *snapshot.SnapshotId)
}
d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
d.Set("ids", snapshotIds)
return nil
}

View File

@ -0,0 +1,59 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"),
),
},
},
})
}
func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_empty,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.empty"),
resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.empty", "ids.#", "0"),
),
},
},
})
}
const testAccDataSourceAwsEbsSnapshotIdsConfig_basic = `
resource "aws_ebs_volume" "test" {
availability_zone = "us-west-2a"
size = 40
}
resource "aws_ebs_snapshot" "test" {
volume_id = "${aws_ebs_volume.test.id}"
}
data "aws_ebs_snapshot_ids" "test" {
owners = ["self"]
}
`
const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = `
data "aws_ebs_snapshot_ids" "empty" {
owners = ["000000000000"]
}
`

View File

@ -44,7 +44,7 @@ func testAccCheckAwsEbsSnapshotDataSourceID(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Can't find Volume data source: %s", n)
return fmt.Errorf("Can't find snapshot data source: %s", n)
}
if rs.Primary.ID == "" {

View File

@ -0,0 +1,95 @@
package aws
import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/kinesis"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsKinesisStream() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsKinesisStreamRead,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"arn": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"creation_timestamp": &schema.Schema{
Type: schema.TypeInt,
Computed: true,
},
"status": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"retention_period": &schema.Schema{
Type: schema.TypeInt,
Computed: true,
},
"open_shards": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
"closed_shards": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
"shard_level_metrics": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
"tags": &schema.Schema{
Type: schema.TypeMap,
Computed: true,
},
},
}
}
func dataSourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).kinesisconn
sn := d.Get("name").(string)
state, err := readKinesisStreamState(conn, sn)
if err != nil {
return err
}
d.SetId(state.arn)
d.Set("arn", state.arn)
d.Set("name", sn)
d.Set("open_shards", state.openShards)
d.Set("closed_shards", state.closedShards)
d.Set("status", state.status)
d.Set("creation_timestamp", state.creationTimestamp)
d.Set("retention_period", state.retentionPeriod)
d.Set("shard_level_metrics", state.shardLevelMetrics)
tags, err := conn.ListTagsForStream(&kinesis.ListTagsForStreamInput{
StreamName: aws.String(sn),
})
if err != nil {
return err
}
d.Set("tags", tagsToMapKinesis(tags.Tags))
return nil
}

View File

@ -0,0 +1,94 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/kinesis"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSKinesisStreamDataSource(t *testing.T) {
var stream kinesis.StreamDescription
sn := fmt.Sprintf("terraform-kinesis-test-%d", acctest.RandInt())
config := fmt.Sprintf(testAccCheckAwsKinesisStreamDataSourceConfig, sn)
updateShardCount := func() {
conn := testAccProvider.Meta().(*AWSClient).kinesisconn
_, err := conn.UpdateShardCount(&kinesis.UpdateShardCountInput{
ScalingType: aws.String(kinesis.ScalingTypeUniformScaling),
StreamName: aws.String(sn),
TargetShardCount: aws.Int64(3),
})
if err != nil {
t.Fatalf("Error calling UpdateShardCount: %s", err)
}
if err := waitForKinesisToBeActive(conn, sn); err != nil {
t.Fatal(err)
}
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckKinesisStreamDestroy,
Steps: []resource.TestStep{
{
Config: config,
Check: resource.ComposeTestCheckFunc(
testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream),
resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "arn"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "name", sn),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "status", "ACTIVE"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "open_shards.#", "2"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "closed_shards.#", "0"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "shard_level_metrics.#", "2"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "retention_period", "72"),
resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "creation_timestamp"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "tags.Name", "tf-test"),
),
},
{
Config: config,
PreConfig: updateShardCount,
Check: resource.ComposeTestCheckFunc(
testAccCheckKinesisStreamExists("aws_kinesis_stream.test_stream", &stream),
resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "arn"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "name", sn),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "status", "ACTIVE"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "open_shards.#", "3"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "closed_shards.#", "4"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "shard_level_metrics.#", "2"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "retention_period", "72"),
resource.TestCheckResourceAttrSet("data.aws_kinesis_stream.test_stream", "creation_timestamp"),
resource.TestCheckResourceAttr("data.aws_kinesis_stream.test_stream", "tags.Name", "tf-test"),
),
},
},
})
}
var testAccCheckAwsKinesisStreamDataSourceConfig = `
resource "aws_kinesis_stream" "test_stream" {
name = "%s"
shard_count = 2
retention_period = 72
tags {
Name = "tf-test"
}
shard_level_metrics = [
"IncomingBytes",
"OutgoingBytes"
]
lifecycle {
ignore_changes = ["shard_count"]
}
}
data "aws_kinesis_stream" "test_stream" {
name = "${aws_kinesis_stream.test_stream.name}"
}
`

View File

@ -0,0 +1,62 @@
package aws
import (
"fmt"
"time"
"github.com/aws/aws-sdk-go/service/kms"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsKmsAlias() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsKmsAliasRead,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
ValidateFunc: validateAwsKmsName,
},
"arn": {
Type: schema.TypeString,
Computed: true,
},
"target_key_id": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func dataSourceAwsKmsAliasRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).kmsconn
params := &kms.ListAliasesInput{}
target := d.Get("name")
var alias *kms.AliasListEntry
err := conn.ListAliasesPages(params, func(page *kms.ListAliasesOutput, lastPage bool) bool {
for _, entity := range page.Aliases {
if *entity.AliasName == target {
alias = entity
return false
}
}
return true
})
if err != nil {
return errwrap.Wrapf("Error fetching KMS alias list: {{err}}", err)
}
if alias == nil {
return fmt.Errorf("No alias with name %q found in this region.", target)
}
d.SetId(time.Now().UTC().String())
d.Set("arn", alias.AliasArn)
d.Set("target_key_id", alias.TargetKeyId)
return nil
}

View File

@ -0,0 +1,77 @@
package aws
import (
"fmt"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccDataSourceAwsKmsAlias(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccDataSourceAwsKmsAlias(rInt),
Check: resource.ComposeTestCheckFunc(
testAccDataSourceAwsKmsAliasCheck("data.aws_kms_alias.by_name"),
),
},
},
})
}
func testAccDataSourceAwsKmsAliasCheck(name string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[name]
if !ok {
return fmt.Errorf("root module has no resource called %s", name)
}
kmsKeyRs, ok := s.RootModule().Resources["aws_kms_alias.single"]
if !ok {
return fmt.Errorf("can't find aws_kms_alias.single in state")
}
attr := rs.Primary.Attributes
if attr["arn"] != kmsKeyRs.Primary.Attributes["arn"] {
return fmt.Errorf(
"arn is %s; want %s",
attr["arn"],
kmsKeyRs.Primary.Attributes["arn"],
)
}
if attr["target_key_id"] != kmsKeyRs.Primary.Attributes["target_key_id"] {
return fmt.Errorf(
"target_key_id is %s; want %s",
attr["target_key_id"],
kmsKeyRs.Primary.Attributes["target_key_id"],
)
}
return nil
}
}
func testAccDataSourceAwsKmsAlias(rInt int) string {
return fmt.Sprintf(`
resource "aws_kms_key" "one" {
description = "Terraform acc test"
deletion_window_in_days = 7
}
resource "aws_kms_alias" "single" {
name = "alias/tf-acc-key-alias-%d"
target_key_id = "${aws_kms_key.one.key_id}"
}
data "aws_kms_alias" "by_name" {
name = "${aws_kms_alias.single.name}"
}`, rInt)
}

View File

@ -72,7 +72,7 @@ func testAccDataSourceAwsRoute53ZoneCheck(rsName, dsName, zName string) resource
func testAccDataSourceAwsRoute53ZoneConfig(rInt int) string {
return fmt.Sprintf(`
provider "aws" {
region = "us-east-2"
region = "us-east-1"
}
resource "aws_vpc" "test" {

View File

@ -14,19 +14,25 @@ func dataSourceAwsSubnet() *schema.Resource {
Read: dataSourceAwsSubnetRead,
Schema: map[string]*schema.Schema{
"availability_zone": &schema.Schema{
"availability_zone": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"cidr_block": &schema.Schema{
"cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"default_for_az": &schema.Schema{
"ipv6_cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"default_for_az": {
Type: schema.TypeBool,
Optional: true,
Computed: true,
@ -34,13 +40,13 @@ func dataSourceAwsSubnet() *schema.Resource {
"filter": ec2CustomFiltersSchema(),
"id": &schema.Schema{
"id": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"state": &schema.Schema{
"state": {
Type: schema.TypeString,
Optional: true,
Computed: true,
@ -48,11 +54,26 @@ func dataSourceAwsSubnet() *schema.Resource {
"tags": tagsSchemaComputed(),
"vpc_id": &schema.Schema{
"vpc_id": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"assign_ipv6_address_on_creation": {
Type: schema.TypeBool,
Computed: true,
},
"map_public_ip_on_launch": {
Type: schema.TypeBool,
Computed: true,
},
"ipv6_cidr_block_association_id": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@ -76,15 +97,22 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
defaultForAzStr = "true"
}
req.Filters = buildEC2AttributeFilterList(
map[string]string{
"availabilityZone": d.Get("availability_zone").(string),
"cidrBlock": d.Get("cidr_block").(string),
"defaultForAz": defaultForAzStr,
"state": d.Get("state").(string),
"vpc-id": d.Get("vpc_id").(string),
},
)
filters := map[string]string{
"availabilityZone": d.Get("availability_zone").(string),
"defaultForAz": defaultForAzStr,
"state": d.Get("state").(string),
"vpc-id": d.Get("vpc_id").(string),
}
if v, ok := d.GetOk("cidr_block"); ok {
filters["cidrBlock"] = v.(string)
}
if v, ok := d.GetOk("ipv6_cidr_block"); ok {
filters["ipv6-cidr-block-association.ipv6-cidr-block"] = v.(string)
}
req.Filters = buildEC2AttributeFilterList(filters)
req.Filters = append(req.Filters, buildEC2TagFilterList(
tagsFromMap(d.Get("tags").(map[string]interface{})),
)...)
@ -118,6 +146,15 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
d.Set("default_for_az", subnet.DefaultForAz)
d.Set("state", subnet.State)
d.Set("tags", tagsToMap(subnet.Tags))
d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation)
d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch)
for _, a := range subnet.Ipv6CidrBlockAssociationSet {
if *a.Ipv6CidrBlockState.State == "associated" { //we can only ever have 1 IPv6 block associated at once
d.Set("ipv6_cidr_block_association_id", a.AssociationId)
d.Set("ipv6_cidr_block", a.Ipv6CidrBlock)
}
}
return nil
}

View File

@ -11,8 +11,9 @@ import (
func TestAccDataSourceAwsSubnetIDs(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckVpcDestroy,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetIDsConfig(rInt),

View File

@ -13,10 +13,11 @@ func TestAccDataSourceAwsSubnet(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckVpcDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_id", rInt),
@ -30,6 +31,48 @@ func TestAccDataSourceAwsSubnet(t *testing.T) {
})
}
func TestAccDataSourceAwsSubnetIpv6ByIpv6Filter(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
},
{
Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block"),
),
},
},
})
}
func TestAccDataSourceAwsSubnetIpv6ByIpv6CidrBlock(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
},
{
Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
),
},
},
})
}
func testAccDataSourceAwsSubnetCheck(name string, rInt int) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[name]
@ -102,6 +145,7 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string {
}
}
data "aws_subnet" "by_id" {
id = "${aws_subnet.test.id}"
}
@ -128,3 +172,86 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string {
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
data "aws_subnet" "by_ipv6_cidr" {
filter {
name = "ipv6-cidr-block-association.ipv6-cidr-block"
values = ["${aws_subnet.test.ipv6_cidr_block}"]
}
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
data "aws_subnet" "by_ipv6_cidr" {
ipv6_cidr_block = "${aws_subnet.test.ipv6_cidr_block}"
}
`, rInt, rInt, rInt)
}

View File

@ -111,11 +111,11 @@ func ec2CustomFiltersSchema() *schema.Schema {
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"values": &schema.Schema{
"values": {
Type: schema.TypeSet,
Required: true,
Elem: &schema.Schema{

View File

@ -0,0 +1,29 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSCodeCommitRepository_importBasic(t *testing.T) {
resName := "aws_codecommit_repository.test"
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckCodeCommitRepositoryDestroy,
Steps: []resource.TestStep{
{
Config: testAccCodeCommitRepository_basic(rInt),
},
{
ResourceName: resName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

View File

@ -0,0 +1,30 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSCognitoIdentityPool_importBasic(t *testing.T) {
resourceName := "aws_cognito_identity_pool.main"
rName := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(rName),
},
{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

View File

@ -3,19 +3,21 @@ package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSDynamoDbTable_importBasic(t *testing.T) {
resourceName := "aws_dynamodb_table.basic-dynamodb-table"
rName := acctest.RandomWithPrefix("TerraformTestTable-")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSDynamoDbTableDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSDynamoDbConfigInitialState(),
Config: testAccAWSDynamoDbConfigInitialState(rName),
},
{

View File

@ -50,36 +50,67 @@ func resourceAwsSecurityGroupImportState(
}
func resourceAwsSecurityGroupImportStatePerm(sg *ec2.SecurityGroup, ruleType string, perm *ec2.IpPermission) ([]*schema.ResourceData, error) {
/*
Create a separate Security Group Rule for:
* The collection of IpRanges (cidr_blocks)
* The collection of Ipv6Ranges (ipv6_cidr_blocks)
* Each individual UserIdGroupPair (source_security_group_id)
If, for example, a security group has rules for:
* 2 IpRanges
* 2 Ipv6Ranges
* 2 UserIdGroupPairs
This would generate 4 security group rules:
* 1 for the collection of IpRanges
* 1 for the collection of Ipv6Ranges
* 1 for the first UserIdGroupPair
* 1 for the second UserIdGroupPair
*/
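// For the example above the import therefore surfaces four rule states,
// roughly like this (values are illustrative only):
//
//    rule 1: cidr_blocks              = [ipRange1, ipRange2]
//    rule 2: ipv6_cidr_blocks         = [ipv6Range1, ipv6Range2]
//    rule 3: source_security_group_id = pair1
//    rule 4: source_security_group_id = pair2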
var result []*schema.ResourceData
if len(perm.UserIdGroupPairs) == 0 {
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, perm)
if perm.IpRanges != nil {
p := &ec2.IpPermission{
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
IpRanges: perm.IpRanges,
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err
}
result = append(result, r)
} else {
// If the rule contained more than one source security group, this
// will iterate over them and create one rule for each
// source security group.
}
if perm.Ipv6Ranges != nil {
p := &ec2.IpPermission{
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
Ipv6Ranges: perm.Ipv6Ranges,
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err
}
result = append(result, r)
}
if len(perm.UserIdGroupPairs) > 0 {
for _, pair := range perm.UserIdGroupPairs {
p := &ec2.IpPermission{
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
UserIdGroupPairs: []*ec2.UserIdGroupPair{pair},
}
if perm.Ipv6Ranges != nil {
p.Ipv6Ranges = perm.Ipv6Ranges
}
if perm.IpRanges != nil {
p.IpRanges = perm.IpRanges
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err

View File

@ -101,3 +101,59 @@ func TestAccAWSSecurityGroup_importSourceSecurityGroup(t *testing.T) {
},
})
}
func TestAccAWSSecurityGroup_importIPRangeAndSecurityGroupWithSameRules(t *testing.T) {
checkFn := func(s []*terraform.InstanceState) error {
// Expect 4: group, 3 rules
if len(s) != 4 {
return fmt.Errorf("expected 4 states: %#v", s)
}
return nil
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSecurityGroupConfig_importIPRangeAndSecurityGroupWithSameRules,
},
{
ResourceName: "aws_security_group.test_group_1",
ImportState: true,
ImportStateCheck: checkFn,
},
},
})
}
func TestAccAWSSecurityGroup_importIPRangesWithSameRules(t *testing.T) {
checkFn := func(s []*terraform.InstanceState) error {
// Expect 3: group, 2 rules
if len(s) != 3 {
return fmt.Errorf("expected 3 states: %#v", s)
}
return nil
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSecurityGroupConfig_importIPRangesWithSameRules,
},
{
ResourceName: "aws_security_group.test_group_1",
ImportState: true,
ImportStateCheck: checkFn,
},
},
})
}

View File

@ -7,7 +7,7 @@ import (
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSSpotDatafeedSubscription_importBasic(t *testing.T) {
func testAccAWSSpotDatafeedSubscription_importBasic(t *testing.T) {
resourceName := "aws_spot_datafeed_subscription.default"
ri := acctest.RandInt()
@ -16,11 +16,11 @@ func TestAccAWSSpotDatafeedSubscription_importBasic(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSpotDatafeedSubscriptionDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccAWSSpotDatafeedSubscription(ri),
},
resource.TestStep{
{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,

View File

@ -70,7 +70,7 @@ func Provider() terraform.ResourceProvider {
"max_retries": {
Type: schema.TypeInt,
Optional: true,
Default: 11,
Default: 25,
Description: descriptions["max_retries"],
},
@ -122,6 +122,13 @@ func Provider() terraform.ResourceProvider {
Description: descriptions["skip_credentials_validation"],
},
"skip_get_ec2_platforms": {
Type: schema.TypeBool,
Optional: true,
Default: false,
Description: descriptions["skip_get_ec2_platforms"],
},
"skip_region_validation": {
Type: schema.TypeBool,
Optional: true,
@ -156,6 +163,7 @@ func Provider() terraform.ResourceProvider {
"aws_alb": dataSourceAwsAlb(),
"aws_alb_listener": dataSourceAwsAlbListener(),
"aws_ami": dataSourceAwsAmi(),
"aws_ami_ids": dataSourceAwsAmiIds(),
"aws_autoscaling_groups": dataSourceAwsAutoscalingGroups(),
"aws_availability_zone": dataSourceAwsAvailabilityZone(),
"aws_availability_zones": dataSourceAwsAvailabilityZones(),
@ -165,6 +173,7 @@ func Provider() terraform.ResourceProvider {
"aws_cloudformation_stack": dataSourceAwsCloudFormationStack(),
"aws_db_instance": dataSourceAwsDbInstance(),
"aws_ebs_snapshot": dataSourceAwsEbsSnapshot(),
"aws_ebs_snapshot_ids": dataSourceAwsEbsSnapshotIds(),
"aws_ebs_volume": dataSourceAwsEbsVolume(),
"aws_ecs_cluster": dataSourceAwsEcsCluster(),
"aws_ecs_container_definition": dataSourceAwsEcsContainerDefinition(),
@ -172,12 +181,14 @@ func Provider() terraform.ResourceProvider {
"aws_eip": dataSourceAwsEip(),
"aws_elb_hosted_zone_id": dataSourceAwsElbHostedZoneId(),
"aws_elb_service_account": dataSourceAwsElbServiceAccount(),
"aws_kinesis_stream": dataSourceAwsKinesisStream(),
"aws_iam_account_alias": dataSourceAwsIamAccountAlias(),
"aws_iam_policy_document": dataSourceAwsIamPolicyDocument(),
"aws_iam_role": dataSourceAwsIAMRole(),
"aws_iam_server_certificate": dataSourceAwsIAMServerCertificate(),
"aws_instance": dataSourceAwsInstance(),
"aws_ip_ranges": dataSourceAwsIPRanges(),
"aws_kms_alias": dataSourceAwsKmsAlias(),
"aws_kms_secret": dataSourceAwsKmsSecret(),
"aws_partition": dataSourceAwsPartition(),
"aws_prefix_list": dataSourceAwsPrefixList(),
@ -218,9 +229,11 @@ func Provider() terraform.ResourceProvider {
"aws_api_gateway_integration_response": resourceAwsApiGatewayIntegrationResponse(),
"aws_api_gateway_method": resourceAwsApiGatewayMethod(),
"aws_api_gateway_method_response": resourceAwsApiGatewayMethodResponse(),
"aws_api_gateway_method_settings": resourceAwsApiGatewayMethodSettings(),
"aws_api_gateway_model": resourceAwsApiGatewayModel(),
"aws_api_gateway_resource": resourceAwsApiGatewayResource(),
"aws_api_gateway_rest_api": resourceAwsApiGatewayRestApi(),
"aws_api_gateway_stage": resourceAwsApiGatewayStage(),
"aws_api_gateway_usage_plan": resourceAwsApiGatewayUsagePlan(),
"aws_api_gateway_usage_plan_key": resourceAwsApiGatewayUsagePlanKey(),
"aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(),
@ -247,6 +260,7 @@ func Provider() terraform.ResourceProvider {
"aws_config_configuration_recorder": resourceAwsConfigConfigurationRecorder(),
"aws_config_configuration_recorder_status": resourceAwsConfigConfigurationRecorderStatus(),
"aws_config_delivery_channel": resourceAwsConfigDeliveryChannel(),
"aws_cognito_identity_pool": resourceAwsCognitoIdentityPool(),
"aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(),
"aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(),
"aws_codedeploy_app": resourceAwsCodeDeployApp(),
@ -354,6 +368,7 @@ func Provider() terraform.ResourceProvider {
"aws_default_route_table": resourceAwsDefaultRouteTable(),
"aws_network_acl_rule": resourceAwsNetworkAclRule(),
"aws_network_interface": resourceAwsNetworkInterface(),
"aws_network_interface_attachment": resourceAwsNetworkInterfaceAttachment(),
"aws_opsworks_application": resourceAwsOpsworksApplication(),
"aws_opsworks_stack": resourceAwsOpsworksStack(),
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(),
@ -487,6 +502,9 @@ func init() {
"skip_credentials_validation": "Skip the credentials validation via STS API. " +
"Used for AWS API implementations that do not have STS available/implemented.",
"skip_get_ec2_platforms": "Skip getting the supported EC2 platforms. " +
"Used by users that don't have ec2:DescribeAccountAttributes permissions.",
"skip_region_validation": "Skip static validation of region name. " +
"Used by users of alternative AWS-like APIs or users w/ access to regions that are not public (yet).",
@ -526,6 +544,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
MaxRetries: d.Get("max_retries").(int),
Insecure: d.Get("insecure").(bool),
SkipCredsValidation: d.Get("skip_credentials_validation").(bool),
SkipGetEC2Platforms: d.Get("skip_get_ec2_platforms").(bool),
SkipRegionValidation: d.Get("skip_region_validation").(bool),
SkipRequestingAccountId: d.Get("skip_requesting_account_id").(bool),
SkipMetadataApiCheck: d.Get("skip_metadata_api_check").(bool),

View File

@ -54,6 +54,16 @@ func resourceAwsApiGatewayDeployment() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
"invoke_url": {
Type: schema.TypeString,
Computed: true,
},
"execution_arn": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@ -90,8 +100,9 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Reading API Gateway Deployment %s", d.Id())
restApiId := d.Get("rest_api_id").(string)
out, err := conn.GetDeployment(&apigateway.GetDeploymentInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
RestApiId: aws.String(restApiId),
DeploymentId: aws.String(d.Id()),
})
if err != nil {
@ -104,6 +115,18 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{
log.Printf("[DEBUG] Received API Gateway Deployment: %s", out)
d.Set("description", out.Description)
region := meta.(*AWSClient).region
stageName := d.Get("stage_name").(string)
d.Set("invoke_url", buildApiGatewayInvokeURL(restApiId, region, stageName))
accountId := meta.(*AWSClient).accountid
arn, err := buildApiGatewayExecutionARN(restApiId, region, accountId)
if err != nil {
return err
}
d.Set("execution_arn", arn+"/"+stageName)
if err := d.Set("created_date", out.CreatedDate.Format(time.RFC3339)); err != nil {
log.Printf("[DEBUG] Error setting created_date: %s", err)
}
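// For illustration only, assuming the standard API Gateway formats that the
// helpers above are expected to produce:
//
//    invoke_url    -> https://{rest_api_id}.execute-api.{region}.amazonaws.com/{stage_name}
//    execution_arn -> arn:aws:execute-api:{region}:{account_id}:{rest_api_id}/{stage_name}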

View File

@ -0,0 +1,248 @@
package aws
import (
"fmt"
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsApiGatewayMethodSettings() *schema.Resource {
return &schema.Resource{
Create: resourceAwsApiGatewayMethodSettingsUpdate,
Read: resourceAwsApiGatewayMethodSettingsRead,
Update: resourceAwsApiGatewayMethodSettingsUpdate,
Delete: resourceAwsApiGatewayMethodSettingsDelete,
Schema: map[string]*schema.Schema{
"rest_api_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"stage_name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"method_path": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"settings": {
Type: schema.TypeList,
Required: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"metrics_enabled": {
Type: schema.TypeBool,
Optional: true,
},
"logging_level": {
Type: schema.TypeString,
Optional: true,
},
"data_trace_enabled": {
Type: schema.TypeBool,
Optional: true,
},
"throttling_burst_limit": {
Type: schema.TypeInt,
Optional: true,
},
"throttling_rate_limit": {
Type: schema.TypeFloat,
Optional: true,
},
"caching_enabled": {
Type: schema.TypeBool,
Optional: true,
},
"cache_ttl_in_seconds": {
Type: schema.TypeInt,
Optional: true,
},
"cache_data_encrypted": {
Type: schema.TypeBool,
Optional: true,
},
"require_authorization_for_cache_control": {
Type: schema.TypeBool,
Optional: true,
},
"unauthorized_cache_control_header_strategy": {
Type: schema.TypeString,
Optional: true,
},
},
},
},
},
}
}
func resourceAwsApiGatewayMethodSettingsRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Reading API Gateway Method Settings %s", d.Id())
input := apigateway.GetStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
}
stage, err := conn.GetStage(&input)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" {
log.Printf("[WARN] API Gateway Stage %s not found, removing method settings", d.Id())
d.SetId("")
return nil
}
return err
}
log.Printf("[DEBUG] Received API Gateway Stage: %s", stage)
methodPath := d.Get("method_path").(string)
settings, ok := stage.MethodSettings[methodPath]
if !ok {
log.Printf("[WARN] API Gateway Method Settings for %q not found, removing", methodPath)
d.SetId("")
return nil
}
d.Set("settings.0.metrics_enabled", settings.MetricsEnabled)
d.Set("settings.0.logging_level", settings.LoggingLevel)
d.Set("settings.0.data_trace_enabled", settings.DataTraceEnabled)
d.Set("settings.0.throttling_burst_limit", settings.ThrottlingBurstLimit)
d.Set("settings.0.throttling_rate_limit", settings.ThrottlingRateLimit)
d.Set("settings.0.caching_enabled", settings.CachingEnabled)
d.Set("settings.0.cache_ttl_in_seconds", settings.CacheTtlInSeconds)
d.Set("settings.0.cache_data_encrypted", settings.CacheDataEncrypted)
d.Set("settings.0.require_authorization_for_cache_control", settings.RequireAuthorizationForCacheControl)
d.Set("settings.0.unauthorized_cache_control_header_strategy", settings.UnauthorizedCacheControlHeaderStrategy)
return nil
}
func resourceAwsApiGatewayMethodSettingsUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
methodPath := d.Get("method_path").(string)
prefix := fmt.Sprintf("/%s/", methodPath)
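// e.g. with method_path = "test/GET" the patch paths below become
// "/test/GET/metrics/enabled", "/test/GET/logging/loglevel", and so on.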
ops := make([]*apigateway.PatchOperation, 0)
if d.HasChange("settings.0.metrics_enabled") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "metrics/enabled"),
Value: aws.String(fmt.Sprintf("%t", d.Get("settings.0.metrics_enabled").(bool))),
})
}
if d.HasChange("settings.0.logging_level") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "logging/loglevel"),
Value: aws.String(d.Get("settings.0.logging_level").(string)),
})
}
if d.HasChange("settings.0.data_trace_enabled") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "logging/dataTrace"),
Value: aws.String(fmt.Sprintf("%t", d.Get("settings.0.data_trace_enabled").(bool))),
})
}
if d.HasChange("settings.0.throttling_burst_limit") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "throttling/burstLimit"),
Value: aws.String(fmt.Sprintf("%d", d.Get("settings.0.throttling_burst_limit").(int))),
})
}
if d.HasChange("settings.0.throttling_rate_limit") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "throttling/rateLimit"),
Value: aws.String(fmt.Sprintf("%f", d.Get("settings.0.throttling_rate_limit").(float64))),
})
}
if d.HasChange("settings.0.caching_enabled") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "caching/enabled"),
Value: aws.String(fmt.Sprintf("%t", d.Get("settings.0.caching_enabled").(bool))),
})
}
if d.HasChange("settings.0.cache_ttl_in_seconds") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "caching/ttlInSeconds"),
Value: aws.String(fmt.Sprintf("%d", d.Get("settings.0.cache_ttl_in_seconds").(int))),
})
}
if d.HasChange("settings.0.cache_data_encrypted") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "caching/dataEncrypted"),
Value: aws.String(fmt.Sprintf("%d", d.Get("settings.0.cache_data_encrypted").(int))),
})
}
if d.HasChange("settings.0.require_authorization_for_cache_control") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "caching/requireAuthorizationForCacheControl"),
Value: aws.String(fmt.Sprintf("%t", d.Get("settings.0.require_authorization_for_cache_control").(bool))),
})
}
if d.HasChange("settings.0.unauthorized_cache_control_header_strategy") {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + "caching/unauthorizedCacheControlHeaderStrategy"),
Value: aws.String(d.Get("settings.0.unauthorized_cache_control_header_strategy").(string)),
})
}
restApiId := d.Get("rest_api_id").(string)
stageName := d.Get("stage_name").(string)
input := apigateway.UpdateStageInput{
RestApiId: aws.String(restApiId),
StageName: aws.String(stageName),
PatchOperations: ops,
}
log.Printf("[DEBUG] Updating API Gateway Stage: %s", input)
_, err := conn.UpdateStage(&input)
if err != nil {
return fmt.Errorf("Updating API Gateway Stage failed: %s", err)
}
d.SetId(restApiId + "-" + stageName + "-" + methodPath)
return resourceAwsApiGatewayMethodSettingsRead(d, meta)
}
func resourceAwsApiGatewayMethodSettingsDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Deleting API Gateway Method Settings: %s", d.Id())
input := apigateway.UpdateStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
PatchOperations: []*apigateway.PatchOperation{
{
Op: aws.String("remove"),
Path: aws.String(fmt.Sprintf("/%s", d.Get("method_path").(string))),
},
},
}
log.Printf("[DEBUG] Updating API Gateway Stage: %s", input)
_, err := conn.UpdateStage(&input)
if err != nil {
return fmt.Errorf("Updating API Gateway Stage failed: %s", err)
}
return nil
}

View File

@ -0,0 +1,265 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSAPIGatewayMethodSettings_basic(t *testing.T) {
var stage apigateway.Stage
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSAPIGatewayMethodSettingsDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSAPIGatewayMethodSettingsConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSAPIGatewayMethodSettingsExists("aws_api_gateway_method_settings.test", &stage),
testAccCheckAWSAPIGatewayMethodSettings_metricsEnabled(&stage, "test/GET", true),
testAccCheckAWSAPIGatewayMethodSettings_loggingLevel(&stage, "test/GET", "INFO"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.#", "1"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.0.metrics_enabled", "true"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.0.logging_level", "INFO"),
),
},
{
Config: testAccAWSAPIGatewayMethodSettingsConfigUpdate(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSAPIGatewayMethodSettingsExists("aws_api_gateway_method_settings.test", &stage),
testAccCheckAWSAPIGatewayMethodSettings_metricsEnabled(&stage, "test/GET", false),
testAccCheckAWSAPIGatewayMethodSettings_loggingLevel(&stage, "test/GET", "OFF"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.#", "1"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.0.metrics_enabled", "false"),
resource.TestCheckResourceAttr("aws_api_gateway_method_settings.test", "settings.0.logging_level", "OFF"),
),
},
},
})
}
func testAccCheckAWSAPIGatewayMethodSettings_metricsEnabled(conf *apigateway.Stage, path string, expected bool) resource.TestCheckFunc {
return func(s *terraform.State) error {
settings, ok := conf.MethodSettings[path]
if !ok {
return fmt.Errorf("Expected to find method settings for %q", path)
}
if expected && *settings.MetricsEnabled != expected {
return fmt.Errorf("Expected metrics to be enabled, got %t", *settings.MetricsEnabled)
}
if !expected && *settings.MetricsEnabled != expected {
return fmt.Errorf("Expected metrics to be disabled, got %t", *settings.MetricsEnabled)
}
return nil
}
}
func testAccCheckAWSAPIGatewayMethodSettings_loggingLevel(conf *apigateway.Stage, path string, expectedLevel string) resource.TestCheckFunc {
return func(s *terraform.State) error {
settings, ok := conf.MethodSettings[path]
if !ok {
return fmt.Errorf("Expected to find method settings for %q", path)
}
if *settings.LoggingLevel != expectedLevel {
return fmt.Errorf("Expected logging level to match %q, got %q", expectedLevel, *settings.LoggingLevel)
}
return nil
}
}
func testAccCheckAWSAPIGatewayMethodSettingsExists(n string, res *apigateway.Stage) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No API Gateway Stage ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).apigateway
req := &apigateway.GetStageInput{
StageName: aws.String(s.RootModule().Resources["aws_api_gateway_deployment.test"].Primary.Attributes["stage_name"]),
RestApiId: aws.String(s.RootModule().Resources["aws_api_gateway_rest_api.test"].Primary.ID),
}
out, err := conn.GetStage(req)
if err != nil {
return err
}
*res = *out
return nil
}
}
func testAccCheckAWSAPIGatewayMethodSettingsDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).apigateway
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_api_gateway_method_settings" {
continue
}
req := &apigateway.GetStageInput{
StageName: aws.String(s.RootModule().Resources["aws_api_gateway_deployment.test"].Primary.Attributes["stage_name"]),
RestApiId: aws.String(s.RootModule().Resources["aws_api_gateway_rest_api.test"].Primary.ID),
}
out, err := conn.GetStage(req)
if err == nil {
return fmt.Errorf("API Gateway Stage still exists: %s", out)
}
awsErr, ok := err.(awserr.Error)
if !ok {
return err
}
if awsErr.Code() != "NotFoundException" {
return err
}
return nil
}
return nil
}
func testAccAWSAPIGatewayMethodSettingsConfig(rInt int) string {
return fmt.Sprintf(`
resource "aws_api_gateway_rest_api" "test" {
name = "tf-acc-test-apig-method-%d"
}
resource "aws_api_gateway_resource" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
path_part = "test"
}
resource "aws_api_gateway_method" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "GET"
authorization = "NONE"
request_models = {
"application/json" = "Error"
}
request_parameters = {
"method.request.header.Content-Type" = false,
"method.request.querystring.page" = true
}
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_method.test.http_method}"
type = "MOCK"
request_templates {
"application/xml" = <<EOF
{
"body" : $input.json('$')
}
EOF
}
}
resource "aws_api_gateway_deployment" "test" {
depends_on = ["aws_api_gateway_integration.test"]
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "dev"
}
resource "aws_api_gateway_method_settings" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "${aws_api_gateway_deployment.test.stage_name}"
method_path = "${aws_api_gateway_resource.test.path_part}/${aws_api_gateway_method.test.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
}
}
`, rInt)
}
func testAccAWSAPIGatewayMethodSettingsConfigUpdate(rInt int) string {
return fmt.Sprintf(`
resource "aws_api_gateway_rest_api" "test" {
name = "tf-acc-test-apig-method-%d"
}
resource "aws_api_gateway_resource" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
path_part = "test"
}
resource "aws_api_gateway_method" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "GET"
authorization = "NONE"
request_models = {
"application/json" = "Error"
}
request_parameters = {
"method.request.header.Content-Type" = false,
"method.request.querystring.page" = true
}
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_method.test.http_method}"
type = "MOCK"
request_templates {
"application/xml" = <<EOF
{
"body" : $input.json('$')
}
EOF
}
}
resource "aws_api_gateway_deployment" "test" {
depends_on = ["aws_api_gateway_integration.test"]
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "dev"
}
resource "aws_api_gateway_method_settings" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "${aws_api_gateway_deployment.test.stage_name}"
method_path = "${aws_api_gateway_resource.test.path_part}/${aws_api_gateway_method.test.http_method}"
settings {
metrics_enabled = false
logging_level = "OFF"
}
}
`, rInt)
}


@ -0,0 +1,342 @@
package aws
import (
"fmt"
"log"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsApiGatewayStage() *schema.Resource {
return &schema.Resource{
Create: resourceAwsApiGatewayStageCreate,
Read: resourceAwsApiGatewayStageRead,
Update: resourceAwsApiGatewayStageUpdate,
Delete: resourceAwsApiGatewayStageDelete,
Schema: map[string]*schema.Schema{
"cache_cluster_enabled": {
Type: schema.TypeBool,
Optional: true,
},
"cache_cluster_size": {
Type: schema.TypeString,
Optional: true,
},
"client_certificate_id": {
Type: schema.TypeString,
Optional: true,
},
"deployment_id": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"documentation_version": {
Type: schema.TypeString,
Optional: true,
},
"rest_api_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"stage_name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"variables": {
Type: schema.TypeMap,
Optional: true,
},
},
}
}
func resourceAwsApiGatewayStageCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
d.Partial(true)
input := apigateway.CreateStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
DeploymentId: aws.String(d.Get("deployment_id").(string)),
}
waitForCache := false
if v, ok := d.GetOk("cache_cluster_enabled"); ok {
input.CacheClusterEnabled = aws.Bool(v.(bool))
waitForCache = true
}
if v, ok := d.GetOk("cache_cluster_size"); ok {
input.CacheClusterSize = aws.String(v.(string))
waitForCache = true
}
if v, ok := d.GetOk("description"); ok {
input.Description = aws.String(v.(string))
}
if v, ok := d.GetOk("documentation_version"); ok {
input.DocumentationVersion = aws.String(v.(string))
}
if vars, ok := d.GetOk("variables"); ok {
variables := make(map[string]string, 0)
for k, v := range vars.(map[string]interface{}) {
variables[k] = v.(string)
}
input.Variables = aws.StringMap(variables)
}
out, err := conn.CreateStage(&input)
if err != nil {
return fmt.Errorf("Error creating API Gateway Stage: %s", err)
}
d.SetId(fmt.Sprintf("ags-%s-%s", d.Get("rest_api_id").(string), d.Get("stage_name").(string)))
d.SetPartial("rest_api_id")
d.SetPartial("stage_name")
d.SetPartial("deployment_id")
d.SetPartial("description")
d.SetPartial("variables")
if waitForCache && *out.CacheClusterStatus != "NOT_AVAILABLE" {
stateConf := &resource.StateChangeConf{
Pending: []string{
"CREATE_IN_PROGRESS",
"DELETE_IN_PROGRESS",
"FLUSH_IN_PROGRESS",
},
Target: []string{"AVAILABLE"},
Refresh: apiGatewayStageCacheRefreshFunc(conn,
d.Get("rest_api_id").(string),
d.Get("stage_name").(string)),
Timeout: 90 * time.Minute,
}
_, err := stateConf.WaitForState()
if err != nil {
return err
}
}
d.SetPartial("cache_cluster_enabled")
d.SetPartial("cache_cluster_size")
d.Partial(false)
if _, ok := d.GetOk("client_certificate_id"); ok {
return resourceAwsApiGatewayStageUpdate(d, meta)
}
return resourceAwsApiGatewayStageRead(d, meta)
}
func resourceAwsApiGatewayStageRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Reading API Gateway Stage %s", d.Id())
input := apigateway.GetStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
}
stage, err := conn.GetStage(&input)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" {
log.Printf("[WARN] API Gateway Stage %s not found, removing", d.Id())
d.SetId("")
return nil
}
return err
}
log.Printf("[DEBUG] Received API Gateway Stage: %s", stage)
d.Set("client_certificate_id", stage.ClientCertificateId)
if stage.CacheClusterStatus != nil && *stage.CacheClusterStatus == "DELETE_IN_PROGRESS" {
d.Set("cache_cluster_enabled", false)
d.Set("cache_cluster_size", nil)
} else {
d.Set("cache_cluster_enabled", stage.CacheClusterEnabled)
d.Set("cache_cluster_size", stage.CacheClusterSize)
}
d.Set("deployment_id", stage.DeploymentId)
d.Set("description", stage.Description)
d.Set("documentation_version", stage.DocumentationVersion)
d.Set("variables", aws.StringValueMap(stage.Variables))
return nil
}
func resourceAwsApiGatewayStageUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
d.Partial(true)
operations := make([]*apigateway.PatchOperation, 0)
waitForCache := false
if d.HasChange("cache_cluster_enabled") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/cacheClusterEnabled"),
Value: aws.String(fmt.Sprintf("%t", d.Get("cache_cluster_enabled").(bool))),
})
waitForCache = true
}
if d.HasChange("cache_cluster_size") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/cacheClusterSize"),
Value: aws.String(d.Get("cache_cluster_size").(string)),
})
waitForCache = true
}
if d.HasChange("client_certificate_id") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/clientCertificateId"),
Value: aws.String(d.Get("client_certificate_id").(string)),
})
}
if d.HasChange("deployment_id") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/deploymentId"),
Value: aws.String(d.Get("deployment_id").(string)),
})
}
if d.HasChange("description") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/description"),
Value: aws.String(d.Get("description").(string)),
})
}
if d.HasChange("documentation_version") {
operations = append(operations, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String("/documentationVersion"),
Value: aws.String(d.Get("documentation_version").(string)),
})
}
if d.HasChange("variables") {
o, n := d.GetChange("variables")
oldV := o.(map[string]interface{})
newV := n.(map[string]interface{})
operations = append(operations, diffVariablesOps("/variables/", oldV, newV)...)
}
input := apigateway.UpdateStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
PatchOperations: operations,
}
log.Printf("[DEBUG] Updating API Gateway Stage: %s", input)
out, err := conn.UpdateStage(&input)
if err != nil {
return fmt.Errorf("Updating API Gateway Stage failed: %s", err)
}
d.SetPartial("client_certificate_id")
d.SetPartial("deployment_id")
d.SetPartial("description")
d.SetPartial("variables")
if waitForCache && *out.CacheClusterStatus != "NOT_AVAILABLE" {
stateConf := &resource.StateChangeConf{
Pending: []string{
"CREATE_IN_PROGRESS",
"FLUSH_IN_PROGRESS",
},
Target: []string{
"AVAILABLE",
// There's an AWS API bug (raised & confirmed in Sep 2016 by support)
// which causes the stage to remain in deletion state forever
"DELETE_IN_PROGRESS",
},
Refresh: apiGatewayStageCacheRefreshFunc(conn,
d.Get("rest_api_id").(string),
d.Get("stage_name").(string)),
Timeout: 30 * time.Minute,
}
_, err := stateConf.WaitForState()
if err != nil {
return err
}
}
d.SetPartial("cache_cluster_enabled")
d.SetPartial("cache_cluster_size")
d.Partial(false)
return resourceAwsApiGatewayStageRead(d, meta)
}
func diffVariablesOps(prefix string, oldVars, newVars map[string]interface{}) []*apigateway.PatchOperation {
ops := make([]*apigateway.PatchOperation, 0)
for k := range oldVars {
if _, ok := newVars[k]; !ok {
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("remove"),
Path: aws.String(prefix + k),
})
}
}
for k, v := range newVars {
newValue := v.(string)
if oldV, ok := oldVars[k]; ok {
oldValue := oldV.(string)
if oldValue == newValue {
continue
}
}
ops = append(ops, &apigateway.PatchOperation{
Op: aws.String("replace"),
Path: aws.String(prefix + k),
Value: aws.String(newValue),
})
}
return ops
}
func apiGatewayStageCacheRefreshFunc(conn *apigateway.APIGateway, apiId, stageName string) func() (interface{}, string, error) {
return func() (interface{}, string, error) {
input := apigateway.GetStageInput{
RestApiId: aws.String(apiId),
StageName: aws.String(stageName),
}
out, err := conn.GetStage(&input)
if err != nil {
return 42, "", err
}
return out, *out.CacheClusterStatus, nil
}
}
func resourceAwsApiGatewayStageDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Deleting API Gateway Stage: %s", d.Id())
input := apigateway.DeleteStageInput{
RestApiId: aws.String(d.Get("rest_api_id").(string)),
StageName: aws.String(d.Get("stage_name").(string)),
}
_, err := conn.DeleteStage(&input)
if err != nil {
return fmt.Errorf("Deleting API Gateway Stage failed: %s", err)
}
return nil
}
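
As a minimal, self-contained sketch of the stage-variable diff behaviour implemented by diffVariablesOps above (the maps and values are hypothetical), keys dropped from the configuration become "remove" operations, new or changed keys become "replace" operations, and unchanged keys are skipped:

package main

import "fmt"

func main() {
	oldVars := map[string]string{"one": "1", "two": "2"}
	newVars := map[string]string{"one": "1", "three": "3"}

	// Keys missing from the new map produce a "remove" at /variables/<key>.
	for k := range oldVars {
		if _, ok := newVars[k]; !ok {
			fmt.Printf("remove  /variables/%s\n", k)
		}
	}
	// New or changed keys produce a "replace"; identical values are skipped.
	for k, v := range newVars {
		if old, ok := oldVars[k]; ok && old == v {
			continue
		}
		fmt.Printf("replace /variables/%s = %q\n", k, v)
	}
}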


@ -0,0 +1,196 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/apigateway"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSAPIGatewayStage_basic(t *testing.T) {
var conf apigateway.Stage
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSAPIGatewayStageDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSAPIGatewayStageConfig_basic(),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "stage_name", "prod"),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "cache_cluster_enabled", "true"),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "cache_cluster_size", "0.5"),
),
},
resource.TestStep{
Config: testAccAWSAPIGatewayStageConfig_updated(),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "stage_name", "prod"),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "cache_cluster_enabled", "false"),
),
},
resource.TestStep{
Config: testAccAWSAPIGatewayStageConfig_basic(),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "stage_name", "prod"),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "cache_cluster_enabled", "true"),
resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "cache_cluster_size", "0.5"),
),
},
},
})
}
func testAccCheckAWSAPIGatewayStageExists(n string, res *apigateway.Stage) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No API Gateway Stage ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).apigateway
req := &apigateway.GetStageInput{
RestApiId: aws.String(s.RootModule().Resources["aws_api_gateway_rest_api.test"].Primary.ID),
StageName: aws.String(rs.Primary.Attributes["stage_name"]),
}
out, err := conn.GetStage(req)
if err != nil {
return err
}
*res = *out
return nil
}
}
func testAccCheckAWSAPIGatewayStageDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).apigateway
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_api_gateway_stage" {
continue
}
req := &apigateway.GetStageInput{
RestApiId: aws.String(s.RootModule().Resources["aws_api_gateway_rest_api.test"].Primary.ID),
StageName: aws.String(rs.Primary.Attributes["stage_name"]),
}
out, err := conn.GetStage(req)
if err == nil {
return fmt.Errorf("API Gateway Stage still exists: %s", out)
}
awsErr, ok := err.(awserr.Error)
if !ok {
return err
}
if awsErr.Code() != "NotFoundException" {
return err
}
return nil
}
return nil
}
const testAccAWSAPIGatewayStageConfig_base = `
resource "aws_api_gateway_rest_api" "test" {
name = "tf-acc-test"
}
resource "aws_api_gateway_resource" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}"
path_part = "test"
}
resource "aws_api_gateway_method" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_method_response" "error" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_method.test.http_method}"
status_code = "400"
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_method.test.http_method}"
type = "HTTP"
uri = "https://www.google.co.uk"
integration_http_method = "GET"
}
resource "aws_api_gateway_integration_response" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_integration.test.http_method}"
status_code = "${aws_api_gateway_method_response.error.status_code}"
}
resource "aws_api_gateway_deployment" "dev" {
depends_on = ["aws_api_gateway_integration.test"]
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "dev"
description = "This is a dev env"
variables = {
"a" = "2"
}
}
`
func testAccAWSAPIGatewayStageConfig_basic() string {
return testAccAWSAPIGatewayStageConfig_base + `
resource "aws_api_gateway_stage" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "prod"
deployment_id = "${aws_api_gateway_deployment.dev.id}"
cache_cluster_enabled = true
cache_cluster_size = "0.5"
variables {
one = "1"
two = "2"
}
}
`
}
func testAccAWSAPIGatewayStageConfig_updated() string {
return testAccAWSAPIGatewayStageConfig_base + `
resource "aws_api_gateway_stage" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "prod"
deployment_id = "${aws_api_gateway_deployment.dev.id}"
cache_cluster_enabled = false
description = "Hello world"
variables {
one = "1"
three = "3"
}
}
`
}


@ -202,7 +202,7 @@ resource "aws_launch_configuration" "foobar" {
}
resource "aws_sqs_queue" "foobar" {
name = "foobar"
name = "foobar-%d"
delay_seconds = 90
max_message_size = 2048
message_retention_seconds = 86400
@ -225,7 +225,7 @@ EOF
}
resource "aws_iam_role_policy" "foobar" {
name = "foobar"
name = "foobar-%d"
role = "${aws_iam_role.foobar.id}"
policy = <<EOF
@ -265,7 +265,7 @@ resource "aws_autoscaling_group" "foobar" {
}
resource "aws_autoscaling_lifecycle_hook" "foobar" {
name = "foobar"
name = "foobar-%d"
autoscaling_group_name = "${aws_autoscaling_group.foobar.name}"
heartbeat_timeout = 2000
lifecycle_transition = "autoscaling:EC2_INSTANCE_LAUNCHING"
@ -278,5 +278,5 @@ EOF
notification_target_arn = "${aws_sqs_queue.foobar.arn}"
role_arn = "${aws_iam_role.foobar.arn}"
}`, name, rInt, name)
}`, name, rInt, rInt, rInt, name, rInt)
}


@ -4,6 +4,7 @@ import (
"fmt"
"log"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
"github.com/aws/aws-sdk-go/aws"
@ -24,10 +25,18 @@ func resourceAwsCloudWatchLogGroup() *schema.Resource {
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: []string{"name_prefix"},
ValidateFunc: validateLogGroupName,
},
"name_prefix": {
Type: schema.TypeString,
Required: true,
Optional: true,
ForceNew: true,
ValidateFunc: validateLogGroupName,
ValidateFunc: validateLogGroupNamePrefix,
},
"retention_in_days": {
@ -49,10 +58,19 @@ func resourceAwsCloudWatchLogGroup() *schema.Resource {
func resourceAwsCloudWatchLogGroupCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cloudwatchlogsconn
log.Printf("[DEBUG] Creating CloudWatch Log Group: %s", d.Get("name").(string))
var logGroupName string
if v, ok := d.GetOk("name"); ok {
logGroupName = v.(string)
} else if v, ok := d.GetOk("name_prefix"); ok {
logGroupName = resource.PrefixedUniqueId(v.(string))
} else {
logGroupName = resource.UniqueId()
}
log.Printf("[DEBUG] Creating CloudWatch Log Group: %s", logGroupName)
_, err := conn.CreateLogGroup(&cloudwatchlogs.CreateLogGroupInput{
LogGroupName: aws.String(d.Get("name").(string)),
LogGroupName: aws.String(logGroupName),
})
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceAlreadyExistsException" {
@ -61,7 +79,7 @@ func resourceAwsCloudWatchLogGroupCreate(d *schema.ResourceData, meta interface{
return fmt.Errorf("Creating CloudWatch Log Group failed: %s '%s'", err, d.Get("name"))
}
d.SetId(d.Get("name").(string))
d.SetId(logGroupName)
log.Println("[INFO] CloudWatch Log Group created")


@ -2,6 +2,7 @@ package aws
import (
"fmt"
"regexp"
"testing"
"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
@ -30,6 +31,43 @@ func TestAccAWSCloudWatchLogGroup_basic(t *testing.T) {
})
}
func TestAccAWSCloudWatchLogGroup_namePrefix(t *testing.T) {
var lg cloudwatchlogs.LogGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCloudWatchLogGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCloudWatchLogGroup_namePrefix,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudWatchLogGroupExists("aws_cloudwatch_log_group.test", &lg),
resource.TestMatchResourceAttr("aws_cloudwatch_log_group.test", "name", regexp.MustCompile("^tf-test-")),
),
},
},
})
}
func TestAccAWSCloudWatchLogGroup_generatedName(t *testing.T) {
var lg cloudwatchlogs.LogGroup
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCloudWatchLogGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCloudWatchLogGroup_generatedName,
Check: resource.ComposeTestCheckFunc(
testAccCheckCloudWatchLogGroupExists("aws_cloudwatch_log_group.test", &lg),
),
},
},
})
}
func TestAccAWSCloudWatchLogGroup_retentionPolicy(t *testing.T) {
var lg cloudwatchlogs.LogGroup
rInt := acctest.RandInt()
@ -256,3 +294,13 @@ resource "aws_cloudwatch_log_group" "charlie" {
}
`, rInt, rInt+1, rInt+2)
}
const testAccAWSCloudWatchLogGroup_namePrefix = `
resource "aws_cloudwatch_log_group" "test" {
name_prefix = "tf-test-"
}
`
const testAccAWSCloudWatchLogGroup_generatedName = `
resource "aws_cloudwatch_log_group" "test" {}
`


@ -15,9 +15,12 @@ func resourceAwsCodeCommitRepository() *schema.Resource {
Update: resourceAwsCodeCommitRepositoryUpdate,
Read: resourceAwsCodeCommitRepositoryRead,
Delete: resourceAwsCodeCommitRepositoryDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Schema: map[string]*schema.Schema{
"repository_name": &schema.Schema{
"repository_name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
@ -31,7 +34,7 @@ func resourceAwsCodeCommitRepository() *schema.Resource {
},
},
"description": &schema.Schema{
"description": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
@ -44,27 +47,27 @@ func resourceAwsCodeCommitRepository() *schema.Resource {
},
},
"arn": &schema.Schema{
"arn": {
Type: schema.TypeString,
Computed: true,
},
"repository_id": &schema.Schema{
"repository_id": {
Type: schema.TypeString,
Computed: true,
},
"clone_url_http": &schema.Schema{
"clone_url_http": {
Type: schema.TypeString,
Computed: true,
},
"clone_url_ssh": &schema.Schema{
"clone_url_ssh": {
Type: schema.TypeString,
Computed: true,
},
"default_branch": &schema.Schema{
"default_branch": {
Type: schema.TypeString,
Optional: true,
},
@ -130,6 +133,8 @@ func resourceAwsCodeCommitRepositoryRead(d *schema.ResourceData, meta interface{
d.Set("arn", out.RepositoryMetadata.Arn)
d.Set("clone_url_http", out.RepositoryMetadata.CloneUrlHttp)
d.Set("clone_url_ssh", out.RepositoryMetadata.CloneUrlSsh)
d.Set("description", out.RepositoryMetadata.RepositoryDescription)
d.Set("repository_name", out.RepositoryMetadata.RepositoryName)
if _, ok := d.GetOk("default_branch"); ok {
if out.RepositoryMetadata.DefaultBranch != nil {


@ -19,7 +19,7 @@ func TestAccAWSCodeCommitRepository_basic(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCodeCommitRepositoryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccCodeCommitRepository_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),
@ -36,7 +36,7 @@ func TestAccAWSCodeCommitRepository_withChanges(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCodeCommitRepositoryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccCodeCommitRepository_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),
@ -44,7 +44,7 @@ func TestAccAWSCodeCommitRepository_withChanges(t *testing.T) {
"aws_codecommit_repository.test", "description", "This is a test description"),
),
},
resource.TestStep{
{
Config: testAccCodeCommitRepository_withChanges(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),
@ -63,7 +63,7 @@ func TestAccAWSCodeCommitRepository_create_default_branch(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckCodeCommitRepositoryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccCodeCommitRepository_with_default_branch(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),
@ -82,7 +82,7 @@ func TestAccAWSCodeCommitRepository_create_and_update_default_branch(t *testing.
Providers: testAccProviders,
CheckDestroy: testAccCheckCodeCommitRepositoryDestroy,
Steps: []resource.TestStep{
resource.TestStep{
{
Config: testAccCodeCommitRepository_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),
@ -90,7 +90,7 @@ func TestAccAWSCodeCommitRepository_create_and_update_default_branch(t *testing.
"aws_codecommit_repository.test", "default_branch"),
),
},
resource.TestStep{
{
Config: testAccCodeCommitRepository_with_default_branch(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckCodeCommitRepositoryExists("aws_codecommit_repository.test"),


@ -0,0 +1,238 @@
package aws
import (
"fmt"
"log"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsCognitoIdentityPool() *schema.Resource {
return &schema.Resource{
Create: resourceAwsCognitoIdentityPoolCreate,
Read: resourceAwsCognitoIdentityPoolRead,
Update: resourceAwsCognitoIdentityPoolUpdate,
Delete: resourceAwsCognitoIdentityPoolDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Schema: map[string]*schema.Schema{
"identity_pool_name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateCognitoIdentityPoolName,
},
"cognito_identity_providers": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"client_id": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateCognitoIdentityProvidersClientId,
},
"provider_name": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateCognitoIdentityProvidersProviderName,
},
"server_side_token_check": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
},
},
},
"developer_provider_name": {
Type: schema.TypeString,
Optional: true,
ForceNew: true, // Forcing a new resource since it cannot be edited afterwards
ValidateFunc: validateCognitoProviderDeveloperName,
},
"allow_unauthenticated_identities": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"openid_connect_provider_arns": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateArn,
},
},
"saml_provider_arns": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateArn,
},
},
"supported_login_providers": {
Type: schema.TypeMap,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateCognitoSupportedLoginProviders,
},
},
},
}
}
func resourceAwsCognitoIdentityPoolCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Print("[DEBUG] Creating Cognito Identity Pool")
params := &cognitoidentity.CreateIdentityPoolInput{
IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)),
AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)),
}
if v, ok := d.GetOk("developer_provider_name"); ok {
params.DeveloperProviderName = aws.String(v.(string))
}
if v, ok := d.GetOk("supported_login_providers"); ok {
params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(v.(map[string]interface{}))
}
if v, ok := d.GetOk("cognito_identity_providers"); ok {
params.CognitoIdentityProviders = expandCognitoIdentityProviders(v.(*schema.Set))
}
if v, ok := d.GetOk("saml_provider_arns"); ok {
params.SamlProviderARNs = expandStringList(v.([]interface{}))
}
if v, ok := d.GetOk("openid_connect_provider_arns"); ok {
params.OpenIdConnectProviderARNs = expandStringList(v.([]interface{}))
}
entity, err := conn.CreateIdentityPool(params)
if err != nil {
return fmt.Errorf("Error creating Cognito Identity Pool: %s", err)
}
d.SetId(*entity.IdentityPoolId)
return resourceAwsCognitoIdentityPoolRead(d, meta)
}
func resourceAwsCognitoIdentityPoolRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Printf("[DEBUG] Reading Cognito Identity Pool: %s", d.Id())
ip, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(d.Id()),
})
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" {
d.SetId("")
return nil
}
return err
}
d.Set("identity_pool_name", ip.IdentityPoolName)
d.Set("allow_unauthenticated_identities", ip.AllowUnauthenticatedIdentities)
d.Set("developer_provider_name", ip.DeveloperProviderName)
if ip.CognitoIdentityProviders != nil {
if err := d.Set("cognito_identity_providers", flattenCognitoIdentityProviders(ip.CognitoIdentityProviders)); err != nil {
return fmt.Errorf("[DEBUG] Error setting cognito_identity_providers error: %#v", err)
}
}
if ip.OpenIdConnectProviderARNs != nil {
if err := d.Set("openid_connect_provider_arns", flattenStringList(ip.OpenIdConnectProviderARNs)); err != nil {
return fmt.Errorf("[DEBUG] Error setting openid_connect_provider_arns error: %#v", err)
}
}
if ip.SamlProviderARNs != nil {
if err := d.Set("saml_provider_arns", flattenStringList(ip.SamlProviderARNs)); err != nil {
return fmt.Errorf("[DEBUG] Error setting saml_provider_arns error: %#v", err)
}
}
if ip.SupportedLoginProviders != nil {
if err := d.Set("supported_login_providers", flattenCognitoSupportedLoginProviders(ip.SupportedLoginProviders)); err != nil {
return fmt.Errorf("[DEBUG] Error setting supported_login_providers error: %#v", err)
}
}
return nil
}
func resourceAwsCognitoIdentityPoolUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Print("[DEBUG] Updating Cognito Identity Pool")
params := &cognitoidentity.IdentityPool{
IdentityPoolId: aws.String(d.Id()),
AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)),
IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)),
}
if d.HasChange("developer_provider_name") {
params.DeveloperProviderName = aws.String(d.Get("developer_provider_name").(string))
}
if d.HasChange("cognito_identity_providers") {
params.CognitoIdentityProviders = expandCognitoIdentityProviders(d.Get("cognito_identity_providers").(*schema.Set))
}
if d.HasChange("supported_login_providers") {
params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(d.Get("supported_login_providers").(map[string]interface{}))
}
if d.HasChange("openid_connect_provider_arns") {
params.OpenIdConnectProviderARNs = expandStringList(d.Get("openid_connect_provider_arns").([]interface{}))
}
if d.HasChange("saml_provider_arns") {
params.SamlProviderARNs = expandStringList(d.Get("saml_provider_arns").([]interface{}))
}
_, err := conn.UpdateIdentityPool(params)
if err != nil {
return fmt.Errorf("Error creating Cognito Identity Pool: %s", err)
}
return resourceAwsCognitoIdentityPoolRead(d, meta)
}
func resourceAwsCognitoIdentityPoolDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Printf("[DEBUG] Deleting Cognito Identity Pool: %s", d.Id())
return resource.Retry(5*time.Minute, func() *resource.RetryError {
_, err := conn.DeleteIdentityPool(&cognitoidentity.DeleteIdentityPoolInput{
IdentityPoolId: aws.String(d.Id()),
})
if err == nil {
return nil
}
return resource.NonRetryableError(err)
})
}


@ -0,0 +1,371 @@
package aws
import (
"errors"
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSCognitoIdentityPool_basic(t *testing.T) {
name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
updatedName := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "allow_unauthenticated_identities", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(updatedName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", updatedName)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_supportedLoginProviders(t *testing.T) {
name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935555"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935552"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.accounts.google.com", "123456789012.apps.googleusercontent.com"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_openidConnectProviderArns(t *testing.T) {
name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "2"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_samlProviderArns(t *testing.T) {
name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckNoResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#"),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_cognitoIdentityProviders(t *testing.T) {
name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.client_id", "7lhlkkfbfb4q5kpp90urffao"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.server_side_token_check", "false"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.client_id", "7lhlkkfbfb4q5kpp90urffao"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.server_side_token_check", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.client_id", "6lhlkkfbfb4q5kpp90urffae"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.server_side_token_check", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func testAccCheckAWSCognitoIdentityPoolExists(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return errors.New("No Cognito Identity Pool ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).cognitoconn
_, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(rs.Primary.ID),
})
if err != nil {
return err
}
return nil
}
}
func testAccCheckAWSCognitoIdentityPoolDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).cognitoconn
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_cognito_identity_pool" {
continue
}
_, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(rs.Primary.ID),
})
if err != nil {
if wserr, ok := err.(awserr.Error); ok && wserr.Code() == "ResourceNotFoundException" {
return nil
}
return err
}
}
return nil
}
func testAccAWSCognitoIdentityPoolConfig_basic(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
developer_provider_name = "my.developer"
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
supported_login_providers {
"graph.facebook.com" = "7346241598935555"
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
supported_login_providers {
"graph.facebook.com" = "7346241598935552"
"accounts.google.com" = "123456789012.apps.googleusercontent.com"
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/server.example.com"]
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/foo.example.com", "arn:aws:iam::123456789012:oidc-provider/bar.example.com"]
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name string) string {
return fmt.Sprintf(`
resource "aws_iam_saml_provider" "default" {
name = "myprovider-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
saml_provider_arns = ["${aws_iam_saml_provider.default.arn}"]
}
`, name, name)
}
func testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name string) string {
return fmt.Sprintf(`
resource "aws_iam_saml_provider" "default" {
name = "default-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_iam_saml_provider" "secondary" {
name = "secondary-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
saml_provider_arns = ["${aws_iam_saml_provider.secondary.arn}"]
}
`, name, name, name)
}
func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "7lhlkkfbfb4q5kpp90urffao"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb"
server_side_token_check = false
}
cognito_identity_providers {
client_id = "7lhlkkfbfb4q5kpp90urffao"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"
server_side_token_check = false
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "6lhlkkfbfb4q5kpp90urffae"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"
server_side_token_check = false
}
}
`, name)
}


@ -12,7 +12,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSConfigConfigRule_basic(t *testing.T) {
func testAccConfigConfigRule_basic(t *testing.T) {
var cr configservice.ConfigRule
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-%d", rInt)
@ -37,7 +37,7 @@ func TestAccAWSConfigConfigRule_basic(t *testing.T) {
})
}
func TestAccAWSConfigConfigRule_ownerAws(t *testing.T) {
func testAccConfigConfigRule_ownerAws(t *testing.T) {
var cr configservice.ConfigRule
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-%d", rInt)
@ -72,7 +72,7 @@ func TestAccAWSConfigConfigRule_ownerAws(t *testing.T) {
})
}
func TestAccAWSConfigConfigRule_customlambda(t *testing.T) {
func testAccConfigConfigRule_customlambda(t *testing.T) {
var cr configservice.ConfigRule
rInt := acctest.RandInt()
@ -113,7 +113,7 @@ func TestAccAWSConfigConfigRule_customlambda(t *testing.T) {
})
}
func TestAccAWSConfigConfigRule_importAws(t *testing.T) {
func testAccConfigConfigRule_importAws(t *testing.T) {
resourceName := "aws_config_config_rule.foo"
rInt := acctest.RandInt()
@ -135,7 +135,7 @@ func TestAccAWSConfigConfigRule_importAws(t *testing.T) {
})
}
func TestAccAWSConfigConfigRule_importLambda(t *testing.T) {
func testAccConfigConfigRule_importLambda(t *testing.T) {
resourceName := "aws_config_config_rule.foo"
rInt := acctest.RandInt()


@ -11,7 +11,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSConfigConfigurationRecorderStatus_basic(t *testing.T) {
func testAccConfigConfigurationRecorderStatus_basic(t *testing.T) {
var cr configservice.ConfigurationRecorder
var crs configservice.ConfigurationRecorderStatus
rInt := acctest.RandInt()
@ -36,7 +36,7 @@ func TestAccAWSConfigConfigurationRecorderStatus_basic(t *testing.T) {
})
}
func TestAccAWSConfigConfigurationRecorderStatus_startEnabled(t *testing.T) {
func testAccConfigConfigurationRecorderStatus_startEnabled(t *testing.T) {
var cr configservice.ConfigurationRecorder
var crs configservice.ConfigurationRecorderStatus
rInt := acctest.RandInt()
@ -81,7 +81,7 @@ func TestAccAWSConfigConfigurationRecorderStatus_startEnabled(t *testing.T) {
})
}
func TestAccAWSConfigConfigurationRecorderStatus_importBasic(t *testing.T) {
func testAccConfigConfigurationRecorderStatus_importBasic(t *testing.T) {
resourceName := "aws_config_configuration_recorder_status.foo"
rInt := acctest.RandInt()


@ -12,7 +12,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSConfigConfigurationRecorder_basic(t *testing.T) {
func testAccConfigConfigurationRecorder_basic(t *testing.T) {
var cr configservice.ConfigurationRecorder
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-%d", rInt)
@ -39,7 +39,7 @@ func TestAccAWSConfigConfigurationRecorder_basic(t *testing.T) {
})
}
func TestAccAWSConfigConfigurationRecorder_allParams(t *testing.T) {
func testAccConfigConfigurationRecorder_allParams(t *testing.T) {
var cr configservice.ConfigurationRecorder
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-%d", rInt)
@ -70,7 +70,7 @@ func TestAccAWSConfigConfigurationRecorder_allParams(t *testing.T) {
})
}
func TestAccAWSConfigConfigurationRecorder_importBasic(t *testing.T) {
func testAccConfigConfigurationRecorder_importBasic(t *testing.T) {
resourceName := "aws_config_configuration_recorder.foo"
rInt := acctest.RandInt()


@ -12,7 +12,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSConfigDeliveryChannel_basic(t *testing.T) {
func testAccConfigDeliveryChannel_basic(t *testing.T) {
var dc configservice.DeliveryChannel
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-awsconfig-%d", rInt)
@ -36,7 +36,7 @@ func TestAccAWSConfigDeliveryChannel_basic(t *testing.T) {
})
}
func TestAccAWSConfigDeliveryChannel_allParams(t *testing.T) {
func testAccConfigDeliveryChannel_allParams(t *testing.T) {
var dc configservice.DeliveryChannel
rInt := acctest.RandInt()
expectedName := fmt.Sprintf("tf-acc-test-awsconfig-%d", rInt)
@ -64,7 +64,7 @@ func TestAccAWSConfigDeliveryChannel_allParams(t *testing.T) {
})
}
func TestAccAWSConfigDeliveryChannel_importBasic(t *testing.T) {
func testAccConfigDeliveryChannel_importBasic(t *testing.T) {
resourceName := "aws_config_delivery_channel.foo"
rInt := acctest.RandInt()


@ -0,0 +1,44 @@
package aws
import (
"testing"
)
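// These AWS Config acceptance tests are registered here and driven through
// t.Run sub-tests so that they execute serially within a single top-level
// test, most likely because the service allows only one configuration
// recorder (and delivery channel) per region, which makes the individual
// tests unsafe to run in parallel.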
func TestAccAWSConfig(t *testing.T) {
testCases := map[string]map[string]func(t *testing.T){
"Config": {
"basic": testAccConfigConfigRule_basic,
"ownerAws": testAccConfigConfigRule_ownerAws,
"customlambda": testAccConfigConfigRule_customlambda,
"importAws": testAccConfigConfigRule_importAws,
"importLambda": testAccConfigConfigRule_importLambda,
},
"ConfigurationRecorderStatus": {
"basic": testAccConfigConfigurationRecorderStatus_basic,
"startEnabled": testAccConfigConfigurationRecorderStatus_startEnabled,
"importBasic": testAccConfigConfigurationRecorderStatus_importBasic,
},
"ConfigurationRecorder": {
"basic": testAccConfigConfigurationRecorder_basic,
"allParams": testAccConfigConfigurationRecorder_allParams,
"importBasic": testAccConfigConfigurationRecorder_importBasic,
},
"DeliveryChannel": {
"basic": testAccConfigDeliveryChannel_basic,
"allParams": testAccConfigDeliveryChannel_allParams,
"importBasic": testAccConfigDeliveryChannel_importBasic,
},
}
for group, m := range testCases {
m := m
t.Run(group, func(t *testing.T) {
for name, tc := range m {
tc := tc
t.Run(name, func(t *testing.T) {
tc(t)
})
}
})
}
}


@ -98,6 +98,7 @@ func resourceAwsDbParameterGroupCreate(d *schema.ResourceData, meta interface{})
} else {
groupName = resource.UniqueId()
}
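// Store the chosen (possibly generated) name so the attribute reflects the
// actual value in state and can be interpolated elsewhere.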
d.Set("name", groupName)
createOpts := rds.CreateDBParameterGroupInput{
DBParameterGroupName: aws.String(groupName),


@ -715,11 +715,21 @@ const testAccDBParameterGroupConfig_namePrefix = `
resource "aws_db_parameter_group" "test" {
name_prefix = "tf-test-"
family = "mysql5.6"
parameter {
name = "sync_binlog"
value = 0
}
}
`
const testAccDBParameterGroupConfig_generatedName = `
resource "aws_db_parameter_group" "test" {
family = "mysql5.6"
parameter {
name = "sync_binlog"
value = 0
}
}
`


@ -226,7 +226,7 @@ func resourceAwsDmsReplicationTaskUpdate(d *schema.ResourceData, meta interface{
stateConf := &resource.StateChangeConf{
Pending: []string{"modifying"},
Target: []string{"ready"},
Target: []string{"ready", "stopped", "failed"},
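// A modified task may settle in "stopped" or "failed" rather than "ready",
// presumably the reason those states are now treated as terminal so the
// update does not hang waiting for a state that never arrives.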
Refresh: resourceAwsDmsReplicationTaskStateRefreshFunc(d, meta),
Timeout: d.Timeout(schema.TimeoutCreate),
MinTimeout: 10 * time.Second,


@ -16,20 +16,22 @@ import (
func TestAccAWSDynamoDbTable_basic(t *testing.T) {
var conf dynamodb.DescribeTableOutput
rName := acctest.RandomWithPrefix("TerraformTestTable-")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSDynamoDbTableDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSDynamoDbConfigInitialState(),
Config: testAccAWSDynamoDbConfigInitialState(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckInitialAWSDynamoDbTableExists("aws_dynamodb_table.basic-dynamodb-table", &conf),
testAccCheckInitialAWSDynamoDbTableConf("aws_dynamodb_table.basic-dynamodb-table"),
),
},
{
Config: testAccAWSDynamoDbConfigAddSecondaryGSI,
Config: testAccAWSDynamoDbConfigAddSecondaryGSI(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckDynamoDbTableWasUpdated("aws_dynamodb_table.basic-dynamodb-table"),
),
@ -363,10 +365,10 @@ func dynamoDbAttributesToMap(attributes *[]*dynamodb.AttributeDefinition) map[st
return attrmap
}
func testAccAWSDynamoDbConfigInitialState() string {
func testAccAWSDynamoDbConfigInitialState(rName string) string {
return fmt.Sprintf(`
resource "aws_dynamodb_table" "basic-dynamodb-table" {
name = "TerraformTestTable-%d"
name = "%s"
read_capacity = 10
write_capacity = 20
hash_key = "TestTableHashKey"
@ -407,12 +409,13 @@ resource "aws_dynamodb_table" "basic-dynamodb-table" {
projection_type = "KEYS_ONLY"
}
}
`, acctest.RandInt())
`, rName)
}
const testAccAWSDynamoDbConfigAddSecondaryGSI = `
func testAccAWSDynamoDbConfigAddSecondaryGSI(rName string) string {
return fmt.Sprintf(`
resource "aws_dynamodb_table" "basic-dynamodb-table" {
name = "TerraformTestTable"
name = "%s"
read_capacity = 20
write_capacity = 20
hash_key = "TestTableHashKey"
@ -453,8 +456,8 @@ resource "aws_dynamodb_table" "basic-dynamodb-table" {
projection_type = "INCLUDE"
non_key_attributes = ["TestNonKeyAttribute"]
}
}`, rName)
}
`
func testAccAWSDynamoDbConfigStreamSpecification() string {
return fmt.Sprintf(`


@ -108,20 +108,21 @@ func TestAccAWSEcsServiceWithARN(t *testing.T) {
}
func TestAccAWSEcsServiceWithFamilyAndRevision(t *testing.T) {
rName := acctest.RandomWithPrefix("tf-test")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithFamilyAndRevision,
Config: testAccAWSEcsServiceWithFamilyAndRevision(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.jenkins"),
),
},
{
Config: testAccAWSEcsServiceWithFamilyAndRevisionModified,
Config: testAccAWSEcsServiceWithFamilyAndRevisionModified(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.jenkins"),
),
@ -243,14 +244,15 @@ func TestAccAWSEcsService_withEcsClusterName(t *testing.T) {
}
func TestAccAWSEcsService_withAlb(t *testing.T) {
rString := acctest.RandString(10)
rName := acctest.RandomWithPrefix("tf-acc")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithAlb(rString),
Config: testAccAWSEcsServiceWithAlb(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.with_alb"),
),
@ -301,14 +303,13 @@ func TestAccAWSEcsServiceWithPlacementConstraints(t *testing.T) {
}
func TestAccAWSEcsServiceWithPlacementConstraints_emptyExpression(t *testing.T) {
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEcsServiceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression(rInt),
Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"),
resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_constraints.#", "1"),
@ -485,36 +486,34 @@ resource "aws_ecs_service" "mongo" {
}
`
func testAccAWSEcsServiceWithPlacementConstraintEmptyExpression(rInt int) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest%d"
}
resource "aws_ecs_task_definition" "mongo" {
family = "mongodb"
container_definitions = <<DEFINITION
[
{
"cpu": 128,
"essential": true,
"image": "mongo:latest",
"memory": 128,
"name": "mongodb"
}
]
DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb-%d"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
placement_constraints {
type = "distinctInstance"
}
}
`, rInt, rInt)
var testAccAWSEcsServiceWithPlacementConstraintEmptyExpression = `
resource "aws_ecs_cluster" "default" {
name = "terraformecstest212"
}
resource "aws_ecs_task_definition" "mongo" {
family = "mongodb"
container_definitions = <<DEFINITION
[
{
"cpu": 128,
"essential": true,
"image": "mongo:latest",
"memory": 128,
"name": "mongodb"
}
]
DEFINITION
}
resource "aws_ecs_service" "mongo" {
name = "mongodb"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 1
placement_constraints {
type = "distinctInstance"
}
}
`
var testAccAWSEcsService_withIamRole = `
resource "aws_ecs_cluster" "main" {
@ -739,13 +738,14 @@ var testAccAWSEcsService_withLbChanges_modified = fmt.Sprintf(
tpl_testAccAWSEcsService_withLbChanges,
"nginx:latest", "nginx", 80, 8080, 8080, "nginx", 80)
var testAccAWSEcsServiceWithFamilyAndRevision = `
func testAccAWSEcsServiceWithFamilyAndRevision(rName string) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest2"
name = "%s"
}
resource "aws_ecs_task_definition" "jenkins" {
family = "jenkins"
family = "%s"
container_definitions = <<DEFINITION
[
{
@ -760,20 +760,21 @@ DEFINITION
}
resource "aws_ecs_service" "jenkins" {
name = "jenkins"
name = "%s"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.jenkins.family}:${aws_ecs_task_definition.jenkins.revision}"
desired_count = 1
}`, rName, rName, rName)
}
`
var testAccAWSEcsServiceWithFamilyAndRevisionModified = `
func testAccAWSEcsServiceWithFamilyAndRevisionModified(rName string) string {
return fmt.Sprintf(`
resource "aws_ecs_cluster" "default" {
name = "terraformecstest2"
name = "%s"
}
resource "aws_ecs_task_definition" "jenkins" {
family = "jenkins"
family = "%s"
container_definitions = <<DEFINITION
[
{
@ -788,12 +789,12 @@ DEFINITION
}
resource "aws_ecs_service" "jenkins" {
name = "jenkins"
name = "%s"
cluster = "${aws_ecs_cluster.default.id}"
task_definition = "${aws_ecs_task_definition.jenkins.family}:${aws_ecs_task_definition.jenkins.revision}"
desired_count = 1
}`, rName, rName, rName)
}
`
var testAccAWSEcsServiceWithRenamedCluster = `
resource "aws_ecs_cluster" "default" {
@ -875,12 +876,15 @@ resource "aws_ecs_service" "jenkins" {
}
`
func testAccAWSEcsServiceWithAlb(rString string) string {
func testAccAWSEcsServiceWithAlb(rName string) string {
return fmt.Sprintf(`
data "aws_availability_zones" "available" {}
resource "aws_vpc" "main" {
cidr_block = "10.10.0.0/16"
tags {
Name = "TestAccAWSEcsService_withAlb"
}
}
resource "aws_subnet" "main" {
@ -891,11 +895,11 @@ resource "aws_subnet" "main" {
}
resource "aws_ecs_cluster" "main" {
name = "terraform_acc_test_ecs_%s"
name = "%s"
}
resource "aws_ecs_task_definition" "with_lb_changes" {
family = "tf_acc_test_ghost_lbd"
family = "%s"
container_definitions = <<DEFINITION
[
{
@ -916,7 +920,7 @@ DEFINITION
}
resource "aws_iam_role" "ecs_service" {
name = "tf_acc_test_%s_role"
name = "%s"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
@ -935,7 +939,7 @@ EOF
}
resource "aws_iam_role_policy" "ecs_service" {
name = "tf_acc_test_%s_policy"
name = "%s"
role = "${aws_iam_role.ecs_service.name}"
policy = <<EOF
{
@ -959,22 +963,16 @@ EOF
}
resource "aws_alb_target_group" "test" {
name = "tf-acc-test-ecs-ghost-%s"
name = "%s"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.main.id}"
tags {
Name = "TestAccAWSEcsService_withAlb"
}
}
resource "aws_alb" "main" {
name = "tf-acc-test-alb-ecs-%s"
name = "%s"
internal = true
subnets = ["${aws_subnet.main.*.id}"]
tags {
Name = "TestAccAWSEcsService_withAlb"
}
}
resource "aws_alb_listener" "front_end" {
@ -989,7 +987,7 @@ resource "aws_alb_listener" "front_end" {
}
resource "aws_ecs_service" "with_alb" {
name = "tf-acc-test-ecs-ghost-%s"
name = "%s"
cluster = "${aws_ecs_cluster.main.id}"
task_definition = "${aws_ecs_task_definition.with_lb_changes.arn}"
desired_count = 1
@ -1006,5 +1004,5 @@ resource "aws_ecs_service" "with_alb" {
"aws_alb_listener.front_end"
]
}
`, rString, rString, rString, rString, rString, rString)
`, rName, rName, rName, rName, rName, rName, rName)
}
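Several of the test changes above follow the same pattern: a fixed-name config constant becomes a function that renders the config from a caller-supplied random name, so repeated or concurrent acceptance-test runs do not collide on resource names. A minimal sketch of that pattern (the resource and names are illustrative; call sites typically generate the name with acctest.RandomWithPrefix):

package example

import "fmt"

// testAccExampleConfig renders a test config from a caller-supplied name
// instead of hard-coding one, e.g. rName := acctest.RandomWithPrefix("tf-acc-test").
func testAccExampleConfig(rName string) string {
	return fmt.Sprintf(`
resource "aws_ecs_cluster" "example" {
  name = "%s"
}
`, rName)
}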

View File

@ -311,7 +311,7 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{
// name contained uppercase characters.
d.SetId(strings.ToLower(*resp.CacheCluster.CacheClusterId))
pending := []string{"creating", "modifying", "restoring"}
pending := []string{"creating", "modifying", "restoring", "snapshotting"}
stateConf := &resource.StateChangeConf{
Pending: pending,
Target: []string{"available"},
@ -572,7 +572,7 @@ func resourceAwsElasticacheClusterDelete(d *schema.ResourceData, meta interface{
log.Printf("[DEBUG] Waiting for deletion: %v", d.Id())
stateConf := &resource.StateChangeConf{
Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed"},
Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed", "snapshotting"},
Target: []string{},
Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "", []string{}),
Timeout: 40 * time.Minute,
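The added "snapshotting" state matters because of how the helper/resource waiter works: any state the API can report that is not listed in Pending or Target makes WaitForState fail with an unexpected-state error. A minimal sketch of that waiter pattern, assuming the helper/resource API used throughout the provider:

package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// waitForAvailable blocks until the supplied refresh function reports
// "available"; transient states such as "snapshotting" must be listed in
// Pending or the wait aborts as soon as that state is seen.
func waitForAvailable(refresh resource.StateRefreshFunc) error {
	stateConf := &resource.StateChangeConf{
		Pending:    []string{"creating", "modifying", "restoring", "snapshotting"},
		Target:     []string{"available"},
		Refresh:    refresh,
		Timeout:    40 * time.Minute,
		MinTimeout: 10 * time.Second,
	}
	_, err := stateConf.WaitForState()
	return err
}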

View File

@ -138,10 +138,10 @@ func resourceAwsEMRCluster() *schema.Resource {
Required: true,
},
"args": {
Type: schema.TypeSet,
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
},
@ -381,6 +381,18 @@ func resourceAwsEMRClusterRead(d *schema.ResourceData, meta interface{}) error {
if err := d.Set("ec2_attributes", flattenEc2Attributes(cluster.Ec2InstanceAttributes)); err != nil {
log.Printf("[ERR] Error setting EMR Ec2 Attributes: %s", err)
}
respBootstraps, err := emrconn.ListBootstrapActions(&emr.ListBootstrapActionsInput{
ClusterId: cluster.Id,
})
if err != nil {
log.Printf("[WARN] Error listing bootstrap actions: %s", err)
}
if err := d.Set("bootstrap_action", flattenBootstrapArguments(respBootstraps.BootstrapActions)); err != nil {
log.Printf("[WARN] Error setting Bootstrap Actions: %s", err)
}
return nil
}
@ -589,6 +601,20 @@ func flattenEc2Attributes(ia *emr.Ec2InstanceAttributes) []map[string]interface{
return result
}
func flattenBootstrapArguments(actions []*emr.Command) []map[string]interface{} {
result := make([]map[string]interface{}, 0)
for _, b := range actions {
attrs := make(map[string]interface{})
attrs["name"] = *b.Name
attrs["path"] = *b.ScriptPath
attrs["args"] = flattenStringList(b.Args)
result = append(result, attrs)
}
return result
}
func loadGroups(d *schema.ResourceData, meta interface{}) ([]*emr.InstanceGroup, error) {
emrconn := meta.(*AWSClient).emrconn
reqGrps := &emr.ListInstanceGroupsInput{
@ -699,7 +725,7 @@ func expandBootstrapActions(bootstrapActions []interface{}) []*emr.BootstrapActi
actionAttributes := raw.(map[string]interface{})
actionName := actionAttributes["name"].(string)
actionPath := actionAttributes["path"].(string)
actionArgs := actionAttributes["args"].(*schema.Set).List()
actionArgs := actionAttributes["args"].([]interface{})
action := &emr.BootstrapActionConfig{
Name: aws.String(actionName),
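The switch from schema.TypeSet to schema.TypeList for args above is about ordering: a set hashes its elements and discards the order they were written in, while a list preserves it, which matters for positional bootstrap arguments. A minimal sketch, assuming helper/schema (the attribute shape is illustrative):

package example

import "github.com/hashicorp/terraform/helper/schema"

// argsSchema declares an ordered list of strings; reading it back yields
// an ordered []interface{} rather than an unordered *schema.Set.
func argsSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		ForceNew: true,
		Elem:     &schema.Schema{Type: schema.TypeString},
	}
}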

View File

@ -3,6 +3,7 @@ package aws
import (
"fmt"
"log"
"reflect"
"testing"
"github.com/aws/aws-sdk-go/aws"
@ -14,7 +15,7 @@ import (
)
func TestAccAWSEMRCluster_basic(t *testing.T) {
var jobFlow emr.RunJobFlowOutput
var cluster emr.Cluster
r := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -23,14 +24,51 @@ func TestAccAWSEMRCluster_basic(t *testing.T) {
Steps: []resource.TestStep{
{
Config: testAccAWSEmrClusterConfig(r),
Check: testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
Check: testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
},
},
})
}
func TestAccAWSEMRCluster_bootstrap_ordering(t *testing.T) {
var cluster emr.Cluster
rName := acctest.RandomWithPrefix("tf-emr-bootstrap")
argsInts := []string{
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
}
argsStrings := []string{
"instance.isMaster=true",
"echo running on master node",
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSEmrDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSEmrClusterConfig_bootstrap(rName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.test", &cluster),
testAccCheck_bootstrap_order(&cluster, argsInts, argsStrings),
),
},
},
})
}
func TestAccAWSEMRCluster_terminationProtected(t *testing.T) {
var jobFlow emr.RunJobFlowOutput
var cluster emr.Cluster
r := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -40,7 +78,7 @@ func TestAccAWSEMRCluster_terminationProtected(t *testing.T) {
{
Config: testAccAWSEmrClusterConfig(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "termination_protection", "false"),
),
@ -48,7 +86,7 @@ func TestAccAWSEMRCluster_terminationProtected(t *testing.T) {
{
Config: testAccAWSEmrClusterConfigTerminationPolicyUpdated(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "termination_protection", "true"),
),
@ -57,7 +95,7 @@ func TestAccAWSEMRCluster_terminationProtected(t *testing.T) {
//Need to turn off termination_protection to allow the job to be deleted
Config: testAccAWSEmrClusterConfig(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
),
},
},
@ -65,7 +103,7 @@ func TestAccAWSEMRCluster_terminationProtected(t *testing.T) {
}
func TestAccAWSEMRCluster_visibleToAllUsers(t *testing.T) {
var jobFlow emr.RunJobFlowOutput
var cluster emr.Cluster
r := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -75,7 +113,7 @@ func TestAccAWSEMRCluster_visibleToAllUsers(t *testing.T) {
{
Config: testAccAWSEmrClusterConfig(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "visible_to_all_users", "true"),
),
@ -83,7 +121,7 @@ func TestAccAWSEMRCluster_visibleToAllUsers(t *testing.T) {
{
Config: testAccAWSEmrClusterConfigVisibleToAllUsersUpdated(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "visible_to_all_users", "false"),
),
@ -93,7 +131,7 @@ func TestAccAWSEMRCluster_visibleToAllUsers(t *testing.T) {
}
func TestAccAWSEMRCluster_tags(t *testing.T) {
var jobFlow emr.RunJobFlowOutput
var cluster emr.Cluster
r := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -103,7 +141,7 @@ func TestAccAWSEMRCluster_tags(t *testing.T) {
{
Config: testAccAWSEmrClusterConfig(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr("aws_emr_cluster.tf-test-cluster", "tags.%", "4"),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "tags.role", "rolename"),
@ -117,7 +155,7 @@ func TestAccAWSEMRCluster_tags(t *testing.T) {
{
Config: testAccAWSEmrClusterConfigUpdatedTags(r),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &jobFlow),
testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster),
resource.TestCheckResourceAttr("aws_emr_cluster.tf-test-cluster", "tags.%", "3"),
resource.TestCheckResourceAttr(
"aws_emr_cluster.tf-test-cluster", "tags.dns_zone", "new_zone"),
@ -131,6 +169,45 @@ func TestAccAWSEMRCluster_tags(t *testing.T) {
})
}
func testAccCheck_bootstrap_order(cluster *emr.Cluster, argsInts, argsStrings []string) resource.TestCheckFunc {
return func(s *terraform.State) error {
emrconn := testAccProvider.Meta().(*AWSClient).emrconn
req := emr.ListBootstrapActionsInput{
ClusterId: cluster.Id,
}
resp, err := emrconn.ListBootstrapActions(&req)
if err != nil {
return fmt.Errorf("[ERR] Error listing bootstrap actions in test: %s", err)
}
// make sure we actually checked something
var ran bool
for _, ba := range resp.BootstrapActions {
// assume name matches the config
rArgs := aws.StringValueSlice(ba.Args)
if *ba.Name == "test" {
ran = true
if !reflect.DeepEqual(argsInts, rArgs) {
return fmt.Errorf("Error matching Bootstrap args:\n\texpected: %#v\n\tgot: %#v", argsInts, rArgs)
}
} else if *ba.Name == "runif" {
ran = true
if !reflect.DeepEqual(argsStrings, rArgs) {
return fmt.Errorf("Error matching Bootstrap args:\n\texpected: %#v\n\tgot: %#v", argsStrings, rArgs)
}
}
}
if !ran {
return fmt.Errorf("Expected to compare bootstrap actions, but no checks were run")
}
return nil
}
}
func testAccCheckAWSEmrDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).emrconn
@ -163,7 +240,7 @@ func testAccCheckAWSEmrDestroy(s *terraform.State) error {
return nil
}
func testAccCheckAWSEmrClusterExists(n string, v *emr.RunJobFlowOutput) resource.TestCheckFunc {
func testAccCheckAWSEmrClusterExists(n string, v *emr.Cluster) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
@ -185,6 +262,8 @@ func testAccCheckAWSEmrClusterExists(n string, v *emr.RunJobFlowOutput) resource
return fmt.Errorf("EMR cluster not found")
}
*v = *describe.Cluster
if describe.Cluster != nil &&
*describe.Cluster.Status.State != "WAITING" {
return fmt.Errorf("EMR cluster is not up yet")
@ -194,6 +273,308 @@ func testAccCheckAWSEmrClusterExists(n string, v *emr.RunJobFlowOutput) resource
}
}
func testAccAWSEmrClusterConfig_bootstrap(r string) string {
return fmt.Sprintf(`
resource "aws_emr_cluster" "test" {
count = 1
name = "%s"
release_label = "emr-5.0.0"
applications = ["Hadoop", "Hive"]
log_uri = "s3n://terraform/testlog/"
master_instance_type = "m4.large"
core_instance_type = "m1.small"
core_instance_count = 1
service_role = "${aws_iam_role.iam_emr_default_role.arn}"
depends_on = ["aws_main_route_table_association.a"]
ec2_attributes {
subnet_id = "${aws_subnet.main.id}"
emr_managed_master_security_group = "${aws_security_group.allow_all.id}"
emr_managed_slave_security_group = "${aws_security_group.allow_all.id}"
instance_profile = "${aws_iam_instance_profile.emr_profile.arn}"
}
bootstrap_action {
path = "s3://elasticmapreduce/bootstrap-actions/run-if"
name = "runif"
args = ["instance.isMaster=true", "echo running on master node"]
}
bootstrap_action = [
{
path = "s3://${aws_s3_bucket.tester.bucket}/testscript.sh"
name = "test"
args = ["1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
]
},
]
}
resource "aws_iam_instance_profile" "emr_profile" {
name = "%s_profile"
role = "${aws_iam_role.iam_emr_profile_role.name}"
}
resource "aws_iam_role" "iam_emr_default_role" {
name = "%s_default_role"
assume_role_policy = <<EOT
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOT
}
resource "aws_iam_role" "iam_emr_profile_role" {
name = "%s_profile_role"
assume_role_policy = <<EOT
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOT
}
resource "aws_iam_role_policy_attachment" "profile-attach" {
role = "${aws_iam_role.iam_emr_profile_role.id}"
policy_arn = "${aws_iam_policy.iam_emr_profile_policy.arn}"
}
resource "aws_iam_role_policy_attachment" "service-attach" {
role = "${aws_iam_role.iam_emr_default_role.id}"
policy_arn = "${aws_iam_policy.iam_emr_default_policy.arn}"
}
resource "aws_iam_policy" "iam_emr_default_policy" {
name = "%s_emr"
policy = <<EOT
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Resource": "*",
"Action": [
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CancelSpotInstanceRequests",
"ec2:CreateNetworkInterface",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:DeleteNetworkInterface",
"ec2:DeleteSecurityGroup",
"ec2:DeleteTags",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DescribeDhcpOptions",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeKeyPairs",
"ec2:DescribeNetworkAcls",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribePrefixLists",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSpotInstanceRequests",
"ec2:DescribeSpotPriceHistory",
"ec2:DescribeSubnets",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVpcEndpointServices",
"ec2:DescribeVpcs",
"ec2:DetachNetworkInterface",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:RequestSpotInstances",
"ec2:RevokeSecurityGroupEgress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"ec2:DeleteVolume",
"ec2:DescribeVolumeStatus",
"iam:GetRole",
"iam:GetRolePolicy",
"iam:ListInstanceProfiles",
"iam:ListRolePolicies",
"iam:PassRole",
"s3:CreateBucket",
"s3:Get*",
"s3:List*",
"sdb:BatchPutAttributes",
"sdb:Select",
"sqs:CreateQueue",
"sqs:Delete*",
"sqs:GetQueue*",
"sqs:PurgeQueue",
"sqs:ReceiveMessage"
]
}]
}
EOT
}
resource "aws_iam_policy" "iam_emr_profile_policy" {
name = "%s_profile"
policy = <<EOT
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Resource": "*",
"Action": [
"cloudwatch:*",
"dynamodb:*",
"ec2:Describe*",
"elasticmapreduce:Describe*",
"elasticmapreduce:ListBootstrapActions",
"elasticmapreduce:ListClusters",
"elasticmapreduce:ListInstanceGroups",
"elasticmapreduce:ListInstances",
"elasticmapreduce:ListSteps",
"kinesis:CreateStream",
"kinesis:DeleteStream",
"kinesis:DescribeStream",
"kinesis:GetRecords",
"kinesis:GetShardIterator",
"kinesis:MergeShards",
"kinesis:PutRecord",
"kinesis:SplitShard",
"rds:Describe*",
"s3:*",
"sdb:*",
"sns:*",
"sqs:*"
]
}]
}
EOT
}
resource "aws_vpc" "main" {
cidr_block = "168.31.0.0/16"
enable_dns_hostnames = true
tags {
name = "emr_test_cts"
}
}
resource "aws_subnet" "main" {
vpc_id = "${aws_vpc.main.id}"
cidr_block = "168.31.0.0/20"
tags {
name = "emr_test_cts"
}
}
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.main.id}"
}
resource "aws_route_table" "r" {
vpc_id = "${aws_vpc.main.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}
}
resource "aws_main_route_table_association" "a" {
vpc_id = "${aws_vpc.main.id}"
route_table_id = "${aws_route_table.r.id}"
}
resource "aws_security_group" "allow_all" {
name = "allow_all"
description = "Allow all inbound traffic"
vpc_id = "${aws_vpc.main.id}"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
depends_on = ["aws_subnet.main"]
lifecycle {
ignore_changes = ["ingress", "egress"]
}
tags {
name = "emr_test"
}
}
output "cluster_id" {
value = "${aws_emr_cluster.test.id}"
}
resource "aws_s3_bucket" "tester" {
bucket = "%s"
acl = "public-read"
}
resource "aws_s3_bucket_object" "testobject" {
bucket = "${aws_s3_bucket.tester.bucket}"
key = "testscript.sh"
#source = "testscript.sh"
content = "${data.template_file.testscript.rendered}"
acl = "public-read"
}
data "template_file" "testscript" {
template = <<POLICY
#!/bin/bash
echo $@
POLICY
}`, r, r, r, r, r, r, r)
}
func testAccAWSEmrClusterConfig(r int) string {
return fmt.Sprintf(`
provider "aws" {

View File

@ -121,7 +121,7 @@ func resourceAwsIamInstanceProfileCreate(d *schema.ResourceData, meta interface{
_, hasRole := d.GetOk("role")
if hasRole == false && hasRoles == false {
return fmt.Errorf("Either `roles` or `role` must be specified when creating an IAM Instance Profile")
return fmt.Errorf("Either `role` or `roles` (deprecated) must be specified when creating an IAM Instance Profile")
}
request := &iam.CreateInstanceProfileInput{

View File

@ -81,7 +81,7 @@ func TestAccAWSIAMInstanceProfile_missingRoleThrowsError(t *testing.T) {
Steps: []resource.TestStep{
{
Config: testAccAwsIamInstanceProfileConfigMissingRole(rName),
ExpectError: regexp.MustCompile("Either `roles` or `role` must be specified when creating an IAM Instance Profile"),
ExpectError: regexp.MustCompile(regexp.QuoteMeta("Either `role` or `roles` (deprecated) must be specified when creating an IAM Instance Profile")),
},
},
})
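The ExpectError change above wraps the message in regexp.QuoteMeta because ExpectError takes a regular expression and the literal message now contains metacharacters (the parentheses around "(deprecated)"). A minimal standalone illustration:

package example

import "regexp"

// QuoteMeta escapes regexp metacharacters so the literal error message can
// be used as a match pattern without hand-escaping it.
var errPattern = regexp.MustCompile(regexp.QuoteMeta(
	"Either `role` or `roles` (deprecated) must be specified when creating an IAM Instance Profile"))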

View File

@ -140,7 +140,7 @@ func resourceAwsIamRolePolicyDelete(d *schema.ResourceData, meta interface{}) er
func resourceAwsIamRolePolicyParseId(id string) (roleName, policyName string, err error) {
parts := strings.SplitN(id, ":", 2)
if len(parts) != 2 {
err = fmt.Errorf("role_policy id must be of the for <role name>:<policy name>")
err = fmt.Errorf("role_policy id must be of the form <role name>:<policy name>")
return
}

View File

@ -534,6 +534,9 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
var testAccKinesisFirehoseDeliveryStreamBaseElasticsearchConfig = testAccKinesisFirehoseDeliveryStreamBaseConfig + `
resource "aws_elasticsearch_domain" "test_cluster" {
domain_name = "es-test-%d"
cluster_config {
instance_type = "r3.large.elasticsearch"
}
access_policies = <<CONFIG
{

View File

@ -95,10 +95,10 @@ func resourceAwsKinesisStreamCreate(d *schema.ResourceData, meta interface{}) er
sn, err)
}
s := streamRaw.(kinesisStreamState)
s := streamRaw.(*kinesisStreamState)
d.SetId(s.arn)
d.Set("arn", s.arn)
d.Set("shard_count", s.shardCount)
d.Set("shard_count", len(s.openShards))
return resourceAwsKinesisStreamUpdate(d, meta)
}
@ -141,7 +141,7 @@ func resourceAwsKinesisStreamRead(d *schema.ResourceData, meta interface{}) erro
}
d.Set("arn", state.arn)
d.Set("shard_count", state.shardCount)
d.Set("shard_count", len(state.openShards))
d.Set("retention_period", state.retentionPeriod)
if len(state.shardLevelMetrics) > 0 {
@ -290,23 +290,27 @@ func updateKinesisShardLevelMetrics(conn *kinesis.Kinesis, d *schema.ResourceDat
type kinesisStreamState struct {
arn string
creationTimestamp int64
status string
shardCount int
retentionPeriod int64
openShards []string
closedShards []string
shardLevelMetrics []string
}
func readKinesisStreamState(conn *kinesis.Kinesis, sn string) (kinesisStreamState, error) {
func readKinesisStreamState(conn *kinesis.Kinesis, sn string) (*kinesisStreamState, error) {
describeOpts := &kinesis.DescribeStreamInput{
StreamName: aws.String(sn),
}
var state kinesisStreamState
state := &kinesisStreamState{}
err := conn.DescribeStreamPages(describeOpts, func(page *kinesis.DescribeStreamOutput, last bool) (shouldContinue bool) {
state.arn = aws.StringValue(page.StreamDescription.StreamARN)
state.creationTimestamp = aws.TimeValue(page.StreamDescription.StreamCreationTimestamp).Unix()
state.status = aws.StringValue(page.StreamDescription.StreamStatus)
state.shardCount += len(openShards(page.StreamDescription.Shards))
state.retentionPeriod = aws.Int64Value(page.StreamDescription.RetentionPeriodHours)
state.openShards = append(state.openShards, flattenShards(openShards(page.StreamDescription.Shards))...)
state.closedShards = append(state.closedShards, flattenShards(closedShards(page.StreamDescription.Shards))...)
state.shardLevelMetrics = flattenKinesisShardLevelMetrics(page.StreamDescription.EnhancedMonitoring)
return !last
})
@ -349,14 +353,31 @@ func waitForKinesisToBeActive(conn *kinesis.Kinesis, sn string) error {
return nil
}
// See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-sdk-java-resharding-merge.html
func openShards(shards []*kinesis.Shard) []*kinesis.Shard {
var open []*kinesis.Shard
return filterShards(shards, true)
}
func closedShards(shards []*kinesis.Shard) []*kinesis.Shard {
return filterShards(shards, false)
}
// See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-sdk-java-resharding-merge.html
func filterShards(shards []*kinesis.Shard, open bool) []*kinesis.Shard {
res := make([]*kinesis.Shard, 0, len(shards))
for _, s := range shards {
if s.SequenceNumberRange.EndingSequenceNumber == nil {
open = append(open, s)
if open && s.SequenceNumberRange.EndingSequenceNumber == nil {
res = append(res, s)
} else if !open && s.SequenceNumberRange.EndingSequenceNumber != nil {
res = append(res, s)
}
}
return open
return res
}
func flattenShards(shards []*kinesis.Shard) []string {
res := make([]string, len(shards))
for i, s := range shards {
res[i] = aws.StringValue(s.ShardId)
}
return res
}

View File

@ -29,14 +29,7 @@ func resourceAwsKmsAlias() *schema.Resource {
Optional: true,
ForceNew: true,
ConflictsWith: []string{"name_prefix"},
ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
value := v.(string)
if !regexp.MustCompile(`^(alias\/)[a-zA-Z0-9:/_-]+$`).MatchString(value) {
es = append(es, fmt.Errorf(
"%q must begin with 'alias/' and be comprised of only [a-zA-Z0-9:/_-]", k))
}
return
},
ValidateFunc: validateAwsKmsName,
},
"name_prefix": &schema.Schema{
Type: schema.TypeString,

View File

@ -146,6 +146,10 @@ func resourceAwsLambdaFunction() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
"invoke_arn": {
Type: schema.TypeString,
Computed: true,
},
"last_modified": {
Type: schema.TypeString,
Computed: true,
@ -175,6 +179,8 @@ func resourceAwsLambdaFunction() *schema.Resource {
Optional: true,
ValidateFunc: validateArn,
},
"tags": tagsSchema(),
},
}
}
@ -291,6 +297,10 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e
params.KMSKeyArn = aws.String(v.(string))
}
if v, exists := d.GetOk("tags"); exists {
params.Tags = tagsFromMapGeneric(v.(map[string]interface{}))
}
// IAM profiles can take ~10 seconds to propagate in AWS:
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console
// Error creating Lambda function: InvalidParameterValueException: The role defined for the task cannot be assumed by Lambda.
@ -353,6 +363,7 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err
d.Set("runtime", function.Runtime)
d.Set("timeout", function.Timeout)
d.Set("kms_key_arn", function.KMSKeyArn)
d.Set("tags", tagsToMapGeneric(getFunctionOutput.Tags))
config := flattenLambdaVpcConfigResponse(function.VpcConfig)
log.Printf("[INFO] Setting Lambda %s VPC config %#v from API", d.Id(), config)
@ -399,6 +410,8 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err
d.Set("version", lastVersion)
d.Set("qualified_arn", lastQualifiedArn)
d.Set("invoke_arn", buildLambdaInvokeArn(*function.FunctionArn, meta.(*AWSClient).region))
return nil
}
@ -448,6 +461,12 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e
d.Partial(true)
arn := d.Get("arn").(string)
if tagErr := setTagsLambda(conn, d, arn); tagErr != nil {
return tagErr
}
d.SetPartial("tags")
if d.HasChange("filename") || d.HasChange("source_code_hash") || d.HasChange("s3_bucket") || d.HasChange("s3_key") || d.HasChange("s3_object_version") {
codeReq := &lambda.UpdateFunctionCodeInput{
FunctionName: aws.String(d.Id()),

Some files were not shown because too many files have changed in this diff.