Merge branch 'master' into import-opsworks-stacks

This commit is contained in:
DJ Home 2016-10-03 11:28:28 +01:00
commit 967335eae3
23 changed files with 532 additions and 153 deletions


@ -1,5 +1,8 @@
## 0.7.5 (Unreleased)
BACKWARDS INCOMPATIBILITIES / NOTES:
* `tls_cert_request` is now a managed resource instead of a data source, restoring the pre-Terraform 0.7 behaviour [GH-9035]
FEATURES:
* **New Provider:** `bitbucket` [GH-7405]
* **New Resource:** `aws_api_gateway_client_certificate` [GH-8775]
@ -8,29 +11,31 @@ IMPROVEMENTS:
* provider/aws: Add JSON validation to the `aws_cloudwatch_event_rule` resource [GH-8897]
* provider/aws: S3 bucket policies are imported as separate resources [GH-8915]
* provider/aws: S3 bucket policies can now be removed via the `aws_s3_bucket` resource [GH-8915]
* provider/aws: Added a `cluster_address` attribute to aws elasticache [GH-8935]
* provider/aws: Add JSON validation to the `aws_elasticsearch_domain` resource [GH-8898]
* provider/aws: Add JSON validation to the `aws_kms_key` resource [GH-8900]
* provider/aws: Add JSON validation to the `aws_s3_bucket_policy` resource [GH-8901]
* provider/aws: Add JSON validation to the `aws_sns_topic` resource [GH-8902]
* provider/aws: Add JSON validation to the `aws_sns_topic_policy` resource [GH-8903]
* provider/aws: Add JSON validation to the `aws_sqs_queue` resource [GH-8904]
* provider/aws: Add JSON validation to the `aws_sqs_queue_policy` resource [GH-8905]
* provider/aws: Add JSON validation to the `aws_vpc_endpoint` resource [GH-8906]
* provider/aws: Update `aws_cloudformation_stack` data source with new helper function [GH-8907]
* provider/aws: Add JSON validation to the `aws_s3_bucket` resource [GH-8908]
* provider/aws: Add support for `cloudwatch_logging_options` to Firehose Delivery Streams [GH-8671]
* provider/aws: Add HTTP/2 support via the `http_version` parameter to CloudFront distribution [GH-8777]
* provider/aws: Add `query_string_cache_keys` to allow for selective caching of CloudFront keys [GH-8777]
* provider/aws: Support import of `aws_elasticache_cluster` [GH-9010]
* provider/aws: Add support for tags to `aws_cloudfront_distribution` [GH-9011]
* provider/azurerm: Add `normalizeJsonString` and `validateJsonString` functions [GH-8909]
* provider/openstack: Use proxy environment variables for communication with services [GH-8948]
* provider/vsphere: Add `detach_unknown_disks_on_delete` flag for VM resource [GH-8947]
* provisioner/chef: Add `skip_register` attribute to allow skipping the registering steps [GH-9127]
BUG FIXES:
* core: Fixed variables not being in scope for `destroy -target` on modules [GH-9021]
* provider/aws: Remove support for `ah` and `esp` literals in Security Group Ingress/Egress rules; you must use the actual protocol number for protocols other than `tcp`, `udp`, `icmp`, or `all` [GH-8975]
* provider/aws: Do not report drift for effect values differing only by case in AWS policies [GH-9139]
* provider/aws: VPC ID, Port, Protocol and Name changes on `aws_alb_target_group` will ForceNew the resource [GH-8989]
* provider/aws: Wait for Spot Fleet to drain before removing from state [GH-8938]
* provider/aws: Fix issue when importing `aws_eip` resources by IP address [GH-8970]
@ -47,8 +52,10 @@ BUG FIXES:
* provider/aws: `aws_db_option_group` flattenOptions failing due to missing values [GH-9052]
* provider/aws: Add retry logic to the `aws_ecr_repository` delete func [GH-9050]
* provider/aws: Modifying the `parameter_group_name` of `aws_elasticache_replication_group` caused a panic [GH-9101]
* provider/aws: Fix issue with updating ELB subnets for subnets in the same AZ [GH-9131]
* provider/librato: Mandatory `name` and `conditions` attributes weren't being sent on Update unless changed [GH-8984]
* provisioner/chef: Fix an error with parsing certain `vault_json` content [GH-9114]
* provisioner/chef: Change the order in which the user key is cleaned up so this is done before the Chef run starts [GH-9114]
## 0.7.4 (September 19, 2016)
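Several entries above add JSON validation/normalization to AWS resources so that semantically identical policy documents do not show up as diffs. A minimal sketch of the idea — `normalizeJSONString` here is an illustrative name, not the actual Terraform helper:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeJSONString unmarshals and re-marshals a JSON document so that
// whitespace and key-order differences disappear: encoding/json sorts map
// keys deterministically when marshaling.
func normalizeJSONString(s string) (string, error) {
	var v interface{}
	if err := json.Unmarshal([]byte(s), &v); err != nil {
		return "", err
	}
	b, err := json.Marshal(v)
	return string(b), err
}

func main() {
	a, _ := normalizeJSONString(`{"Version": "2012-10-17",  "Statement": []}`)
	b, _ := normalizeJSONString(`{"Statement":[],"Version":"2012-10-17"}`)
	fmt.Println(a == b) // prints: true
}
```

Validation falls out of the same call: if `json.Unmarshal` fails, the string is not valid JSON.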


@ -82,6 +82,8 @@ func protocolIntegers() map[string]int {
var protocolIntegers = make(map[string]int)
protocolIntegers = map[string]int{
// defined at https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
"ah": 51,
"esp": 50,
"udp": 17,
"tcp": 6,
"icmp": 1,
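The hunk above adds `ah` (51) and `esp` (50) to the Network ACL protocol table. A self-contained sketch of how such a table resolves a rule's protocol field, accepting either a known name or a numeric string — the `all` entry and the numeric fallback are assumptions, not shown in the hunk:

```go
package main

import (
	"fmt"
	"strconv"
)

// protocolIntegers mirrors the Network ACL table above, including the
// newly added "ah" and "esp" entries.
var protocolIntegers = map[string]int{
	"ah":   51,
	"esp":  50,
	"udp":  17,
	"tcp":  6,
	"icmp": 1,
	"all":  -1,
}

// protocolNumber resolves a protocol given either a known name or a
// numeric string, the two forms a rule may use.
func protocolNumber(v string) (int, bool) {
	if n, ok := protocolIntegers[v]; ok {
		return n, true
	}
	if n, err := strconv.Atoi(v); err == nil {
		return n, true
	}
	return 0, false
}

func main() {
	n, _ := protocolNumber("esp")
	fmt.Println(n) // prints: 50
}
```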


@ -695,19 +695,6 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error {
removed := expandStringList(os.Difference(ns).List())
added := expandStringList(ns.Difference(os).List())
if len(added) > 0 {
attachOpts := &elb.AttachLoadBalancerToSubnetsInput{
LoadBalancerName: aws.String(d.Id()),
Subnets: added,
}
log.Printf("[DEBUG] ELB attach subnets opts: %s", attachOpts)
_, err := elbconn.AttachLoadBalancerToSubnets(attachOpts)
if err != nil {
return fmt.Errorf("Failure adding ELB subnets: %s", err)
}
}
if len(removed) > 0 {
detachOpts := &elb.DetachLoadBalancerFromSubnetsInput{
LoadBalancerName: aws.String(d.Id()),
@ -721,6 +708,33 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
if len(added) > 0 {
attachOpts := &elb.AttachLoadBalancerToSubnetsInput{
LoadBalancerName: aws.String(d.Id()),
Subnets: added,
}
log.Printf("[DEBUG] ELB attach subnets opts: %s", attachOpts)
err := resource.Retry(1*time.Minute, func() *resource.RetryError {
_, err := elbconn.AttachLoadBalancerToSubnets(attachOpts)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
// eventually consistent issue with removing a subnet in AZ1 and
// immediately adding a new one in the same AZ
if awsErr.Code() == "InvalidConfigurationRequest" && strings.Contains(awsErr.Message(), "cannot be attached to multiple subnets in the same AZ") {
log.Printf("[DEBUG] retrying az association")
return resource.RetryableError(awsErr)
}
}
return resource.NonRetryableError(err)
}
return nil
})
if err != nil {
return fmt.Errorf("Failure adding ELB subnets: %s", err)
}
}
d.SetPartial("subnets")
}


@ -248,6 +248,36 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) {
})
}
func TestAccAWSELB_swap_subnets(t *testing.T) {
var conf elb.LoadBalancerDescription
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_elb.ourapp",
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSELBDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSELBConfig_subnets,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSELBExists("aws_elb.ourapp", &conf),
resource.TestCheckResourceAttr(
"aws_elb.ourapp", "subnets.#", "2"),
),
},
resource.TestStep{
Config: testAccAWSELBConfig_subnet_swap,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSELBExists("aws_elb.ourapp", &conf),
resource.TestCheckResourceAttr(
"aws_elb.ourapp", "subnets.#", "2"),
),
},
},
})
}
func testAccLoadTags(conf *elb.LoadBalancerDescription, td *elb.TagDescription) resource.TestCheckFunc {
return func(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).elbconn
@ -1329,3 +1359,127 @@ resource "aws_elb" "bar" {
}
`, certName)
}
const testAccAWSELBConfig_subnets = `
provider "aws" {
region = "us-west-2"
}
resource "aws_vpc" "azelb" {
cidr_block = "10.1.0.0/16"
enable_dns_hostnames = true
tags {
Name = "subnet-vpc"
}
}
resource "aws_subnet" "public_a_one" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.1.0/24"
availability_zone = "us-west-2a"
}
resource "aws_subnet" "public_b_one" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.7.0/24"
availability_zone = "us-west-2b"
}
resource "aws_subnet" "public_a_two" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.2.0/24"
availability_zone = "us-west-2a"
}
resource "aws_elb" "ourapp" {
name = "terraform-asg-deployment-example"
subnets = [
"${aws_subnet.public_a_one.id}",
"${aws_subnet.public_b_one.id}",
]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
depends_on = ["aws_internet_gateway.gw"]
}
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.azelb.id}"
tags {
Name = "main"
}
}
`
const testAccAWSELBConfig_subnet_swap = `
provider "aws" {
region = "us-west-2"
}
resource "aws_vpc" "azelb" {
cidr_block = "10.1.0.0/16"
enable_dns_hostnames = true
tags {
Name = "subnet-vpc"
}
}
resource "aws_subnet" "public_a_one" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.1.0/24"
availability_zone = "us-west-2a"
}
resource "aws_subnet" "public_b_one" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.7.0/24"
availability_zone = "us-west-2b"
}
resource "aws_subnet" "public_a_two" {
vpc_id = "${aws_vpc.azelb.id}"
cidr_block = "10.1.2.0/24"
availability_zone = "us-west-2a"
}
resource "aws_elb" "ourapp" {
name = "terraform-asg-deployment-example"
subnets = [
"${aws_subnet.public_a_two.id}",
"${aws_subnet.public_b_one.id}",
]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
depends_on = ["aws_internet_gateway.gw"]
}
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.azelb.id}"
tags {
Name = "main"
}
}
`


@ -228,7 +228,25 @@ func TestAccAWSNetworkAcl_Subnets(t *testing.T) {
},
},
})
}
func TestAccAWSNetworkAcl_espProtocol(t *testing.T) {
var networkAcl ec2.NetworkAcl
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_network_acl.testesp",
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSNetworkAclDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAWSNetworkAclEsp,
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSNetworkAclExists("aws_network_acl.testesp", &networkAcl),
),
},
},
})
}
func testAccCheckAWSNetworkAclDestroy(s *terraform.State) error {
@ -638,3 +656,26 @@ resource "aws_network_acl" "bar" {
}
}
`
const testAccAWSNetworkAclEsp = `
resource "aws_vpc" "testespvpc" {
cidr_block = "10.1.0.0/16"
}
resource "aws_network_acl" "testesp" {
vpc_id = "${aws_vpc.testespvpc.id}"
egress {
protocol = "esp"
rule_no = 5
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 0
to_port = 0
}
tags {
Name = "test_esp"
}
}
`


@ -955,7 +955,7 @@ func protocolForValue(v string) string {
return "-1"
}
// if it's a name like tcp, return that
if _, ok := sgProtocolIntegers()[protocol]; ok {
return protocol
}
// convert to int, look for that value // convert to int, look for that value
@ -967,7 +967,7 @@ func protocolForValue(v string) string {
return protocol
}
for k, v := range sgProtocolIntegers() {
if p == v {
// guard against protocolIntegers sometime in the future not having lower
// case ids in the map
@ -980,6 +980,23 @@ func protocolForValue(v string) string {
return protocol
}
// a map of protocol names and their codes, defined at
// https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml,
// documented to be supported by AWS Security Groups
// http://docs.aws.amazon.com/fr_fr/AWSEC2/latest/APIReference/API_IpPermission.html
// Similar to protocolIntegers() used by Network ACLs, but explicitly only
// supports "tcp", "udp", "icmp", and "all"
func sgProtocolIntegers() map[string]int {
var protocolIntegers = make(map[string]int)
protocolIntegers = map[string]int{
"udp": 17,
"tcp": 6,
"icmp": 1,
"all": -1,
}
return protocolIntegers
}
// The AWS Lambda service creates ENIs behind the scenes and keeps these around for a while
// which would prevent SGs attached to such ENIs from being destroyed
func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error {
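`protocolForValue` above normalizes a user-supplied protocol to the canonical form security groups expect. A minimal standalone re-implementation under the same rules — the pass-through fallback for unrecognized strings is an assumption:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

var sgProtocolIntegers = map[string]int{
	"udp":  17,
	"tcp":  6,
	"icmp": 1,
	"all":  -1,
}

// protocolForValue mirrors the normalization above: "all" and "-1" both
// mean every protocol, known names pass through lower-cased, and numbers
// that map to a known name are rewritten to that name.
func protocolForValue(v string) string {
	protocol := strings.ToLower(v)
	if protocol == "-1" || protocol == "all" {
		return "-1"
	}
	if _, ok := sgProtocolIntegers[protocol]; ok {
		return protocol
	}
	p, err := strconv.Atoi(protocol)
	if err != nil {
		// not a known name and not a number: pass through unchanged
		return protocol
	}
	for name, code := range sgProtocolIntegers {
		if p == code {
			return name
		}
	}
	return protocol
}

func main() {
	fmt.Println(protocolForValue("6"), protocolForValue("TCP"), protocolForValue("all"))
	// prints: tcp tcp -1
}
```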


@ -12,18 +12,11 @@ import (
func Provider() terraform.ResourceProvider {
return &schema.Provider{
DataSourcesMap: map[string]*schema.Resource{
"tls_cert_request": dataSourceCertRequest(),
},
ResourcesMap: map[string]*schema.Resource{
"tls_private_key": resourcePrivateKey(),
"tls_locally_signed_cert": resourceLocallySignedCert(),
"tls_self_signed_cert": resourceSelfSignedCert(),
"tls_cert_request": resourceCertRequest(),
"tls_cert_request": schema.DataSourceResourceShim(
"tls_cert_request",
dataSourceCertRequest(),
),
},
}
}


@ -12,8 +12,10 @@ import (
const pemCertReqType = "CERTIFICATE REQUEST"
func resourceCertRequest() *schema.Resource {
return &schema.Resource{
Create: CreateCertRequest,
Delete: DeleteCertRequest,
Read: ReadCertRequest,
Schema: map[string]*schema.Schema{
@ -22,6 +24,7 @@ func dataSourceCertRequest() *schema.Resource {
Type: schema.TypeList,
Optional: true,
Description: "List of DNS names to use as subjects of the certificate",
ForceNew: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
@ -31,6 +34,7 @@ func dataSourceCertRequest() *schema.Resource {
Type: schema.TypeList,
Optional: true,
Description: "List of IP addresses to use as subjects of the certificate",
ForceNew: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
@ -40,12 +44,14 @@ func dataSourceCertRequest() *schema.Resource {
Type: schema.TypeString,
Required: true,
Description: "Name of the algorithm to use to generate the certificate's private key",
ForceNew: true,
},
"private_key_pem": &schema.Schema{
Type: schema.TypeString,
Required: true,
Description: "PEM-encoded private key that the certificate will belong to",
ForceNew: true,
StateFunc: func(v interface{}) string {
return hashForState(v.(string))
},
@ -55,6 +61,7 @@ func dataSourceCertRequest() *schema.Resource {
Type: schema.TypeList,
Required: true,
Elem: nameSchema,
ForceNew: true,
},
"cert_request_pem": &schema.Schema{
@ -65,7 +72,7 @@ func dataSourceCertRequest() *schema.Resource {
}
}
func CreateCertRequest(d *schema.ResourceData, meta interface{}) error {
key, err := parsePrivateKey(d, "private_key_pem", "key_algorithm")
if err != nil {
return err
@ -109,3 +116,12 @@ func ReadCertRequest(d *schema.ResourceData, meta interface{}) error {
return nil
}
func DeleteCertRequest(d *schema.ResourceData, meta interface{}) error {
d.SetId("")
return nil
}
func ReadCertRequest(d *schema.ResourceData, meta interface{}) error {
return nil
}


@ -12,12 +12,12 @@ import (
)
func TestCertRequest(t *testing.T) {
r.Test(t, r.TestCase{
Providers: testProviders,
Steps: []r.TestStep{
r.TestStep{
Config: fmt.Sprintf(`
resource "tls_cert_request" "test" {
subject {
common_name = "example.com"
organization = "Example, Inc"
@ -46,7 +46,7 @@ func TestCertRequest(t *testing.T) {
EOT
}
output "key_pem" {
value = "${tls_cert_request.test.cert_request_pem}"
}
`, testPrivateKey),
Check: func(s *terraform.State) error {


@ -11,6 +11,7 @@ import (
"path"
"regexp"
"strings"
"sync"
"text/template"
"time"
@ -103,6 +104,7 @@ type Provisioner struct {
SecretKey string `mapstructure:"secret_key"`
ServerURL string `mapstructure:"server_url"`
SkipInstall bool `mapstructure:"skip_install"`
SkipRegister bool `mapstructure:"skip_register"`
SSLVerifyMode string `mapstructure:"ssl_verify_mode"`
UserName string `mapstructure:"user_name"`
UserKey string `mapstructure:"user_key"`
@ -110,7 +112,7 @@ type Provisioner struct {
Version string `mapstructure:"version"`
attributes map[string]interface{}
vaults map[string][]string
cleanupUserKeyCmd string
createConfigFiles func(terraform.UIOutput, communicator.Communicator) error
@ -192,12 +194,14 @@ func (r *ResourceProvisioner) Apply(
defer comm.Disconnect()
// Make sure we always delete the user key from the new node!
var once sync.Once
cleanupUserKey := func() {
o.Output("Cleanup user key...")
if err := p.runCommand(o, comm, p.cleanupUserKeyCmd); err != nil {
o.Output("WARNING: Failed to cleanup user key on new node: " + err.Error())
}
}
defer once.Do(cleanupUserKey)
if !p.SkipInstall {
if err := p.installChefClient(o, comm); err != nil {
@ -210,6 +214,7 @@ func (r *ResourceProvisioner) Apply(
return err
}
if !p.SkipRegister {
if p.FetchChefCertificates {
o.Output("Fetch Chef certificates...")
if err := p.fetchChefCertificates(o, comm); err != nil {
@ -221,6 +226,7 @@ func (r *ResourceProvisioner) Apply(
if err := p.generateClientKey(o, comm); err != nil {
return err
}
}
if p.VaultJSON != "" {
o.Output("Configure Chef vaults...")
@ -229,6 +235,10 @@ func (r *ResourceProvisioner) Apply(
}
}
// Cleanup the user key before we run Chef-Client to prevent issues
// with rights caused by changing settings during the run.
once.Do(cleanupUserKey)
o.Output("Starting initial Chef-Client run...")
if err := p.runChefClient(o, comm); err != nil {
return err
@ -352,11 +362,28 @@ func (r *ResourceProvisioner) decodeConfig(c *terraform.ResourceConfig) (*Provis
}
if vaults, ok := c.Config["vault_json"].(string); ok {
var m map[string]interface{}
if err := json.Unmarshal([]byte(vaults), &m); err != nil {
return nil, fmt.Errorf("Error parsing vault_json: %v", err)
}
p.vaults = m
v := make(map[string][]string)
for vault, items := range m {
switch items := items.(type) {
case []interface{}:
for _, item := range items {
if item, ok := item.(string); ok {
v[vault] = append(v[vault], item)
}
}
case interface{}:
if item, ok := items.(string); ok {
v[vault] = append(v[vault], item)
}
}
}
p.vaults = v
}
return p, nil
@ -544,7 +571,8 @@ func (p *Provisioner) configureVaultsFunc(
path.Join(confDir, p.UserName+".pem"),
)
for vault, items := range p.vaults {
for _, item := range items {
updateCmd := fmt.Sprintf("%s vault update %s %s -A %s -M client %s",
knifeCmd,
vault,
@ -556,6 +584,7 @@ func (p *Provisioner) configureVaultsFunc(
return err
}
}
}
return nil
}
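The `decodeConfig` change above lets each vault in `vault_json` map to either a single item name or a list of item names. A self-contained sketch of that normalization — `parseVaults` is a hypothetical helper, not the provisioner's API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseVaults normalizes vault_json the way the hunk above does: each
// vault maps to either one item name or a list of item names, and both
// forms come out as map[string][]string.
func parseVaults(raw string) (map[string][]string, error) {
	var m map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		return nil, fmt.Errorf("Error parsing vault_json: %v", err)
	}
	v := make(map[string][]string)
	for vault, items := range m {
		switch items := items.(type) {
		case []interface{}:
			for _, item := range items {
				if s, ok := item.(string); ok {
					v[vault] = append(v[vault], s)
				}
			}
		case string:
			v[vault] = append(v[vault], items)
		}
	}
	return v, nil
}

func main() {
	v, _ := parseVaults(`{"vault1": ["item1", "item2"], "vault2": "item3"}`)
	fmt.Println(len(v["vault1"]), v["vault2"][0]) // prints: 2 item3
}
```

A `case string` arm is used here where the diff type-switches on `interface{}` and asserts; the observable behaviour is the same.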


@ -221,3 +221,128 @@ func TestResourceProvider_fetchChefCertificates(t *testing.T) {
}
}
}
func TestResourceProvider_configureVaults(t *testing.T) {
cases := map[string]struct {
Config *terraform.ResourceConfig
GemCmd string
KnifeCmd string
ConfDir string
Commands map[string]bool
}{
"Linux Vault string": {
Config: testConfig(t, map[string]interface{}{
"node_name": "nodename1",
"prevent_sudo": true,
"run_list": []interface{}{"cookbook::recipe"},
"server_url": "https://chef.local",
"user_name": "bob",
"user_key": "USER-KEY",
"vault_json": `{"vault1": "item1"}`,
}),
GemCmd: linuxGemCmd,
KnifeCmd: linuxKnifeCmd,
ConfDir: linuxConfDir,
Commands: map[string]bool{
fmt.Sprintf("%s install chef-vault", linuxGemCmd): true,
fmt.Sprintf("%s vault update vault1 item1 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", linuxKnifeCmd, linuxConfDir, linuxConfDir): true,
},
},
"Linux Vault []string": {
Config: testConfig(t, map[string]interface{}{
"fetch_chef_certificates": true,
"node_name": "nodename1",
"prevent_sudo": true,
"run_list": []interface{}{"cookbook::recipe"},
"server_url": "https://chef.local",
"user_name": "bob",
"user_key": "USER-KEY",
"vault_json": `{"vault1": ["item1", "item2"]}`,
}),
GemCmd: linuxGemCmd,
KnifeCmd: linuxKnifeCmd,
ConfDir: linuxConfDir,
Commands: map[string]bool{
fmt.Sprintf("%s install chef-vault", linuxGemCmd): true,
fmt.Sprintf("%s vault update vault1 item1 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", linuxKnifeCmd, linuxConfDir, linuxConfDir): true,
fmt.Sprintf("%s vault update vault1 item2 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", linuxKnifeCmd, linuxConfDir, linuxConfDir): true,
},
},
"Windows Vault string": {
Config: testConfig(t, map[string]interface{}{
"node_name": "nodename1",
"prevent_sudo": true,
"run_list": []interface{}{"cookbook::recipe"},
"server_url": "https://chef.local",
"user_name": "bob",
"user_key": "USER-KEY",
"vault_json": `{"vault1": "item1"}`,
}),
GemCmd: windowsGemCmd,
KnifeCmd: windowsKnifeCmd,
ConfDir: windowsConfDir,
Commands: map[string]bool{
fmt.Sprintf("%s install chef-vault", windowsGemCmd): true,
fmt.Sprintf("%s vault update vault1 item1 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", windowsKnifeCmd, windowsConfDir, windowsConfDir): true,
},
},
"Windows Vault []string": {
Config: testConfig(t, map[string]interface{}{
"fetch_chef_certificates": true,
"node_name": "nodename1",
"prevent_sudo": true,
"run_list": []interface{}{"cookbook::recipe"},
"server_url": "https://chef.local",
"user_name": "bob",
"user_key": "USER-KEY",
"vault_json": `{"vault1": ["item1", "item2"]}`,
}),
GemCmd: windowsGemCmd,
KnifeCmd: windowsKnifeCmd,
ConfDir: windowsConfDir,
Commands: map[string]bool{
fmt.Sprintf("%s install chef-vault", windowsGemCmd): true,
fmt.Sprintf("%s vault update vault1 item1 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", windowsKnifeCmd, windowsConfDir, windowsConfDir): true,
fmt.Sprintf("%s vault update vault1 item2 -A nodename1 -M client -c %s/client.rb "+
"-u bob --key %s/bob.pem", windowsKnifeCmd, windowsConfDir, windowsConfDir): true,
},
},
}
r := new(ResourceProvisioner)
o := new(terraform.MockUIOutput)
c := new(communicator.MockCommunicator)
for k, tc := range cases {
c.Commands = tc.Commands
p, err := r.decodeConfig(tc.Config)
if err != nil {
t.Fatalf("Error: %v", err)
}
p.configureVaults = p.configureVaultsFunc(tc.GemCmd, tc.KnifeCmd, tc.ConfDir)
p.useSudo = !p.PreventSudo
err = p.configureVaults(o, c)
if err != nil {
t.Fatalf("Test %q failed: %v", k, err)
}
}
}


@ -1,8 +1,9 @@
package awspolicy
import (
"reflect"
"encoding/json"
"reflect"
"strings"
"github.com/hashicorp/errwrap"
)
@ -104,7 +105,7 @@ func (statement *awsPolicyStatement) equals(other *awsPolicyStatement) bool {
return false
}
if strings.ToLower(statement.Effect) != strings.ToLower(other.Effect) {
return false
}
@ -218,7 +219,6 @@ func stringPrincipalsEqual(ours, theirs interface{}) bool {
return false
}
type awsConditionsBlock map[string]map[string]interface{}
func (conditions awsConditionsBlock) Equals(other awsConditionsBlock) bool {
@ -287,7 +287,6 @@ func (conditions awsConditionsBlock) Equals(other awsConditionsBlock) bool {
return true
}
type awsStringSet []string
// newAWSStringSet constructs an awsStringSet from an interface{} - which
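The policy-comparison change above lowercases both `Effect` values before comparing, so `Allow` and `allow` no longer count as drift. In Go, `strings.EqualFold` expresses the same case-insensitive comparison directly:

```go
package main

import (
	"fmt"
	"strings"
)

// effectsEqual treats IAM policy Effect values as equal regardless of
// case, matching the drift fix above. strings.EqualFold avoids the two
// ToLower allocations used in the diff.
func effectsEqual(a, b string) bool {
	return strings.EqualFold(a, b)
}

func main() {
	fmt.Println(effectsEqual("Allow", "allow"), effectsEqual("Allow", "Deny"))
	// prints: true false
}
```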

vendor/vendor.json

@ -1291,10 +1291,10 @@
"revision": "f233a8bac88d1f2dc282a98186f5a3363b806181"
},
{
"checksumSHA1": "zyyyjWKu9gGLFy00k8utV7pncvg=",
"path": "github.com/jen20/awspolicyequivalence",
"revision": "ebe5485f2c1822e7bee8b5008e14d9481a14a3a3",
"revisionTime": "2016-09-29T21:48:42Z"
},
{
"checksumSHA1": "oPpOfZn11Ef6DWOoETxSW9Venzs=",


@ -27,7 +27,7 @@ to the developers via a GitHub Issue. As a user, you're not required to dig
into this file.
However, if you are interested in figuring out what might have gone wrong
before filing an issue, here are the basic details of how to read a crash
log.
The most interesting part of a crash log is the panic message itself and the


@ -2,31 +2,21 @@
layout: "docs" layout: "docs"
page_title: "Creating Modules" page_title: "Creating Modules"
sidebar_current: "docs-modules-create" sidebar_current: "docs-modules-create"
description: |- description: How to create modules.
Creating modules in Terraform is easy. You may want to do this to better organize your code, to make a reusable component, or just to learn more about Terraform. For any reason, if you already know the basics of Terraform, creating a module is a piece of cake.
--- ---
# Creating Modules # Creating Modules
Creating modules in Terraform is easy. You may want to do this to better organize your code, to make a reusable component, or just to learn more about Terraform. Whatever the reason, if you already know the basics of Terraform, then creating a module is a piece of cake.
Modules in Terraform are folders with Terraform files. In fact, when you run `terraform apply`, the current working directory holding the Terraform files you're applying comprises what is called the _root module_. This itself is a valid module.
Therefore, you can enter the source of any module, satisfy any required variables, run `terraform apply`, and expect it to work.
## An Example Module
Within a folder containing Terraform configurations, create a subfolder called `child`. In this subfolder, make one empty `main.tf` file. Then, back in the root folder containing the `child` folder, add this to one of your Terraform configuration files:
```
module "child" {
@ -34,25 +24,18 @@ module "child" {
}
```
You've now created your first module! You can add resources to the `child` module.
**Note:** Prior to running the above, you'll have to run [the get command](/docs/commands/get.html) for Terraform to sync your modules. This should be instant since the module is a local path.
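In practice, the workflow for this example is just two commands:

```
$ terraform get
$ terraform apply
```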
## Inputs/Outputs
To make modules more useful than simple isolated containers of Terraform configurations, modules can be configured and also have outputs that can be consumed by your Terraform configuration.
Inputs of a module are [variables](/docs/configuration/variables.html) and outputs are [outputs](/docs/configuration/outputs.html). There is no special syntax to define these; they're defined just like any other variables or outputs. You can think of these variables and outputs as the API interface to your module.
In the "child" module we created above, add the following: Let's add a variable and an output to our `child` module.
``` ```
variable "memory" {} variable "memory" {}
@ -62,8 +45,7 @@ output "received" {
}
```
This will create a required variable, `memory`, and then an output, `received`, that will be the value of the `memory` variable.
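The diff elides part of the example; the complete `child/main.tf` implied by the description would look something like this sketch:

```
variable "memory" {}

output "received" {
    value = "${var.memory}"
}
```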
You can then configure the module and use the output like so:
@ -79,24 +61,13 @@ output "child_memory" {
}
```
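The hunk above elides the body of this configuration; based on the `child_memory` output shown, it would look something like this sketch (the `memory` value is an arbitrary placeholder):

```
module "child" {
    source = "./child"
    memory = "1G"
}

output "child_memory" {
    value = "${module.child.received}"
}
```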
If you now run `terraform apply`, you'll see how this works.
## Paths and Embedded Files
It is sometimes useful to embed files within the module that aren't Terraform configuration files, such as a script to provision a resource or a file to upload.
In these cases, you can't use a relative path, since paths in Terraform are generally relative to the working directory from which Terraform was executed. Instead, you want to use a module-relative path. To do this, you should use the [path interpolated variables](/docs/configuration/interpolation.html).
```
resource "aws_instance" "server" {
@ -108,20 +79,13 @@ resource "aws_instance" "server" {
}
```
Here we use `${path.module}` to get a module-relative path.
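The elided resource body presumably references a file shipped with the module; for example, a sketch with assumed file names:

```
resource "aws_instance" "server" {
    # ... instance arguments elided ...

    provisioner "remote-exec" {
        # ${path.module} resolves relative to this module's directory,
        # not the directory Terraform was run from
        script = "${path.module}/files/setup.sh"
    }
}
```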
## Nested Modules
You can nest a module within another module. This module will be hidden from your root configuration, so you'll have to re-expose any variables and outputs you require.
The [get command](/docs/commands/get.html) will automatically get all nested modules.
You don't have to worry about conflicting versions of modules, since Terraform builds isolated subtrees of all dependencies. For example, one module might use version 1.0 of module `foo` and another module might use version 2.0, and this will all work fine within Terraform since the modules are created separately.
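For example, a wrapper module that re-exposes a nested module's variable and output might look like this (the names are illustrative):

```
# wrapper/main.tf - pass a variable through to the nested module
# and surface its output again
variable "memory" {}

module "inner" {
    source = "./inner"
    memory = "${var.memory}"
}

output "received" {
    value = "${module.inner.received}"
}
```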

View File

@ -103,7 +103,7 @@ Tags support the following:
The following attributes are exported:
* `id` - The autoscaling group id.
* `arn` - The ARN for this AutoScaling Group
* `availability_zones` - The availability zones of the autoscale group.
* `min_size` - The minimum size of the autoscale group

View File

@ -369,3 +369,12 @@ The following attributes are exported:
[5]: /docs/providers/aws/r/cloudfront_origin_access_identity.html
[6]: https://aws.amazon.com/certificate-manager/
[7]: http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html
## Import
CloudFront distributions can be imported using the `id`, e.g.
```
$ terraform import aws_cloudfront_distribution.distribution E74FTE3EXAMPLE
```

View File

@ -29,7 +29,7 @@ be removed in future releases, please use `creation_token` instead.
The following arguments are supported:
* `creation_token` - (Optional) A unique name (a maximum of 64 characters are allowed)
used as reference when creating the Elastic File System to ensure idempotent file
system creation. By default generated by Terraform. See [Elastic File System]
(http://docs.aws.amazon.com/efs/latest/ug/) user guide for more information.
* `reference_name` - **DEPRECATED** (Optional) A reference name used when creating the

View File

@ -26,8 +26,8 @@ resource "aws_opsworks_permission" "my_stack_permission" {
The following arguments are supported:
* `allow_ssh` - (Optional) Whether the user is allowed to use SSH to communicate with the instance
* `allow_sudo` - (Optional) Whether the user is allowed to use sudo to elevate privileges
* `user_arn` - (Required) The user's IAM ARN to set permissions for
* `level` - (Optional) The user's permission level. Must be one of `deny`, `show`, `deploy`, `manage`, `iam_only`
* `stack_id` - (Required) The stack to set the permissions for
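Combining the arguments above, a fuller version of the example resource might look like this sketch (the referenced IAM user and OpsWorks stack are assumptions):

```
resource "aws_opsworks_permission" "my_stack_permission" {
    user_arn   = "${aws_iam_user.user.arn}"
    stack_id   = "${aws_opsworks_stack.main.id}"
    level      = "deploy"
    allow_ssh  = true
    allow_sudo = true
}
```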

View File

@ -35,4 +35,4 @@ The following arguments are supported:
The following attributes are exported:
* `token` - If the log `source` is `token`, this value holds the generated log token that is used by logging clients. See the Logentries [token-based input documentation](https://logentries.com/doc/input-token/) for more information.

View File

@ -13,11 +13,21 @@ typical format used to request a certificate from a certificate authority.
This resource is intended to be used in conjunction with a Terraform provider
for a particular certificate authority in order to provision a new certificate.
This is a *logical resource*, so it contributes only to the current Terraform
state and does not create any external managed resources.
~> **Compatibility Note** From Terraform 0.7.0 to 0.7.4 this resource was
converted to a data source, and the resource form of it was deprecated. This
turned out to be a design error since a cert request includes a random number
in the form of the signature nonce, and so the data source form of this
resource caused non-convergent configuration. The data source form is no longer
supported as of Terraform 0.7.5, and users should return to using the
resource form.
## Example Usage
```
resource "tls_cert_request" "example" {
    key_algorithm   = "ECDSA"
    private_key_pem = "${file(\"private_key.pem\")}"

View File

@ -115,6 +115,11 @@ The following arguments are supported:
machine. This assumes Chef Client is already installed when you run the `chef`
provisioner.
* `skip_register (boolean)` - (Optional) Skip the registration of Chef Client on the remote
machine. This assumes Chef Client is already registered and the private key (`client.pem`)
is available in the default Chef configuration directory when you run the `chef`
provisioner.
* `ssl_verify_mode (string)` - (Optional) Use to set the verify mode for Chef Client HTTPS
requests.
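As a sketch, `skip_register` would typically be paired with `skip_install` when the machine image already ships a fully configured Chef Client; everything here other than those two documented options is a placeholder:

```
resource "aws_instance" "web" {
    # ... instance and connection arguments omitted ...

    provisioner "chef" {
        # Chef Client is pre-installed and pre-registered on the image,
        # with client.pem already in the default configuration directory
        skip_install  = true
        skip_register = true

        server_url = "https://chef.example.com/organizations/example" # placeholder
        node_name  = "web-node"                                       # placeholder
        run_list   = ["recipe[example]"]                              # placeholder
    }
}
```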

View File

@ -10,15 +10,6 @@
<a href="/docs/providers/tls/index.html">TLS Provider</a> <a href="/docs/providers/tls/index.html">TLS Provider</a>
</li> </li>
<li<%= sidebar_current(/^docs-tls-resource/) %>>
<a href="#">Resources</a>
<ul class="nav nav-visible">
@ -31,6 +22,9 @@
<li<%= sidebar_current("docs-tls-resource-locally-signed-cert") %>> <li<%= sidebar_current("docs-tls-resource-locally-signed-cert") %>>
<a href="/docs/providers/tls/r/locally_signed_cert.html">tls_locally_signed_cert</a> <a href="/docs/providers/tls/r/locally_signed_cert.html">tls_locally_signed_cert</a>
</li> </li>
<li<%= sidebar_current("docs-tls-resourse-cert-request") %>>
<a href="/docs/providers/tls/r/cert_request.html">tls_cert_request</a>
</li>
</ul>
</li>
</ul>