Commit Graph

351 Commits

Author SHA1 Message Date
Mitchell Hashimoto 3f36787207 Merge pull request #9618 from hashicorp/b-computed-prim
helper/schema,terraform: handle computed primitives in diffs
2016-10-28 10:44:13 -04:00
Mitchell Hashimoto 95d37ea79c
helper/schema,terraform: handle computed primitives in diffs
Fixes #3309

There are two primary changes, one to how helper/schema creates diffs
and one to how Terraform compares diffs. Both require careful
understanding.

== 1. helper/schema Changes

helper/schema, given any primitive field (string, int, bool, etc.),
_used to_ create a basic diff when given a computed new value (i.e. from
an unknown interpolation). The plan would then record the old value
as-is and the new value as the raw interpolation string. For example,
from #3309, the diff showed the following:

```
~ module.test.aws_eip.test-instance.0
    instance: "<INSTANCE ID>" => "${element(aws_instance.test-instance.*.id, count.index)}"
```

Then, when running `apply`, the interpolation would resolve to the same
value as before, the attribute would be dropped from the apply-time
diff, and you would get a diff mismatch error because the plan and
apply diffs no longer agreed.

**The change:** `helper/schema` now marks unknown primitive values with
`NewComputed` set to true. Semantically, this is the correct information
for the diff to carry.

== 2. Terraform Diff.Same Changes

Next, the way Terraform compares diffs needed to be updated.

Specifically, `Diff.Same` now handles the case where the diff from the
plan had a `NewComputed` primitive and the diff from the apply _no
longer has that value_. This is possible if the computed value ended up
being the same as the old value, and that case is now allowed to pass
through.

Together, these fix #3309.
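
As a rough illustration of the comparison rule above, here is a minimal
sketch; the type and function are simplified stand-ins, not the real
`terraform` package code.

```
package main

import "fmt"

// attrDiff is a simplified stand-in for a per-attribute diff.
type attrDiff struct {
	Old         string
	New         string
	NewComputed bool
}

// sameAttr reports whether an apply-time attribute diff is acceptable given
// the planned one. When the plan marked the attribute NewComputed and the
// attribute is absent at apply time (applied == nil), the computed value
// resolved to the old value, so the difference is allowed to pass through.
func sameAttr(planned, applied *attrDiff) bool {
	if applied == nil {
		return planned.NewComputed
	}
	return planned.Old == applied.Old && planned.New == applied.New
}

func main() {
	planned := &attrDiff{Old: "i-abc123", NewComputed: true}
	fmt.Println(sameAttr(planned, nil)) // true: the value resolved to its old value
}
```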
2016-10-25 22:36:59 -04:00
Mitchell Hashimoto 60140b28f4
Revert "Merge pull request #9536 from hashicorp/f-provider-stop"
This reverts commit c3a4cff133, reversing
changes made to 791a02e6e4.

This change requires plugin recompilation and we should hold off until a
minor release for that.
2016-10-25 12:00:36 -07:00
Mitchell Hashimoto 86eb30b8a2
helper/schema: expose stop information as a Context 2016-10-25 11:32:30 -07:00
Mitchell Hashimoto 8c11f137f5
helper/schema: support Stop() 2016-10-25 11:31:55 -07:00
James Nugent 43dd13cd36 helper/schema: Introduce Noop and RemoveFromState
This commit implements reusable functions for when resources have no
need to implement a particular operation:

- Noop - does nothing and returns no error.
- RemoveFromState - sets the resource ID to empty string (removing it
  from state) and returns no error.
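
A minimal usage sketch, assuming a hypothetical resource whose remote
object needs no refresh and whose deletion only removes it from state;
the resource and attribute names are illustrative.

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleThing() *schema.Resource {
	return &schema.Resource{
		Create: resourceExampleThingCreate,
		Read:   schema.Noop,            // nothing to refresh
		Delete: schema.RemoveFromState, // just drop the resource from state

		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}

func resourceExampleThingCreate(d *schema.ResourceData, meta interface{}) error {
	d.SetId(d.Get("name").(string))
	return nil
}
```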
2016-10-25 11:47:03 -05:00
Mitchell Hashimoto fa9758e162
helper/schema: test with DiffSuppress and Default 2016-10-24 22:23:13 -07:00
stack72 5a537cdbf9
helper/schema: Adding of MinItems as a validation to Lists and Maps
This is required for cases where a configuration block cannot be left
empty. For example, in AzureRM, when you create a LoadBalancer with a
set of configurations, you can delete *all* but one of those
configurations.
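
A sketch of how this might be used, with an illustrative attribute name
rather than the real AzureRM schema:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func loadBalancerSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"frontend_ip_configuration": {
			Type:     schema.TypeList,
			Required: true,
			MinItems: 1, // an empty list is rejected at validation time
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"name": {
						Type:     schema.TypeString,
						Required: true,
					},
				},
			},
		},
	}
}
```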
2016-10-04 18:57:58 +01:00
Paul Hinze 534a5a3b75 Merge pull request #8891 from gotascii/fix-diff-set-update
Unchanged NestedSets are not returned by DiffFieldReader.
2016-10-03 11:06:40 -05:00
Justin Marney 895d10a627 Use DeepEqual instead of multiple if statements to reduce noise. 2016-09-17 17:41:44 +00:00
Justin Marney 53e10dfa89 Name test appropriately. 2016-09-17 16:56:12 +00:00
Justin Marney 9217f5a063 Unchanged NestedSets are not returned by DiffFieldReader. 2016-09-16 23:57:33 +00:00
James Nugent e0226c9039 core: Check for attrV being nil before dereference
This can be an issue with unset computed fields.

Fixes #8815.
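
A minimal sketch of the defensive pattern this describes, with
simplified stand-in types rather than the real core diff structures:

```
package main

import "fmt"

// resourceAttrDiff is a simplified stand-in for an attribute diff entry.
type resourceAttrDiff struct {
	Old string
	New string
}

func main() {
	// An unset computed field can leave a nil entry in the attribute map.
	attrs := map[string]*resourceAttrDiff{
		"id":  {New: "abc123"},
		"arn": nil,
	}
	for k, attrV := range attrs {
		if attrV == nil {
			continue // guard before dereferencing
		}
		fmt.Println(k, "=>", attrV.New)
	}
}
```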
2016-09-14 09:51:15 +01:00
Paul Stack bf755bb5c9 Merge pull request #8585 from hashicorp/f-diff-suppression
helper/schema: Add diff suppression callback
2016-09-01 14:46:03 +01:00
James Nugent 85ec09111b helper/schema: Add diff suppression callback
This commit adds a new callback, DiffSuppressFunc, to the schema.Schema
structure. If set for a given schema, the user-supplied function is
called for each attribute for which the default type-based diff
mechanism produces an attribute diff. Returning `true` from the callback
suppresses the diff (i.e. pretends there was no diff), and returning
`false` retains it as part of the plan.

There are a number of motivating examples for this - one of which is
included as an example:

1. On SSH public keys, trailing whitespace does not matter in many
   cases - and in some cases it is added by provider APIs. For
   digitalocean_ssh_key resources we previously had a StateFunc that
   trimmed the whitespace - we now have a DiffSuppressFunc which
   verifies whether the trimmed strings are equivalent.

2. IAM policy equivalence for AWS. A good proportion of AWS issues
   relate to IAM policies which have been "normalized" (used loosely)
   by the IAM API endpoints. This can make the JSON strings differ
   from those generated by iam_policy_document resources or template
   files, even though the semantics are the same (for example,
   reordering of `bucket-prefix/` and `bucket-prefix/*` in an S3
   bucket policy). DiffSuppressFunc can be used to test for semantic
   equivalence rather than pure text equivalence, without having to
   deal with the complexity of a full "provider-land" diff
   implementation outside helper/schema.
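
A sketch of the first motivating example, assuming the callback receives
the attribute key, the old and new string values, and the ResourceData
(treat the exact signature as illustrative):

```
package example

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

func sshKeySchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"public_key": {
			Type:     schema.TypeString,
			Required: true,
			// Suppress the diff when old and new only differ by
			// surrounding whitespace.
			DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
				return strings.TrimSpace(old) == strings.TrimSpace(new)
			},
		},
	}
}
```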
2016-08-31 19:13:53 -05:00
Radek Simko b90ab0b705
schema: Return all validation errors together from InternalValidate 2016-08-27 10:34:16 +01:00
Paul Hinze 3dccfa0cc9
terraform: diffs with only tainted set are non-empty
Fixes issue where a resource marked as tainted with no other attribute
diffs would never show up in the plan or apply as needing to be
replaced.

One unrelated test needed updating due to a quirk in the testDiffFn
logic - it adds a "type" field diff if the diff is non-Empty. NBD
2016-08-12 17:37:49 -05:00
Clint a84aa5e914 Revert "helper/schema: Make nested Set(s) in List(s) work" (#7436) 2016-06-30 10:48:52 -05:00
Radek Simko 37d57f4a85 Improve naming convention (resultSet -> exists) 2016-06-28 17:40:45 +01:00
Radek Simko 917ad44cf0 helper/schema: Fix readSet implementation (DiffFieldReader) 2016-06-28 17:40:44 +01:00
Radek Simko c738c5a9a3 helper/schema: Implement reader-specific readList method 2016-06-28 17:40:44 +01:00
Radek Simko 378b526dc3 helper/schema: Add regression tests for nested Set/List
Although DiffFieldReader was the one mostly responsible for a buggy behaviour
more tests were added throughout the debugging process most of which
would fail without the bugfix.

 - ResourceData
 - MultiLevelFieldReader
 - MapFieldReader
 - DiffFieldReader
2016-06-28 17:40:44 +01:00
James Nugent dbf725bd68 core: Allow dynamic attributes in helper/schema
The helper/schema framework for building providers previously validated
in all cases that each field being set in state was in the schema.
However, in order to support remote state in a usable fashion, the need
has arisen for the top level attributes of the resource to be created
dynamically. In order to still be able to use helper/schema, this commit
adds the capability to assign additional fields.

Though I do not foresee this being used by providers other than remote
state (and that eventually may move into Terraform Core rather than
being a provider), the usage and semantics are:

To opt into dynamic attributes, add a schema attribute named
"__has_dynamic_attributes", and make it an optional string with no
default value, in order that it does not appear in diffs:

        "__has_dynamic_attributes": {
            Type: schema.TypeString
            Optional: true
        }

In the read callback, use the d.UnsafeSetFieldRaw(key, value) function
to set the dynamic attributes.

Note that other fields in the schema _are_ copied into state, and that
the names of the schema fields cannot currently be used as dynamic
attribute names, as we check to ensure a value is not already set for a
given key.
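
A sketch of a read callback using this, with hypothetical attribute
names and values:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceRemoteStateRead(d *schema.ResourceData, meta interface{}) error {
	// Values for attributes that are not declared in the schema; these
	// names are purely illustrative.
	outputs := map[string]string{
		"vpc_id":    "vpc-123456",
		"subnet_id": "subnet-abcdef",
	}
	for k, v := range outputs {
		d.UnsafeSetFieldRaw(k, v)
	}
	d.SetId("remote-state")
	return nil
}
```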
2016-06-11 13:29:05 +01:00
clint shryock 7d71b8cc3c helper and terraform interpolate test update 2016-06-10 10:07:17 -05:00
James Bardin 2c7b702d1f Merge pull request #7091 from hashicorp/jbardin/serialize
Serialization for hash panics on TypeMap
2016-06-09 16:16:41 -04:00
James Bardin bab031aac5 Add test for TypeMap in a Schema 2016-06-09 16:00:33 -04:00
James Bardin d8fbaa7924 Serialization for hash panics on TypeMap
The serializeCollectionMemberForHash helper can't be called for the
MapType values, because MapType doesn't have a schema.Elem. Instead, we
can write the key/value pairs directly to the buffer. This still doesn't
allow for nested maps or lists, but we need to define that use case
before committing to it here.
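
A simplified sketch of writing a map's key/value pairs directly into the
hash buffer; this is illustrative, not the real serialization code:

```
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// serializeMapForHash writes each key/value pair into the buffer, sorting
// the keys so the resulting hash input is deterministic.
func serializeMapForHash(buf *bytes.Buffer, m map[string]interface{}) {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(buf, "%s:%v;", k, m[k])
	}
}

func main() {
	var buf bytes.Buffer
	serializeMapForHash(&buf, map[string]interface{}{"b": 2, "a": "one"})
	fmt.Println(buf.String()) // a:one;b:2;
}
```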
2016-06-09 13:37:58 -04:00
James Nugent 074545e536 core: Use .% instead of .# for maps in state
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:

    listname.# = 3
    listname.0 = "a"
    listname.1 = "b"
    listname.2 = "c"

And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:

    mapname.# = 2
    mapname.key1 = "value1"
    mapname.key2 = "value2"

Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:

    setname.# = 2
    setname.12312512 = "value1"
    setname.56345233 = "value2"

Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.

This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:

    mapname.% = 2
    mapname.key1 = "value1"
    mapname.key2 = "value2"

This has the effect of an empty list (or set) now being encoded as:

    listname.# = 0

And an empty map now being encoded as:

    mapname.% = 0

Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).

In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
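
A sketch of the backward-compatible lookup described in the last
paragraph, assuming a plain flatmap of strings:

```
package main

import "fmt"

// mapCount prefers the new "%" count key but falls back to "#" so maps
// written by older state files are still readable.
func mapCount(flat map[string]string, name string) (string, bool) {
	if v, ok := flat[name+".%"]; ok {
		return v, true
	}
	if v, ok := flat[name+".#"]; ok {
		return v, true
	}
	return "", false
}

func main() {
	old := map[string]string{"mapname.#": "2"}
	fmt.Println(mapCount(old, "mapname")) // 2 true
}
```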
2016-06-09 10:49:42 +01:00
James Nugent 91587a49f3 core: Remove unnecessary debug logging
Some unnecessary debug logging was introduced in 7b6df27e4, this commit
removes it so as not to clutter logs.
2016-06-08 18:38:41 +01:00
Chris Marchesi 9d7fb89114 core: Adding Sensitive attribute to resource schema
This is an effort to address hashicorp/terraform#516.

Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
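
A minimal sketch of marking a field sensitive; the attribute is
illustrative:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func credentialSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"password": {
			Type:      schema.TypeString,
			Required:  true,
			Sensitive: true, // hidden in plan/apply output
		},
	}
}
```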
2016-05-29 22:18:44 -07:00
Sander van Harmelen 8560f50cbc
Change taint behaviour to act as a normal resource
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.

So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background"
to saying "I want that resource to be recreated (taking into account
the existing resource and its place in the graph)".
2016-05-26 19:55:26 -05:00
Mitchell Hashimoto 55583baa7e
Merge branch 'f-aws-import' 2016-05-18 15:28:12 -06:00
Mitchell Hashimoto b7d4767dd6
helper/schema: pass through import state func 2016-05-16 10:03:57 -07:00
Martin Atkins 6cd22a4c9a helper/schema: emit warning when using data source resource shim
For backward compatibility we will continue to support using the data
sources that were formerly logical resources as resources for the moment,
but we want to warn the user about it since this support is likely to
be removed in future.

This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
2016-05-14 08:26:36 -07:00
Martin Atkins fb262d0dbe helper/schema: shim for making data sources act like resources
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.

This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.

This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
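
An illustrative sketch of the shim idea, not the actual helper/schema
implementation: wrap the data source's Read in stub Create and Delete
functions.

```
package example

import "github.com/hashicorp/terraform/helper/schema"

func shimDataSourceAsResource(dataSource *schema.Resource) *schema.Resource {
	shim := *dataSource

	// Create just performs the read and assigns a placeholder ID if the
	// read did not set one itself.
	shim.Create = func(d *schema.ResourceData, meta interface{}) error {
		if err := dataSource.Read(d, meta); err != nil {
			return err
		}
		if d.Id() == "" {
			d.SetId("-")
		}
		return nil
	}

	// Delete only needs to remove the object from state.
	shim.Delete = func(d *schema.ResourceData, meta interface{}) error {
		d.SetId("")
		return nil
	}

	return &shim
}
```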
2016-05-14 08:26:36 -07:00
Martin Atkins 6a468dcd83 helper/schema: Resource can be writable or not
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
2016-05-14 08:26:36 -07:00
Martin Atkins 0e0e3d73af core: New ResourceProvider methods for data resources
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.

DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.

The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.

Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.

At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
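
Sketched as a Go interface fragment with empty stand-in types; the
signatures are paraphrased from this description, so treat them as
illustrative rather than authoritative.

```
package example

// Empty stand-ins for the real terraform package types.
type (
	DataSource     struct{ Name string }
	ResourceConfig struct{}
	InstanceInfo   struct{}
	InstanceDiff   struct{}
	InstanceState  struct{}
)

// dataResourceProvider shows only the data-source-related methods added by
// this change, alongside whatever the provider already implements.
type dataResourceProvider interface {
	// DataSources lists the data sources the provider supports.
	DataSources() []DataSource

	// ValidateDataSource validates a data source configuration.
	ValidateDataSource(name string, c *ResourceConfig) ([]string, []error)

	// ReadDataDiff plans the read, which allows it to be deferred to apply
	// time when the configuration contains computed arguments.
	ReadDataDiff(info *InstanceInfo, c *ResourceConfig) (*InstanceDiff, error)

	// ReadDataApply performs the read and returns the resulting state.
	ReadDataApply(info *InstanceInfo, d *InstanceDiff) (*InstanceState, error)
}
```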
2016-05-14 08:26:36 -07:00
Mitchell Hashimoto b728e55861
helper/schema: Resource.Data should set latest schema version 2016-05-11 13:02:36 -07:00
Mitchell Hashimoto c02c6c3f9c
helper/schema: default state func for import 2016-05-11 13:02:35 -07:00
Mitchell Hashimoto 2bb814e3de
helper/schema: adapt to ID being arg to ImportState 2016-05-11 13:02:35 -07:00
Mitchell Hashimoto 03931bfda9
helper/schema: ImportState must set ID on the resource data 2016-05-11 13:02:34 -07:00
Mitchell Hashimoto 445194ebdf
helper/schema: test ImportState 2016-05-11 13:02:33 -07:00
Mitchell Hashimoto 1685054a9a
helper/schema: cleaner way to store Ephemeral 2016-05-11 13:02:31 -07:00
Mitchell Hashimoto 84531a3fd5
helper/schema: sets Importable to true for resources that have importer 2016-05-11 13:02:30 -07:00
Mitchell Hashimoto 19609bde0e
helper/schema: can specify Importer on Resource, InternalValidate 2016-05-11 13:02:30 -07:00
Mitchell Hashimoto b8121ea63e
helper/schema: Resource.Data to return a ResourceData for a Resource 2016-05-11 13:02:30 -07:00
Mitchell Hashimoto 4e9877179c
helper/schema: start the resource importer 2016-05-11 13:02:29 -07:00
James Nugent 7b6df27e4a helper/schema: Read native maps from configuration
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:

```
resource ...... {
     mapValue = "${var.map}"
}
```

We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.

Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
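
A sketch of the heuristic described in the last paragraph, standing in
for the real flatmap handling:

```
package main

import (
	"fmt"
	"strconv"
)

// looksLikeList reports whether a flatmapped collection should be treated as
// a list: there is no type marker in the flatmap, so all-numeric keys are
// assumed to be list indices and anything else is treated as a map.
func looksLikeList(keys []string) bool {
	for _, k := range keys {
		if _, err := strconv.Atoi(k); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeList([]string{"0", "1", "2"}))    // true
	fmt.Println(looksLikeList([]string{"name", "region"})) // false
	fmt.Println(looksLikeList([]string{"0", "42"}))        // true: why purely numeric map keys are precluded
}
```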
2016-05-10 14:49:14 -04:00
James Nugent f49583d25a core: support native list variables in config
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.

List variables are defined like this:

variable "test" {
    # This can also be inferred
    type = "list"
    default = ["Hello", "World"]
}

output "test_out" {
    value = "${var.a_list}"
}
This results in the following state:

```
...
            "outputs": {
                "test_out": [
                    "hello",
                    "world"
                ]
            },
...
```

And the result of terraform output is as follows:

```
$ terraform output
test_out = [
  hello
  world
]
```

Using the output name, an xargs-friendly representation is output:

```
$ terraform output test_out
hello
world
```

The output command also supports indexing into the list (with
appropriate range checking and no wrapping):

```
$ terraform output test_out 1
world
```

Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.

This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.

A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
2016-05-10 14:49:14 -04:00
Paul Hinze b4df304b47
helper/schema: Normalize bools to "true"/"false" in diffs
For a long time now, the diff logic has relied on the behavior of
`mapstructure.WeakDecode` to determine how various primitives are
converted into strings.  The `schema.DiffString` function is used for
all primitive field types: TypeBool, TypeInt, TypeFloat, and TypeString.

The `mapstructure` library's string representation of booleans is "0"
and "1", which differs from `strconv.FormatBool`'s "false" and "true"
(which is used in writing out boolean fields to the state).

Because of this difference, diffs have long had the potential for
cosmetically odd but semantically neutral output like:

    "true" => "1"
    "false" => "0"

So long as `mapstructure.Decode` or `strconv.ParseBool` are used to
interpret these strings, there's no functional problem.

We had our first clear functional problem with #6005 and friends, where
users noticed diffs like the above showing up unexpectedly and causing
troubles when `ignore_changes` was in play.

This particular bug occurs down in Terraform core's EvalIgnoreChanges.
There, the diff is modified to account for ignored attributes, and
special logic attempts to handle properly the situation where the
ignored attribute was going to trigger a resource replacement. That
logic relies on the string representations of the Old and New fields in
the diff to be the same so that it filters properly.

Therefore, we now get a bug when a diff includes `Old: "0", New:
"false"`, since the strings do not match and `ignore_changes` is not
handled properly.

Here, we introduce `TypeBool`-specific normalizing into `finalizeDiff`.
I spiked out a full `diffBool` function, but figuring out which pieces
of `diffString` to duplicate there got hairy. This seemed like a simpler
and more direct solution.

Fixes #6005 (and potentially others!)
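
A simplified sketch of the normalization, standing in for the
`finalizeDiff` change rather than reproducing it:

```
package main

import (
	"fmt"
	"strconv"
)

// normalizeBoolString coerces whichever string form a boolean arrived in
// ("0"/"1" from mapstructure, "false"/"true" from state) to the
// strconv.FormatBool form, so Old and New compare consistently.
func normalizeBoolString(s string) string {
	if s == "" {
		return s // leave unset/computed values alone
	}
	b, err := strconv.ParseBool(s)
	if err != nil {
		return s
	}
	return strconv.FormatBool(b)
}

func main() {
	fmt.Println(normalizeBoolString("0"), normalizeBoolString("true")) // false true
}
```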
2016-05-05 09:00:58 -05:00