Commit Graph

7719 Commits

Author SHA1 Message Date
James Bardin 682286e184 test for Required with MinItems > 1 2019-07-26 14:36:19 -07:00
Alex Pilon 7f8f198719
remove UnknownVariableValue from config and update references to shim 2019-07-17 22:41:24 -04:00
James Bardin efd2a6cef6 dynamic set block test 2019-07-12 16:48:49 -04:00
James Bardin a0338df4d4 update ignore_changes to use cty.Path.Equals
Remove reflect.DeepEqual from path comparisons to get reliable results.

The equality issues were only noticed going through the grpc interface, so add a
corresponding test to the test provider.
2019-07-10 14:49:37 -04:00
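
A minimal sketch of the comparison change described in the commit above, assuming the `github.com/zclconf/go-cty/cty` package; the path and values here are illustrative only:

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Two logically identical paths built independently; the index keys are
	// cty.Number values whose internal representation can differ once a value
	// has round-tripped through an interface such as grpc.
	a := cty.GetAttrPath("tags").Index(cty.NumberIntVal(0))
	b := cty.GetAttrPath("tags").Index(cty.NumberIntVal(0))

	// reflect.DeepEqual compares the values' internal representation and can
	// therefore disagree for semantically equal paths.
	fmt.Println("DeepEqual:", reflect.DeepEqual(a, b))

	// cty.Path.Equals compares step by step using cty's own value equality,
	// which is the reliable comparison ignore_changes now relies on.
	fmt.Println("Path.Equals:", a.Equals(b))
}
```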
Radek Simko 5b9f2fafc8 Standardise directory name for test data 2019-06-30 10:16:15 +02:00
James Bardin 0d2363a058 add tests to preserve existing Set nil behavior
While it may not be intuitive, providers expect that a value set to `nil`
will appear as an empty string in state.
2019-06-28 12:09:50 -04:00
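
A small hypothetical resource sketch of the expectation described above, using the helper/schema API of this era; the resource and attribute names are made up:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	d.SetId("example")

	// Providers rely on a nil set on a string attribute surfacing as "" in
	// state rather than as a null/unset value.
	if err := d.Set("value", nil); err != nil {
		return err
	}
	return nil
}
```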
James Bardin 75602df5ef Revert "Remove removed attribute from applied state"
This reverts commit 2e2a363052.
2019-06-28 11:51:52 -04:00
James Bardin 0d9f84414a
Merge pull request #21879 from hashicorp/jbardin/sdk-panics
prevent sdk panics in 2 specific cases
2019-06-25 15:38:15 -04:00
James Bardin fe11724678 test for panic when reading empty map 2019-06-25 14:24:31 -04:00
James Bardin f362198467
Merge pull request #21806 from hashicorp/jbardin/lost-empty-strings
prevent an empty string from being lost
2019-06-25 12:32:06 -04:00
James Bardin c4874aa5b3 add more timeout provider tests 2019-06-19 22:48:15 -04:00
James Bardin 4dcda04d96 prevent an empty string from being lost
The helper/schema diff process loses empty strings, causing them to show
up as unset (null) during apply. Besides failing to show as set by
GetOk, the absence of the value also triggers the schema to insert a
default value again during apply.

It would also be preferable if the defaults weren't re-evaluated
again during ApplyResourceChange, but that would require a more invasive
patch to the field readers, and ensuring the empty string is stored in
the plan should block the default.
2019-06-19 17:42:00 -04:00
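
A hedged sketch of the failure mode being guarded against, with a hypothetical attribute: if the explicit "" were dropped from the planned value, the field reader would fall back to the default during apply.

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExample() *schema.Resource {
	return &schema.Resource{
		Create: func(d *schema.ResourceData, meta interface{}) error {
			d.SetId("example")
			// With "" preserved in the plan, this reads the configured empty
			// string; GetOk would still report ok=false, since "" is the
			// zero value for strings.
			_ = d.Get("value").(string)
			return nil
		},
		Schema: map[string]*schema.Schema{
			"value": {
				Type:     schema.TypeString,
				Optional: true,
				Default:  "from-default",
			},
		},
	}
}
```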
James Bardin 8ae31aa2db normalize empty blocks during import
Like Upgrade, Import is another case where we don't have the context of
the configuration and need to ensure missing blocks are normalized.
2019-06-17 20:29:01 -04:00
James Bardin 2e2a363052 Remove removed attribute from applied state
When a Diff contains a NewRemoved attribute (which would have been null
in the planned state), the final value is often the "zero" value string
for the type, which the provider itself still applies to the state.
Rather than risking a change of behavior in helper/schema by fixing the
inconsistency, we'll remove the NewRemoved attributes after apply to
prevent further issues resulting from the change in planned value.
2019-06-13 17:29:25 -04:00
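
A rough sketch of the post-apply cleanup this commit describes (it was reverted further up in this log); the helper function name is hypothetical:

```go
package example

import "github.com/hashicorp/terraform/terraform"

// stripRemovedAttrs drops attributes that the diff marked as removed but that
// the provider still wrote back to state as the type's zero value.
func stripRemovedAttrs(d *terraform.InstanceDiff, s *terraform.InstanceState) {
	for k, attrDiff := range d.Attributes {
		if attrDiff != nil && attrDiff.NewRemoved {
			delete(s.Attributes, k)
		}
	}
}
```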
James Bardin d33c5163a7
Merge pull request #21555 from hashicorp/jbardin/re-validate
Allow providers to re-validate the final resource config
2019-06-07 16:43:38 -04:00
James Bardin a056b84cdd add delete timeout test
The timeout value is still not being persisted.

While it doesn't fix this issue, make sure to always return Private
during Plan.
2019-06-05 19:22:46 -04:00
James Bardin c658fe173a test a dynamic block with MinItems in the schema
A dynamic block is always going to be a single value during validation,
but should not fail when min-items > 1 is specified.
2019-05-31 20:07:41 -05:00
James Bardin 6055cb632e filter unknowns from simple lists and maps in sdk
While there was already a check in the sdk to filter unknowns from
validation, it missed the case where those were in simple lists and maps.
2019-05-28 09:58:07 -04:00
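
A minimal sketch of the filtering idea, assuming the SDK's unknown placeholder string (config.UnknownVariableValue at the time) marks unknown elements in the shimmed config:

```go
package example

import "github.com/hashicorp/terraform/config"

// filterUnknowns removes unknown placeholder elements from simple lists and
// maps before they reach the legacy validation code.
func filterUnknowns(v interface{}) interface{} {
	switch v := v.(type) {
	case []interface{}:
		out := make([]interface{}, 0, len(v))
		for _, e := range v {
			if s, ok := e.(string); ok && s == config.UnknownVariableValue {
				continue // skip unknown list elements
			}
			out = append(out, e)
		}
		return out
	case map[string]interface{}:
		for k, e := range v {
			if s, ok := e.(string); ok && s == config.UnknownVariableValue {
				delete(v, k) // drop unknown map values
			}
		}
		return v
	default:
		return v
	}
}
```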
James Bardin c391f3a1a3 failing test for cty conversion bug 2019-05-16 16:40:17 -04:00
James Bardin b9e5745b3c remove SkipCoreTypeCheck and LegacySchema
Update the test provider to match the sdk.
2019-05-14 18:19:18 -04:00
James Bardin 6bc36d3321 validate integers when using protoV5
The new type system only has a Number type, but helper schema
differentiates between Int and Float values. Verify that a new config
value is an integer during Validate, because the existing WeakDecode
validation will decode a float value into an integer while the config
FieldReader will attempt to parse the float exactly.

Since we're limiting this to protoV5, we can be certain that any valid
config value will be converted to an `int` type by the shims. The only
case where an integral float value will appear is if the integer is out
of range for the system's `int` type, but we also need to prevent that
anyway since it would fail to read in the same manner.
2019-05-11 09:34:28 -04:00
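
A hedged sketch of the integer check described above; the helper name is made up, but the rule follows the commit message: integral values arrive as `int`, and any remaining `float64` is either fractional or out of the platform `int` range, both of which should be rejected.

```go
package example

import (
	"fmt"
	"math"
)

func validateIsInt(raw interface{}) error {
	switch n := raw.(type) {
	case int:
		return nil
	case float64:
		if n != math.Trunc(n) {
			return fmt.Errorf("expected an integer, got %v", n)
		}
		// An integral float here means the number didn't fit in int, which
		// the config FieldReader would also fail to parse exactly.
		return fmt.Errorf("integer %v is out of range", n)
	default:
		return fmt.Errorf("expected an integer, got %T", raw)
	}
}
```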
James Bardin 7075bc9a4d restrict the ComputedKeys usage to containers
Computed primitive values must see the UnknownConfigValue or they are
assumed to be unchanged. Restrict the usage of the protov5 ComputedKeys
to containers.
2019-05-06 19:19:10 -04:00
Martin Atkins 083af21d30 providers/terraform: Explicit validate step
We were previously catching some errors at read time, but some type errors
caused panics because the cty.DynamicPseudoType arguments have no automatic
pre-type-checking done, while this code was assuming they would be objects.

Here we add an explicit validation step that includes both the backend
validation we were previously doing during read and some additional
type checking to ensure the two dynamic arguments are suitably-typed.
Having the separate validation step means that these problems can be
detected by "terraform validate", rather than only in "terraform plan"
or "terraform apply".
2019-05-04 21:06:31 -07:00
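
A simplified sketch of the added type check, assuming the two dynamically-typed arguments of the terraform_remote_state data source are named "config" and "defaults" and must be object- or map-typed when set:

```go
package example

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func validateDynamicArgs(obj cty.Value) []error {
	var errs []error
	for _, name := range []string{"config", "defaults"} {
		if !obj.Type().HasAttribute(name) {
			continue
		}
		v := obj.GetAttr(name)
		if v.IsNull() {
			continue
		}
		if !v.Type().IsObjectType() && !v.Type().IsMapType() {
			errs = append(errs, fmt.Errorf("%q must be an object", name))
		}
	}
	return errs
}
```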
James Bardin f9dfa98533 test to make sure a data source is planned
The data source here needs to be re-read during apply, because its
config is changing.
2019-05-03 16:25:37 -04:00
Martin Atkins 332010fd56 plans/objchange: Fix handling of dynamic block placeholders
If a dynamic block (in the HCL dynamic block extension sense) has an
unknown value for its for_each argument, it gets expanded to a single
placeholder block with all of its attributes set to unknown values.

We can use this as part of a heuristic to relax our object compatibility
checks for situations where the plan included an object that appears to
be (but isn't necessarily) such a placeholder, allowing for the fact that
the one placeholder block could be replaced with zero or more real blocks
once the for_each value is known.

Previously our heuristic was too strict: it would match only if the only
block present was a dynamic placeholder. In practice, users may mix
dynamic blocks with static blocks of the same type, so we need to be more
liberal to avoid generating incorrect incompatibility errors in such
cases.
2019-05-02 14:08:40 -07:00
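
A rough sketch of the relaxed heuristic, assuming each planned block is a cty object value: a block whose attributes are all unknown could be a dynamic-block placeholder and so shouldn't be held to an exact block count.

```go
package example

import "github.com/zclconf/go-cty/cty"

// couldBePlaceholder reports whether a planned block object looks like the
// single placeholder produced when a dynamic block has an unknown for_each.
func couldBePlaceholder(block cty.Value) bool {
	if !block.IsKnown() {
		return true
	}
	if block.IsNull() {
		return false
	}
	for name := range block.Type().AttributeTypes() {
		if block.GetAttr(name).IsKnown() {
			return false
		}
	}
	return true
}
```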
James Bardin 0f883b118a test that computed maps are applied correctly
Verify that a computed map which is not correctly marked as unknown in
the plan is still applied correctly.
2019-04-23 12:31:19 -04:00
James Bardin 67395306e1 delete unknown values from apply config altogether
removeConfigUnknowns needs to remove the value completely from the config
map. Removing this value allows GetOk and GetOkExists to indicate if the
value was set in the config in the case of an Optional+Computed
attribute.
2019-04-22 18:06:26 -04:00
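
A hedged sketch of the behavior described above, again assuming the unknown placeholder string used by the shims; deleting the key, rather than blanking it, is what lets GetOk and GetOkExists report the attribute as unset:

```go
package example

import "github.com/hashicorp/terraform/config"

// removeConfigUnknowns deletes unknown placeholder values from a shimmed
// config map entirely, recursing into nested maps and lists of maps.
func removeConfigUnknowns(cfg map[string]interface{}) {
	for k, v := range cfg {
		switch v := v.(type) {
		case string:
			if v == config.UnknownVariableValue {
				delete(cfg, k)
			}
		case map[string]interface{}:
			removeConfigUnknowns(v)
		case []interface{}:
			for _, e := range v {
				if m, ok := e.(map[string]interface{}); ok {
					removeConfigUnknowns(m)
				}
			}
		}
	}
}
```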
Martin Atkins 861a2ebf26 helper/schema: Use a more targeted shim for nested set diff applying
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.

In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.

As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.

In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
2019-04-17 07:47:31 -07:00
Martin Atkins ff2de9c818 core: Keep old value on error even for delete
When an operation fails, providers may return a null new value rather than
returning a partial state. In that case, we'd prefer to keep the old value
so that we stand the best chance of being able to retry on a subsequent
run.

Previously we were making an exception for the delete action, allowing
the result of that to be null even when an error is returned. In practice
that was a bad idea because it would cause Terraform to lose track of the
object even though it might not actually have been deleted.

Now we'll retain the old object even in the delete case. Providers can
still return partial new objects if they were able to partially complete
a delete operation, in which case we'll discard what we had before, but
if the result is null with errors then we'll assume the delete failed
entirely and so just keep the old state as-is, giving us the opportunity
to refresh it on the next run to see if anything actually happened after
all.

(This also includes a new resource in the test provider which isn't used
by the patch but was useful for some manual UX testing here, so I thought
I'd include it in case it's similarly useful in future, given how simple
its implementation is.)
2019-04-17 07:40:15 -07:00
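
A simplified sketch of the rule, not the actual core apply code; `prior` and `returned` stand in for the old object and the provider-returned object:

```go
package example

import "github.com/zclconf/go-cty/cty"

// resultAfterError decides which object to keep in state when a provider
// returns errors from an apply, including delete.
func resultAfterError(prior, returned cty.Value, hasErrors bool) cty.Value {
	if hasErrors && returned.IsNull() {
		// The provider gave nothing back on failure; keep the prior object so
		// a later refresh can discover whether anything actually happened.
		return prior
	}
	// A non-null partial object on error replaces the prior state as usual.
	return returned
}
```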
Martin Atkins bd1a215580 helper/resource: Ignore Removed attributes for ImportStateVerify
Due to the lossiness of our legacy models for diff and state, shimming a
diff and then creating a state from it produces a different result than
shimming a state directly. That means that ImportStateVerify no longer
works as expected if there are any Computed attributes in the schema where
d.Set isn't called during Read.

Fixing that for every case would require some risky changes to the shim
behavior, so we're instead going to ask provider developers to address it
by adding `d.Set` calls where needed, since that is the contract for
"Computed" anyway -- a default value should be produced during Create, and
thus by extension during Import.

However, a common situation where this occurs is attributes marked as
"Removed", since all of the code that deals with them has generally been
deleted. We'll avoid problems in that case here by treating Removed
attributes as ignored for the purposes of ImportStateVerify.

This required exporting some functionality that was formerly unexported
in helper/schema, but it's a relatively harmless schema introspection
function so shouldn't be a big deal to export it.
2019-04-16 11:14:49 -07:00
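
A small hedged sketch of the filtering rule; the helper name is hypothetical, but `Removed` is the real helper/schema field being consulted:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// ignoreForImportVerify reports whether an attribute should be skipped when
// comparing imported state against the expected state.
func ignoreForImportVerify(s *schema.Schema) bool {
	// Removed attributes have had their handling code deleted, so a missing
	// or zero value after import is expected rather than a verification failure.
	return s != nil && s.Removed != ""
}
```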
James Bardin 8d32229f7d add test fetching computed set value by address
This is not a recommended method, but it does serve to verify that the
set values in the ResourceData internal state are correctly computed,
which indicates that the expected configuration was passed in.
2019-04-10 09:42:54 -04:00
James Bardin a3d58665ad use LegacyResourceSchema
rather than the previous .CoreConfigSchemaForShimming
2019-04-08 16:45:35 -04:00
James Bardin 1a9c06d0f5 Revert "helper/schema: Implementation of the AsSingle mechanism"
This reverts commit 1987a92386.
2019-04-08 16:45:35 -04:00
James Bardin af9dacb9f9 add failing test for diff.Apply
Add a diff test using a schema with ConfigModeAttr.

It's in the test provider, because that is what is mostly responsible
for exercising diff.Apply, and where the other tests are.
2019-04-08 16:45:06 -04:00
James Bardin f52a6630f5 create a downstream failure from a computed value
These are the largest source of the old "diffs didn't match after apply"
errors. It's almost always an upstream dependency that caused the final
error.
2019-04-03 17:36:08 -04:00
James Bardin 8b9fa6d05f add test provider coverage around unknown vals 2019-03-29 13:56:42 -04:00
Martin Atkins 87786484ea lang/eval: Apply attr-as-nested-block fixup in EvalBlock
For any block content we evaluate dynamically via this API, we'll make a
special allowance for users to optionally write members of a list
attribute instead as a sequence of nested blocks, thus allowing some
existing provider features that were assuming this capability to continue
to support it after v0.12.

This should not be used for any new provider features, and should ideally
be eventually phased out so that there aren't two
similar-but-slightly-different syntaxes for saying the same thing.
2019-03-28 10:41:01 -07:00
Martin Atkins 135121562e helper/plugin: Implement Schema.SkipCoreTypeCheck
The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of schema we return to Terraform Core
to mark the attribute as untyped and then ensure that gets handled
correctly on the SDK side.
2019-03-21 15:19:59 -07:00
Martin Atkins 1987a92386 helper/schema: Implementation of the AsSingle mechanism
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.

This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.

Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider since the integration
of all of these fixup steps in the correct order is the more important
result than any of the intermediate fixup steps.
2019-03-14 15:36:15 -07:00
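
A rough sketch of the AsSingle idea (note that it is reverted further up in this log): the SDK keeps a one-element list internally while Terraform Core sees a single value. The names here are illustrative, not the actual fixup functions:

```go
package example

import "github.com/zclconf/go-cty/cty"

// toSingle converts the SDK's one-element list into the single value core sees.
func toSingle(list cty.Value) cty.Value {
	ety := list.Type().ElementType()
	switch {
	case !list.IsKnown():
		return cty.UnknownVal(ety)
	case list.IsNull() || list.LengthInt() == 0:
		return cty.NullVal(ety)
	default:
		return list.Index(cty.NumberIntVal(0))
	}
}

// fromSingle rewraps core's single value as a one-element list for helper/schema.
func fromSingle(single cty.Value) cty.Value {
	if single.IsNull() {
		return cty.ListValEmpty(single.Type())
	}
	return cty.ListVal([]cty.Value{single})
}
```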
James Bardin 4006c0fc84 add test for set value drift
The changed value for the set attribute should be correctly read from
the provider.
2019-03-13 19:17:38 -04:00
James Bardin 6ecf9b143b we can normalize nulls in Read again
This should be the final change from removing the flatmap normalization.
Since we're no longer trying to maintain a consistent zero or null value in
the flatmap config, but rather to preserve the previously applied
value, ReadResource also needs to apply the normalizeNullValues step in
order to prevent unexpected diffs.
2019-03-12 16:00:25 -04:00
James Bardin 4c23e990e2 add test for complex schema diff apply
This schema was taken from an actual google resource which often shows
perpetual diffs.
2019-03-12 12:04:35 -04:00
Martin Atkins 4de0b33097 helper/schema: Honor ConfigMode when building core schema
This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK and thus the provider. This mechanism is designed specifically to
avoid changing how the schema is interpreted by the SDK itself or by the
provider, so that prior behavior can be preserved in Terraform v0.11 mode.

This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
2019-03-11 17:02:05 -07:00
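
A simplified sketch of the mapping rule described above, using the ConfigMode constants this change introduces; it is an approximation of the real schema-building logic, not a copy of it:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// mapsToAttribute reports whether a schema entry should be presented to core
// as an attribute rather than a nested block.
func mapsToAttribute(s *schema.Schema) bool {
	switch s.ConfigMode {
	case schema.SchemaConfigModeAttr:
		return true
	case schema.SchemaConfigModeBlock:
		return false
	default:
		// Computed-only schemas always become attributes: they are object
		// values for use in expressions, not configurable nested blocks.
		if s.Computed && !s.Optional {
			return true
		}
		// Otherwise an element of type *schema.Resource maps to a nested block.
		_, isResource := s.Elem.(*schema.Resource)
		return !isResource
	}
}
```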
James Bardin 9d4bb6ec14 stop removing empty flatmap containers
As we've improved the cty.Value normalization, we need to remove
normalization procedures from the flatmap handling. Keeping the empty
containers in the flatmap will prevent unexpected nils from being added
to some schema configurations.
2019-03-11 15:14:29 -04:00
James Bardin 3600f59bb7
Merge pull request #20525 from hashicorp/jbardin/extra-set-value
remove the partially-known ~ set sigil in diffs
2019-03-05 16:50:02 -05:00
James Bardin 2b4d030a69 don't re-add removed list values even when planned
Providers were not strict (and were not forced to be) about customizing
the diff when a computed attribute needed to be updated during apply.
The fix we have in place to prevent loss of information during the
helper/schema apply process would add a single missing value back in.

The first place this was caught was when we attempt to fix up the
flatmapped attributes. The 1->0 count error is now better handled by our
cty.Value normalization step, so we can remove the special apply case
here altogether.

The next place is in normalizeNullValues, and since the intent was to
re-insert missing zero-value lists and sets, adding a check for a length
of 0 protects us from adding in extra elements.

The new test fixture emulated common provider behavior of re-computing
values without customizing the diff. Since we can work around it, and
core will provide appropriate warnings, the shims should try to
maintain the legacy behavior.
2019-03-05 15:31:08 -05:00
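
A hedged sketch of the length guard added to normalizeNullValues; the function name is hypothetical:

```go
package example

import "github.com/zclconf/go-cty/cty"

// canRestoreEmptyContainer reports whether a prior list/set is a candidate for
// re-insertion: only zero-length containers qualify, so planned removals of
// real elements are never undone.
func canRestoreEmptyContainer(prior cty.Value) bool {
	return prior.IsKnown() && !prior.IsNull() && prior.LengthInt() == 0
}
```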
James Bardin 47604c36c8 remove the partially-known ~ set sigil in diffs
The NewExtra values are stored outside the diff from plan, and the
original keys may not contain the ~ prefix. Adding the NewExtra back
into the diff with the mismatched key was causing an entire new set
element to be populated. Since this symbol isn't used to apply the diff
in helper/schema, we can simply strip them out.
2019-03-04 17:36:30 -05:00
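
A small sketch of the cleanup described above, assuming flatmap-style keys where "~" marks a partially-known set element; helper/schema never consults that marker when applying the diff, so the keys can simply be normalized:

```go
package example

import "strings"

// stripPartialSetSigil normalizes NewExtra keys so they line up with the
// planned attribute keys instead of creating a spurious extra set element.
func stripPartialSetSigil(extra map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(extra))
	for k, v := range extra {
		out[strings.Replace(k, "~", "", -1)] = v
	}
	return out
}
```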
James Bardin c814f2da37 Change backend.ValidateConfig to PrepareConfig
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
2019-02-25 18:37:20 -05:00
James Bardin da389d6cd4 simple list diffs may also have missing elements
As was done for list blocks, simple lists of strings may be missing
empty string elements, and any list may be implicitly truncated.
2019-02-14 13:06:04 -05:00
James Bardin 931f459336 add a troublesome test schema from the aws provider 2019-02-13 19:09:46 -05:00