Commit Graph

74 Commits

Author SHA1 Message Date
Martin Atkins 1987a92386 helper/schema: Implementation of the AsSingle mechanism
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.

This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.

Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider, since the correct
integration and ordering of all of these fixup steps matters more than any
individual fixup step.
2019-03-14 15:36:15 -07:00
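A minimal sketch of the list/single shimming that AsSingle implies, using the cty package. The helper names wrapAsList and unwrapToSingle are illustrative only, not the actual helper/schema fixup code:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// wrapAsList turns the single object Terraform Core sees into the
// one-element list the legacy SDK expects for a MaxItems: 1 block.
func wrapAsList(single cty.Value, ety cty.Type) cty.Value {
	if single.IsNull() {
		return cty.ListValEmpty(ety)
	}
	return cty.ListVal([]cty.Value{single})
}

// unwrapToSingle turns the SDK's one-element list back into the single
// value presented to Terraform Core when AsSingle is set.
func unwrapToSingle(list cty.Value, ety cty.Type) cty.Value {
	if !list.IsKnown() {
		return cty.UnknownVal(ety)
	}
	if list.IsNull() || list.LengthInt() == 0 {
		return cty.NullVal(ety)
	}
	return list.Index(cty.NumberIntVal(0))
}

func main() {
	ety := cty.Object(map[string]cty.Type{"name": cty.String})
	single := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("a")})

	asList := wrapAsList(single, ety)
	fmt.Println(asList.LengthInt())                            // 1
	fmt.Println(unwrapToSingle(asList, ety).RawEquals(single)) // true
}
```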
James Bardin e080706e2e treat normalization during ReadResource like Plan
This will allow resources to return an unexpected change to set blocks
and attributes, otherwise we could mask these changes during
normalization.

Change the "plan" argument in normalizeNullValues to "preferDst" to more
accurately describe what the option is doing, since it no longer applies
only to PlanResourceChange.
2019-03-13 19:14:17 -04:00
James Bardin 6ecf9b143b we can normalize nulls in Read again
This should be the final change from removing the flatmap normalization.
Since we're no longer trying to keep a consistent zero or null value in the
flatmap config, but rather to maintain the previously applied
value, ReadResource also needs to apply the normalizeNullValues step in
order to prevent unexpected diffs.
2019-03-12 16:00:25 -04:00
James Bardin 11ec3a420e remove normalizeFlatmapContainers
This method was added early on when the diff was being applied as the
legacy code would have done, which is no longer the case. Everything
that normalizeFlatmapContainers does should be covered by the
combination of the initial diff.Apply and the normalizeNullValues on the
final cty.Value.
2019-03-12 12:04:35 -04:00
James Bardin 9d4bb6ec14 stop removing empty flatmap containers
As we've improved the cty.Value normalization, we need to remove
normalization procedures from the flatmap handling. Keeping the empty
containers in the flatmap will prevent unexpected nils from being added
to some schema configurations.
2019-03-11 15:14:29 -04:00
James Bardin 6cdf9ff566 Revert "normalize all objects read from the provider"
This reverts commit 209a0a460a.
2019-03-08 17:32:37 -05:00
James Bardin 209a0a460a normalize all objects read from the provider
Use objchange.NormalizeObjectFromLegacySDK to ensure that all objects
returned from the provider match what is expected based on the
configuration according to the schemas.
2019-03-06 14:09:04 -05:00
James Bardin 3600f59bb7
Merge pull request #20525 from hashicorp/jbardin/extra-set-value
remove the partially-known ~ set sigil in diffs
2019-03-05 16:50:02 -05:00
James Bardin 2b4d030a69 don't re-add removed list values even when planned
Providers were not strict (and were not forced to be) about customizing
the diff when a computed attribute needed to be updated during apply.
The fix we have in place to prevent loss of information during the
helper/schema apply process would add any single missing value back in.

The first place this was caught was when we attempt to fix up the
flatmapped attributes. The 1->0 count error is now better handled by our
cty.Value normalization step, so we can remove the special apply case
here altogether.

The next place is in normalizeNullValues, and since the intent was to
re-insert missing zero-value lists and sets, adding a check for a length
of 0 protects us from adding in extra elements.

The new test fixture emulated common provider behavior of re-computing
values without customizing the diff. Since we can work around it, and
core will provide appropriate warnings, the shims should try to
maintain the legacy behavior.
2019-03-05 15:31:08 -05:00
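A sketch of the length-zero guard described in the commit above (illustrative names, not the real normalizeNullValues): a prior empty collection is only re-inserted when the planned value has no elements, so legitimately planned elements are never overwritten.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func preserveEmptyCollection(prior, planned cty.Value) cty.Value {
	ty := planned.Type()
	if !ty.IsListType() && !ty.IsSetType() && !ty.IsMapType() {
		return planned
	}
	// Only act when the planned value is null and the prior value is a
	// known, empty collection; a non-null planned value means the plan
	// holds real elements we must not clobber.
	if planned.IsNull() && !prior.IsNull() && prior.IsKnown() && prior.LengthInt() == 0 {
		return prior
	}
	return planned
}

func main() {
	prior := cty.ListValEmpty(cty.String)

	nullPlan := cty.NullVal(cty.List(cty.String))
	fmt.Println(preserveEmptyCollection(prior, nullPlan).LengthInt()) // 0: empty list restored

	realPlan := cty.ListVal([]cty.Value{cty.StringVal("a")})
	fmt.Println(preserveEmptyCollection(prior, realPlan).LengthInt()) // 1: left alone
}
```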
James Bardin 47604c36c8 remove the partially-known ~ set sigil in diffs
The NewExtra values are stored outside the diff from plan, and the
original keys may not contain the ~ prefix. Adding the NewExtra back
into the diff with the mismatched key was causing an entire new set
element to be populated. Since this symbol isn't used to apply the diff
in helper/schema, we can simply strip them out.
2019-03-04 17:36:30 -05:00
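A small sketch of stripping the "~" partially-known set sigil out of flatmap keys, as described above. The key format ("rule.~2574244683.port") follows the legacy flatmap convention; the function name is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// stripSetSigil removes the "~" marker that can prefix a set element's
// index in a flatmap key; the sigil only ever follows a dot.
func stripSetSigil(key string) string {
	return strings.ReplaceAll(key, ".~", ".")
}

func main() {
	fmt.Println(stripSetSigil("rule.~2574244683.port")) // rule.2574244683.port
	fmt.Println(stripSetSigil("rule.#"))                // rule.# (unchanged)
}
```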
James Bardin 33d5ddf291 remove empty timeouts blocks in copyTimeoutValues
The hcl2shims will always add in the timeouts block, because there's no
way to differentiate a null single block from an empty one in the
flatmapped state. Since we are only concerned with keeping the prior
timeouts value, always set the new value to null, and then copy over the
prior value if it exists.
2019-03-02 11:30:37 -05:00
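A sketch of the rule described above, with illustrative names: the new object's timeouts block is always nulled first, then the prior value is copied over only if one was actually set.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func copyTimeouts(prior, newVal cty.Value) cty.Value {
	attrs := newVal.AsValueMap()
	toTy := newVal.Type().AttributeType("timeouts")

	// Always start from null, since the flatmap shims cannot distinguish
	// a missing timeouts block from an empty one.
	attrs["timeouts"] = cty.NullVal(toTy)

	if !prior.IsNull() {
		if pt := prior.GetAttr("timeouts"); !pt.IsNull() {
			attrs["timeouts"] = pt // keep the prior timeouts value
		}
	}
	return cty.ObjectVal(attrs)
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{
		"timeouts": cty.ObjectVal(map[string]cty.Value{"create": cty.StringVal("10m")}),
	})
	newVal := cty.ObjectVal(map[string]cty.Value{
		"timeouts": cty.ObjectVal(map[string]cty.Value{"create": cty.StringVal("")}),
	})

	out := copyTimeouts(prior, newVal)
	fmt.Println(out.GetAttr("timeouts").GetAttr("create").AsString()) // 10m
}
```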
James Bardin 49230f8198 existing fields cannot become computed during plan
Fields with no change can only become computed during initial creation.
2019-02-28 18:45:11 -05:00
James Bardin 9a39af5047 1->0 set changes should no longer happen in Read
The new normalization should make preventing those changes unnecessary,
and will also prevent extra empty elements from being added when
resources are refreshed.
2019-02-28 17:47:11 -05:00
James Bardin f9b62cb5fe
Merge pull request #20335 from hashicorp/jbardin/diff-apply
Diff apply needs to check for both types of container keys
2019-02-13 19:33:34 -05:00
James Bardin c34c37fbd5 missed .% suffixes in diff.Apply
Diff.Apply checks for unneeded container count diffs, but was missing
the check for maps.

Add an early return for planning a destroy.
2019-02-13 19:09:46 -05:00
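The count-suffix check in question, sketched with illustrative names: flatmap stores list and set lengths under ".#" keys and map lengths under ".%" keys, and both must be recognized when discarding unneeded count diffs.

```go
package main

import (
	"fmt"
	"strings"
)

// isCountKey reports whether a flatmap key is a container count entry,
// covering both list/set counts (".#") and map counts (".%").
func isCountKey(k string) bool {
	return strings.HasSuffix(k, ".#") || strings.HasSuffix(k, ".%")
}

func main() {
	fmt.Println(isCountKey("tags.%"))      // true
	fmt.Println(isCountKey("rule.#"))      // true
	fmt.Println(isCountKey("rule.0.port")) // false
}
```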
Martin Atkins fedbd6c3b8 helper/plugin: fix panic with empty objects in normalizeNullValues
cty.Value.AsValueMap can return nil if called on an empty map or object.
The logic above was dealing with that case for maps, but object types
were falling through into this codepath and panicking when trying to
assign a new key into the nil dstMap.

This also includes a bonus fix where we were calling ty.ElementType in
a switch case that accepts object types. Object types don't have a single
element type, so we can't call ElementType on those (that also panics)
but we _can_ use the type of the value we selected from src to construct
our placeholder null value.
2019-02-13 15:56:12 -08:00
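A reduced reproduction of the panic described above, plus the guard: AsValueMap yields nil for an empty object, so the destination map has to be allocated before assigning into it.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	empty := cty.EmptyObjectVal
	dstMap := empty.AsValueMap() // nil for an empty object or map

	if dstMap == nil {
		dstMap = map[string]cty.Value{} // allocate before writing
	}
	dstMap["id"] = cty.NullVal(cty.String) // would panic on a nil map

	fmt.Println(len(dstMap)) // 1
}
```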
Martin Atkins eb1346447f
Merge #20282: Enforce expected behaviors for provider PlanResourceChange
An exception remains for the legacy SDK, which does not meet all of these requirements.
2019-02-12 09:19:05 -08:00
Martin Atkins 31299e688d core: Allow legacy SDK to opt out of plan-time safety checks
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.

The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.

As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
2019-02-11 17:26:49 -08:00
James Bardin 1bfc27817e process state even after provider.Apply errors
Terraform core expects a sane state even when the provider returns an
error. Make sure that the prior state is always the default value to
return, and then always attempt to process any state returned by
provider.Apply.
2019-02-11 15:41:07 -05:00
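A sketch of that pattern with simplified stand-in types: default the returned state to the prior state, then take whatever the provider returned even when the apply call also produced an error.

```go
package main

import (
	"errors"
	"fmt"
)

type state struct{ id string }

// apply stands in for provider.Apply, which may return a partial state
// together with an error.
func apply(prior *state) (*state, error) {
	return &state{id: "partially-created"}, errors.New("apply failed after create")
}

func applyResourceChange(prior *state) (*state, error) {
	newState := prior // default: echo back the prior state

	applied, err := apply(prior)
	if applied != nil {
		// Even on error, keep whatever the provider recorded so that
		// Terraform core always receives a sane state.
		newState = applied
	}
	return newState, err
}

func main() {
	s, err := applyResourceChange(&state{id: "old"})
	fmt.Println(s.id, err != nil) // partially-created true
}
```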
James Bardin 82588af892 switch blocks based on value type, and check attrs
Check attributes on null objects, and fill in unknowns. If we're
evaluating the object, it either means we are at the top level, or a
NestingSingle block was present, and in either case we need to treat the
attributes as null rather than the entire object.

Switch on the block types rather than Nesting, so we don't need to add any
logic to change between List/Tuple or Map/Object when DynamicPseudoType
is involved.
2019-02-08 14:46:29 -05:00
James Bardin 32671241e0 set unknowns during initial PlanResourceChange
If ID is not set, make sure it's unknown.

Use SetUnknowns to set the rest of the computed values to Unknown.
2019-02-07 20:29:24 -05:00
James Bardin d17ba647a8 add SetUnknowns
SetUnknowns walks through a resource and changes any unset (null) values
that are marked Computed in the schema to Unknown.
2019-02-07 20:24:36 -05:00
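A minimal sketch of the SetUnknowns idea using a simplified attribute schema (not the real configschema package): walk the object and replace null values whose attribute is Computed with unknowns.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// attr is a simplified stand-in for a schema attribute.
type attr struct {
	Type     cty.Type
	Computed bool
}

// setUnknowns assumes val is either null or an object matching the schema.
func setUnknowns(val cty.Value, schema map[string]attr) cty.Value {
	out := map[string]cty.Value{}
	for name, a := range schema {
		v := cty.NullVal(a.Type)
		if !val.IsNull() {
			v = val.GetAttr(name)
		}
		if v.IsNull() && a.Computed {
			v = cty.UnknownVal(a.Type) // unset computed value becomes unknown
		}
		out[name] = v
	}
	return cty.ObjectVal(out)
}

func main() {
	schema := map[string]attr{
		"id":   {Type: cty.String, Computed: true},
		"name": {Type: cty.String},
	}
	planned := cty.ObjectVal(map[string]cty.Value{
		"id":   cty.NullVal(cty.String),
		"name": cty.StringVal("example"),
	})

	out := setUnknowns(planned, schema)
	fmt.Println(out.GetAttr("id").IsKnown())    // false
	fmt.Println(out.GetAttr("name").AsString()) // example
}
```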
Martin Atkins 1530fe52f7 core: Legacy SDK providers opt out of our new apply result check
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.

To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.

Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.

No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
2019-02-06 11:40:30 -08:00
James Bardin 3b18dd7c01
Merge pull request #20224 from hashicorp/jbardin/sdk
SDK set fixes
2019-02-05 14:11:51 -05:00
James Bardin 58c9c2311a Turn on helper/schema proto5 flag in GetSchema
This turns it on at the last moment, and in one place for all uses of
helper/schema. There's no way to use the new protocol without calling
GetSchema, so we can be sure that any subsequent api calls have this set
when required.
2019-02-05 12:08:17 -05:00
Martin Atkins bdcac8792d plugin: Use correct schema when marshaling imported resource objects
Previously we were using the type name requested in the import to select
the schema, but a provider is free to return additional objects of other
types as part of an import result, and so it's important that we perform
schema selection separately for each returned object.

If we don't do this, we get confusing downstream errors where the
resulting object decodes to the wrong type and breaks various invariants
expected by Terraform Core.

The testResourceImportOther test in the test provider didn't catch this
previously because it happened to have an identical schema to the other
resource type being imported. Now the schema is changed and also there's
a computed attribute we can set as part of the refresh phase to make sure
we're completing the Read call properly during import. Refresh was working
correctly, but we didn't have any tests for it as part of the import flow.
2019-02-01 15:22:54 -08:00
James Bardin 4a603011c5 don't normalizeNullValues in ReadResource
The required normalization now happens in PlanResourceChange, and this
function is no longer appropriate for ReadResource.
2019-02-01 17:21:37 -05:00
James Bardin ba081f5de4 change copyMissingValues to normalizeNullValues
While copyMissingValues was meant to re-insert empty values that were
null after apply, it turns out plan is sometimes not predictable either.

normalizeNullValues is meant to fix up any null/empty transitions between
two values, and to be useful during plan as well. For plan, the function only
concerns itself with individual, known values, and skips sets entirely.
The result of running with plan == true is that only changes between
empty and null collections should be fixed.
2019-01-31 19:02:39 -05:00
James Bardin 3b04b41250 fix RequiresNew in diff
With the new diff.Apply we can keep the diff mostly intact, but we need
to turn off all RequiresNew flags so that the prior state is not removed
from the apply.
2019-01-30 14:55:04 -05:00
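The flag flip described above, sketched with a simplified attribute-diff type rather than the real terraform.InstanceDiff: every RequiresNew flag is cleared so that applying the diff does not drop the prior state.

```go
package main

import "fmt"

type attrDiff struct {
	Old, New    string
	RequiresNew bool
}

// clearRequiresNew turns off replacement for every attribute in the diff.
func clearRequiresNew(diff map[string]*attrDiff) {
	for _, d := range diff {
		d.RequiresNew = false
	}
}

func main() {
	diff := map[string]*attrDiff{
		"ami": {Old: "ami-1", New: "ami-2", RequiresNew: true},
	}
	clearRequiresNew(diff)
	fmt.Println(diff["ami"].RequiresNew) // false
}
```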
Martin Atkins 477da57a92 helper/plugin: Honor resource type overrides in import
One quirky aspect of our import feature is that we allow the importer to
produce additional resources alongside the one that was imported, such as
to create separate rules for each rule of an imported security group.

Providers need to be able to set the types of these other resources since
they may not match the "main" resource type. They do this by calling
ResourceData.SetType, which in turn sets InstanceState.Ephemeral.Type.

In our shims here we therefore need to copy that out into our new TypeName
field so that the new core import code can see it and create the right
type in the state.

Testing this required a minor change to the test harness to allow the
ImportStateCheck function to see the resource type.
2019-01-30 09:05:08 -08:00
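A sketch of that shim with simplified stand-in structs for the legacy InstanceState and the imported-resource result: the type recorded via ResourceData.SetType lands in Ephemeral.Type and is copied into TypeName, falling back to the requested type when no override is present.

```go
package main

import "fmt"

type ephemeralState struct{ Type string }

type instanceState struct {
	ID        string
	Ephemeral ephemeralState
}

type importedResource struct {
	TypeName string
	ID       string
}

func shimImportedResource(requestedType string, is instanceState) importedResource {
	typeName := requestedType
	if is.Ephemeral.Type != "" {
		// The provider overrode the type for this additional imported object.
		typeName = is.Ephemeral.Type
	}
	return importedResource{TypeName: typeName, ID: is.ID}
}

func main() {
	is := instanceState{
		ID:        "sgrule-123",
		Ephemeral: ephemeralState{Type: "example_security_group_rule"},
	}
	fmt.Println(shimImportedResource("example_security_group", is).TypeName)
	// example_security_group_rule
}
```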
Paul Tyng bb9ae50279
Copy TF version to helper/schema provider 2019-01-28 14:38:49 -05:00
James Bardin 37b5e2dc87 don't remove empty diff values
Our new diff handling no longer requires stripping the empty diffs out,
and providers may be relying on some of the empty-value quirks in
helper/schema.
2019-01-23 17:33:23 -05:00
James Bardin 46a4628782
Merge pull request #20081 from hashicorp/jbardin/list-block
New Diff.Apply method
2019-01-22 19:20:53 -05:00
Martin Atkins f65b7c5372 helper/plugin: Discard meaningless differences from provider planning
Due to various imprecisions in the old SDK implementation, applying the
generated diff can potentially make changes to the data structure that
have no real effect, such as replacing an empty list with a null list or
vice-versa.

Although we can't totally eliminate such diff noise, here we attempt to
avoid it in situations where there are _only_ meaningless changes -- where
the prior state and planned state are equivalent -- by just echoing back
the prior state verbatim to ensure that Terraform will treat it as a noop
change.

If there _are_ some legitimate changes then the result may still contain
meaningless changes alongside it, but that is just a cosmetic problem for
the diff renderer, because the meaningless changes will be ignored
altogether during a subsequent apply anyway. The primary goal here is just
to ensure we can converge on a fixpoint when there are no explicit changes
in the configuration.
2019-01-22 15:41:10 -08:00
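A sketch of the echo-back rule, with a deliberately simplified equivalence check (treating a null collection and an empty collection of the same type as equal, the typical meaningless legacy-SDK difference): if nothing but such differences exist, the prior state is returned verbatim.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func equivalent(a, b cty.Value) bool {
	ty := a.Type()
	if ty.Equals(b.Type()) && (ty.IsListType() || ty.IsSetType() || ty.IsMapType()) {
		aEmpty := a.IsNull() || (a.IsKnown() && a.LengthInt() == 0)
		bEmpty := b.IsNull() || (b.IsKnown() && b.LengthInt() == 0)
		if aEmpty && bEmpty {
			return true // null vs. empty collection is a meaningless change
		}
	}
	return a.RawEquals(b)
}

// planResult assumes prior and planned are objects with the same attributes.
func planResult(prior, planned cty.Value) cty.Value {
	priorAttrs := prior.AsValueMap()
	for k, pv := range planned.AsValueMap() {
		if !equivalent(priorAttrs[k], pv) {
			return planned // a real change exists; keep the planned value
		}
	}
	return prior // only meaningless differences; echo the prior state
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{"tags": cty.ListValEmpty(cty.String)})
	planned := cty.ObjectVal(map[string]cty.Value{"tags": cty.NullVal(cty.List(cty.String))})

	fmt.Println(planResult(prior, planned).RawEquals(prior)) // true: a no-op plan
}
```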
James Bardin 8d302c5bd2 update grpc_provider for new diffs
Keep the diff as-is before applying.
2019-01-22 18:10:12 -05:00
James Bardin 286cb0a39d clean out diff a little more before checking
Check first whether there are any real diff attributes, before returning
the original state in PlanResourceChange.
2019-01-17 19:19:13 -05:00
James Bardin 4f691c5988 don't replace null strings with empty strings
This adds unexpected values in some cases, and since the case this
handles is only within set objects, we'll deal with this when tackling
the sets themselves.
2019-01-17 19:19:13 -05:00
James Bardin 2cc651124e don't overwrite values in plan
Plan can change known values too, which we can't match in sets. We'll
find another way to normalize these without losing plan values.
2019-01-17 18:51:18 -05:00
James Bardin 7d05dee08d refactor ApplyResourceChange
Remove a bunch of indentation by returning early, and make sure we don't
fail on a non-fatal error without saving the applied value.
2019-01-15 12:35:58 -05:00
James Bardin 0a731167db add a round trip through the shims during apply
Cycle through the shim operations after Apply, to ensure that we can
converge on a stable value for Plan. While the shims produce valid
values in both directions, helper/schema sometimes does not agree on
which containers should be empty or null.
2019-01-15 11:59:15 -05:00
James Bardin e8096e9c8b normalize values during ReadResource
Match the normalization behavior of Apply, so we don't end up causing
any diffs between zero values when refreshing resources.
2019-01-12 10:41:04 -05:00
James Bardin b55ec74c27 add copyMissingValues for normalizing shimmed Vals
Zero values and empty containers can be lost during the shimming
process, and during the provider's Apply step.

If we have known zero value containers and primitives in the source,
which appear as null values in the destination, we copy over the zero
value. Sets (and lists to an extent) are more difficult, since their
before and after indexes may not correlate. In that case we take the
entire container if it's wholly known, expecting the provider to have
correctly handled the value.
2019-01-08 16:26:22 -05:00
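A sketch of the copyMissingValues behavior for one level of object attributes (the real code recurses and special-cases sets, as noted above): where the destination has gone null but the source held a known value such as a zero-length container or empty string, the source value is copied over.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func copyMissingValues(dst, src cty.Value) cty.Value {
	out := map[string]cty.Value{}
	for name, sv := range src.AsValueMap() {
		dv := dst.GetAttr(name)
		if dv.IsNull() && sv.IsKnown() && !sv.IsNull() {
			// The value was lost during shimming or apply; restore it from
			// the source. (The real code restricts this to zero values and
			// empty containers.)
			out[name] = sv
			continue
		}
		out[name] = dv
	}
	return cty.ObjectVal(out)
}

func main() {
	src := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal(""),
		"tags": cty.ListValEmpty(cty.String),
	})
	dst := cty.ObjectVal(map[string]cty.Value{
		"name": cty.NullVal(cty.String),
		"tags": cty.NullVal(cty.List(cty.String)),
	})

	out := copyMissingValues(dst, src)
	fmt.Println(out.GetAttr("name").IsNull())    // false: "" restored
	fmt.Println(out.GetAttr("tags").LengthInt()) // 0: empty list restored
}
```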
James Bardin 8300d65539 don't strip sets with count 1 when normalizing
normalizeFlatmapContainers should retain sets with a count of 1, and
convert sets with a count of 0 if they were 1 before the Apply step.
2019-01-08 16:26:21 -05:00
James Bardin 924b97238f Handle StateFuncs in provider shim
Any state modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.

The NewExtra diff field was where config data that was modified by a
StateFunc was stored, and it needs to be maintained between plan and apply.

During PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.
2018-12-03 18:12:02 -05:00
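A sketch of the NewExtra round trip using plain JSON and simplified types in place of the real PlannedPrivate blob and diff structures: NewExtra values recorded at plan time are packed into the private data and re-attached to the diff that is regenerated during apply.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type attrDiff struct {
	New      string
	NewExtra string
}

// packNewExtra (plan) collects NewExtra values from the diff into the
// private data returned alongside the planned state.
func packNewExtra(diff map[string]*attrDiff) ([]byte, error) {
	extra := map[string]string{}
	for k, d := range diff {
		if d.NewExtra != "" {
			extra[k] = d.NewExtra
		}
	}
	return json.Marshal(extra)
}

// unpackNewExtra (apply) re-inserts the stored NewExtra values into the
// diff regenerated during ApplyResourceChange.
func unpackNewExtra(private []byte, diff map[string]*attrDiff) error {
	var extra map[string]string
	if err := json.Unmarshal(private, &extra); err != nil {
		return err
	}
	for k, v := range extra {
		if d, ok := diff[k]; ok {
			d.NewExtra = v
		}
	}
	return nil
}

func main() {
	planDiff := map[string]*attrDiff{
		"content": {New: "hashed-value", NewExtra: "original file contents"},
	}
	private, _ := packNewExtra(planDiff)

	applyDiff := map[string]*attrDiff{"content": {New: "hashed-value"}}
	_ = unpackNewExtra(private, applyDiff)
	fmt.Println(applyDiff["content"].NewExtra) // original file contents
}
```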
James Bardin 5f9b189fcf catch conversion errors in PrepareProviderConfig
Errors were being ignored with the intention that they would be caught
later in validation, but it turns out we need to catch those earlier.

The legacy schemas also allowed providers to set an empty string for a
bool value, which we need to handle here, since it's not coming from
user input like a normal config value.
2018-11-30 14:51:52 -05:00
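A sketch of one plausible handling of that case, using the cty convert package (the function name and the choice to treat "" as false are assumptions, not confirmed by the commit): an empty string supplied where a bool is expected is coerced rather than returned as a conversion error.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func coerceBool(v cty.Value) (cty.Value, error) {
	// Legacy providers sometimes supplied "" for bool attributes; treat it
	// as false here (an assumed policy for this sketch).
	if v.Type().Equals(cty.String) && !v.IsNull() && v.IsKnown() && v.AsString() == "" {
		return cty.False, nil
	}
	return convert.Convert(v, cty.Bool)
}

func main() {
	got, err := coerceBool(cty.StringVal(""))
	fmt.Println(got.RawEquals(cty.False), err) // true <nil>

	got, err = coerceBool(cty.StringVal("true"))
	fmt.Println(got.RawEquals(cty.True), err) // true <nil>
}
```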
James Bardin 6f4d86094f preserve possible zero values when normalizing
When normalizing flatmapped containers, compare the attributes to the
prior state and preserve pre-existing zero-length or unknown values. A
zero-length value that was previously unknown is preserved as a
zero-length value, as that may have been computed as such by the
provider.
2018-11-27 08:54:15 -05:00
James Bardin f375691819 add missing key-value from test 2018-11-19 18:58:29 -05:00
Martin Atkins 4fe9632f09 plugin: Establish our current plugin protocol as version 5
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.

Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.

After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".

In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.
2018-11-19 09:56:41 -08:00
James Bardin e95f2b586e another test case in helper/plugin 2018-11-16 15:12:16 -05:00
James Bardin 89b2c6f21e comment fixes 2018-11-16 11:24:14 -05:00