There was a missing outer loop for catching inverse module dependencies
when pruning nodes for destroy. Since the need to "register" the fully
destroyed modules no longer exists, the extra complication of pruning
the modules as a whole from the leaves inward is no longer required.
While it is technically still a valid optimization to reduce iterations,
the extra comparisons required to backtrack for transitive dependencies
don't amount to much, and having a single nested loop is much easier to
maintain.
If a module has multiple terraform.required_version constraints, any
failures would point at the last constraint in the error diagnostics. If
an earlier constraint was the actual problem, this leads to confusing
errors like this:
Error: Unsupported Terraform Core version
on main.tf line 6, in terraform:
6: required_version = ">= 0.13.0"
This configuration does not support Terraform version 0.13.0.
The error was due to storing the declaration range of the constraint as
a pointer to the contents of a loop variable, which was overwritten by
later iterations of the loop. Instead we now use HCL's handy Ptr()
method to create a direct pointer to the range struct.
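As an illustration of the pattern (made-up types here, not the actual
core code):

    import "github.com/hashicorp/hcl/v2"

    type versionConstraint struct {
        DeclRange hcl.Range
    }

    func collectRanges(cs []versionConstraint) []*hcl.Range {
        var out []*hcl.Range
        for _, c := range cs {
            // Buggy: out = append(out, &c.DeclRange)
            // &c.DeclRange points into the loop variable, which is reused
            // on each iteration, so every entry ends up holding the range
            // of the final constraint.
            out = append(out, c.DeclRange.Ptr()) // Ptr() returns a pointer to a fresh copy
        }
        return out
    }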
Include the import walk in the list of operations for which we create an
EvalModuleCallArgument node. This causes module call arguments to be
evaluated even if the module variables have defaults, ensuring that
invalid default values (such as the common "{}" for variables thought of
as maps) do not cause failures specific to import.
This fixes a bug where a child module evaluates an input variable in its
locals block, assuming that it is a nested object structure. The bug
report includes a default value of "{}", which is overridden by a root
variable value. Without the eval node added in this commit, the default
value is used and the local evaluation errors.
In order to determine if we need to re-read a data source during plan,
we need to compare the newly evaluated configuration with the stored
state. To do that we create a ProposedNewVal which, if there are no
changes, should match the existing state exactly.
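As a rough sketch (assuming the plans/objchange helpers; not the
verbatim core code), the check amounts to:

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // configUnchanged reports whether applying the configuration to the
    // prior state would produce no change, meaning the data source does
    // not need to be re-read during this plan.
    func configUnchanged(schema *configschema.Block, prior, config cty.Value) bool {
        proposed := objchange.ProposedNew(schema, prior, config)
        return proposed.RawEquals(prior)
    }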
A problem arises if the remote data source contains any blocks, and they
are not set in the configuration. Terraform always decodes configuration
blocks as empty containers; however, the legacy SDK cannot correctly
handle empty blocks and may return a null block which is saved to the
state. In order to correctly make the comparison for planning, we need
to reify those null blocks as empty containers in the cty value.
The createEmptyBlocks helper converts any null NestingList or NestingSet
blocks to empty list or set cty values. We only need to be concerned
with List and Set, because those are the only types that can be defined
with the legacy SDK. In hindsight these could have been normalized in
the legacy SDK shims had this problem been uncovered earlier, but for the
sake of compatibility we will now normalize these in core.
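A simplified sketch of that normalization (assuming the configschema
and cty APIs; the real helper also recurses into nested block bodies):

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/zclconf/go-cty/cty"
    )

    func createEmptyBlocks(schema *configschema.Block, val cty.Value) cty.Value {
        if val.IsNull() || !val.IsKnown() {
            return val
        }
        vals := val.AsValueMap() // assumes an object value decoded from the schema
        for name, blockS := range schema.BlockTypes {
            if cur, ok := vals[name]; ok && !cur.IsNull() {
                continue // block already has a concrete value
            }
            switch blockS.Nesting {
            case configschema.NestingList:
                vals[name] = cty.ListValEmpty(blockS.Block.ImpliedType())
            case configschema.NestingSet:
                vals[name] = cty.SetValEmpty(blockS.Block.ImpliedType())
            }
        }
        return cty.ObjectVal(vals)
    }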
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.
Ensure we can target a specific module instance alone.
Before expansion happens, we only have expansion resource nodes that
know their ConfigResource address. In order to properly compare these to
targets within a module instance, we need to generalize the target to
also be a ConfigResource.
We can also remove the IgnoreIndices field from the transformer, since
we have addresses that are properly scoped and can compare them in the
correct context.
While removal of attributes can be handled by providers through the
UpgradeResourceState call, data sources may need to be evaluated before
reading, and they have no upgrade path in the provider protocol.
Strip out extra attributes during state decoding when they are no longer
present in the schema, and there is no schema upgrade pending.
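A minimal sketch of the idea, assuming the stored object is first
decoded as a JSON object (the real decode path has more cases to
handle):

    import (
        "encoding/json"

        "github.com/hashicorp/terraform/configs/configschema"
    )

    func stripRemovedAttributes(raw map[string]json.RawMessage, schema *configschema.Block) {
        for key := range raw {
            if _, ok := schema.Attributes[key]; ok {
                continue
            }
            if _, ok := schema.BlockTypes[key]; ok {
                continue
            }
            // No longer present in the schema and no upgrade pending:
            // drop the attribute rather than failing to decode.
            delete(raw, key)
        }
    }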
When looking up a resource during plan, we need to return an empty
container type when we're certain there are going to be no instances.
It's now more common to reference resources in a context that needs to
be known during plan (e.g. for_each), and always returning a DynamicVal
here would block plan from succeeding.
This is the known case broken by the changes to allow resources pending
destruction to be evaluated from state. When a resource references
another that is create_before_destroy, and that resource is being scaled
in, the first resource will not be updated correctly.
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this is adding a transformer to connect temporary
values to any destroy versions of their references when possible. This
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.
The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must evaluate early in
the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy, or being
scaled-in via count or for_each. That will need to be addressed later by
reevaluating how we handle the full destroy case in terraform.
During a full destroy, providers may reference resources that are going
to be destroyed as well. We currently cannot change this behavior, so we
need to allow the evaluation and try to prevent it from leaking into as
many other places as possible. Another transformer to try and protect
the values in locals, variables and outputs will be added to enforce
destroy ordering when possible.
Outputs and locals cannot refer to destroy nodes. Since those node
types do not have different ordering for create and destroy operations,
connecting them directly to destroy nodes can cause cycles.
The destroy graph builder test requires state in order to be correct,
which it didn't have. The other test hits the edge case where a planned
destroy cannot remove outputs, because the apply phase does not know it
was created from a destroy.
Since data source destruction is only state removal, and other resources
cannot depend on them creating any physical resources, the destroy
dependencies were not tracked in the state. It turns out that there is a
special case which requires this: running terraform destroy where the
provider depends on a data source. In that case the resources using that
provider need to record their indirect dependence on the data source, so
that they can be deleted before the data source is removed from the
state.
Our reference transformer analyses and our destroy transformer analyses
are built around static (not-yet-expanded) addresses so that they can
correctly handle mixtures of expanded and not-yet-expanded objects in the
same graph.
However, this characteristic also makes them unnecessarily conservative
in their handling of references between resources within different
instances of the same module: we know they can never interact with each
other in practice because the dependencies for all instances of a module
are the same and so one instance cannot possibly depend on another.
As a compromise then, here we introduce a new helper function that can
recognize when a proposed edge is between two resource instances that
belong to different instances of the same module, and thus allow us to
skip actually creating those edges even though our imprecise analyses
believe them to be needed.
As well as significantly reducing the number of edges in situations where
multi-instance resources appear inside multi-instance modules, this also
fixes some potential cycles in situations where a single plan includes
both destroying an instance of a module and creating a new instance of the
same module: the dependencies between the objects in the instance being
destroyed and the objects in the instance being created can, if allowed
to connect, cause Terraform to believe that the create and the destroy
both depend on one another even though there is no need for that to be
true in practice.
This involves a very specialized helper function to encode the situation
where this exception applies. This function has an ugly name to reflect
how specialized it is; it's not intended to be of any use outside of these
three situations in particular.
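The comparison at the heart of that helper is roughly this (the
function name here is illustrative, not the ugly one used in core):

    import "github.com/hashicorp/terraform/addrs"

    func inDifferentInstancesOfSameModule(a, b addrs.AbsResourceInstance) bool {
        // Same static module configuration...
        if !a.Module.Module().Equal(b.Module.Module()) {
            return false
        }
        // ...but different dynamic module instances, so no edge is needed.
        return !a.Module.Equal(b.Module)
    }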
The AbstractResourceInstance type was storing the entire Resource from
the state, when it only needs the actual instance state. This would
cause resources to consume memory on the order of n^2, where n is the
number of instances of the resource.
Rather than attaching the entire resource state, which includes copying
each individual instance, only attach the ResourceInstance state, and
extract out the provider address from the Resource.
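The shape of the change is roughly this (field names illustrative):

    import (
        "github.com/hashicorp/terraform/addrs"
        "github.com/hashicorp/terraform/states"
    )

    type nodeAbstractResourceInstance struct {
        // Before: ResourceState *states.Resource
        // Copying the whole Resource into each of its n instance nodes
        // meant O(n^2) instance states held in memory overall.
        instanceState        *states.ResourceInstance // only this instance
        storedProviderConfig addrs.AbsProviderConfig  // extracted from the Resource
    }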
The pruneUnusedNodes transformer was skipping root level locals and
variables, causing them to be left in the graph during a full destroy.
Use the return value from temporaryValue to indicate if the node is
truly temporary or not, rather than keeping the entire root module.
If we're adding a node to remove a root output from the state, the
output itself does not need to be re-evaluated. The exception for root
outputs caused them to be missed when we refactored resource destruction
to only use the existing state.
Have the output reference the expansion of a resource (via the whole
resource object), so that we can be sure we don't attempt to evaluate
that expansion during destroy.
When configuring providers, it is normally valid to refer to any value
which is known at apply time. This can include resource instance
attributes, variables, locals, and so on.
The import command has a simpler graph evaluation, which means that
many of these values are unknown. We previously prevented this from
happening by restricting provider configuration references to input
variables (#22862), but this was more restrictive than is necessary.
This commit changes how we verify provider configuration for import.
We no longer inspect the configuration references during graph building,
because this is too early to determine if these values will become known
or not.
Instead, when the provider is configured during evaluation, we
check if the configuration value is wholly known. If not, we fail with a
diagnostic error.
Includes a test case which verifies that providers can now be configured
using locals as well as vars, and an updated test case which verifies
that providers cannot be configured with references to resources.
The error message when evaluateForEachExpression encountered an unknown
value of cty.DynamicPseudoType was not clear:
The given "for_each" argument value is unsuitable: "for_each" supports maps
and sets of strings, but you have provided a set containing type dynamic.
By moving the check for unknowns before the check for set element types,
the following error is returned instead:
"The "for_each" value depends on resource attributes that cannot be
determined until apply (...)"
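A sketch of the reordering (illustrative, not the verbatim core code):

    import (
        "errors"
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func validateForEach(forEachVal cty.Value) error {
        // Check for unknowns first: a value that can't be determined
        // until apply should say so, not produce a misleading type error.
        if !forEachVal.IsWhollyKnown() {
            return errors.New(`the "for_each" value depends on resource attributes that cannot be determined until apply`)
        }
        if ty := forEachVal.Type(); ty.IsSetType() && ty.ElementType() != cty.String {
            return fmt.Errorf(`"for_each" supports maps and sets of strings, but you have provided a set containing type %s`, ty.ElementType().FriendlyName())
        }
        return nil
    }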
Orphaned instances that are create_before_destroy will still be in the
state when their references are evaluated. We need to skip instances
that are planned to be destroyed altogether, as they can't be part of an
evaluation.
When the DestroyEdgeTransformer was updated to handle stored
dependencies the addrs.ConfigResource type did not yet exist. The lookup
map keys in the transformer needed to be updated to remove module
indexes.
This is for consistency with other commands which use prompts, all of
which require "yes" rather than "y" to confirm.
We also migrate the login command to use UIInput, which now supports
securely asking for passwords or secrets via the speakeasy library.
This simplifies the initial targeting logic, and removes the complex
algorithm for finding descendants that result in output changes, which
hid bugs that surfaced when modules were involved.
The targeting is handled in two phases. First we find all individual
resource nodes that are targeted, then add all their dependencies to the
set of targets. This in essence is all we need for targeting, and is
straightforward to understand.
The next phase is to add any root module outputs that can be solely
derived from the set of targeted resources. There is currently no way to
target outputs themselves, so this is how we can allow these to be
updated as part of a target.
Rather than attempting to backtrack through the graph to find candidate
outputs, requiring each node on the chain to properly advertise if it
could be traversed, then backtracking again to determine if the
candidate is valid (which often got "off course"), we can start directly
from the outputs themselves. The algorithm here is simpler: if all the
root output's resource dependencies are targeted, add that output and
its dependencies to the targeted set.
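In sketch form (with the dependency lookup abstracted as a parameter):

    import "github.com/hashicorp/terraform/dag"

    func addTargetableOutputs(rootOutputs []dag.Vertex, targeted dag.Set, depsOf func(dag.Vertex) []dag.Vertex) {
        for _, out := range rootOutputs {
            allTargeted := true
            for _, dep := range depsOf(out) { // resource dependencies of the output
                if !targeted.Include(dep) {
                    allTargeted = false
                    break
                }
            }
            if allTargeted {
                targeted.Add(out) // plus its dependencies, so it can be evaluated
            }
        }
    }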
We previously intentionally removed support for the allow-missing-config
option to terraform import, requiring that all imported resources have
matching config. See #24412.
However, the option was not removed from the import command, and it is
widely used. This commit reintroduces support for importing with a
missing configuration by falling back to implying the provider FQN based
on the resource type.
If a data source is storing a value that doesn't comply precisely with
the schema, it will now show up as a perpetual diff during plan.
Since we can easily detect if there is no resulting change from the
stored value, rather than presenting a planned read each time, we can
change the plan to a NoOp and log the incongruity as a warning.
If depends_on is allowed for outputs, we should validate that the
expressions are valid. Since outputs are always evaluated, and
validation happens through that evaluation, we can perform the
depends_on validation during evaluation too.
There aren't going to be any nodes specifically for module call
instances during plan, so we have to switch the reference subject to the
general module call.
The output destroy node only needs to connect to each of the output's
up-edges in order to be connected transitively to all of the output's
dependencies. In large, highly-connected graphs, this may save
considerable time for each output.
The TargetsTransformer ignored resource indices before expansion could
happen, but was not handling module indices. Ensure that we collapse all
pre-expansion addresses to "configuration" addresses, with no module or
resource keys.
The recursive call should only return immediately on error.
The switch statement to find the current path should not use
ReferenceOutside, as we are getting the path for configuration, not for
references. This case would not have been taken currently, since all
GraphNodeReferenceOutside are also GraphNodeModulePath.
* Update error message for apply validation
Add a hint to the error message that the validation failure occurred at
the root of the resource schema. This is because the root resource has
an empty path when being validated, and the path is relied upon to
provide context in the error message.
During refresh, data sources need to know if their parent modules have
depends_on configured at all. Pass this info back through the search for
depends_on resources, and delay refresh when it's set.
Resources that are not yet created will not be in the graph during
refresh, and therefore cannot be attached to the data source nodes. In
this case we still need to indicate if there are depends_on entries
inherited from the module call, which we can do with the forceDependsOn
field.
We'll need this again for getting the transitive depends_on references
from parent module calls. This is needed to inform us how to handle data
sources during refresh and plan.
Resource destroy nodes can only depend on other resources. Connecting
them to their module expander can introduce cycles when the module
expander depends on resources in the destroyer's subgraph.
We don't need another node type for orphaned outputs; they are just
outputs being removed for a different reason than destroy. Use the
NodeDestroyableOutput implementation.
Destroy outputs also don't need to be referencers, since they are being
removed.
Rename DestroyOutputTransformer to destroyRootOutputTransformer, and add
an explanation as to why it is the only transformer that requires an
exception to know when it's involved from the destroy command.
This simplification allows us to settle on a single interface,
graphNodeExpandsInstances, for all types of instance expanders. The only
other specific class of resource we need to detect during pruning is the
nodeExpandApplyableResource node, which is already classified under the
GraphNodeResourceInstance interface.
ModulePath was incorrectly returning the parent module, because it did
not implement ReferenceOutside. With ReferenceOutside working correctly,
we can have ModulePath return the real path and remove the special case
for this during pruning.
Create a single transformer to remove all unused nodes from the apply
graph. This is similar to the combination of the resource pruning done
in the destroy edge transformer, and the unused values transformer. In
addition to resources, variables, locals, and outputs, we now need to
remove unused module expansion nodes as well. Since these can all be
interdependent, we need to process them as a whole in a single
transformation.
In order for depends_on to work, modules need to implicitly depend on
their child modules. This will have little effect on terraform's
concurrency, as configuration trees are always much wider than they are
deep.
Create interfaces that nodes can implement to declare whether they
expand into instances of some sort using the instances.Expander, and/or
whether they use the instances.Expander to find instances.
Included is a rough transformer implementation to remove these nodes
from the apply graph.
All of the feedback from the experiment described enhancements that can
potentially be added later without breaking changes, so this change simply
removes the experiment gate from the feature as originally implemented
with no changes to its functionality.
Further enhancements may follow in later releases, but the goal of this
change is just to ship the feature exactly as it was under the experiment.
Most of the changes here are cleaning up the experiment opt-ins from our
test cases. The most important parts are in configs/experiments.go and in
experiments/experiment.go .
Connect references from depends_on in module calls. This will "just
work" for a lot of cases, but data sources will be read too early in the
case where they require the dependencies to be created. While
data sources will be properly ordered behind the module head node, there
is nothing preventing them from being evaluated during refresh.
The resource apply nodes need to be GraphNodeDestroyerCBD in order to
correctly inherit create_before_destroy. While the plan will have
recorded this to create the correct deposed nodes, the edges still need
to be transformed correctly.
We also need create_before_destroy to be saved to state for nodes that
inherited it, so that if they are removed from state the destroy will
happen in the correct order.
We need to run the force CBD transformer during apply too, both to
ensure we can rely on the `CreateBeforeDestroy()` status for dependents
during apply, and to ensure that the correct status is stored into
state.
* addrs: replace NewLegacyProvider with NewDefaultProvider in ParseProviderSourceString
ParseProviderSourceString was still defaulting to NewLegacyProvider when
encountering single-part strings. This has been fixed.
This commit also adds a new function, IsProviderPartNormalized, which
returns a bool indicating if the string given is the same as a
normalized version (as normalized by ParseProviderPart) or an error.
This is intended for use by the configs package when decoding provider
configurations.
* terraform: fix provider local names in tests
* configs: validate that all provider names are normalized
The addrs package normalizes all source strings, but not the local
names. This caused very odd behavior if, for example, a provider local
name was capitalized in one place and not another. We considered enabling
case-sensitivity for provider local names, but decided that since this
was not something that worked in previous versions of terraform (and we
have yet to encounter any use cases for this feature) we could generate
an error if the provider local name is not normalized. This error also
provides instructions on how to fix it.
* configs: refactor decodeProviderRequirements to consistently not set an FQN when there are errors
The new data source planning logic no longer needs a separate action,
and the apply status can be determined from whether the After value is
complete or not.
Ensure that a data source with depends_on not only plans to update
during refresh, but also evaluates correctly in the plan, ensuring
dependencies are planned accordingly.
The state was not being set, so the change was not evaluated correctly
for dependent resources.
Remove use of cty.NilVal in readDataSource; only one place was using it,
so the code could just be moved out.
Fix a bunch of places where warnings would be lost.
Rather than re-read the data source during every plan cycle, apply the
config to the prior state, and skip reading if there is no change.
Remove the TODOs, as we're going to accept that data-only changes will
still not be plan-able for the time being.
Fix the null data source test resource, as it had no computed fields at
all, even the id.
The logic for refresh, plan and apply are all subtly different, so
rather than trying to manage that complex flow through a giant 300-line
method, break it up somewhat into three different types that can share the
types and a few helpers.
Start fixing plan tests that don't expect data sources to be in the
plan. A few were just checking that Read was never called, and some
expected the data source to be nil.
In order to update data sources correctly when their configuration
changes, they need to be evaluated during plan. Since the plan working
state isn't saved, store any data source reads as plan changes to be
applied later. This is currently abusing the Update plan action to
indicate that the data source was read and needs to be applied to state.
We can possibly add a Store action for data sources if this approach
works out. The Read action still indicates that the data source was
deferred to the Apply phase.
We also fully handle any data source depends_on changes. Now that all
the transitive resource dependencies are known at the time of
evaluation, we can check the plan to determine if there are any changes
in the dependencies and selectively defer reading the data source.
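The deferral check amounts to something like this (names illustrative):

    import (
        "github.com/hashicorp/terraform/addrs"
        "github.com/hashicorp/terraform/plans"
    )

    func dependsOnChangesPending(changes *plans.Changes, deps []addrs.ConfigResource) bool {
        for _, dep := range deps {
            for _, ch := range changes.Resources {
                if ch.Action == plans.NoOp {
                    continue
                }
                if ch.Addr.ContainingResource().Config().Equal(dep) {
                    return true // a dependency has planned changes; defer the read
                }
            }
        }
        return false
    }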
We need to load the state during refresh, so that even if the data
source can't be read due to `depends_on`, the state can be saved back
again to prevent it from being lost altogether.
This is a step towards having data sources refresh like resources, which
will be from their saved state value.
This transformer is what will provide the data sources with the
transitive dependencies needed to determine if they can read during plan
or must be deferred.
That name tag was left in only to reduce the diff during
implementation. Fix the naming now for these nodes so it is correct, and
prevent any possible name collision between types.
Adding a transformer to attach any transitive DependsOn references to
data sources during plan. Refactored the ReferenceMap from the
ReferenceTransformer so it can be reused for both.
GraphNodeAttachDependsOn gives us a method for adding all transitive resource
dependencies found through depends_on references, so that data sources
can determine if they can be read during plan. This will be done by
inspecting the changes of all dependency resources, and delaying read
until apply if any changes are planned.
* Include eval in output walk
This allows outputs to be evaluated in the evalwalk,
impacting terraform console. Outputs are still not evaluated
for terraform console in the root module, so this has
no impact on writing to state (as child module outputs are not
written to state). Also adds test coverage to the console command,
including for evaluating locals (another use of the evalwalk)
Add the expansion transformer to the eval graph,
which is used in rare scenarios which includes running
terraform console. Prevents panic when running terraform
console in contexts with module expansion
Since objects and tuples have fixed numbers of elements, we can't return
an unknown version of those during validation. While we could return a
DynamicVal (which was used previously), that prevents the validation of
outputs and attributes in config references.
Instead, we can return a synthetic type made from a List or Map based
on the configuration, which will allow us to more precisely validate
indexes, attributes, and outputs.
Because tuple types have a fixed number of elements, and we may not know
the number of expanded instances, we can't use an unknown tuple type to
validate index expressions.
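A sketch of the idea (not the verbatim core code):

    import "github.com/zclconf/go-cty/cty"

    // During validation the number of instances is unknown, so we
    // synthesize an unknown collection whose element type comes from the
    // configuration, rather than a fixed-length tuple or object.
    func unknownInstances(instanceTy cty.Type, forEach bool) cty.Value {
        if forEach {
            return cty.UnknownVal(cty.Map(instanceTy)) // for_each keys are strings
        }
        return cty.UnknownVal(cty.List(instanceTy)) // count indexes are ordered
    }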
Since evaluation is driven only by the configuration (i.e. you can't
interpolate what's not in the config), the resource evaluation should
also follow configuration rather than state. Determining the each mode
solely from the config, and applying that to the state and changes
removes the need for EachMode in the resource state. This bypasses the
awkward dance around getting the correct EachMode set in and retrieved
from state during plan when it changes in the config.
A side effect of the various changes to the provider installer was losing the initialization required error message which would occur if a user removed or modified the .terraform directory.
Previously, plugin factories were created after the configuration was loaded, in terraform.NewContext. Terraform would compare the required providers (from config and state) to the available providers and return the aforementioned error if a provider was missing.
Provider factories are now loaded at the beginning of any terraform command, before terraform even loads the configuration, and therefore before terraform has a list of required providers.
This commit replaces the current error when a providers' schema cannot be found in the provider factories with the init error, and adds a command test (to plan tests, for no real reason other than that's what I thought of first).
There is no codepath that can use this any longer, since we need to
evaluate the modules as whole objects.
This means we're going to have to live for now with invalid module
output references returning "object" errors rather than "module".
The evaluationStateData needs the change to the GetModule method to work
with the new evaluator. This is using a deep copy of module instances,
which we will clean up after some changes to the states package.
Stop evaluating count and for_each if they aren't set in the config.
Remove "Resource" from the function names, as they are also now used
with modules.
While we don't have any expansion info during validation, we can try to
evaluate variable expressions to catch some basic errors. Do this by
creating module instance RepetitionData with unknown values. This
unfortunately will still miss the incorrect usage of count/each values,
but that would require the module call's each mode, which is not
available at this time.
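The unknown repetition data looks roughly like this (the RepetitionData
fields are real; the wrapper function is illustrative):

    import (
        "github.com/hashicorp/terraform/instances"
        "github.com/zclconf/go-cty/cty"
    )

    func validationRepetitionData() instances.RepetitionData {
        return instances.RepetitionData{
            CountIndex: cty.UnknownVal(cty.Number), // count.index
            EachKey:    cty.UnknownVal(cty.String), // each.key
            EachValue:  cty.DynamicVal,             // each.value
        }
    }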
The variable nodes are not only used during plan and apply, so remove
those from their names. The "plan" node is now
`nodeExpandModuleVariable` and the "apply" node is now just
`nodeModuleVariable`.
Remove unnecessary methods, as the nodeModuleVariable is no longer used
in the full graph transformations.
NodeModuleRemoved is redundant now with the concept of
nodeCloseModule, so we can replace it within the graph. This does mean
that nodeCloseModule needs to know if it's evaluating an orphaned module
that can't be expanded, but the overhead of checking this isn't too
bad.
Now that nodeModuleClose is referenceable, and we can ensure it's always
in the graph at the correct time, we can eliminate the need to connect
each resource to every single node within a module it references, and
instead connect only to the nodeModuleClose, which acts as the module
root. Since module expansion can cause exponential growth in the number
of edges in graphs, this will help with performance problems when
transforming and reducing these graphs by eliminating the bulk of
redundant edges. This will also help with general debugging, making the
graphs easier to read.
This is all that is required to make module reference ordering work
during apply, by adding an edge to the nodeCloseModule node, which will
be the last node evaluated in the module.
These will now use "default" provider addresses, rather than "legacy"
ones, so that they can cooperate with the rest of Terraform that has been
updated to no longer use legacy provider addresses.
This encapsulates the logic for selecting an implied FQN for an
unqualified type name, which could either come from a local name used in
a module without specifying an explicit source for it or from the prefix
of a resource type on a resource that doesn't explicitly set "provider".
This replaces the previous behavior of just directly calling
NewDefaultProvider everywhere so that we can use a different implication
for the local name "terraform", to refer to the built-in terraform
provider rather than the stale one that's on registry.terraform.io for
compatibility with other Terraform versions.
* terraform: add helper functions for creating test state
testSetResourceInstanceCurrent and testSetResourceInstanceTainted are
wrapper functions around states.Module.SetResourceInstanceCurrent()
used to set a resource in state. They work with current, non-deposed
resources with no dependencies.
testSetResourceInstanceDeposed can be used to set a deposed resource in state.
* terraform: update all tests to use modern providers and state
These are cases where we were using the legacy string only to produce a
message to the user or to write to the log. It's enough to make some
basic Terraform commands like "terraform validate" not panic and get far
enough along to see that provider startup is working.
Back when we first introduced provider versioning in Terraform 0.10, we
did the provider version resolution in terraform.NewContext because we
weren't sure yet how exactly our versioning model was going to play out
(whether different versions could be selected per provider configuration,
for example) and because we were building around the limitations of our
existing filesystem-based plugin discovery model.
However, the new installer codepath is now able to do all of the
selections up front during installation, so we don't need such a heavy
inversion of control abstraction to get this done: the command package can
select the exact provider versions and pass their factories directly
to terraform.NewContext as a simple static map.
The result of this commit is that CLI commands other than "init" are now
able to consume the local cache directory and selections produced by the
installation process in "terraform init", passing all of the selected
providers down to the terraform.NewContext function for use in
implementing the main operations.
This commit is just enough to get the providers passing into the
terraform.Context. There's still plenty more to do here, including to
repair all of the tests this change has additionally broken.
We cannot evaluate expansion during validation, since the values may not
be known at that time.
Inject a nodeValidateModule, using the "Concrete" pattern used for other
node types during graph building. This node will always evaluate to a
single module instance, so that we have a valid context within which to
evaluate all sub resources.
Make the expansion logic easier to follow, keeping the evaluation and
registration local to switch cases. We don't validate anything between
count or for_each (config loading should handle that), and we don't need
to keep relying on the count == -1 sentinel value.
Replace the graphNodeRoot for the main graph with a nodeCloseModule for
the root module. Use a new transformer as well, so as to not change any
behavior of DynamicExpand graphs.
Closing out the root module like we do with sub modules means we no
longer need the OrphanResourceTransformer, or the NodeDestroyResource.
The old resource destroy logic has mostly moved into the instance nodes,
and the remaining resource node was just for cleanup, which instead needs
to be done by the module since there isn't always a NodeDestroyResource
to be evaluated.
The more-correct state caused a few tests to fail, which need to be
cleaned up to match the state without empty resource husks.
There is now one more non-dependent type to look for when pruning unused
values. This fixes the oversight, but still leaves the ugly concrete
type checking which we need to remove.
During plan, anything dependent on a module can connect to the module
expansion node, because all instance nodes are created during
DynamicExpand. During apply the instance nodes are created from the
diff, so we need a root module to terminate the logical module subgraph.
Besides providing an anchor for the completion of a module, the
nodeCloseModule can also be used to cleanup the orphan resource and
module placeholders in the state.
NodeDestroyResource does not require a provider, and to avoid this a
temporary GraphNodeNoProvider was used to differentiate it from other
resource nodes. We can now de-couple the destroy node from the abstract
resource which was adding the ProvidedBy method, and remove the
NoProvider method.
Remove the shims where they aren't necessary from the Init and Close
provider nodes. This also removed some provider path checks from the
builtin eval context, which cannot be resolved since the context may not
be created with a ModuleInstance path.
Use the new addrs type here.
Also remove the uniqueMap from the config transformer. We enforce
uniqueness during config loading, and this is more likely to have false
positives due to stringification than anything.
While the Expander itself now handles the recursive expansion of
modules, Resources themselves still need to be expanded twice, because
the evaluation of the Resource, which entails evaluating the for_each or
count expressions, is separate from the ResourceInstance expansion.
Add a nodeExpandPlannableResource to handle this expansion, allowing
all NodePlannableResources to call EvalWriteResourceState with an
absolute address.
As the Graph is walked, the current way to set the context path was to
have the walker return a context from EnterPath. This required that
every node know its absolute path, which can no longer be the case
during plan when modules have not been expanded.
This introduces a new method called WithPath, which returns a copy of
the context with the internal path updated to reflect the method
argument. Any use of the EvalContext that requires knowing the path will
now panic if it wasn't explicitly set to ensure that evaluations always
occur in the correct path.
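A sketch of the pattern (the real BuiltinEvalContext carries more state
than shown here):

    func (ctx *BuiltinEvalContext) WithPath(path addrs.ModuleInstance) EvalContext {
        newCtx := *ctx // shallow copy shares everything except the path
        newCtx.PathValue = path
        return &newCtx
    }

    func (ctx *BuiltinEvalContext) Path() addrs.ModuleInstance {
        if ctx.PathValue == nil {
            panic("eval context used without a module instance path set")
        }
        return ctx.PathValue
    }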
Add EvalContext to the GraphWalker interface.
EvalContext returns an EvalContext that has not yet set a path. This
will allow us to enforce that all context operations requiring a module
instance path will require that a path be explicitly set rather than
evaluating within the wrong path.