Commit Graph

10 Commits

Author SHA1 Message Date
James Bardin ab9a2935ce implement NodePlannableLocal
Using this in the same manner as NodePlannableOutput, which expands the
local values within modules. All the output and local types are used in
both plan and apply, so we may rename these to better reflect their usage
in expanding. That can wait until we are certain that apply won't need any
extra machinery for handling values that aren't stored in the plan.
2020-03-10 17:25:11 -04:00
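As a rough, standalone illustration of the expansion this commit describes, the sketch below fans a single local value out into one node per module instance; the type and function names are invented for the example and are not Terraform's actual API.

```go
package main

import "fmt"

// nodeLocalInstance stands in for the per-module-instance node the expansion
// produces; the names here are illustrative, not Terraform's real types.
type nodeLocalInstance struct {
	module string // expanded module instance address
	name   string // local value name
}

// expandLocal mirrors the DynamicExpand-style fan-out described above: given
// the module instances that exist after expansion, return one evaluable node
// per instance for the named local value.
func expandLocal(name string, moduleInstances []string) []nodeLocalInstance {
	nodes := make([]nodeLocalInstance, 0, len(moduleInstances))
	for _, mod := range moduleInstances {
		nodes = append(nodes, nodeLocalInstance{module: mod, name: name})
	}
	return nodes
}

func main() {
	instances := []string{"module.network[0]", "module.network[1]"}
	for _, n := range expandLocal("subnet_cidr", instances) {
		fmt.Printf("%s: local.%s\n", n.module, n.name)
	}
}
```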
James Bardin bd9cfca794 rename GraphNodeSubPath -> GraphNodeModuleInstance 2020-03-10 17:25:11 -04:00
James Bardin 521bdcc241 implement GraphNodeModulePath
GraphNodeModulePath is similar to GraphNodeSubPath, except that it
returns an addrs.Module rather than an addrs.ModuleInstance. This is
used by the ReferenceTransformer to connect references, when modules may
not yet be expanded.

Because references only exist within the scope of a module, we can
connect everything knowing only the module path. If the reference is to
an expanded module instance output, we can still properly order the
reference because we'll wait for the entire module to complete
evaluation.
2020-03-10 17:25:11 -04:00
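A minimal standalone sketch of the distinction this commit draws, using simplified stand-ins for addrs.Module and addrs.ModuleInstance; the interface and method names mirror those mentioned above but are otherwise illustrative:

```go
package main

import "fmt"

// Module is a simplified stand-in for addrs.Module: a static module path with
// no instance keys. ModuleInstance stands in for addrs.ModuleInstance, the
// fully expanded form that includes count/for_each instance keys.
type Module []string
type ModuleInstance []string

// GraphNodeModuleInstance (formerly GraphNodeSubPath) reports a fully
// expanded instance path; GraphNodeModulePath reports only the static module
// path, which is enough for the ReferenceTransformer to connect references
// before expansion has happened.
type GraphNodeModuleInstance interface {
	Path() ModuleInstance
}

type GraphNodeModulePath interface {
	ModulePath() Module
}

// exampleNode pretends to be a graph node declared inside module "network".
type exampleNode struct{}

func (exampleNode) ModulePath() Module { return Module{"network"} }

func main() {
	var n GraphNodeModulePath = exampleNode{}
	fmt.Println(n.ModulePath()) // [network] (no instance key required)
}
```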
Martin Atkins de7b834de7 core: Local and output values must reference destroy nodes too
Because we currently rely on the ReferenceTransformer to introduce the
necessary edges between local/output values and resource destroy nodes, we
must include the destroy phase of any resource we depend on in the
references of these.

This works in conjunction with the changes in the prior commit to restore
correct handling of dependencies for local and output values during
destroy.

With the current design, several seemingly-separate parts of the code must
all coincidentally agree with one another for destroy edges to be created
properly, which makes this code very hard to maintain. In future we should
refactor this so that ReferenceTransformer doesn't create edges for
destroy nodes at all, and have _all_ destroy edges (including
create_before_destroy) be dealt with in the single DestroyEdgeTransformer,
where they can be maintained and unit tested together.
2018-10-16 18:49:20 -07:00
Martin Atkins a28c93c3ce core: render variables, locals and outputs nicely in "terraform graph" 2018-10-16 18:46:46 -07:00
Martin Atkins c937c06a03 terraform: ugly huge change to weave in new HCL2-oriented types
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.

The three main goals here are:
- Use the configuration models from the "configs" package instead of the
  older models in the "config" package, which is now deprecated and
  preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
  new "lang" package, instead of the Interpolator type and related
  functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
  rather than hand-constructed strings. This is not critical to support
  the above, but was a big help during the implementation of these other
  points since it made it much more explicit what kind of address is
  expected in each context.

Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.

I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
2018-10-16 18:46:46 -07:00
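The third goal listed above, replacing hand-constructed address strings with typed values, can be pictured with a small standalone example; the Resource type below is a simplified stand-in for the addrs package, not its actual definition.

```go
package main

import "fmt"

// Resource is a simplified stand-in for the typed resource addresses in the
// addrs package; fields and methods here are illustrative only.
type Resource struct {
	Mode string // "managed" or "data"
	Type string
	Name string
}

// String derives the familiar address syntax from the structured value, so
// callers never assemble the string by hand.
func (r Resource) String() string {
	if r.Mode == "data" {
		return fmt.Sprintf("data.%s.%s", r.Type, r.Name)
	}
	return fmt.Sprintf("%s.%s", r.Type, r.Name)
}

func main() {
	// Old style: the address is just a string, so its structure is implicit
	// and every caller must build or parse it consistently.
	old := "aws_instance.web"

	// New style: the structure is explicit and the string form is derived.
	addr := Resource{Mode: "managed", Type: "aws_instance", Name: "web"}

	fmt.Println(old == addr.String()) // true
}
```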
James Bardin 7da1a39480 always evaluate locals, even during destroy
Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
2018-01-29 16:16:41 -05:00
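As a standalone sketch of the edge reversal described above (not the actual DestroyValueReferenceTransformer code), the function below flips every outgoing edge of a local or output node so that its dependencies are destroyed only after the value has been evaluated:

```go
package main

import "fmt"

// edge records that source must wait for target before it can run.
type edge struct{ source, target string }

// reverseValueEdges flips each outgoing edge of the given value nodes, so
// anything a local or output references is ordered after its evaluation.
// This is a simplified illustration of the transformer's behavior, not its
// real implementation.
func reverseValueEdges(edges []edge, valueNodes map[string]bool) []edge {
	out := make([]edge, 0, len(edges))
	for _, e := range edges {
		if valueNodes[e.source] {
			out = append(out, edge{source: e.target, target: e.source})
			continue
		}
		out = append(out, e)
	}
	return out
}

func main() {
	// local.name references aws_instance.web, so it normally waits for it.
	edges := []edge{{source: "local.name", target: "aws_instance.web (destroy)"}}
	values := map[string]bool{"local.name": true}

	// After reversal the destroy node waits for the local value instead.
	fmt.Println(reverseValueEdges(edges, values))
}
```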
James Bardin 77396107c4 don't evaluate locals during destroy
Locals don't need to be evaluated during destroy.  Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
2017-09-28 12:56:25 -04:00
Martin Atkins 3a30bfe845 core: evaluate locals and return them for interpolation
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.

From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
2017-08-21 15:15:25 -07:00
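A minimal sketch of the serialization behavior described above, assuming a simplified module-state struct rather than the real ModuleState type: the locals map is held in memory but carries the `json:"-"` tag, so it never reaches the persisted state.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// moduleState is a simplified stand-in for the module state described above:
// outputs are persisted, while locals live only in memory and are skipped by
// JSON serialization, since they can be trivially recomputed.
type moduleState struct {
	Outputs map[string]string      `json:"outputs"`
	Locals  map[string]interface{} `json:"-"`
}

func main() {
	s := moduleState{
		Outputs: map[string]string{"addr": "10.0.0.1"},
		Locals:  map[string]interface{}{"name": "web"},
	}

	b, err := json.Marshal(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"outputs":{"addr":"10.0.0.1"}}; Locals is omitted
}
```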
Martin Atkins 5b66953d1d core: graph nodes and edges for local values
A local value is similar to an output in that it exists only within state
and just always evaluates its value as best it can with the current state.
Therefore it has a single graph node type for all walks, which will
deal with that evaluation operation.
2017-08-21 15:15:25 -07:00