Commit Graph

135 Commits

Sander van Harmelen 7fbd93b5cd command/state: update and fix the state push and pull 2018-10-19 19:12:23 +02:00
Sander van Harmelen af1a471a05 command/state: update and fix the state list command 2018-10-19 16:31:12 +02:00
Martin Atkins ec57927ea3 build: Take protoc out of the "go generate" path
Since protoc is not go-gettable, and most development tasks in Terraform
won't involve recompiling protoc files anyway, we'll use a separate
mechanism for these.

This way "go generate" only depends on things we can "go get" in the
"make tools" target.

In a later commit we should also in some way specify a particular version
of protoc to use so that we don't get "flapping" regenerations as
developers work with different versions, but the priority here is just to
make "make generate" minimally usable again to restore the dev workflow
documented in the README.

This also includes some updates that resulted from running "make generate"
and "make protobuf" after those Makefile changes were in place.
2018-10-18 10:39:20 -07:00
James Bardin d707049f72 don't make a backup of a nil state
This makes sure we don't create a backup of an intermediate state if the
first read state was empty.
2018-10-17 09:42:08 -04:00
Martin Atkins 66f96cf842 command: Un-stub and reimplement "terraform state rm"
This was previously targeting the old state manager and state types, so it
needed some considerable rework to get it working again with the new state
types.

Since our new resource address syntax lacks the weird extra .deposed
special case we had before, we instead interpret addresses as
whole-instance addresses here and remove the deposed objects along with
the current one (if present), since this is more likely to match with
user expectations because we don't consider deposed objects to be
independently addressable in any other situation.

With that said, to be more explicit about what is going on we do now have
a -dry-run mode and maintain separate counts of current and deposed
instances so that we can expose that in the UI where relevant.
2018-10-16 19:14:11 -07:00
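
As a rough sketch of the removal behavior described above (a whole-instance address removes both the current and any deposed objects, with separate counts and an optional dry-run mode), the following uses hypothetical names such as instanceState and removeInstance rather than Terraform's actual implementation:

```go
package sketch

import "fmt"

// instanceState is a simplified stand-in for one resource instance record,
// holding an optional current object plus any deposed objects keyed by
// their pseudorandom deposed keys.
type instanceState struct {
	Current *struct{}
	Deposed map[string]*struct{}
}

// removeInstance removes the current and deposed objects for one instance
// address, keeping separate counts so the UI can report both; in dry-run
// mode it only reports what would be removed.
func removeInstance(state map[string]*instanceState, addr string, dryRun bool) (current, deposed int) {
	is, ok := state[addr]
	if !ok {
		return 0, 0
	}
	if is.Current != nil {
		current++
	}
	deposed = len(is.Deposed)
	if !dryRun {
		delete(state, addr)
	}
	fmt.Printf("%s: %d current, %d deposed object(s)\n", addr, current, deposed)
	return current, deposed
}
```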
Martin Atkins 331cb07a05 states/statefile: Tolerate nil state in statefile.New
For historical reasons sometimes we have nil state in situations where
we'd still like to persist state snapshots to a store. To make life easier
for those callers, we'll substitute an empty state if we are given a nil
one, thus allowing us to still generate a valid serialization that will
load back in as an empty state.
2018-10-16 19:14:11 -07:00
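
A minimal sketch of the nil-substitution behavior described above; the names (state, file, newFile) are placeholders, not the real statefile API:

```go
package sketch

// state is a minimal placeholder for the in-memory state model.
type state struct{}

// file is a minimal placeholder for the on-disk snapshot wrapper, carrying
// the lineage string alongside the state itself.
type file struct {
	Lineage string
	State   *state
}

// newFile substitutes an empty state when given nil, so callers that have
// no state yet can still produce a valid, loadable snapshot.
func newFile(s *state, lineage string) *file {
	if s == nil {
		s = &state{}
	}
	return &file{Lineage: lineage, State: s}
}
```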
Martin Atkins b0016e9cf6 command: Allow tests to run to completion without panics or hangs
There are still 160 test failures as of this commit, but at least the test
program can run to completion and list out all the failures.
2018-10-16 19:14:11 -07:00
Martin Atkins ec2e6cb06f terraform: Prune resource husks at the end of "terraform destroy"
When we're being asked to destroy everything, we ideally want to end up
with a totally empty state. Normally we will conservatively keep around
the "husks" of resources (what's left after all of the instances have been
destroyed) unless they are configured without count or for_each, but in
this special case we'll prune those out.

The implication of this is that in "weird" expression contexts that happen
before the next "terraform plan", such as evaluation in
"terraform console" or expressions in data resources and provider blocks
that get evaluated during the refresh walk, we will see these results
as unknown rather than as empty lists of objects. We accept that weirdness
for now because in a future release we are likely to remove "refresh" as
a separate walk anyway, doing all of that work during the plan walk where
we can ensure that these values are properly re-populated before trying
to use them.
2018-10-16 19:14:11 -07:00
Martin Atkins 2b5d9506b1 states: Fix TestResourceInstanceDeposeCurrentObject
This was missed when changing the signature of deposeCurrentObject in an
earlier commit.
2018-10-16 19:14:11 -07:00
Martin Atkins 2eea07750a core: Clean up resource states when they are orphaned
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.

This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
2018-10-16 19:14:11 -07:00
Martin Atkins faddb83a92 core: If create leg of create_before_destroy fails, restore deposed
I misunderstood the logic here on the first pass of porting to the new
provider and state types: EvalUndeposeState is supposed to return the
deposed object back to being current again, so we can undo the deposing
in the case where the create leg fails.

If we don't do this, we end up leaving the instance with no current object
at all and with its prior object deposed, and then the later destroy
node deletes that deposed object, leaving the user with no object at all.

For safety we skip this restoration if there _is_ a new current object,
since a failed create can still produce a partial result which we need
to keep to avoid losing track of any remote objects that were successfully
created.
2018-10-16 19:14:11 -07:00
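
The restore-only-if-no-current-object rule described above might look roughly like this; instance, object, and restoreDeposed are illustrative names, not Terraform's real types:

```go
package sketch

// object is a placeholder for a resource instance object in state.
type object struct{}

// instance holds a current object and deposed objects keyed by deposed key.
type instance struct {
	Current *object
	Deposed map[string]*object
}

// restoreDeposed returns a deposed object to being current again after a
// failed create_before_destroy create step, but only when the failed create
// left no partial current object behind.
func restoreDeposed(inst *instance, key string) {
	if inst.Current != nil {
		// A partial result exists; keep it so the remote object it tracks
		// is not lost, and leave the deposed object where it is.
		return
	}
	if obj, ok := inst.Deposed[key]; ok {
		inst.Current = obj
		delete(inst.Deposed, key)
	}
}
```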
Martin Atkins 334c6f1c2c core: Be more explicit in how we handle create_before_destroy
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.

Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:

- During a replace operation with create_before_destroy, we *pre-allocate*
  a DeposedKey to use for the prior object in the "apply" node and then
  pass that exact id to the destroy node, ensuring that we only destroy
  the single object we planned to destroy. In the happy path here the
  user never actually sees the allocated deposed key because we use it and
  then immediately destroy it within the same operation. However, that
  destroy may fail, which brings us to the second mechanism:

- If any deposed objects are already present in state during _plan_, we
  insert a destroy change for them into the plan so that it's explicit to
  the user that we are going to destroy these additional objects, and then
  create an individual graph node for each one in DiffTransformer.

The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.

However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
2018-10-16 19:14:11 -07:00
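
A simplified sketch of the two mechanisms described above, using hypothetical helpers (newDeposedKey, planDeposedDestroys) rather than the real DiffTransformer code:

```go
package sketch

import (
	"crypto/rand"
	"fmt"
)

// newDeposedKey pre-allocates a pseudorandom key for a deposed object so
// the later destroy step can target exactly that object and nothing else.
// The key format here is illustrative; Terraform's real keys differ.
func newDeposedKey() string {
	b := make([]byte, 4)
	rand.Read(b) // error ignored for brevity in this sketch
	return fmt.Sprintf("%08x", b)
}

// planDeposedDestroys emits one explicit destroy change per deposed object
// already present in state, so the user sees them in the plan instead of
// having them destroyed silently.
func planDeposedDestroys(deposed map[string]struct{}, addr string) []string {
	var changes []string
	for key := range deposed {
		changes = append(changes, fmt.Sprintf("destroy %s (deposed %s)", addr, key))
	}
	return changes
}
```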
Martin Atkins 9eb32c4536 core: Reinstate instance tainting, but without mutating objects
Our previous mechanism for dealing with tainting relied on directly
mutating the InstanceState object to mark it as such. In our new state
models we consider the instance objects to be immutable by convention, and
so we frequently copy them. As a result, the taint flagging was no longer
making it all the way through the apply evaluation process.

Here we now implement tainting as a separate step in the evaluation
process, creating a copy of the object with a tainted status if there were
any errors during creation.

This introduces a new behavior where any provider-level errors during
creation will also cause an instance to be marked as tainted if any object
is returned at all. Create-time errors _normally_ result in no object at
all, but the provider might return an object if the failure occurred at
a subsequent step of a multi-step creation process and so left behind a
remote object that needs to be cleaned up on a future run.
2018-10-16 19:14:11 -07:00
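
A small sketch of tainting as a separate copy-producing step, assuming illustrative types (object, status, maybeTaint) rather than Terraform's actual evaluation code:

```go
package sketch

// status enumerates the states an instance object can be in; only the two
// values needed for this sketch are shown.
type status int

const (
	objectReady status = iota
	objectTainted
)

// object represents an immutable-by-convention instance object; changes
// are made by copying rather than editing in place.
type object struct {
	Status status
	Attrs  map[string]string
}

// maybeTaint returns a tainted copy of the object when creation produced
// an error but still returned an object, leaving the input untouched.
func maybeTaint(obj *object, createErr error) *object {
	if obj == nil || createErr == nil {
		return obj
	}
	copied := *obj
	copied.Status = objectTainted
	return &copied
}
```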
Martin Atkins d48f3600fe states: In Module.testString, use incrementing ids for deposed
In our old world we always used 1-based indices into a slice of deposed
objects. The new models instead use a map keyed by pseudorandom strings,
so that deposed objects will have a consistent identity across multiple
operations.

However, having that pseudo-random string in our test comparison output
is not helpful, since such strings can never match hard-coded expectation
strings. Therefore for the purposes of generating this test comparison
output we'll revert back to using 1-based indexes.

This should avoid problems for tests that only create one deposed object
per instance, but those which create more than one will need to do some
more work since the _ordering_ of these objects in the output is still
pseudorandom as a result of it coming from a map rather than a slice.
2018-10-16 19:14:11 -07:00
Martin Atkins 83066cd57f states: Support non-string primitives in state string representation 2018-10-16 19:14:11 -07:00
Martin Atkins 5faf027ea7 states: In State.String, use colon suffix only after all module names 2018-10-16 19:14:11 -07:00
Martin Atkins 60718efc8e states: DeepCopy for ResourceInstanceObject
Also a fix for not actually deep-copying "Private", since when this was
originally written it was a cty.Value but then later became a []byte.
2018-10-16 19:14:11 -07:00
Martin Atkins edc0ce6333 states: Prune empty modules after possibly removing resources
Also includes a new log message for the situation where we _do_ prune,
since this seems helpful during debugging.
2018-10-16 19:14:11 -07:00
Martin Atkins 8003b3408f states: Fix incorrect ResourceInstanceObjectSrc.DeepCopy
Accidental shadowing of the top-level attrsFlat variable meant that the
flatmap portion of these objects was getting lost in the DeepCopy result.
2018-10-16 19:14:11 -07:00
James Bardin 6f429cc81b make state output match legacy output 2018-10-16 19:14:11 -07:00
Martin Atkins 9af67806fc core: Prune placeholder objects from state after refresh
Prior to our refactoring here, we were relying on a lucky coincidence for
correct behavior of the plan walk following a refresh in the same run:

- The refresh phase created placeholder objects in the state to represent
  any resource instance pending creation, to allow the interpolator to
  read attributes from them when evaluating "provider" and "data" blocks.
  In effect, the refresh walk is creating a partial plan that only covers
  creation actions, but was immediately discarding the actual diff entries
  and storing only the planned new state.

- It happened that objects pending creation showed up in state with an
  empty ID value, since that only gets assigned by the provider during
  apply.

- The Refresh function concluded by calling terraform.State.Prune, which
  deletes from the state any objects that have an empty ID value, which
  therefore prevented these temporary objects from surviving into the
  plan phase.

After refactoring, we no longer have this special ID field on instance
object state, and we instead rely on the Status field for tracking such
things. We also no longer have an explicit "prune" step on state, since
the state mutation methods themselves keep the structure pruned.

To address this, here we introduce a new instance object status "planned",
which is equivalent to having an empty ID value in the old world. We also
introduce a new method on states.SyncState that deletes from the state
any planned objects, which therefore replaces that portion of the old
State.prune operation just for this refresh use-case.

Finally, we are now expecting the expression evaluator to pull pending
objects from the planned changeset rather than from the state directly,
and so for correct results these placeholder resource creation changes
must also be reported in a throwaway changeset during the refresh walk.

The addition of states.ObjectPlanned also permits a previously-missing
safety check in the expression evaluator to prevent us from relying on the
incomplete value stored in state for a pending object, in the event that
some bug prevents the real pending object from being written into the
planned changeset.
2018-10-16 19:14:11 -07:00
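
The new pruning of planned placeholder objects might look roughly like the following sketch; syncState, statusPlanned, and RemovePlannedObjects are stand-ins for the real states.SyncState method described above:

```go
package sketch

import "sync"

// objectStatus marks what an instance object in state represents; the
// "planned" status stands in for the old empty-ID convention.
type objectStatus int

const (
	statusReady objectStatus = iota
	statusPlanned
)

// syncState is a simplified, lock-guarded wrapper over a flat map of
// instance objects, loosely inspired by the SyncState idea.
type syncState struct {
	mu      sync.Mutex
	objects map[string]objectStatus
}

// RemovePlannedObjects deletes every placeholder object that only exists
// because of planning during the refresh walk, replacing that part of the
// old State.prune behavior.
func (s *syncState) RemovePlannedObjects() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for addr, st := range s.objects {
		if st == statusPlanned {
			delete(s.objects, addr)
		}
	}
}
```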
Martin Atkins 20adb9d9b7 core: Evaluate resource references from plan where possible
Our state representation is not able to preserve unknown values, so it's
not suitable for retaining the transient incomplete values we produce
during planning.

Instead, we'll discard the unknown values when writing to state and have
the expression evaluator prefer an object from the plan where possible.
We still use the shape of the transient state to inform things like the
resource's "each mode", so the plan only masks the object values
themselves.
2018-10-16 19:14:11 -07:00
Martin Atkins 03e6771536 states/statemgr: don't panic if no state file is present on first write 2018-10-16 19:14:11 -07:00
Martin Atkins 44bc7519a6 terraform: More wiring in of new provider types
This doesn't actually work yet, but it builds and then panics in a pretty
satisfying way.
2018-10-16 19:12:54 -07:00
Martin Atkins fb57801dfe states: object Private data is []byte, not cty.Value
We're going to allow the provider to encode whatever it wants in here, so
a provider can use whatever is most convenient for its implementation
language and to avoid some of the bugs we saw with the prior model where
the forced round-trip through JSON and back into interface{} would cause
some loss of fidelity, leading to bugs.
2018-10-16 19:11:09 -07:00
Martin Atkins a3403f2766 terraform: Ugly huge change to weave in new State and Plan types
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.

The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.

The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.

Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
2018-10-16 19:11:09 -07:00
Martin Atkins cf6892275a states: Port stringer implementation from terraform.State
Our existing core tests make extensive use of the string representation
of a state for comparison purposes, because they were written before we
began making use of helper packages like "cmp".

To avoid the need to rewrite all of those tests and potentially break
them, we will instead port that particular rendering as closely as
possible but mark it with a comment sternly warning not to use it for
anything new.

We don't want to use this moving forward for a number of reasons, but
most notably:

 - printing out whole before and after state representations makes it
   hard to find a subtle difference in outcome when a test fails, while
   "cmp" can provide us with a real diff.

 - this string serialization is constrained by the capabilities of
   Terraform prior to our new state models, and so it does not
   comprehensively represent all possibilities in the new world.

 - it will probably behave oddly/poorly when given states containing
   features that arrived after it was written, even though I made a
   best effort here to make it do something reasonable in situations
   I thought about.
2018-10-16 18:58:49 -07:00
Martin Atkins 424afe0ace states: separate types for encoded and decoded state objects
The types here were originally written to allow us to defer decoding of
object values until schemas are available, but it turns out that this was
forcing us to defer decoding longer than necessary and potentially decode
the same value multiple times.

To avoid this, we create pairs of types to represent the encoded and
decoded versions and methods for moving between them. These types are
identical to one another apart from how the dynamic values are
represented.
2018-10-16 18:58:49 -07:00
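
A minimal sketch of the encoded/decoded pairing described above; in the real package the decoded form holds a schema-typed cty.Value, while this illustration uses plain JSON decoding and hypothetical names (objectSrc, object, Decode):

```go
package sketch

import "encoding/json"

// objectSrc is the "encoded" form: attributes are kept as raw JSON until a
// schema is available, so loading a state file never requires schemas.
type objectSrc struct {
	AttrsJSON []byte
}

// object is the "decoded" form, produced once and then reused so the same
// value is not decoded repeatedly.
type object struct {
	Attrs map[string]interface{}
}

// Decode converts the encoded object into its decoded counterpart. The two
// types are otherwise identical, differing only in how the dynamic values
// are represented.
func (src *objectSrc) Decode() (*object, error) {
	var attrs map[string]interface{}
	if err := json.Unmarshal(src.AttrsJSON, &attrs); err != nil {
		return nil, err
	}
	return &object{Attrs: attrs}, nil
}
```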
Martin Atkins 3bb731e2d6 statemgr: Helper funcs RefreshAndRead and WriteAndPersist
In practice these pairs of functions are often used together when working
with a "full" statemgr, so these helper wrappers allow us to do that more
conveniently.

This also introduces a new interface statemgr.Storage, which represents
a state manager that has all of the storage capabilities but does not
necessarily support locking. In practice callers will usually just use
statemgr.Full, but these more-specific interfaces allow us to reflect
in APIs which subset of the statemgr functionality each function depends
on.
2018-10-16 18:58:49 -07:00
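
The helper wrappers described above compose existing manager methods; the sketch below approximates that composition with simplified interfaces and names (storage, refreshAndRead, writeAndPersist) that may not match the real statemgr signatures exactly:

```go
package sketch

// state stands in for the in-memory state type a manager hands back.
type state struct{}

// storage approximates a manager with full storage capabilities but not
// necessarily locking.
type storage interface {
	RefreshState() error
	State() *state
	WriteState(*state) error
	PersistState() error
}

// refreshAndRead refreshes the manager's cached snapshot from its backing
// store and then returns it, combining two calls that are almost always
// made together.
func refreshAndRead(mgr storage) (*state, error) {
	if err := mgr.RefreshState(); err != nil {
		return nil, err
	}
	return mgr.State(), nil
}

// writeAndPersist records a new snapshot and immediately persists it to the
// backing store, again combining a common pair of calls.
func writeAndPersist(mgr storage, s *state) error {
	if err := mgr.WriteState(s); err != nil {
		return err
	}
	return mgr.PersistState()
}
```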
Martin Atkins 05936df0e7 statemgr: Backup file support for statemgr.Filesystem
In the old state package we had this as a separate manager
state.BackupState, but that doesn't work with our new interfaces because
we handle lineage and serial within the state managers themselves and
don't expose them to callers anymore.

In practice it being built in to the filesystem manager is not a problem
because we only use the backup functionality for local state anyway.

This also slightly adjusts the behavior to be more intuitive. The old
BackupState relied on the implementation detail that Terraform re-persists
the original state early in an apply operation, which meant that by
coincidence it would back up the right snapshot. In this new approach,
we instead take an in-memory copy during State and then write _that_ to
disk in WriteState if the new state seems different, so we're guaranteed
that we'll always write out what we read before any changes were made.

In future we may improve this further, such as keeping multiple
generations of backups, etc. But for now this is intended to preserve the
goals of the original implementation while making its behavior
self-contained and not dependent on coincidences.
2018-10-16 18:50:57 -07:00
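
A condensed sketch of the copy-during-read, backup-on-write behavior described above, with placeholder file I/O and hypothetical names (filesystem, readFile, writeFile) rather than the real statemgr.Filesystem implementation:

```go
package sketch

import "bytes"

// filesystem is a trimmed-down stand-in for a filesystem state manager with
// built-in backup support.
type filesystem struct {
	statePath    string
	backupPath   string
	readSnapshot []byte // in-memory copy captured when the state was read
}

// State reads the current snapshot from disk and keeps a private copy, so
// that whatever was read before any changes is what later gets backed up.
func (fs *filesystem) State() ([]byte, error) {
	data, err := readFile(fs.statePath)
	if err != nil {
		return nil, err
	}
	fs.readSnapshot = append([]byte(nil), data...)
	return data, nil
}

// WriteState backs up the originally-read copy first, but only when the new
// snapshot actually differs from it, and then persists the new snapshot.
func (fs *filesystem) WriteState(newSnapshot []byte) error {
	if fs.readSnapshot != nil && !bytes.Equal(fs.readSnapshot, newSnapshot) {
		if err := writeFile(fs.backupPath, fs.readSnapshot); err != nil {
			return err
		}
	}
	return writeFile(fs.statePath, newSnapshot)
}

// readFile and writeFile are placeholders for real file I/O.
func readFile(path string) ([]byte, error)     { return nil, nil }
func writeFile(path string, data []byte) error { return nil }
```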
Martin Atkins f77e7a61b0 various: helpers for collecting necessary provider types
Since schemas are required to interpret provider, resource, and
provisioner attributes in configs, states, and plans, these helpers intend
to make it easier to gather up the necessary provider types in order
to preload all of the needed schemas before beginning further processing.

Config.ProviderTypes returns directly the list of provider types, since
at this level further detail is not useful: we've not yet run the
provider allocation algorithm, and so the only thing we can reliably
extract here is provider types themselves.

State.ProviderAddrs and Plan.ProviderAddrs each return a list of
absolute provider addresses, which can then be turned into a list of
provider types using the new helper providers.AddressedTypesAbs.

Since we're already using configs.Config throughout core, this also
updates the terraform.LoadSchemas helper to use Config.ProviderTypes
to find the necessary providers, rather than implementing its own
discovery logic. states.State is not yet plumbed in, so we cannot yet
use State.ProviderAddrs to deal with the state but there's a TODO comment
to remind us to update that in a later commit when we swap out
terraform.State for states.State.

A later commit will probably refactor this further so that we can easily
obtain schema for the providers needed to interpret a plan too, but that
is deferred here because further work is required to make core work with
the new plan types first. At that point, terraform.LoadSchemas may become
providers.LoadSchemas with a different interface that just accepts lists
of provider and provisioner names that have been gathered by the caller
using these new helpers.
2018-10-16 18:50:29 -07:00
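
The type-collection helpers boil down to deduplicating provider types out of a list of absolute provider addresses; the sketch below illustrates that with simplified types (absProviderAddr, addressedTypes) rather than the real providers.AddressedTypesAbs:

```go
package sketch

import "sort"

// absProviderAddr is a simplified absolute provider address: a provider
// type plus the module it is configured in. Terraform's real address types
// carry more detail.
type absProviderAddr struct {
	Module string
	Type   string
}

// addressedTypes reduces a list of absolute provider addresses down to the
// distinct provider types they reference, sorted for stable output, so all
// needed schemas can be loaded up front.
func addressedTypes(addrs []absProviderAddr) []string {
	seen := make(map[string]bool)
	var types []string
	for _, addr := range addrs {
		if !seen[addr.Type] {
			seen[addr.Type] = true
			types = append(types, addr.Type)
		}
	}
	sort.Strings(types)
	return types
}
```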
Martin Atkins a33f941778 states: New SyncState type
This is a wrapper around State that is able to perform higher-level
manipulations (at the granularity of the entire state) in a
concurrency-safe manner, using the lower-level APIs exposed by State and
all of the types it contains.

The granularity of a SyncState operation roughly matches the granularity
of a state-related EvalNode in the "terraform" package, performing a
sequence of more primitive operations while guaranteeing atomicity of the
entire change.

As a compromise for convenience of usage, it's still possible to access
the individual state data objects via this API, but they are always copied
before returning to ensure that two distinct callers cannot have data
races. Callers should access the most granular object possible for their
operation.
2018-10-16 18:49:20 -07:00
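
A rough sketch of the copy-before-return and whole-operation-locking ideas described above, using illustrative names (syncWrapper, resourceObject) rather than the real SyncState API:

```go
package sketch

import "sync"

// resourceObject is a placeholder for the granular state data guarded here.
type resourceObject struct {
	Attrs map[string]string
}

// syncWrapper guards a shared map of objects so that concurrent graph nodes
// can read and write state safely; loosely modeled on the SyncState idea.
type syncWrapper struct {
	mu      sync.Mutex
	objects map[string]*resourceObject
}

// ResourceObject returns a copy of the requested object, never the shared
// one, so two distinct callers can never race on the same underlying data.
func (s *syncWrapper) ResourceObject(addr string) *resourceObject {
	s.mu.Lock()
	defer s.mu.Unlock()
	obj, ok := s.objects[addr]
	if !ok {
		return nil
	}
	attrs := make(map[string]string, len(obj.Attrs))
	for k, v := range obj.Attrs {
		attrs[k] = v
	}
	return &resourceObject{Attrs: attrs}
}

// SetResourceObject replaces one object as a single atomic operation,
// matching the granularity of a state-related step in the graph walk.
func (s *syncWrapper) SetResourceObject(addr string, obj *resourceObject) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.objects[addr] = obj
}
```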
Martin Atkins 53cafc542b statemgr: New package for state managers
This idea of a "state manager" was previously modelled via the
confusingly-named state.State interface, which we've been calling a "state
manager" only in some local variable names in situations where there were
also *terraform.State variables.

As part of reworking our state models to make room for the new type
system, we also need to change what was previously the state.StateReader
interface. Since we've found the previous organization confusing anyway,
here we just copy all of those interfaces over into statemgr where we can
make the relationship to states.State hopefully a little clearer.

This is not yet a complete move of the functionality from "state", since
we're not yet ready to break existing callers. In a future commit we'll
turn the interfaces in the old "state" package into aliases of the
interfaces in this package, and update all the implementers of what will
by then be statemgr.Reader to use *states.State instead of
*terraform.State.

This also includes an adaptation of what was previously state.LocalState
into statemgr.Filesystem, using the new state serialization functionality
from package statefile instead of the old terraform.ReadState and
terraform.WriteState.
2018-10-16 18:49:20 -07:00
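
The split into small capability interfaces that recombine into a "full" manager might look roughly like this; the interface set below is an approximation, and the real statemgr package may differ in detail:

```go
package sketch

// state stands in for the in-memory state model a manager hands back.
type state struct{}

// The interfaces below show how a "state manager" can be split into small
// capabilities and then recombined, so each function can declare exactly
// which subset of the functionality it depends on.

type reader interface {
	State() *state
}

type writer interface {
	WriteState(*state) error
}

type refresher interface {
	RefreshState() error
}

type persister interface {
	PersistState() error
}

type locker interface {
	Lock(info string) (string, error)
	Unlock(id string) error
}

// full is the everything-manager most callers want in practice.
type full interface {
	reader
	writer
	refresher
	persister
	locker
}
```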
Martin Atkins 5c1c6e9d9c statefile: New package for loading and saving state files
Whereas the parent directory "states" contains the models that represent
state in memory, this package's responsibility is in serializing a subset
of that data to a JSON-based file format and then reloading that data
back into memory later.

For reading, this package supports state file formats going back to
version 1, using lightly-adapted versions of the migration code previously
used in the "terraform" package. State data is upgraded to the latest
version step by step and then transformed into the in-memory state
representation, which is distinct from any of the file format structs in
this package to enable these to evolve separately.

For writing, only the latest version (4) is supported, which is a new
format that is a slightly-flattened version of the new in-memory state
models introduced in the prior commit. This format retains the outputs
from only the root module and it flattens out the module and instance
parts of the hierarchy by including the identifiers for these inside
the child object. The loader then reconstructs the multi-layer structure
we use for more convenient access in memory.

For now, the only testing in this package is of round-tripping different
versions of state through a read and a write, ensuring the output is
as desired. This exercises all of the reading, upgrading, and writing
functions but should be augmented in later commits to improve coverage
and introduce more focused tests for specific parts of the functionality.
2018-10-16 18:49:20 -07:00
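
A compressed sketch of the read-upgrade-write flow described above, with a hypothetical file struct and a placeholder per-version upgrade step standing in for the real statefile package:

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"io"
)

// file pairs the serialization metadata with the state payload; names and
// fields are a rough approximation of the real package, not its format.
type file struct {
	Version int                    `json:"version"`
	Serial  uint64                 `json:"serial"`
	Lineage string                 `json:"lineage"`
	State   map[string]interface{} `json:"state"`
}

const currentVersion = 4

// read loads any supported version and upgrades it step by step to the
// current one before handing back the in-memory representation.
func read(r io.Reader) (*file, error) {
	var f file
	if err := json.NewDecoder(r).Decode(&f); err != nil {
		return nil, err
	}
	for f.Version < currentVersion {
		if err := upgradeOneVersion(&f); err != nil {
			return nil, fmt.Errorf("upgrading from version %d: %w", f.Version, err)
		}
	}
	return &f, nil
}

// write always emits the latest format version.
func write(f *file, w io.Writer) error {
	f.Version = currentVersion
	return json.NewEncoder(w).Encode(f)
}

// upgradeOneVersion is a placeholder for the per-version migration steps.
func upgradeOneVersion(f *file) error {
	f.Version++
	return nil
}
```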
Martin Atkins b975ada8db states: New package with modern models for Terraform state
Our previous state models in the "terraform" package had a few limitations
that are addressed here:

- Instance attributes were stored as map[string]string with dot-separated
  keys representing traversals through a data structure. Now that we have
  a full type system, it's preferable to store it as a real data
  structure.

- The existing state structures skipped over the "resource" concept and
  went straight to resource instance, requiring heuristics to decide
  whether a particular resource should appear as a single object or as
  a list of objects when used in configuration expressions.

- Related to the previous point, the state models also used incorrect
  terminology where "ResourceState" was really a resource instance state
  and "InstanceState" was really the state of a particular remote object
  associated with an instance. These new models use the correct names for
  each of these, introducing the idea of a "ResourceInstanceObject" as
  the local record of a remote object associated with an instance.

This is a first pass at fleshing out a new model for state. Undoubtedly
there will be further iterations of this as we work on integrating these
new models into the "terraform" package.

These new model types no longer serve double-duty as a description of the
JSON state file format, since they are for in-memory use only. A
subsequent commit will introduce a separate package that deals with
persisting state to files and reloading those files later.
2018-10-16 18:49:20 -07:00
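
The layering the commit describes (state, module, resource, instance, instance object) might be sketched like this, with illustrative field names rather than the real states package types:

```go
package sketch

// state contains modules, modules contain resources, resources contain
// instances, and each instance tracks its current (and possibly deposed)
// remote objects.

type state struct {
	Modules map[string]*module // keyed by module address
}

type module struct {
	Resources map[string]*resource // keyed by resource address
}

// resource exists as its own level so that expressions can tell a single
// object from a list of instances without heuristics.
type resource struct {
	EachMode  string                       // "", "count", or "for_each"
	Instances map[string]*resourceInstance // keyed by instance key
}

type resourceInstance struct {
	Current *resourceInstanceObject
	Deposed map[string]*resourceInstanceObject // keyed by deposed key
}

// resourceInstanceObject is the local record of one remote object, with
// attributes held as a real data structure rather than a flat string map.
type resourceInstanceObject struct {
	Status string
	Attrs  map[string]interface{}
}
```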