core: Functional-style API for terraform.Context

Previously terraform.Context was built in an unfortunate way where all of
the data was provided up front in terraform.NewContext and then mutated
directly by subsequent operations. That made the data flow hard to follow,
commonly leading to bugs, and also meant that we were forced to take
various actions too early in terraform.NewContext, rather than waiting
until a more appropriate time during an operation.
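
For contrast, the old pattern looked roughly like this (a condensed sketch
reconstructed from the lines removed throughout this diff, with package
imports elided; not a verbatim call site):

    func oldStyle(config *configs.Config, state *states.State) (*states.State, tfdiags.Diagnostics) {
        tfCtx, diags := terraform.NewContext(&terraform.ContextOpts{
            Config: config, // all inputs provided up front...
            State:  state,  // ...including the state the context mutates in place
        })
        if diags.HasErrors() {
            return state, diags
        }
        _, moreDiags := tfCtx.Plan() // no arguments: reads and writes internal fields
        diags = diags.Append(moreDiags)
        return tfCtx.State(), diags // results read back out of the context
    }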

This (enormous) commit changes terraform.Context so that its fields are
broadly just unchanging data about the execution context (current
workspace name, available plugins, etc) whereas the main data Terraform
works with arrives via individual method arguments and is returned in
return values.

Specifically, this means that terraform.Context no longer "has-a" config,
state, and "planned changes", instead holding on to those only temporarily
during an operation. The caller is responsible for propagating the outcome
of one step into the next step so that the data flow between operations is
actually visible.
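
Condensed from the updated call sites in this diff, the new pattern looks
like this (sketch only, imports elided):

    func newStyle(tfCtx *terraform.Context, config *configs.Config, prevRunState *states.State) (*states.State, tfdiags.Diagnostics) {
        plan, diags := tfCtx.Plan(config, prevRunState, &terraform.PlanOpts{Mode: plans.NormalMode})
        if diags.HasErrors() {
            return prevRunState, diags
        }
        newState, moreDiags := tfCtx.Apply(plan, config)
        return newState, diags.Append(moreDiags)
    }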

However, since that's a change to the main entry points in the "terraform"
package, this commit also touches every file in the codebase which
interacted with those APIs. Most of the noise here is in updating tests
to take the same actions using the new API style, but this also affects
the main-code callers in the backends and in the command package.

My goal here was to refactor without changing observable behavior, but in
practice there are a couple of externally-visible behavior variations here
that seemed okay in service of the broader goal:
 - The "terraform graph" command is no longer hooked directly into the
   core graph builders, because that's no longer part of the public API.
   However, I did include a couple of new Context functions whose contract
   is to produce a UI-oriented graph, and _for now_ those continue to
   return the physical graph we use for those operations. There's no
   exported API for generating the "validate" and "eval" graphs, because
   neither is particularly interesting in its own right, and so
   "terraform graph" no longer supports those graph types.
 - terraform.NewContext no longer has the responsibility for collecting
   all of the provider schemas up front. Instead, we wait until we need
   them (see the sketch after this list). However, that means that some
   of our error messages now have a slightly different shape due to
   unwinding through a differently-shaped call stack. As of this commit
   we also end up reloading the schemas
   multiple times in some cases, which is functionally acceptable but
   likely represents a performance regression. I intend to rework this to
   use caching, but I'm saving that for a later commit because this one is
   big enough already.
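
As an illustration of the second point, callers now load schemas at the
point of use; the following block is taken from the backend/local apply
changes later in this diff:

    schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
    diags = diags.Append(moreDiags)
    if moreDiags.HasErrors() {
        op.ReportResult(runningOp, diags)
        return
    }
    op.View.Plan(plan, schemas)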

The proximal reason for this change is to resolve the chicken/egg problem
whereby there was previously no single point where we could apply "moved"
statements to the previous run state before creating a plan. With this
change in place, we can now do that as part of Context.Plan, prior to
forking the input state into the three separate state artifacts we use
during planning.
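
Two of those state artifacts are then recorded in the resulting plan; the
field names in this sketch come from the plans.Plan construction visible in
the "terraform graph" changes later in this diff:

    func inspectPlanStates(plan *plans.Plan) {
        _ = plan.PrevRunState // the previous run state as found, after "moved" statements and schema upgrades
        _ = plan.PriorState   // the refreshed state that the apply step will start from
    }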

However, this is at least the third project in a row where the previous
API design led to piling more functionality into terraform.NewContext and
then working around the incorrect order of operations that this produces, so
I intend that by paying the cost/risk of this large diff now we can in
turn reduce the cost/risk of future projects that relate to our main
workflow actions.
Martin Atkins 2021-08-24 12:06:38 -07:00
parent 4faac6ee43
commit 89b05050ec
55 changed files with 3288 additions and 3676 deletions

View File

@ -141,9 +141,63 @@ type Enhanced interface {
// configurations, variables, and more. Not all backends may support this
// so we separate it out into its own optional interface.
type Local interface {
// Context returns a runnable terraform Context. The operation parameter
// doesn't need a Type set but it needs other options set such as Module.
Context(*Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics)
// LocalRun uses information in the Operation to prepare a set of objects
// needed to start running that operation.
//
// The operation doesn't need a Type set, but it needs various other
// options set. This is a rather odd API that tries to treat all
// operations as the same when they really aren't; see the local and remote
// backend's implementations of this to understand what this actually
// does, because this operation has no well-defined contract aside from
// "whatever it already does".
LocalRun(*Operation) (*LocalRun, statemgr.Full, tfdiags.Diagnostics)
}
// LocalRun represents the assortment of objects that we can collect or
// calculate from an Operation object, which we can then use for local
// operations.
//
// The operation methods on terraform.Context (Plan, Apply, Import, etc) each
// generate new artifacts which supersede parts of the LocalRun object that
// started the operation, so callers should be careful to use those subsequent
// artifacts instead of the fields of LocalRun where appropriate. The LocalRun
// data intentionally doesn't update as a result of calling methods on Context,
// in order to make data flow explicit.
//
// This type is a weird architectural wart resulting from the overly-general
// way our backend API models operations, whereby we behave as if all
// Terraform operations have the same inputs and outputs even though they
// are actually all rather different. The exact meaning of the fields in
// this type therefore varies depending on which OperationType was passed to
// Local.LocalRun in order to create an object of this type.
type LocalRun struct {
// Core is an already-initialized Terraform Core context, ready to be
// used to run operations such as Plan and Apply.
Core *terraform.Context
// Config is the configuration we're working with, which typically comes
// from either config files directly on local disk (when we're creating
// a plan, or similar) or from a snapshot embedded in a plan file
// (when we're applying a saved plan).
Config *configs.Config
// InputState is the state that should be used for whatever is the first
// method call to a context created with CoreOpts. When creating a plan
// this will be the previous run state, but when applying a saved plan
// this will be the prior state recorded in that plan.
InputState *states.State
// PlanOpts are options to pass to a Plan or Plan-like operation.
//
// This is nil when we're applying a saved plan, because the plan itself
// contains enough information about its options to apply it.
PlanOpts *terraform.PlanOpts
// Plan is a plan loaded from a saved plan file, if our operation is to
// apply that saved plan.
//
// This is nil when we're not applying a saved plan.
Plan *plans.Plan
}
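// Editor's usage sketch, not part of the original commit: the calls below
// appear at the updated call sites elsewhere in this diff, and show how a
// caller threads the LocalRun fields through Core explicitly.
//
// lr, stateMgr, diags := b.LocalRun(op)
// plan, moreDiags := lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
// newState, moreDiags := lr.Core.Apply(plan, lr.Config)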
// An operation represents an operation for Terraform to execute.

View File

@ -284,7 +284,7 @@ func (b *Local) Operation(ctx context.Context, op *backend.Operation) (*backend.
f = b.opApply
default:
return nil, fmt.Errorf(
"Unsupported operation type: %s\n\n"+
"unsupported operation type: %s\n\n"+
"This is a bug in Terraform and should be reported. The local backend\n"+
"is built-in to Terraform and should always support all operations.",
op.Type)

View File

@ -5,7 +5,6 @@ import (
"fmt"
"log"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/plans"
@ -23,7 +22,7 @@ func (b *Local) opApply(
runningOp *backend.RunningOperation) {
log.Printf("[INFO] backend/local: starting Apply operation")
var diags tfdiags.Diagnostics
var diags, moreDiags tfdiags.Diagnostics
// If we have a nil module at this point, then set it to an empty tree
// to avoid any potential crashes.
@ -43,7 +42,7 @@ func (b *Local) opApply(
op.Hooks = append(op.Hooks, stateHook)
// Get our context
tfCtx, _, opState, contextDiags := b.context(op)
lr, _, opState, contextDiags := b.localRun(op)
diags = diags.Append(contextDiags)
if contextDiags.HasErrors() {
op.ReportResult(runningOp, diags)
@ -59,15 +58,26 @@ func (b *Local) opApply(
}
}()
runningOp.State = tfCtx.State()
// We'll start off with our result being the input state, and replace it
// with the result state only if we eventually complete the apply
// operation.
runningOp.State = lr.InputState
var plan *plans.Plan
// If we weren't given a plan, then we refresh/plan
if op.PlanFile == nil {
// Perform the plan
log.Printf("[INFO] backend/local: apply calling Plan")
plan, planDiags := tfCtx.Plan()
diags = diags.Append(planDiags)
if planDiags.HasErrors() {
plan, moreDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
op.ReportResult(runningOp, diags)
return
}
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
op.ReportResult(runningOp, diags)
return
}
@ -75,7 +85,7 @@ func (b *Local) opApply(
trivialPlan := !plan.CanApply()
hasUI := op.UIOut != nil && op.UIIn != nil
mustConfirm := hasUI && !op.AutoApprove && !trivialPlan
op.View.Plan(plan, tfCtx.Schemas())
op.View.Plan(plan, schemas)
if mustConfirm {
var desc, query string
@ -119,7 +129,7 @@ func (b *Local) opApply(
Description: desc,
})
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error asking for approval: {{err}}", err))
diags = diags.Append(fmt.Errorf("error asking for approval: %w", err))
op.ReportResult(runningOp, diags)
return
}
@ -130,16 +140,7 @@ func (b *Local) opApply(
}
}
} else {
plan, err := op.PlanFile.ReadPlan()
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid plan file",
fmt.Sprintf("Failed to read plan from plan file: %s.", err),
))
op.ReportResult(runningOp, diags)
return
}
plan = lr.Plan
for _, change := range plan.Changes.Resources {
if change.Action != plans.NoOp {
op.View.PlannedChange(change)
@ -157,12 +158,10 @@ func (b *Local) opApply(
go func() {
defer close(doneCh)
log.Printf("[INFO] backend/local: apply calling Apply")
_, applyDiags = tfCtx.Apply()
// we always want the state, even if apply failed
applyState = tfCtx.State()
applyState, applyDiags = lr.Core.Apply(plan, lr.Config)
}()
if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
return
}
diags = diags.Append(applyDiags)

View File

@ -6,7 +6,6 @@ import (
"log"
"sort"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/configs/configload"
@ -18,25 +17,29 @@ import (
)
// backend.Local implementation.
func (b *Local) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) {
func (b *Local) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) {
// Make sure the type is invalid. We use this as a way to know not
// to ask for input/validate.
// to ask for input/validate. We're modifying this through a pointer,
// so we're mutating an object that belongs to the caller here, which
// seems bad but we're preserving it for now until we have time to
// properly design this API, vs. just preserving whatever it currently
// happens to do.
op.Type = backend.OperationTypeInvalid
op.StateLocker = op.StateLocker.WithContext(context.Background())
ctx, _, stateMgr, diags := b.context(op)
return ctx, stateMgr, diags
lr, _, stateMgr, diags := b.localRun(op)
return lr, stateMgr, diags
}
func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) {
func (b *Local) localRun(op *backend.Operation) (*backend.LocalRun, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
// Get the latest state.
log.Printf("[TRACE] backend/local: requesting state manager for workspace %q", op.Workspace)
s, err := b.StateMgr(op.Workspace)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, nil, diags
}
log.Printf("[TRACE] backend/local: requesting state lock for workspace %q", op.Workspace)
@ -54,35 +57,20 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
log.Printf("[TRACE] backend/local: reading remote state for workspace %q", op.Workspace)
if err := s.RefreshState(); err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, nil, diags
}
ret := &backend.LocalRun{}
// Initialize our context options
var opts terraform.ContextOpts
var coreOpts terraform.ContextOpts
if v := b.ContextOpts; v != nil {
opts = *v
coreOpts = *v
}
coreOpts.UIInput = op.UIIn
coreOpts.Hooks = op.Hooks
// Copy set options from the operation
opts.PlanMode = op.PlanMode
opts.Targets = op.Targets
opts.ForceReplace = op.ForceReplace
opts.UIInput = op.UIIn
opts.Hooks = op.Hooks
opts.SkipRefresh = op.Type != backend.OperationTypeRefresh && !op.PlanRefresh
if opts.SkipRefresh {
log.Printf("[DEBUG] backend/local: skipping refresh of managed resources")
}
// Load the latest state. If we enter contextFromPlanFile below then the
// state snapshot in the plan file must match this, or else it'll return
// error diagnostics.
log.Printf("[TRACE] backend/local: retrieving local state snapshot for workspace %q", op.Workspace)
opts.State = s.State()
var tfCtx *terraform.Context
var ctxDiags tfdiags.Diagnostics
var configSnap *configload.Snapshot
if op.PlanFile != nil {
@ -94,8 +82,8 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
m := sm.StateSnapshotMeta()
stateMeta = &m
}
log.Printf("[TRACE] backend/local: building context from plan file")
tfCtx, configSnap, ctxDiags = b.contextFromPlanFile(op.PlanFile, opts, stateMeta)
log.Printf("[TRACE] backend/local: populating backend.LocalRun from plan file")
ret, configSnap, ctxDiags = b.localRunForPlanFile(op.PlanFile, ret, &coreOpts, stateMeta)
if ctxDiags.HasErrors() {
diags = diags.Append(ctxDiags)
return nil, nil, nil, diags
@ -105,14 +93,13 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
// available if we need to generate diagnostic message snippets.
op.ConfigLoader.ImportSourcesFromSnapshot(configSnap)
} else {
log.Printf("[TRACE] backend/local: building context for current working directory")
tfCtx, configSnap, ctxDiags = b.contextDirect(op, opts)
log.Printf("[TRACE] backend/local: populating backend.LocalRun for current working directory")
ret, configSnap, ctxDiags = b.localRunDirect(op, ret, &coreOpts, s)
}
diags = diags.Append(ctxDiags)
if diags.HasErrors() {
return nil, nil, nil, diags
}
log.Printf("[TRACE] backend/local: finished building terraform.Context")
// If we have an operation, then we automatically do the input/validate
// here since every operation requires this.
@ -122,7 +109,7 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
mode := terraform.InputModeProvider
log.Printf("[TRACE] backend/local: requesting interactive input, if necessary")
inputDiags := tfCtx.Input(mode)
inputDiags := ret.Core.Input(ret.Config, mode)
diags = diags.Append(inputDiags)
if inputDiags.HasErrors() {
return nil, nil, nil, diags
@ -132,15 +119,15 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
// If validation is enabled, validate
if b.OpValidation {
log.Printf("[TRACE] backend/local: running validation operation")
validateDiags := tfCtx.Validate()
validateDiags := ret.Core.Validate(ret.Config)
diags = diags.Append(validateDiags)
}
}
return tfCtx, configSnap, s, diags
return ret, configSnap, s, diags
}
func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) {
func (b *Local) localRunDirect(op *backend.Operation, run *backend.LocalRun, coreOpts *terraform.ContextOpts, s statemgr.Full) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
// Load the configuration using the caller-provided configuration loader.
@ -149,7 +136,7 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
if configDiags.HasErrors() {
return nil, nil, diags
}
opts.Config = config
run.Config = config
var rawVariables map[string]backend.UnparsedVariableValue
if op.AllowUnsetVariables {
@ -163,7 +150,7 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
// values through interactive prompts.
// TODO: Need to route the operation context through into here, so that
// the interactive prompts can be sensitive to its timeouts/etc.
rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, opts.UIInput)
rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, op.UIIn)
}
variables, varDiags := backend.ParseVariableValues(rawVariables, config.Module.Variables)
@ -171,14 +158,30 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
if diags.HasErrors() {
return nil, nil, diags
}
opts.Variables = variables
tfCtx, ctxDiags := terraform.NewContext(&opts)
diags = diags.Append(ctxDiags)
return tfCtx, configSnap, diags
planOpts := &terraform.PlanOpts{
Mode: op.PlanMode,
Targets: op.Targets,
ForceReplace: op.ForceReplace,
SetVariables: variables,
SkipRefresh: op.Type != backend.OperationTypeRefresh && !op.PlanRefresh,
}
run.PlanOpts = planOpts
// For a "direct" local run, the input state is the most recently stored
// snapshot, from the previous run.
run.InputState = s.State()
tfCtx, moreDiags := terraform.NewContext(coreOpts)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, nil, diags
}
run.Core = tfCtx
return run, configSnap, diags
}
func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) {
func (b *Local) localRunForPlanFile(pf *planfile.Reader, run *backend.LocalRun, coreOpts *terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
const errSummary = "Invalid plan file"
@ -201,7 +204,7 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
if configDiags.HasErrors() {
return nil, snap, diags
}
opts.Config = config
run.Config = config
// A plan file also contains a snapshot of the prior state the changes
// are intended to apply to.
@ -230,11 +233,10 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
}
}
}
// The caller already wrote the "current state" here, but we're overriding
// it here with the prior state. These two should actually be identical in
// normal use, particularly if we validated the state meta above, but
// we do this here anyway to ensure consistent behavior.
opts.State = priorStateFile.State
// When we're applying a saved plan, the input state is the "prior state"
// recorded in the plan, which incorporates the result of all of the
// refreshing we did while building the plan.
run.InputState = priorStateFile.State
plan, err := pf.ReadPlan()
if err != nil {
@ -245,33 +247,23 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
))
return nil, snap, diags
}
// When we're applying a saved plan, we populate Plan instead of PlanOpts,
// because a plan object incorporates the subset of data from PlanOpts that
// we need to apply the plan.
run.Plan = plan
variables := terraform.InputValues{}
for name, dyVal := range plan.VariableValues {
val, err := dyVal.Decode(cty.DynamicPseudoType)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
errSummary,
fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err),
))
continue
}
// When we're applying a saved plan, our context must verify that all of
// the providers it ends up using are identical to those which created
// the plan.
coreOpts.ProviderSHA256s = plan.ProviderSHA256s
variables[name] = &terraform.InputValue{
Value: val,
SourceType: terraform.ValueFromPlan,
}
tfCtx, moreDiags := terraform.NewContext(coreOpts)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, nil, diags
}
opts.Variables = variables
opts.Changes = plan.Changes
opts.Targets = plan.TargetAddrs
opts.ForceReplace = plan.ForceReplaceAddrs
opts.ProviderSHA256s = plan.ProviderSHA256s
tfCtx, ctxDiags := terraform.NewContext(&opts)
diags = diags.Append(ctxDiags)
return tfCtx, snap, diags
run.Core = tfCtx
return run, snap, diags
}
// interactiveCollectVariables attempts to complete the given existing

View File

@ -1,6 +1,7 @@
package local
import (
"fmt"
"os"
"path/filepath"
"testing"
@ -10,16 +11,19 @@ import (
"github.com/hashicorp/terraform/internal/command/clistate"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/configs/configload"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/initwd"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/plans/planfile"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/states/statefile"
"github.com/hashicorp/terraform/internal/states/statemgr"
"github.com/hashicorp/terraform/internal/terminal"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
)
func TestLocalContext(t *testing.T) {
func TestLocalRun(t *testing.T) {
configDir := "./testdata/empty"
b, cleanup := TestLocal(t)
defer cleanup()
@ -38,20 +42,24 @@ func TestLocalContext(t *testing.T) {
StateLocker: stateLocker,
}
_, _, diags := b.Context(op)
_, _, diags := b.LocalRun(op)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err().Error())
}
// Context() retains a lock on success
// LocalRun() retains a lock on success
assertBackendStateLocked(t, b)
}
func TestLocalContext_error(t *testing.T) {
configDir := "./testdata/apply"
func TestLocalRun_error(t *testing.T) {
configDir := "./testdata/invalid"
b, cleanup := TestLocal(t)
defer cleanup()
// This backend will return an error when asked to RefreshState, which
// should then cause LocalRun to return with the state unlocked.
b.Backend = backendWithStateStorageThatFailsRefresh{}
_, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir)
defer configCleanup()
@ -66,16 +74,16 @@ func TestLocalContext_error(t *testing.T) {
StateLocker: stateLocker,
}
_, _, diags := b.Context(op)
_, _, diags := b.LocalRun(op)
if !diags.HasErrors() {
t.Fatal("unexpected success")
}
// Context() unlocks the state on failure
// LocalRun() unlocks the state on failure
assertBackendStateUnlocked(t, b)
}
func TestLocalContext_stalePlan(t *testing.T) {
func TestLocalRun_stalePlan(t *testing.T) {
configDir := "./testdata/apply"
b, cleanup := TestLocal(t)
defer cleanup()
@ -147,11 +155,76 @@ func TestLocalContext_stalePlan(t *testing.T) {
StateLocker: stateLocker,
}
_, _, diags := b.Context(op)
_, _, diags := b.LocalRun(op)
if !diags.HasErrors() {
t.Fatal("unexpected success")
}
// Context() unlocks the state on failure
// LocalRun() unlocks the state on failure
assertBackendStateUnlocked(t, b)
}
type backendWithStateStorageThatFailsRefresh struct {
}
var _ backend.Backend = backendWithStateStorageThatFailsRefresh{}
func (b backendWithStateStorageThatFailsRefresh) StateMgr(workspace string) (statemgr.Full, error) {
return &stateStorageThatFailsRefresh{}, nil
}
func (b backendWithStateStorageThatFailsRefresh) ConfigSchema() *configschema.Block {
return &configschema.Block{}
}
func (b backendWithStateStorageThatFailsRefresh) PrepareConfig(in cty.Value) (cty.Value, tfdiags.Diagnostics) {
return in, nil
}
func (b backendWithStateStorageThatFailsRefresh) Configure(cty.Value) tfdiags.Diagnostics {
return nil
}
func (b backendWithStateStorageThatFailsRefresh) DeleteWorkspace(name string) error {
return fmt.Errorf("unimplemented")
}
func (b backendWithStateStorageThatFailsRefresh) Workspaces() ([]string, error) {
return []string{"default"}, nil
}
type stateStorageThatFailsRefresh struct {
locked bool
}
func (s *stateStorageThatFailsRefresh) Lock(info *statemgr.LockInfo) (string, error) {
if s.locked {
return "", fmt.Errorf("already locked")
}
s.locked = true
return "locked", nil
}
func (s *stateStorageThatFailsRefresh) Unlock(id string) error {
if !s.locked {
return fmt.Errorf("not locked")
}
s.locked = false
return nil
}
func (s *stateStorageThatFailsRefresh) State() *states.State {
return nil
}
func (s *stateStorageThatFailsRefresh) WriteState(*states.State) error {
return fmt.Errorf("unimplemented")
}
func (s *stateStorageThatFailsRefresh) RefreshState() error {
return fmt.Errorf("intentionally failing for testing purposes")
}
func (s *stateStorageThatFailsRefresh) PersistState() error {
return fmt.Errorf("unimplemented")
}

View File

@ -54,7 +54,7 @@ func (b *Local) opPlan(
}
// Get our context
tfCtx, configSnap, opState, ctxDiags := b.context(op)
lr, configSnap, opState, ctxDiags := b.localRun(op)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
op.ReportResult(runningOp, diags)
@ -70,7 +70,9 @@ func (b *Local) opPlan(
}
}()
runningOp.State = tfCtx.State()
// Since planning doesn't immediately change the persisted state, the
// resulting state is always just the input state.
runningOp.State = lr.InputState
// Perform the plan in a goroutine so we can be interrupted
var plan *plans.Plan
@ -79,10 +81,10 @@ func (b *Local) opPlan(
go func() {
defer close(doneCh)
log.Printf("[INFO] backend/local: plan calling Plan")
plan, planDiags = tfCtx.Plan()
plan, planDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
}()
if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
// If we get in here then the operation was cancelled, which is always
// considered to be a failure.
log.Printf("[INFO] backend/local: plan operation was force-cancelled by interrupt")
@ -144,7 +146,13 @@ func (b *Local) opPlan(
}
// Render the plan
op.View.Plan(plan, tfCtx.Schemas())
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
op.ReportResult(runningOp, diags)
return
}
op.View.Plan(plan, schemas)
// If we've accumulated any warnings along the way then we'll show them
// here just before we show the summary and next steps. If we encountered

View File

@ -136,7 +136,7 @@ func TestLocal_plan_context_error(t *testing.T) {
// the backend should be unlocked after a run
assertBackendStateUnlocked(t, b)
if got, want := done(t).Stderr(), "Error: Could not load plugin"; !strings.Contains(got, want) {
if got, want := done(t).Stderr(), "Error: Failed to load plugin schemas"; !strings.Contains(got, want) {
t.Fatalf("unexpected error output:\n%s\nwant: %s", got, want)
}
}

View File

@ -6,7 +6,6 @@ import (
"log"
"os"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/states/statemgr"
@ -45,7 +44,7 @@ func (b *Local) opRefresh(
op.PlanRefresh = true
// Get our context
tfCtx, _, opState, contextDiags := b.context(op)
lr, _, opState, contextDiags := b.localRun(op)
diags = diags.Append(contextDiags)
if contextDiags.HasErrors() {
op.ReportResult(runningOp, diags)
@ -62,8 +61,9 @@ func (b *Local) opRefresh(
}
}()
// Set our state
runningOp.State = opState.State()
// If we succeed then we'll overwrite this with the resulting state below,
// but otherwise the resulting state is just the input state.
runningOp.State = lr.InputState
if !runningOp.State.HasResources() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
@ -78,11 +78,11 @@ func (b *Local) opRefresh(
doneCh := make(chan struct{})
go func() {
defer close(doneCh)
newState, refreshDiags = tfCtx.Refresh()
newState, refreshDiags = lr.Core.Refresh(lr.Config, lr.InputState, lr.PlanOpts)
log.Printf("[INFO] backend/local: refresh calling Refresh")
}()
if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
return
}
@ -96,7 +96,7 @@ func (b *Local) opRefresh(
err := statemgr.WriteAndPersist(opState, newState)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Failed to write state: {{err}}", err))
diags = diags.Append(fmt.Errorf("failed to write state: %w", err))
op.ReportResult(runningOp, diags)
return
}

View File

@ -178,9 +178,9 @@ type testDelegateBackend struct {
deleteErr bool
}
var errTestDelegateState = errors.New("State called")
var errTestDelegateStates = errors.New("States called")
var errTestDelegateDeleteState = errors.New("Delete called")
var errTestDelegateState = errors.New("state called")
var errTestDelegateStates = errors.New("states called")
var errTestDelegateDeleteState = errors.New("delete called")
func (b *testDelegateBackend) StateMgr(name string) (statemgr.Full, error) {
if b.stateErr {

View File

@ -0,0 +1,6 @@
# This configuration is intended to be loadable (valid syntax, etc) but to
# fail terraform.Context.Validate.
locals {
a = local.nonexist
}

View File

@ -91,6 +91,8 @@ type Remote struct {
}
var _ backend.Backend = (*Remote)(nil)
var _ backend.Enhanced = (*Remote)(nil)
var _ backend.Local = (*Remote)(nil)
// New creates a new initialized remote backend.
func New(services *disco.Disco) *Remote {

View File

@ -6,7 +6,6 @@ import (
"log"
"strings"
"github.com/hashicorp/errwrap"
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
@ -18,9 +17,15 @@ import (
"github.com/zclconf/go-cty/cty"
)
// Context implements backend.Enhanced.
func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) {
// Context implements backend.Local.
func (b *Remote) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
ret := &backend.LocalRun{
PlanOpts: &terraform.PlanOpts{
Mode: op.PlanMode,
Targets: op.Targets,
},
}
op.StateLocker = op.StateLocker.WithContext(context.Background())
@ -31,7 +36,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
log.Printf("[TRACE] backend/remote: requesting state manager for workspace %q", remoteWorkspaceName)
stateMgr, err := b.StateMgr(op.Workspace)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, diags
}
@ -50,7 +55,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", remoteWorkspaceName)
if err := stateMgr.RefreshState(); err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, diags
}
@ -61,15 +66,13 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
}
// Copy set options from the operation
opts.PlanMode = op.PlanMode
opts.Targets = op.Targets
opts.UIInput = op.UIIn
// Load the latest state. If we enter contextFromPlanFile below then the
// state snapshot in the plan file must match this, or else it'll return
// error diagnostics.
log.Printf("[TRACE] backend/remote: retrieving remote state snapshot for workspace %q", remoteWorkspaceName)
opts.State = stateMgr.State()
ret.InputState = stateMgr.State()
log.Printf("[TRACE] backend/remote: loading configuration for the current working directory")
config, configDiags := op.ConfigLoader.LoadConfig(op.ConfigDir)
@ -77,21 +80,21 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
if configDiags.HasErrors() {
return nil, nil, diags
}
opts.Config = config
ret.Config = config
// The underlying API expects us to use the opaque workspace id to request
// variables, so we'll need to look that up using our organization name
// and workspace name.
remoteWorkspaceID, err := b.getRemoteWorkspaceID(context.Background(), op.Workspace)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error finding remote workspace: {{err}}", err))
diags = diags.Append(fmt.Errorf("error finding remote workspace: %w", err))
return nil, nil, diags
}
log.Printf("[TRACE] backend/remote: retrieving variables from workspace %s/%s (%s)", remoteWorkspaceName, b.organization, remoteWorkspaceID)
tfeVariables, err := b.client.Variables.List(context.Background(), remoteWorkspaceID, tfe.VariableListOptions{})
if err != nil && err != tfe.ErrResourceNotFound {
diags = diags.Append(errwrap.Wrapf("Error loading variables: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading variables: %w", err))
return nil, nil, diags
}
@ -100,7 +103,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
// more lax about them, stubbing out any unset ones as unknown.
// This gives us enough information to produce a consistent context,
// but not enough information to run a real operation (plan, apply, etc)
opts.Variables = stubAllVariables(op.Variables, config.Module.Variables)
ret.PlanOpts.SetVariables = stubAllVariables(op.Variables, config.Module.Variables)
} else {
if tfeVariables != nil {
if op.Variables == nil {
@ -121,16 +124,17 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
if diags.HasErrors() {
return nil, nil, diags
}
opts.Variables = variables
ret.PlanOpts.SetVariables = variables
}
}
tfCtx, ctxDiags := terraform.NewContext(&opts)
diags = diags.Append(ctxDiags)
ret.Core = tfCtx
log.Printf("[TRACE] backend/remote: finished building terraform.Context")
return tfCtx, stateMgr, diags
return ret, stateMgr, diags
}
func (b *Remote) getRemoteWorkspaceName(localWorkspaceName string) string {

View File

@ -204,7 +204,7 @@ func TestRemoteContextWithVars(t *testing.T) {
}
b.client.Variables.Create(context.TODO(), workspaceID, *v)
_, _, diags := b.Context(op)
_, _, diags := b.LocalRun(op)
if test.WantError != "" {
if !diags.HasErrors() {

View File

@ -99,7 +99,7 @@ func (c *AddCommand) Run(rawArgs []string) int {
}
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
view.Diagnostics(diags)
@ -118,10 +118,10 @@ func (c *AddCommand) Run(rawArgs []string) int {
// already exist in the config.
var module *configs.Module
if args.Addr.Module.IsRoot() {
module = ctx.Config().Module
module = lr.Config.Module
} else {
// This is weird, but users can potentially specify non-existent module names
cfg := ctx.Config().Root.Descendent(args.Addr.Module.Module())
cfg := lr.Config.Root.Descendent(args.Addr.Module.Module())
if cfg != nil {
module = cfg.Module
}
@ -143,7 +143,12 @@ func (c *AddCommand) Run(rawArgs []string) int {
}
// Get the schemas from the context
schemas := ctx.Schemas()
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
view.Diagnostics(diags)
return 1
}
// Determine the correct provider config address. The provider-related
// variables may get updated below
@ -154,7 +159,6 @@ func (c *AddCommand) Run(rawArgs []string) int {
// If we are getting the values from state, get the AbsProviderConfig
// directly from state as well.
var resource *states.Resource
var moreDiags tfdiags.Diagnostics
if args.FromState {
resource, moreDiags = c.getResource(b, args.Addr.ContainingResource())
if moreDiags.HasErrors() {

View File

@ -9,6 +9,7 @@ import (
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/helper/wrappedstreams"
"github.com/hashicorp/terraform/internal/repl"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/mitchellh/cli"
@ -95,7 +96,7 @@ func (c *ConsoleCommand) Run(args []string) int {
}
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
@ -116,10 +117,18 @@ func (c *ConsoleCommand) Run(args []string) int {
ErrorWriter: wrappedstreams.Stderr(),
}
evalOpts := &terraform.EvalOpts{}
if lr.PlanOpts != nil {
// the LocalRun type is built primarily to support the main operations,
// so the variable values end up in the "PlanOpts" even though we're
// not actually making a plan.
evalOpts.SetVariables = lr.PlanOpts.SetVariables
}
// Before we can evaluate expressions, we must compute and populate any
// derived values (input variables, local values, output values)
// that are not stored in the persistent state.
scope, scopeDiags := ctx.Eval(addrs.RootModuleInstance)
scope, scopeDiags := lr.Core.Eval(lr.Config, lr.InputState, addrs.RootModuleInstance, evalOpts)
diags = diags.Append(scopeDiags)
if scope == nil {
// scope is nil if there are errors so bad that we can't even build a scope.

View File

@ -4,12 +4,12 @@ import (
"fmt"
"strings"
"github.com/hashicorp/terraform/internal/plans/planfile"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/dag"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/plans/planfile"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// GraphCommand is a Command implementation that takes a Terraform
@ -103,35 +103,64 @@ func (c *GraphCommand) Run(args []string) int {
}
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Determine the graph type
graphType := terraform.GraphTypePlan
if planFile != nil {
graphType = terraform.GraphTypeApply
if graphTypeStr == "" {
switch {
case lr.Plan != nil:
graphTypeStr = "apply"
default:
graphTypeStr = "plan"
}
}
if graphTypeStr != "" {
v, ok := terraform.GraphTypeMap[graphTypeStr]
if !ok {
c.Ui.Error(fmt.Sprintf("Invalid graph type requested: %s", graphTypeStr))
return 1
var g *terraform.Graph
var graphDiags tfdiags.Diagnostics
switch graphTypeStr {
case "plan":
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.NormalMode)
case "plan-refresh-only":
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.RefreshOnlyMode)
case "plan-destroy":
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.DestroyMode)
case "apply":
plan := lr.Plan
// Historically "terraform graph" would allow the nonsensical request to
// render an apply graph without a plan, so we continue to support that
// here, though perhaps one day this should be an error.
if lr.Plan == nil {
plan = &plans.Plan{
Changes: plans.NewChanges(),
UIMode: plans.NormalMode,
PriorState: lr.InputState,
PrevRunState: lr.InputState,
}
}
graphType = v
g, graphDiags = lr.Core.ApplyGraphForUI(plan, lr.Config)
case "eval", "validate":
// Terraform v0.12 through v1.0 supported both of these, but the
// graph variants for "eval" and "validate" are purely implementation
// details and don't reveal anything (user-model-wise) that you can't
// see in the plan graph.
graphDiags = graphDiags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Graph type no longer available",
fmt.Sprintf("The graph type %q is no longer available. Use -type=plan instead to get a similar result.", graphTypeStr),
))
default:
graphDiags = graphDiags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Unsupported graph type",
`The -type=... argument must be either "plan", "plan-refresh-only", "plan-destroy", or "apply".`,
))
}
// Skip validation during graph generation - we want to see the graph even if
// it is invalid for some reason.
g, graphDiags := ctx.Graph(graphType, &terraform.ContextGraphOpts{
Verbose: verbose,
Validate: false,
})
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
c.showDiagnostics(diags)
@ -165,19 +194,13 @@ func (c *GraphCommand) Help() string {
helpText := `
Usage: terraform [global options] graph [options]
Outputs the visual execution graph of Terraform resources according to
either the current configuration or an execution plan.
Produces a representation of the dependency graph between different
objects in the current configuration and state.
The graph is outputted in DOT format. The typical program that can
The graph is presented in the DOT language. The typical program that can
read this format is GraphViz, but many web services are also available
to read this format.
The -type flag can be used to control the type of graph shown. Terraform
creates different graphs for different operations. See the options below
for the list of types supported. The default type is "plan" if a
configuration is given, and "apply" if a plan file is passed as an
argument.
Options:
-plan=tfplan Render graph using the specified plan file instead of the
@ -186,8 +209,9 @@ Options:
-draw-cycles Highlight any cycles in the graph with colored edges.
This helps when diagnosing cycle errors.
-type=plan Type of graph to output. Can be: plan, plan-destroy, apply,
validate, input, refresh.
-type=plan Type of graph to output. Can be: plan, plan-refresh-only,
plan-destroy, or apply. By default Terraform chooses
"plan", or "apply" if you also set the -plan=... option.
-module-depth=n (deprecated) In prior versions of Terraform, specified the
depth of modules to show in the output.

View File

@ -212,7 +212,7 @@ func (c *ImportCommand) Run(args []string) int {
}
// Get the context
ctx, state, ctxDiags := local.Context(opReq)
lr, state, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
@ -230,13 +230,18 @@ func (c *ImportCommand) Run(args []string) int {
// Perform the import. Note that as you can see it is possible for this
// API to import more than one resource at once. For now, we only allow
// one while we stabilize this feature.
newState, importDiags := ctx.Import(&terraform.ImportOpts{
newState, importDiags := lr.Core.Import(lr.Config, lr.InputState, &terraform.ImportOpts{
Targets: []*terraform.ImportTarget{
&terraform.ImportTarget{
{
Addr: addr,
ID: args[1],
},
},
// The LocalRun idea is designed around our primary operations, so
// the input variables end up represented as plan options even though
// this particular operation isn't really a plan.
SetVariables: lr.PlanOpts.SetVariables,
})
diags = diags.Append(importDiags)
if diags.HasErrors() {

View File

@ -331,7 +331,7 @@ func TestImport_initializationErrorShouldUnlock(t *testing.T) {
}
// specifically, it should fail due to a missing provider
msg := ui.ErrorWriter.String()
msg := strings.ReplaceAll(ui.ErrorWriter.String(), "\n", " ")
if want := `unknown provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) {
t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg)
}

View File

@ -444,7 +444,6 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) {
var opts terraform.ContextOpts
opts.Targets = m.targets
opts.UIInput = m.UIInput()
opts.Parallelism = m.parallelism

View File

@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) {
t.Fatalf("expected error, got success")
}
got := output.Stderr()
if !strings.Contains(got, `Error: Could not load plugin`) {
if !strings.Contains(got, `Please run "terraform init".`) {
t.Fatal("wrong error message in output:", got)
}
}

View File

@ -89,14 +89,20 @@ func (c *ProvidersSchemaCommand) Run(args []string) int {
}
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
schemas := ctx.Schemas()
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
jsonSchemas, err := jsonprovider.Marshal(schemas)
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to marshal provider schemas to json: %s", err))

View File

@ -101,7 +101,7 @@ func (c *ShowCommand) Run(args []string) int {
}
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
@ -109,7 +109,12 @@ func (c *ShowCommand) Run(args []string) int {
}
// Get the schemas from the context
schemas := ctx.Schemas()
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
var planErr, stateErr error
var plan *plans.Plan
@ -148,7 +153,7 @@ func (c *ShowCommand) Run(args []string) int {
if plan != nil {
if jsonOutput {
config := ctx.Config()
config := lr.Config
jsonPlan, err := jsonplan.Marshal(config, plan, stateFile, schemas)
if err != nil {

View File

@ -82,14 +82,18 @@ func (c *StateShowCommand) Run(args []string) int {
}
// Get the context (required to get the schemas)
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
if ctxDiags.HasErrors() {
c.showDiagnostics(ctxDiags)
return 1
}
// Get the schemas from the context
schemas := ctx.Schemas()
schemas, diags := lr.Core.Schemas(lr.Config, lr.InputState)
if diags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Get the state
env, err := c.Workspace()

View File

@ -495,7 +495,16 @@ func (c *TestCommand) testSuiteProviders(suiteDirs testCommandSuiteDirs, testPro
return ret, diags
}
func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*terraform.Context, tfdiags.Diagnostics) {
type testSuiteRunContext struct {
Core *terraform.Context
PlanMode plans.Mode
Config *configs.Config
InputState *states.State
Changes *plans.Changes
}
func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*testSuiteRunContext, tfdiags.Diagnostics) {
var changes *plans.Changes
if plan != nil {
changes = plan.Changes
@ -506,8 +515,7 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF
planMode = plans.DestroyMode
}
return terraform.NewContext(&terraform.ContextOpts{
Config: suiteDirs.Config,
tfCtx, diags := terraform.NewContext(&terraform.ContextOpts{
Providers: providerFactories,
// We just use the provisioners from the main Meta here, because
@ -519,73 +527,83 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF
Meta: &terraform.ContextMeta{
Env: "test_" + suiteDirs.SuiteName,
},
State: state,
Changes: changes,
PlanMode: planMode,
})
if diags.HasErrors() {
return nil, diags
}
return &testSuiteRunContext{
Core: tfCtx,
PlanMode: planMode,
Config: suiteDirs.Config,
InputState: state,
Changes: changes,
}, diags
}
func (c *TestCommand) testSuitePlan(ctx context.Context, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*plans.Plan, tfdiags.Diagnostics) {
log.Printf("[TRACE] terraform test: create plan for suite %q", suiteDirs.SuiteName)
tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false)
runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false)
if diags.HasErrors() {
return nil, diags
}
// We'll also validate as part of planning, since the "terraform plan"
// command would typically do that and so inconsistencies we detect only
// during planning typically produce error messages saying that they are
// a bug in Terraform.
// (It's safe to use the same context for both validate and plan, because
// validate doesn't generate any new sticky content inside the context
// as plan and apply both do.)
moreDiags := tfCtx.Validate()
// We'll also validate as part of planning, to ensure that the test
// configuration would pass "terraform validate". This is actually
// largely redundant with the runCtx.Core.Plan call below, but is
// included here because Plan originally assumed that an earlier
// Validate had already passed; today Plan does its own validation
// work as (mostly) a superset of Validate.
moreDiags := runCtx.Core.Validate(runCtx.Config)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return nil, diags
}
plan, moreDiags := tfCtx.Plan()
plan, moreDiags := runCtx.Core.Plan(
runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode},
)
diags = diags.Append(moreDiags)
return plan, diags
}
func (c *TestCommand) testSuiteApply(ctx context.Context, plan *plans.Plan, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) {
log.Printf("[TRACE] terraform test: apply plan for suite %q", suiteDirs.SuiteName)
tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false)
runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false)
if diags.HasErrors() {
// To make things easier on the caller, we'll return a valid empty
// state even in this case.
return states.NewState(), diags
}
state, moreDiags := tfCtx.Apply()
state, moreDiags := runCtx.Core.Apply(plan, runCtx.Config)
diags = diags.Append(moreDiags)
return state, diags
}
func (c *TestCommand) testSuiteDestroy(ctx context.Context, state *states.State, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) {
log.Printf("[TRACE] terraform test: plan to destroy any existing objects for suite %q", suiteDirs.SuiteName)
tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true)
runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true)
if diags.HasErrors() {
return state, diags
}
plan, moreDiags := tfCtx.Plan()
plan, moreDiags := runCtx.Core.Plan(
runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode},
)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return state, diags
}
log.Printf("[TRACE] terraform test: apply the plan to destroy any existing objects for suite %q", suiteDirs.SuiteName)
tfCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true)
runCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return state, diags
}
state, moreDiags = tfCtx.Apply()
state, moreDiags = runCtx.Core.Apply(plan, runCtx.Config)
diags = diags.Append(moreDiags)
return state, diags
}

View File

@ -5,8 +5,6 @@ import (
"path/filepath"
"strings"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/command/arguments"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/terraform"
@ -73,31 +71,11 @@ func (c *ValidateCommand) validate(dir string) tfdiags.Diagnostics {
return diags
}
// "validate" is to check if the given module is valid regardless of
// input values, current state, etc. Therefore we populate all of the
// input values with unknown values of the expected type, allowing us
// to perform a type check without assuming any particular values.
varValues := make(terraform.InputValues)
for name, variable := range cfg.Module.Variables {
ty := variable.Type
if ty == cty.NilType {
// Can't predict the type at all, so we'll just mark it as
// cty.DynamicVal (unknown value of cty.DynamicPseudoType).
ty = cty.DynamicPseudoType
}
varValues[name] = &terraform.InputValue{
Value: cty.UnknownVal(ty),
SourceType: terraform.ValueFromCLIArg,
}
}
opts, err := c.contextOpts()
if err != nil {
diags = diags.Append(err)
return diags
}
opts.Config = cfg
opts.Variables = varValues
tfCtx, ctxDiags := terraform.NewContext(opts)
diags = diags.Append(ctxDiags)
@ -105,7 +83,7 @@ func (c *ValidateCommand) validate(dir string) tfdiags.Diagnostics {
return diags
}
validateDiags := tfCtx.Validate()
validateDiags := tfCtx.Validate(cfg)
diags = diags.Append(validateDiags)
return diags
}

View File

@ -204,17 +204,19 @@ func testSession(t *testing.T, test testSessionTest) {
// Build the TF context
ctx, diags := terraform.NewContext(&terraform.ContextOpts{
State: test.State,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): providers.FactoryFixed(p),
},
Config: config,
})
if diags.HasErrors() {
t.Fatalf("failed to create context: %s", diags.Err())
}
scope, diags := ctx.Eval(addrs.RootModuleInstance)
state := test.State
if state == nil {
state = states.NewState()
}
scope, diags := ctx.Eval(config, state, addrs.RootModuleInstance, &terraform.EvalOpts{})
if diags.HasErrors() {
t.Fatalf("failed to create scope: %s", diags.Err())
}

View File

@ -533,6 +533,16 @@ func (s *SyncState) Unlock() {
s.lock.Unlock()
}
// Close extracts the underlying state from inside this wrapper, making the
// wrapper invalid for any future operations.
func (s *SyncState) Close() *State {
s.lock.Lock()
ret := s.state
s.state = nil // make sure future operations can't still modify it
s.lock.Unlock()
return ret
}
// maybePruneModule will remove a module from the state altogether if it is
// empty, unless it's the root module which must always be present.
//

View File

@ -10,12 +10,8 @@ import (
"github.com/apparentlymart/go-versions/versions"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/instances"
"github.com/hashicorp/terraform/internal/lang"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/provisioners"
"github.com/hashicorp/terraform/internal/refactoring"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
@ -41,16 +37,7 @@ const (
// ContextOpts are the user-configurable options to create a context with
// NewContext.
type ContextOpts struct {
Config *configs.Config
Changes *plans.Changes
State *states.State
Targets []addrs.Targetable
ForceReplace []addrs.AbsResourceInstance
Variables InputValues
Meta *ContextMeta
PlanMode plans.Mode
SkipRefresh bool
Hooks []Hook
Parallelism int
Providers map[addrs.Provider]providers.Factory
@ -96,58 +83,18 @@ type ContextMeta struct {
// perform operations on infrastructure. This structure is built using
// NewContext.
type Context struct {
config *configs.Config
changes *plans.Changes
skipRefresh bool
targets []addrs.Targetable
forceReplace []addrs.AbsResourceInstance
variables InputValues
meta *ContextMeta
planMode plans.Mode
// meta captures some misc. information about the working directory where
// we're taking these actions, and thus which should remain steady between
// operations.
meta *ContextMeta
// state, refreshState, and prevRunState simultaneously track three
// different incarnations of the Terraform state:
//
// "state" is always the most "up-to-date". During planning it represents
// our best approximation of the planned new state, and during applying
// it represents the results of all of the actions we've taken so far.
//
// "refreshState" is populated and relevant only during planning, where we
// update it to reflect a provider's sense of the current state of the
// remote object each resource instance is bound to but don't include
// any changes implied by the configuration.
//
// "prevRunState" is similar to refreshState except that it doesn't even
// include the result of the provider's refresh step, and instead reflects
// the state as we found it prior to any changes, although it does reflect
// the result of running the provider's schema upgrade actions so that the
// resource instance objects will all conform to the _current_ resource
// type schemas if planning is successful, so that in that case it will
// be meaningful to compare prevRunState to refreshState to detect changes
// made outside of Terraform.
state *states.State
refreshState *states.State
prevRunState *states.State
components contextComponentFactory
dependencyLocks *depsfile.Locks
providersInDevelopment map[addrs.Provider]struct{}
// NOTE: If you're considering adding something new here, consider first
// whether it'd work to add it to type graphWalkOpts instead, possibly by
// adding new arguments to one of the exported operation methods, to scope
// it only to a particular operation rather than having it survive from one
// operation to the next as global mutable state.
//
// Historically we used fields here as a bit of a dumping ground for
// data that needed to ambiently pass between methods of Context, but
// that has tended to cause surprising misbehavior when data from one
// walk inadvertently bleeds into another walk against the same context.
// Perhaps one day we'll move changes, state, refreshState, and prevRunState
// to graphWalkOpts too. Ideally there shouldn't be anything in here which
// changes after NewContext returns.
hooks []Hook
components contextComponentFactory
schemas *Schemas
sh *stopHook
uiInput UIInput
hooks []Hook
sh *stopHook
uiInput UIInput
l sync.Mutex // Lock acquired during any task
parallelSem Semaphore
@ -168,14 +115,9 @@ type Context struct {
// If the returned diagnostics contains errors then the resulting context is
// invalid and must not be used.
func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
log.Printf("[TRACE] terraform.NewContext: starting")
diags := CheckCoreVersionRequirements(opts.Config)
// If version constraints are not met then we'll bail early since otherwise
// we're likely to just see a bunch of other errors related to
// incompatibilities, which could be overwhelming for the user.
if diags.HasErrors() {
return nil, diags
}
// Copy all the hooks and add our stop hook. We don't append directly
// to opts.Hooks so that we're not modifying the caller's slice in-place.
@ -184,11 +126,6 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) {
copy(hooks, opts.Hooks)
hooks[len(opts.Hooks)] = sh
state := opts.State
if state == nil {
state = states.NewState()
}
// Determine parallelism, default to 10. We do this both to limit
// CPU pressure but also to have an extra guard against rate throttling
// from providers.
@ -207,55 +144,47 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) {
par = 10
}
// Set up the variables in the following sequence:
// 0 - Take default values from the configuration
// 1 - Take values from TF_VAR_x environment variables
// 2 - Take values specified in -var flags, overriding values
// set by environment variables if necessary. This also includes
// values taken from -var-file.
var variables InputValues
if opts.Config != nil {
// Default variables from the configuration seed our map.
variables = DefaultVariableValues(opts.Config.Module.Variables)
}
// Variables provided by the caller (from CLI, environment, etc) can
// override the defaults.
variables = variables.Override(opts.Variables)
components := &basicComponentFactory{
providers: opts.Providers,
provisioners: opts.Provisioners,
}
log.Printf("[TRACE] terraform.NewContext: loading provider schemas")
schemas, err := LoadSchemas(opts.Config, opts.State, components)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Could not load plugin",
fmt.Sprintf(errPluginInit, err),
))
return nil, diags
}
log.Printf("[TRACE] terraform.NewContext: complete")
changes := opts.Changes
if changes == nil {
changes = plans.NewChanges()
}
return &Context{
hooks: hooks,
meta: opts.Meta,
uiInput: opts.UIInput,
config := opts.Config
if config == nil {
config = configs.NewEmptyConfig()
}
components: components,
dependencyLocks: opts.LockedDependencies,
providersInDevelopment: opts.ProvidersInDevelopment,
parallelSem: NewSemaphore(par),
providerInputConfig: make(map[string]map[string]cty.Value),
providerSHA256s: opts.ProviderSHA256s,
sh: sh,
}, diags
}
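With the slimmed-down ContextOpts above, constructing a context now involves only plugin-level settings; config, state, and plan data arrive later as method arguments. A hedged sketch of the new construction pattern (exampleFactory is a placeholder providers.Factory, not a real symbol):
ctx, diags := NewContext(&ContextOpts{
	Providers: map[addrs.Provider]providers.Factory{
		addrs.NewDefaultProvider("example"): exampleFactory, // hypothetical factory
	},
	Parallelism: 10,
})
if diags.HasErrors() {
	// the context is invalid and must not be used
}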
func (c *Context) Schemas(config *configs.Config, state *states.State) (*Schemas, tfdiags.Diagnostics) {
// TODO: This method gets called multiple times on the same context with
// the same inputs by different parts of Terraform that all need the
// schemas, and it's typically quite expensive because it has to spin up
// plugins to gather their schemas, so it'd be good to have some caching
// here to remember plugin schemas we already loaded since the plugin
// selections can't change during the life of a *Context object.
var diags tfdiags.Diagnostics
// If we have a configuration and a set of locked dependencies, verify that
// the provider requirements from the configuration can be satisfied by the
// locked dependencies.
if opts.LockedDependencies != nil {
if c.dependencyLocks != nil && config != nil {
reqs, providerDiags := config.ProviderRequirements()
diags = diags.Append(providerDiags)
locked := opts.LockedDependencies.AllProviders()
locked := c.dependencyLocks.AllProviders()
unmetReqs := make(getproviders.Requirements)
for provider, versionConstraints := range reqs {
// Builtin providers are not listed in the locks file
@ -263,7 +192,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) {
continue
}
// Development providers must be excluded from this check
if _, ok := opts.ProvidersInDevelopment[provider]; ok {
if _, ok := c.providersInDevelopment[provider]; ok {
continue
}
// If the required provider doesn't exist in the lock, or the
@ -292,81 +221,16 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) {
}
}
switch opts.PlanMode {
case plans.NormalMode, plans.DestroyMode:
// OK
case plans.RefreshOnlyMode:
if opts.SkipRefresh {
// The CLI layer (and other similar callers) should prevent this
// combination of options.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Incompatible plan options",
"Cannot skip refreshing in refresh-only mode. This is a bug in Terraform.",
))
return nil, diags
}
default:
// The CLI layer (and other similar callers) should not try to
// create a context for a mode that Terraform Core doesn't support.
ret, err := LoadSchemas(config, state, c.components)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Unsupported plan mode",
fmt.Sprintf("Terraform Core doesn't know how to handle plan mode %s. This is a bug in Terraform.", opts.PlanMode),
"Failed to load plugin schemas",
fmt.Sprintf("Error while loading schemas for plugin components: %s.", err),
))
return nil, diags
}
if len(opts.ForceReplace) > 0 && opts.PlanMode != plans.NormalMode {
// The other modes don't generate no-op or update actions that we might
// upgrade to be "replace", so doesn't make sense to combine those.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Unsupported plan mode",
fmt.Sprintf("Forcing resource instance replacement (with -replace=...) is allowed only in normal planning mode."),
))
return nil, diags
}
log.Printf("[TRACE] terraform.NewContext: complete")
// By the time we get here, we should have values defined for all of
// the root module variables, even if some of them are "unknown". It's the
// caller's responsibility to have already handled the decoding of these
// from the various ways the CLI allows them to be set and to produce
// user-friendly error messages if they are not all present, and so
// the error message from checkInputVariables should never be seen and
// includes language asking the user to report a bug.
if config != nil {
varDiags := checkInputVariables(config.Module.Variables, variables)
diags = diags.Append(varDiags)
}
return &Context{
components: components,
schemas: schemas,
planMode: opts.PlanMode,
changes: changes,
hooks: hooks,
meta: opts.Meta,
config: config,
state: state,
refreshState: state.DeepCopy(),
prevRunState: state.DeepCopy(),
skipRefresh: opts.SkipRefresh,
targets: opts.Targets,
forceReplace: opts.ForceReplace,
uiInput: opts.UIInput,
variables: variables,
parallelSem: NewSemaphore(par),
providerInputConfig: make(map[string]map[string]cty.Value),
providerSHA256s: opts.ProviderSHA256s,
sh: sh,
}, diags
}
func (c *Context) Schemas() *Schemas {
return c.schemas
return ret, diags
}
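As the TODO above notes, plugin selections can't change during the life of a *Context, so one plausible future shape is a per-context memoization of the loaded schemas. This is a speculative sketch only, assuming the package's existing sync import; it also glosses over the fact that different config/state pairs could in principle demand different schema sets, which a real implementation would have to address:
// schemaCache is a hypothetical helper, not part of this commit.
type schemaCache struct {
	mu      sync.Mutex
	schemas *Schemas
}

func (sc *schemaCache) get(load func() (*Schemas, tfdiags.Diagnostics)) (*Schemas, tfdiags.Diagnostics) {
	sc.mu.Lock()
	defer sc.mu.Unlock()
	if sc.schemas != nil {
		return sc.schemas, nil // reuse schemas loaded by an earlier operation
	}
	ret, diags := load()
	if !diags.HasErrors() {
		sc.schemas = ret
	}
	return ret, diags
}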
type ContextGraphOpts struct {
@ -377,510 +241,6 @@ type ContextGraphOpts struct {
Verbose bool
}
// Graph returns the graph used for the given operation type.
//
// The most extensive or complex graph type is GraphTypePlan.
func (c *Context) Graph(typ GraphType, opts *ContextGraphOpts) (*Graph, tfdiags.Diagnostics) {
if opts == nil {
opts = &ContextGraphOpts{Validate: true}
}
log.Printf("[INFO] terraform: building graph: %s", typ)
switch typ {
case GraphTypeApply:
return (&ApplyGraphBuilder{
Config: c.config,
Changes: c.changes,
State: c.state,
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
ForceReplace: c.forceReplace,
Validate: opts.Validate,
}).Build(addrs.RootModuleInstance)
case GraphTypeValidate:
// The validate graph is just a slightly modified plan graph: an empty
// state is substituted in for Validate.
return ValidateGraphBuilder(&PlanGraphBuilder{
Config: c.config,
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
Validate: opts.Validate,
State: states.NewState(),
}).Build(addrs.RootModuleInstance)
case GraphTypePlan:
// Create the plan graph builder
return (&PlanGraphBuilder{
Config: c.config,
State: c.state,
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
ForceReplace: c.forceReplace,
Validate: opts.Validate,
skipRefresh: c.skipRefresh,
}).Build(addrs.RootModuleInstance)
case GraphTypePlanDestroy:
return (&DestroyPlanGraphBuilder{
Config: c.config,
State: c.state,
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
Validate: opts.Validate,
skipRefresh: c.skipRefresh,
}).Build(addrs.RootModuleInstance)
case GraphTypePlanRefreshOnly:
// Create the plan graph builder, with skipPlanChanges set to
// activate the "refresh only" mode.
return (&PlanGraphBuilder{
Config: c.config,
State: c.state,
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
Validate: opts.Validate,
skipRefresh: c.skipRefresh,
skipPlanChanges: true, // this activates "refresh only" mode.
}).Build(addrs.RootModuleInstance)
case GraphTypeEval:
return (&EvalGraphBuilder{
Config: c.config,
State: c.state,
Components: c.components,
Schemas: c.schemas,
}).Build(addrs.RootModuleInstance)
default:
// Should never happen, because the above is exhaustive for all graph types.
panic(fmt.Errorf("unsupported graph type %s", typ))
}
}
// State returns a copy of the current state associated with this context.
//
// This cannot safely be called in parallel with any other Context function.
func (c *Context) State() *states.State {
return c.state.DeepCopy()
}
// Eval produces a scope in which expressions can be evaluated for
// the given module path.
//
// This method must first evaluate any ephemeral values (input variables, local
// values, and output values) in the configuration. These ephemeral values are
// not included in the persisted state, so they must be re-computed using other
// values in the state before they can be properly evaluated. The updated
// values are retained in the main state associated with the receiving context.
//
// This function takes no action against remote APIs but it does need access
// to all provider and provisioner instances in order to obtain their schemas
// for type checking.
//
// The result is an evaluation scope that can be used to resolve references
// against the root module. If the returned diagnostics contains errors then
// the returned scope may be nil. If it is not nil then it may still be used
// to attempt expression evaluation or other analysis, but some expressions
// may not behave as expected.
func (c *Context) Eval(path addrs.ModuleInstance) (*lang.Scope, tfdiags.Diagnostics) {
// This is intended for external callers such as the "terraform console"
// command. Internally, we create an evaluator in c.walk before walking
// the graph, and create scopes in ContextGraphWalker.
var diags tfdiags.Diagnostics
defer c.acquireRun("eval")()
// Start with a copy of state so that we don't affect any instances
// that other methods may have already returned.
c.state = c.state.DeepCopy()
var walker *ContextGraphWalker
graph, graphDiags := c.Graph(GraphTypeEval, nil)
diags = diags.Append(graphDiags)
if !diags.HasErrors() {
var walkDiags tfdiags.Diagnostics
walker, walkDiags = c.walk(graph, walkEval, &graphWalkOpts{})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
}
if walker == nil {
// If we skipped walking the graph (due to errors) then we'll just
// use a placeholder graph walker here, which'll refer to the
// unmodified state.
walker = c.graphWalker(walkEval, &graphWalkOpts{})
}
// This is a bit weird since we don't normally evaluate outside of
// the context of a walk, but we'll "re-enter" our desired path here
// just to get hold of an EvalContext for it. GraphContextBuiltin
// caches its contexts, so we should get hold of the context that was
// previously used for evaluation here, unless we skipped walking.
evalCtx := walker.EnterPath(path)
return evalCtx.EvaluationScope(nil, EvalDataForNoInstanceKey), diags
}
// Apply applies the changes represented by this context and returns
// the resulting state.
//
// Even in the case an error is returned, the state may be returned and will
// potentially be partially updated. In addition to returning the resulting
// state, this context is updated with the latest state.
//
// If the state is required after an error, the caller should call
// Context.State, rather than rely on the return value.
//
// TODO: Apply and Refresh should either always return a state, or rely on the
// State() method. Currently the helper/resource testing framework relies
// on the absence of a returned state to determine if Destroy can be
// called, so that will need to be refactored before this can be changed.
func (c *Context) Apply() (*states.State, tfdiags.Diagnostics) {
defer c.acquireRun("apply")()
// Copy our own state
c.state = c.state.DeepCopy()
// Build the graph.
graph, diags := c.Graph(GraphTypeApply, nil)
if diags.HasErrors() {
return nil, diags
}
// Determine the operation
operation := walkApply
if c.planMode == plans.DestroyMode {
operation = walkDestroy
}
// Walk the graph
walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if c.planMode == plans.DestroyMode && !diags.HasErrors() {
// If we know we were trying to destroy objects anyway, and we
// completed without any errors, then we'll also prune out any
// leftover empty resource husks (left after all of the instances
// of a resource with "count" or "for_each" are destroyed) to
// help ensure we end up with an _actually_ empty state, assuming
// we weren't destroying with -target here.
//
// (This doesn't actually take into account -target, but that should
// be okay because it doesn't throw away anything we can't recompute
// on a subsequent "terraform plan" run, if the resources are still
// present in the configuration. However, this _will_ cause "count = 0"
// resources to read as unknown during the next refresh walk, which
// may cause some additional churn if used in a data resource or
// provider block, until we remove refreshing as a separate walk and
// just do it as part of the plan walk.)
c.state.PruneResourceHusks()
}
if len(c.targets) > 0 {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Applied changes may be incomplete",
`The plan was created with the -target option in effect, so some changes requested in the configuration may have been ignored and the output values may not be fully updated. Run the following command to verify that no other changes are pending:
terraform plan
Note that the -target option is not suitable for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`,
))
}
// This isn't technically needed, but don't leave an old refreshed state
// around in case we re-use the context in internal tests.
c.refreshState = c.state.DeepCopy()
return c.state, diags
}
// Plan generates an execution plan for the given context and returns the
// resulting plan.
//
// The execution plan encapsulates the context and can be stored
// in order to reinstantiate a context later for Apply.
//
// Plan also updates the diff of this context to be the diff generated
// by the plan, so Apply can be called after.
func (c *Context) Plan() (*plans.Plan, tfdiags.Diagnostics) {
defer c.acquireRun("plan")()
c.changes = plans.NewChanges()
var diags tfdiags.Diagnostics
if len(c.targets) > 0 {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Resource targeting is in effect",
`You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration.
The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`,
))
}
var plan *plans.Plan
var planDiags tfdiags.Diagnostics
switch c.planMode {
case plans.NormalMode:
plan, planDiags = c.plan()
case plans.DestroyMode:
plan, planDiags = c.destroyPlan()
case plans.RefreshOnlyMode:
plan, planDiags = c.refreshOnlyPlan()
default:
panic(fmt.Sprintf("unsupported plan mode %s", c.planMode))
}
diags = diags.Append(planDiags)
if diags.HasErrors() {
return nil, diags
}
// convert the variables into the format expected for the plan
varVals := make(map[string]plans.DynamicValue, len(c.variables))
for k, iv := range c.variables {
// We use cty.DynamicPseudoType here so that we'll save both the
// value _and_ its dynamic type in the plan, so we can recover
// exactly the same value later.
dv, err := plans.NewDynamicValue(iv.Value, cty.DynamicPseudoType)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to prepare variable value for plan",
fmt.Sprintf("The value for variable %q could not be serialized to store in the plan: %s.", k, err),
))
continue
}
varVals[k] = dv
}
// insert the run-specific data from the context into the plan; variables,
// targets and provider SHAs.
plan.VariableValues = varVals
plan.TargetAddrs = c.targets
plan.ProviderSHA256s = c.providerSHA256s
return plan, diags
}
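The variable serialization above works because plans.DynamicValue records both the value and its type when encoded against cty.DynamicPseudoType. A small round-trip sketch using the same calls as the code above:
dv, err := plans.NewDynamicValue(cty.StringVal("example"), cty.DynamicPseudoType)
if err != nil {
	// handle the serialization error
}
// Decoding against cty.DynamicPseudoType recovers the value with its exact
// original type, as relied upon when reading variables back out of a plan.
v, err := dv.Decode(cty.DynamicPseudoType)
_ = v // cty.StringVal("example")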
func (c *Context) plan() (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
moveStmts, moveResults := c.prePlanFindAndApplyMoves()
graph, graphDiags := c.Graph(GraphTypePlan, nil)
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
return nil, diags
}
// Do the walk
walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{
MoveResults: moveResults,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return nil, diags
}
plan := &plans.Plan{
UIMode: plans.NormalMode,
Changes: c.changes,
ForceReplaceAddrs: c.forceReplace,
PrevRunState: c.prevRunState.DeepCopy(),
}
c.refreshState.SyncWrapper().RemovePlannedResourceInstanceObjects()
refreshedState := c.refreshState.DeepCopy()
plan.PriorState = refreshedState
// replace the working state with the updated state, so that immediate calls
// to Apply work as expected.
c.state = refreshedState
// TODO: Record the move results in the plan
diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances()))
return plan, diags
}
func (c *Context) destroyPlan() (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
destroyPlan := &plans.Plan{
PriorState: c.state.DeepCopy(),
}
c.changes = plans.NewChanges()
moveStmts, moveResults := c.prePlanFindAndApplyMoves()
// A destroy plan starts by running Refresh to read any pending data
// sources, and remove missing managed resources. This is required because
// a "destroy plan" is only creating delete changes, and is essentially a
// local operation.
//
// NOTE: if skipRefresh _is_ set then we'll rely on the destroy-plan walk
// below to upgrade the prevRunState and priorState both to the latest
// resource type schemas, so NodePlanDestroyableResourceInstance.Execute
// must coordinate with this by taking that action only when c.skipRefresh
// _is_ set. This coupling between the two is unfortunate but necessary
// to work within our current structure.
if !c.skipRefresh {
refreshPlan, refreshDiags := c.plan()
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
return nil, diags
}
// insert the refreshed state into the destroy plan result, and discard
// the changes recorded from the refresh.
destroyPlan.PriorState = refreshPlan.PriorState.DeepCopy()
destroyPlan.PrevRunState = refreshPlan.PrevRunState.DeepCopy()
c.changes = plans.NewChanges()
}
graph, graphDiags := c.Graph(GraphTypePlanDestroy, nil)
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
return nil, diags
}
// Do the walk
walker, walkDiags := c.walk(graph, walkPlanDestroy, &graphWalkOpts{
MoveResults: moveResults,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return nil, diags
}
if c.skipRefresh {
// If we didn't do refreshing then both the previous run state and
// the prior state are the result of upgrading the previous run state,
// which we should've upgraded as part of the plan-destroy walk
// in NodePlanDestroyableResourceInstance.Execute, so they'll have the
// current schema but neither will reflect any out-of-band changes in
// the remote system.
destroyPlan.PrevRunState = c.prevRunState.DeepCopy()
destroyPlan.PriorState = c.prevRunState.DeepCopy()
}
destroyPlan.UIMode = plans.DestroyMode
destroyPlan.Changes = c.changes
// TODO: Record the move results in the plan
diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances()))
return destroyPlan, diags
}
func (c *Context) refreshOnlyPlan() (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
moveStmts, moveResults := c.prePlanFindAndApplyMoves()
graph, graphDiags := c.Graph(GraphTypePlanRefreshOnly, nil)
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
return nil, diags
}
// Do the walk
walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{
MoveResults: moveResults,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return nil, diags
}
plan := &plans.Plan{
UIMode: plans.RefreshOnlyMode,
Changes: c.changes,
PrevRunState: c.prevRunState.DeepCopy(),
}
// If the graph builder and graph nodes correctly obeyed our directive
// to refresh only, the set of resource changes should always be empty.
// We'll safety-check that here so we can return a clear message about it,
// rather than probably just generating confusing output at the UI layer.
if len(plan.Changes.Resources) != 0 {
// Some extra context in the logs in case the user reports this message
// as a bug, as a starting point for debugging.
for _, rc := range plan.Changes.Resources {
if depKey := rc.DeposedKey; depKey == states.NotDeposed {
log.Printf("[DEBUG] Refresh-only plan includes %s change for %s", rc.Action, rc.Addr)
} else {
log.Printf("[DEBUG] Refresh-only plan includes %s change for %s deposed object %s", rc.Action, rc.Addr, depKey)
}
}
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid refresh-only plan",
"Terraform generated planned resource changes in a refresh-only plan. This is a bug in Terraform.",
))
}
c.refreshState.SyncWrapper().RemovePlannedResourceInstanceObjects()
refreshedState := c.refreshState
plan.PriorState = refreshedState.DeepCopy()
// replace the working state with the updated state, so that immediate calls
// to Apply work as expected. We deep-copied into plan.PriorState above so
// that such an apply cannot mutate the prior state recorded in the plan.
c.state = refreshedState
// TODO: Record the move results in the plan
diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances()))
return plan, diags
}
func (c *Context) prePlanFindAndApplyMoves() ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) {
moveStmts := refactoring.FindMoveStatements(c.config)
moveResults := refactoring.ApplyMoves(moveStmts, c.prevRunState)
if len(c.targets) > 0 {
for _, result := range moveResults {
matchesTarget := false
for _, targetAddr := range c.targets {
if targetAddr.TargetContains(result.From) {
matchesTarget = true
break
}
}
if !matchesTarget {
// TODO: Return an error stating that a targeted plan is
// only valid if it includes this address that was moved.
}
}
}
return moveStmts, moveResults
}
func (c *Context) postPlanValidateMoves(stmts []refactoring.MoveStatement, allInsts instances.Set) tfdiags.Diagnostics {
return refactoring.ValidateMoves(stmts, c.config, allInsts)
}
// Refresh goes through all the resources in the state and refreshes them
// to their latest state. This is done by executing a plan, and retaining the
// state while discarding the change set.
//
// In the case of an error, there is no state returned.
func (c *Context) Refresh() (*states.State, tfdiags.Diagnostics) {
p, diags := c.Plan()
if diags.HasErrors() {
return nil, diags
}
return p.PriorState, diags
}
// Stop stops the running task.
//
// Stop will block until the task completes.
@ -911,63 +271,6 @@ func (c *Context) Stop() {
log.Printf("[WARN] terraform: stop complete")
}
// Validate performs semantic validation of the configuration, and returns
// any warnings or errors.
//
// Syntax and structural checks are performed by the configuration loader,
// and so are not repeated here.
func (c *Context) Validate() tfdiags.Diagnostics {
defer c.acquireRun("validate")()
var diags tfdiags.Diagnostics
// If we have errors at this point then we probably won't be able to
// construct a graph without producing redundant errors, so we'll halt early.
if diags.HasErrors() {
return diags
}
// Build the graph so we can walk it and run Validate on nodes.
// We also validate the graph generated here, but this graph doesn't
// necessarily match the graph that Plan will generate, so we'll validate the
// graph again later after Planning.
graph, graphDiags := c.Graph(GraphTypeValidate, nil)
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
return diags
}
// Walk
walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return diags
}
return diags
}
// Config returns the configuration tree associated with this context.
func (c *Context) Config() *configs.Config {
return c.config
}
// Variables will return the mapping of variables that were defined
// for this Context. If Input was called, this mapping may be different
// than what was given.
func (c *Context) Variables() InputValues {
return c.variables
}
// SetVariable sets a variable after a context has already been built.
func (c *Context) SetVariable(k string, v cty.Value) {
c.variables[k] = &InputValue{
Value: v,
SourceType: ValueFromCaller,
}
}
func (c *Context) acquireRun(phase string) func() {
// With the run lock held, grab the context lock to make changes
// to the run context.
@ -1011,70 +314,6 @@ func (c *Context) releaseRun() {
c.runContext = nil
}
// graphWalkOpts is an assortment of options and inputs we need when
// constructing a graph walker.
type graphWalkOpts struct {
// MoveResults is a table of the results of applying move statements prior
// to a plan walk. Irrelevant and totally ignored for non-plan walks.
MoveResults map[addrs.UniqueKey]refactoring.MoveResult
}
func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) {
log.Printf("[DEBUG] Starting graph walk: %s", operation.String())
walker := c.graphWalker(operation, opts)
// Watch for a stop so we can call the provider Stop() API.
watchStop, watchWait := c.watchStop(walker)
// Walk the real graph, this will block until it completes
diags := graph.Walk(walker)
// Close the channel so the watcher stops, and wait for it to return.
close(watchStop)
<-watchWait
return walker, diags
}
func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *ContextGraphWalker {
var state *states.SyncState
var refreshState *states.SyncState
var prevRunState *states.SyncState
switch operation {
case walkValidate:
// validate should not use any state
state = states.NewState().SyncWrapper()
// validate currently uses the plan graph, so we have to populate the
// refreshState and the prevRunState.
refreshState = states.NewState().SyncWrapper()
prevRunState = states.NewState().SyncWrapper()
case walkPlan, walkPlanDestroy:
state = c.state.SyncWrapper()
refreshState = c.refreshState.SyncWrapper()
prevRunState = c.prevRunState.SyncWrapper()
default:
state = c.state.SyncWrapper()
}
return &ContextGraphWalker{
Context: c,
State: state,
RefreshState: refreshState,
PrevRunState: prevRunState,
Changes: c.changes.SyncWrapper(),
InstanceExpander: instances.NewExpander(),
MoveResults: opts.MoveResults,
Operation: operation,
StopContext: c.runContext,
RootVariableValues: c.variables,
}
}
// watchStop immediately returns a `stop` and a `wait` chan after dispatching
// the watchStop goroutine. This will watch the runContext for cancellation and
// stop the providers accordingly. When the watch is no longer needed, the

View File

@ -0,0 +1,142 @@
package terraform
import (
"fmt"
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
)
// Apply performs the actions described by the given Plan object and returns
// the resulting updated state.
//
// The given configuration *must* be the same configuration that was passed
// earlier to Context.Plan in order to create this plan.
//
// Even if the returned diagnostics contains errors, Apply always returns the
// resulting state which is likely to have been partially-updated.
func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State, tfdiags.Diagnostics) {
defer c.acquireRun("apply")()
var diags tfdiags.Diagnostics
schemas, moreDiags := c.Schemas(config, plan.PriorState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, diags
}
log.Printf("[DEBUG] Building and walking apply graph for %s plan", plan.UIMode)
graph, operation, moreDiags := c.applyGraph(plan, config, schemas, true)
if moreDiags.HasErrors() {
return nil, diags
}
variables := InputValues{}
for name, dyVal := range plan.VariableValues {
val, err := dyVal.Decode(cty.DynamicPseudoType)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid variable value in plan",
fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err),
))
continue
}
variables[name] = &InputValue{
Value: val,
SourceType: ValueFromPlan,
}
}
workingState := plan.PriorState.DeepCopy()
walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{
Config: config,
Schemas: schemas,
InputState: workingState,
Changes: plan.Changes,
RootVariableValues: variables,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
newState := walker.State.Close()
if plan.UIMode == plans.DestroyMode && !diags.HasErrors() {
// NOTE: This is a vestigial violation of the rule that we mustn't
// use plan.UIMode to affect apply-time behavior.
// We ideally ought to just call newState.PruneResourceHusks
// unconditionally here, but we historically didn't and haven't yet
// verified that it'd be safe to do so.
newState.PruneResourceHusks()
}
if len(plan.TargetAddrs) > 0 {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Applied changes may be incomplete",
`The plan was created with the -target option in effect, so some changes requested in the configuration may have been ignored and the output values may not be fully updated. Run the following command to verify that no other changes are pending:
terraform plan
Note that the -target option is not suitable for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`,
))
}
return newState, diags
}
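Taken together with the new Plan signature used in the updated tests below, the caller now owns the data flow between steps rather than the Context carrying it implicitly. A hedged end-to-end sketch, with config loading and error reporting elided:
plan, diags := ctx.Plan(config, prevRunState, DefaultPlanOpts)
if diags.HasErrors() {
	// report and stop; there is no plan to apply
}
// The same config must be passed back in; the plan carries the prior state.
newState, diags := ctx.Apply(plan, config)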
func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) {
graph, diags := (&ApplyGraphBuilder{
Config: config,
Changes: plan.Changes,
State: plan.PriorState,
Components: c.components,
Schemas: schemas,
Targets: plan.TargetAddrs,
ForceReplace: plan.ForceReplaceAddrs,
Validate: validate,
}).Build(addrs.RootModuleInstance)
operation := walkApply
if plan.UIMode == plans.DestroyMode {
// NOTE: This is a vestigial violation of the rule that we mustn't
// use plan.UIMode to affect apply-time behavior. It's a design error
// if anything downstream switches behavior when operation is set
// to walkDestroy, but we've not yet fully audited that.
// TODO: Audit that and remove walkDestroy as an operation mode.
operation = walkDestroy
}
return graph, operation, diags
}
// ApplyGraphForUI is a last vestige of graphs in the public interface of
// Context (as opposed to graphs as an implementation detail) intended only for
// use by the "terraform graph" command when asked to render an apply-time
// graph.
//
// The result of this is intended only for rendering to the user as a dot
// graph, and so may change in future in order to make the result more useful
// in that context, even if it drifts away from the physical graph that Terraform
// Core currently uses as an implementation detail of planning.
func (c *Context) ApplyGraphForUI(plan *plans.Plan, config *configs.Config) (*Graph, tfdiags.Diagnostics) {
// For now though, this really is just the internal graph, confusing
// implementation details and all.
var diags tfdiags.Diagnostics
schemas, moreDiags := c.Schemas(config, plan.PriorState)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return nil, diags
}
graph, _, moreDiags := c.applyGraph(plan, config, schemas, false)
diags = diags.Append(moreDiags)
return graph, diags
}
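A sketch of how a UI-layer caller such as the "terraform graph" command might consume this; rendering via the graph's String method is an assumption about the underlying dag package rather than something this diff establishes:
graph, diags := ctx.ApplyGraphForUI(plan, config)
if diags.HasErrors() {
	// report errors; the graph may be nil
}
fmt.Println(graph.String()) // assumed textual rendering for UI/debug output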

View File

@ -47,21 +47,20 @@ func TestContext2Apply_createBeforeDestroy_deposedKeyPreApply(t *testing.T) {
hook := new(MockHook)
ctx := testContext2(t, &ContextOpts{
Config: m,
Hooks: []Hook{hook},
Hooks: []Hook{hook},
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
if p, diags := ctx.Plan(); diags.HasErrors() {
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
if diags.HasErrors() {
t.Fatalf("diags: %s", diags.Err())
} else {
t.Logf(legacyDiffComparisonString(p.Changes))
t.Logf(legacyDiffComparisonString(plan.Changes))
}
state, diags := ctx.Apply()
_, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatalf("diags: %s", diags.Err())
}
@ -145,28 +144,27 @@ output "data" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: ps,
})
_, diags := ctx.Plan()
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
_, diags = ctx.Apply()
_, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
// now destroy the whole thing
ctx = testContext2(t, &ContextOpts{
Config: m,
Providers: ps,
PlanMode: plans.DestroyMode,
})
_, diags = ctx.Plan()
plan, diags = ctx.Plan(m, states.NewState(), &PlanOpts{
Mode: plans.DestroyMode,
})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -177,7 +175,7 @@ output "data" {
return resp
}
_, diags = ctx.Apply()
_, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -231,18 +229,15 @@ resource "test_instance" "a" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
_, diags := ctx.Apply()
_, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -322,17 +317,13 @@ resource "aws_instance" "bin" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
bar := plan.PriorState.ResourceInstance(barAddr)
if len(bar.Current.Dependencies) == 0 || !bar.Current.Dependencies[0].Equal(fooAddr.ContainingResource().Config()) {
@ -354,7 +345,7 @@ resource "aws_instance" "bin" {
t.Fatalf("baz should depend on bam after refresh, but got %s", baz.Current.Dependencies)
}
state, diags = ctx.Apply()
state, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -430,19 +421,15 @@ resource "test_resource" "b" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
_, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
_, diags = ctx.Apply()
_, diags = ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
@ -475,18 +462,15 @@ output "out" {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
_, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
state, diags := ctx.Apply()
state, diags := ctx.Apply(plan, m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
@ -496,10 +480,8 @@ output "out" {
t.Fatalf("Expected 1 sensitive mark for test_object.a, got %#v\n", obj.Current.AttrSensitivePaths)
}
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags = ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
// make sure the same marks are compared in the next plan as well
for _, c := range plan.Changes.Resources {
@ -543,27 +525,20 @@ resource "test_object" "y" {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
_, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
_, diags = ctx.Apply()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
state, diags := ctx.Apply(plan, m)
assertNoErrors(t, diags)
// FINAL PLAN:
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags = ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
// make sure the same marks are compared in the next plan as well
for _, c := range plan.Changes.Resources {

File diff suppressed because it is too large

View File

@ -0,0 +1,104 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/lang"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
type EvalOpts struct {
SetVariables InputValues
}
// Eval produces a scope in which expressions can be evaluated for
// the given module path.
//
// This method must first evaluate any ephemeral values (input variables, local
// values, and output values) in the configuration. These ephemeral values are
// not included in the persisted state, so they must be re-computed using other
// values in the state before they can be properly evaluated. The updated
// values are retained in the main state associated with the receiving context.
//
// This function takes no action against remote APIs but it does need access
// to all provider and provisioner instances in order to obtain their schemas
// for type checking.
//
// The result is an evaluation scope that can be used to resolve references
// against the root module. If the returned diagnostics contains errors then
// the returned scope may be nil. If it is not nil then it may still be used
// to attempt expression evaluation or other analysis, but some expressions
// may not behave as expected.
func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr addrs.ModuleInstance, opts *EvalOpts) (*lang.Scope, tfdiags.Diagnostics) {
// This is intended for external callers such as the "terraform console"
// command. Internally, we create an evaluator in c.walk before walking
// the graph, and create scopes in ContextGraphWalker.
var diags tfdiags.Diagnostics
defer c.acquireRun("eval")()
schemas, moreDiags := c.Schemas(config, state)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, diags
}
// Start with a copy of state so that we don't affect the instance that
// the caller is holding.
state = state.DeepCopy()
var walker *ContextGraphWalker
variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables)
// By the time we get here, we should have values defined for all of
// the root module variables, even if some of them are "unknown". It's the
// caller's responsibility to have already handled the decoding of these
// from the various ways the CLI allows them to be set and to produce
// user-friendly error messages if they are not all present, and so
// the error message from checkInputVariables should never be seen and
// includes language asking the user to report a bug.
varDiags := checkInputVariables(config.Module.Variables, variables)
diags = diags.Append(varDiags)
log.Printf("[DEBUG] Building and walking 'eval' graph")
graph, moreDiags := (&EvalGraphBuilder{
Config: config,
State: state,
Components: c.components,
Schemas: schemas,
}).Build(addrs.RootModuleInstance)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, diags
}
walkOpts := &graphWalkOpts{
InputState: state,
Config: config,
Schemas: schemas,
RootVariableValues: variables,
}
walker, moreDiags = c.walk(graph, walkEval, walkOpts)
diags = diags.Append(moreDiags)
if walker != nil {
diags = diags.Append(walker.NonFatalDiagnostics)
} else {
// If we skipped walking the graph (due to errors) then we'll just
// use a placeholder graph walker here, which'll refer to the
// unmodified state.
walker = c.graphWalker(walkEval, walkOpts)
}
// This is a bit weird since we don't normally evaluate outside of
// the context of a walk, but we'll "re-enter" our desired path here
// just to get hold of an EvalContext for it. ContextGraphWalker
// caches its contexts, so we should get hold of the context that was
// previously used for evaluation here, unless we skipped walking.
evalCtx := walker.EnterPath(moduleAddr)
return evalCtx.EvaluationScope(nil, EvalDataForNoInstanceKey), diags
}
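The returned scope is what callers like "terraform console" evaluate expressions against. A hedged usage sketch, where expr stands for an hcl.Expression parsed elsewhere and variables for values already collected by the CLI layer:
scope, diags := ctx.Eval(config, state, addrs.RootModuleInstance, &EvalOpts{
	SetVariables: variables, // assumed to be pre-collected InputValues
})
if scope != nil {
	val, moreDiags := scope.EvalExpr(expr, cty.DynamicPseudoType)
	diags = diags.Append(moreDiags)
	_ = val
}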

View File

@ -7,6 +7,7 @@ import (
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
"github.com/zclconf/go-cty/cty"
)
@ -48,13 +49,12 @@ func TestContextEval(t *testing.T) {
m := testModule(t, "eval-context-basic")
p := testProvider("test")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
scope, diags := ctx.Eval(addrs.RootModuleInstance)
scope, diags := ctx.Eval(m, states.NewState(), addrs.RootModuleInstance, &EvalOpts{})
if diags.HasErrors() {
t.Fatalf("Eval errors: %s", diags.Err())
}

View File

@ -25,7 +25,6 @@ type contextTestFixture struct {
// _shallow_ modifications to the options as needed.
func (f *contextTestFixture) ContextOpts() *ContextOpts {
return &ContextOpts{
Config: f.Config,
Providers: f.Providers,
Provisioners: f.Provisioners,
}

View File

@ -1,30 +0,0 @@
package terraform
//go:generate go run golang.org/x/tools/cmd/stringer -type=GraphType context_graph_type.go
// GraphType is an enum of the type of graph to create with a Context.
// The values of the constants may change so they shouldn't be depended on;
// always use the constant name.
type GraphType byte
const (
GraphTypeInvalid GraphType = iota
GraphTypePlan
GraphTypePlanDestroy
GraphTypePlanRefreshOnly
GraphTypeApply
GraphTypeValidate
GraphTypeEval // only visits in-memory elements such as variables, locals, and outputs.
)
// GraphTypeMap is a mapping of human-readable string to GraphType. This
// is useful as the mechanism for human input for configurable
// graph types.
var GraphTypeMap = map[string]GraphType{
"apply": GraphTypeApply,
"plan": GraphTypePlan,
"plan-destroy": GraphTypePlanDestroy,
"plan-refresh-only": GraphTypePlanRefreshOnly,
"validate": GraphTypeValidate,
"eval": GraphTypeEval,
}

View File

@ -1,7 +1,10 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
@ -10,6 +13,10 @@ import (
type ImportOpts struct {
// Targets are the targets to import
Targets []*ImportTarget
// SetVariables are the variables set outside of the configuration,
// such as on the command line, in variables files, etc.
SetVariables InputValues
}
// ImportTarget is a single resource to import.
@ -35,36 +42,52 @@ type ImportTarget struct {
// Further, this operation also gracefully handles partial state. If during
// an import there is a failure, all previously imported resources remain
// imported.
func (c *Context) Import(opts *ImportOpts) (*states.State, tfdiags.Diagnostics) {
func (c *Context) Import(config *configs.Config, prevRunState *states.State, opts *ImportOpts) (*states.State, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
// Hold a lock since we can modify our own state here
defer c.acquireRun("import")()
// Copy our own state
c.state = c.state.DeepCopy()
schemas, moreDiags := c.Schemas(config, prevRunState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return nil, diags
}
// Don't modify our caller's state
state := prevRunState.DeepCopy()
log.Printf("[DEBUG] Building and walking import graph")
// Initialize our graph builder
builder := &ImportGraphBuilder{
ImportTargets: opts.Targets,
Config: c.config,
Config: config,
Components: c.components,
Schemas: c.schemas,
Schemas: schemas,
}
// Build the graph!
// Build the graph
graph, graphDiags := builder.Build(addrs.RootModuleInstance)
diags = diags.Append(graphDiags)
if graphDiags.HasErrors() {
return c.state, diags
return state, diags
}
variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables)
// Walk it
_, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{})
walker, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{
Config: config,
Schemas: schemas,
InputState: state,
RootVariableValues: variables,
})
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return c.state, diags
return state, diags
}
return c.state, diags
newState := walker.State.Close()
return newState, diags
}
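Like the other reworked operations, Import now takes the previous run state as an argument and returns the new state instead of mutating the context. A minimal sketch mirroring the updated tests below (the ID value is a hypothetical remote object identifier):
newState, diags := ctx.Import(config, states.NewState(), &ImportOpts{
	Targets: []*ImportTarget{
		{
			Addr: addrs.RootModuleInstance.ResourceInstance(
				addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
			),
			ID: "i-abc123", // hypothetical ID of the object to import
		},
	},
})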

View File

@ -16,7 +16,6 @@ func TestContextImport_basic(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -33,9 +32,9 @@ func TestContextImport_basic(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -49,7 +48,7 @@ func TestContextImport_basic(t *testing.T) {
actual := strings.TrimSpace(state.String())
expected := strings.TrimSpace(testImportStr)
if actual != expected {
t.Fatalf("bad: \n%s", actual)
t.Fatalf("wrong final state\ngot:\n%s\nwant:\n%s", actual, expected)
}
}
@ -57,7 +56,6 @@ func TestContextImport_countIndex(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -74,9 +72,9 @@ func TestContextImport_countIndex(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(0),
),
@ -99,30 +97,29 @@ func TestContextImport_collision(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
State: states.BuildState(func(s *states.SyncState) {
s.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "aws_instance",
Name: "foo",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
&states.ResourceInstanceObjectSrc{
AttrsFlat: map[string]string{
"id": "bar",
},
Status: states.ObjectReady,
state := states.BuildState(func(s *states.SyncState) {
s.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "aws_instance",
Name: "foo",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
&states.ResourceInstanceObjectSrc{
AttrsFlat: map[string]string{
"id": "bar",
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("aws"),
Module: addrs.RootModule,
},
)
}),
Status: states.ObjectReady,
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("aws"),
Module: addrs.RootModule,
},
)
})
p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{
@ -136,9 +133,9 @@ func TestContextImport_collision(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, state, &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -175,15 +172,14 @@ func TestContextImport_missingType(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -227,15 +223,14 @@ func TestContextImport_moduleProvider(t *testing.T) {
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -263,7 +258,6 @@ func TestContextImport_providerModule(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-module")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -289,9 +283,9 @@ func TestContextImport_providerModule(t *testing.T) {
return
}
_, diags := ctx.Import(&ImportOpts{
_, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.Child("child", addrs.NoKey).ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -329,16 +323,9 @@ func TestContextImport_providerConfig(t *testing.T) {
p := testProvider("aws")
m := testModule(t, test.module)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
})
p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{
@ -352,15 +339,21 @@ func TestContextImport_providerConfig(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
ID: "bar",
},
},
SetVariables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
})
if diags.HasErrors() {
t.Fatalf("unexpected errors: %s", diags.Err())
@ -389,7 +382,6 @@ func TestContextImport_providerConfigResources(t *testing.T) {
pTest := testProvider("test")
m := testModule(t, "import-provider-resources")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
addrs.NewDefaultProvider("test"): testProviderFuncFixed(pTest),
@ -407,9 +399,9 @@ func TestContextImport_providerConfigResources(t *testing.T) {
},
}
_, diags := ctx.Import(&ImportOpts{
_, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -429,7 +421,6 @@ func TestContextImport_refresh(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -455,9 +446,9 @@ func TestContextImport_refresh(t *testing.T) {
}),
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -480,7 +471,6 @@ func TestContextImport_refreshNil(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-provider")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -503,9 +493,9 @@ func TestContextImport_refreshNil(t *testing.T) {
}
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -528,7 +518,6 @@ func TestContextImport_module(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-module")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -545,9 +534,9 @@ func TestContextImport_module(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -570,7 +559,6 @@ func TestContextImport_moduleDepth2(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-module")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -587,9 +575,9 @@ func TestContextImport_moduleDepth2(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).Child("nested", addrs.NoKey).ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -612,7 +600,6 @@ func TestContextImport_moduleDiff(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "import-module")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -629,9 +616,9 @@ func TestContextImport_moduleDiff(t *testing.T) {
},
}
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -692,15 +679,14 @@ func TestContextImport_multiState(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -767,15 +753,14 @@ func TestContextImport_multiStateSame(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey,
),
@ -866,15 +851,14 @@ resource "test_resource" "unused" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
state, diags := ctx.Import(&ImportOpts{
state, diags := ctx.Import(m, states.NewState(), &ImportOpts{
Targets: []*ImportTarget{
&ImportTarget{
{
Addr: addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "test_resource", "test", addrs.NoKey,
),
@ -888,6 +872,9 @@ resource "test_resource" "unused" {
ri := state.ResourceInstance(mustResourceInstanceAddr("test_resource.test"))
expected := `{"id":"test"}`
if ri == nil || ri.Current == nil {
t.Fatal("no state is recorded for resource instance test_resource.test")
}
if string(ri.Current.AttrsJSON) != expected {
t.Fatalf("expected %q, got %q\n", expected, ri.Current.AttrsJSON)
}

View File

@ -17,9 +17,21 @@ import (
// Input asks for input to fill unset required arguments in provider
// configurations.
//
// This modifies the configuration in-place, so asking for Input twice
// may result in different UI output showing different current values.
func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
// Unlike the other better-behaved operation methods, this one actually
// modifies some internal state inside the receiving context so that the
// captured values will be implicitly available to a subsequent call to Plan,
// or to some other operation entry point. Hopefully a future iteration of
// this will change the design to make that data flow more explicit.
//
// Because Input saves the results inside the Context object, asking for
// input twice on the same Context is invalid and will lead to undefined
// behavior.
//
// Once you've called Input with a particular config, it's invalid to call
// any other Context method with a different config, because the aforementioned
// modified internal state won't match. Again, this is an architectural wart
// that we'll hopefully resolve in future.
func (c *Context) Input(config *configs.Config, mode InputMode) tfdiags.Diagnostics {
// This function used to be responsible for more than it is now, so its
// interface is more general than its current functionality requires.
// It now exists only to handle interactive prompts for provider
@ -33,6 +45,12 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
defer c.acquireRun("input")()
schemas, moreDiags := c.Schemas(config, nil)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return diags
}
if c.uiInput == nil {
log.Printf("[TRACE] Context.Input: uiInput is nil, so skipping")
return diags
@ -44,17 +62,15 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
log.Printf("[TRACE] Context.Input: Prompting for provider arguments")
// We prompt for input only for provider configurations defined in
// the root module. At the time of writing that is an arbitrary
// restriction, but we have future plans to support "count" and
// "for_each" on modules that will then prevent us from supporting
// input for child module configurations anyway (since we'd need to
// dynamic-expand first), and provider configurations in child modules
// are not recommended since v0.11 anyway, so this restriction allows
// us to keep this relatively simple without significant hardship.
// the root module. Provider configurations in other modules are a
// legacy thing we no longer recommend, and even if they weren't we
// can't practically prompt for their inputs here because we've not
// yet done "expansion" and so we don't know whether the modules are
// using count or for_each.
pcs := make(map[string]*configs.Provider)
pas := make(map[string]addrs.LocalProviderConfig)
for _, pc := range c.config.Module.ProviderConfigs {
for _, pc := range config.Module.ProviderConfigs {
addr := pc.Addr()
pcs[addr.String()] = pc
pas[addr.String()] = addr
@ -63,7 +79,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
// We also need to detect _implied_ provider configs from resources.
// These won't have *configs.Provider objects, but they will still
// exist in the map and we'll just treat them as empty below.
for _, rc := range c.config.Module.ManagedResources {
for _, rc := range config.Module.ManagedResources {
pa := rc.ProviderConfigAddr()
if pa.Alias != "" {
continue // alias configurations cannot be implied
@ -74,7 +90,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
log.Printf("[TRACE] Context.Input: Provider %s implied by resource block at %s", pa, rc.DeclRange)
}
}
for _, rc := range c.config.Module.DataResources {
for _, rc := range config.Module.DataResources {
pa := rc.ProviderConfigAddr()
if pa.Alias != "" {
continue // alias configurations cannot be implied
@ -96,8 +112,8 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
UIInput: c.uiInput,
}
providerFqn := c.config.Module.ProviderForLocalConfig(pa)
schema := c.schemas.ProviderConfig(providerFqn)
providerFqn := config.Module.ProviderForLocalConfig(pa)
schema := schemas.ProviderConfig(providerFqn)
if schema == nil {
// Could either be an incorrect config or just an incomplete
// mock in tests. We'll let a later pass decide, and just
@ -160,7 +176,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics {
absConfigAddr := addrs.AbsProviderConfig{
Provider: providerFqn,
Alias: pa.Alias,
Module: c.Config().Path,
Module: config.Path,
}
c.providerInputConfig[absConfigAddr.String()] = vals
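
To illustrate the contract described in the doc comment above, a caller is expected to thread one config through Input and then reuse that same config for the follow-up operation. The sketch below is hypothetical caller code, not part of this commit; the helper name and the cfg/prevState arguments are assumptions:

import (
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// planWithInput is a hypothetical helper showing the required ordering:
// prompt exactly once, then plan with the same config object.
func planWithInput(ctx *terraform.Context, cfg *configs.Config, prevState *states.State) (*plans.Plan, tfdiags.Diagnostics) {
	// Input caches the collected provider arguments inside ctx.
	diags := ctx.Input(cfg, terraform.InputModeStd)
	if diags.HasErrors() {
		return nil, diags
	}
	// Passing the same cfg keeps the cached input values consistent.
	plan, moreDiags := ctx.Plan(cfg, prevState, terraform.DefaultPlanOpts)
	return plan, diags.Append(moreDiags)
}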

View File

@ -10,6 +10,7 @@ import (
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
)
@ -46,7 +47,6 @@ func TestContext2Input_provider(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -59,7 +59,7 @@ func TestContext2Input_provider(t *testing.T) {
return
}
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
@ -70,11 +70,10 @@ func TestContext2Input_provider(t *testing.T) {
t.Errorf("wrong description\ngot: %q\nwant: %q", got, want)
}
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
if _, diags := ctx.Apply(); diags.HasErrors() {
if _, diags := ctx.Apply(plan, m); diags.HasErrors() {
t.Fatalf("apply errors: %s", diags.Err())
}
@ -117,7 +116,6 @@ func TestContext2Input_providerMulti(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -127,13 +125,12 @@ func TestContext2Input_providerMulti(t *testing.T) {
var actual []interface{}
var lock sync.Mutex
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
p.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) {
lock.Lock()
@ -141,7 +138,7 @@ func TestContext2Input_providerMulti(t *testing.T) {
actual = append(actual, req.Config.GetAttr("foo").AsString())
return
}
if _, diags := ctx.Apply(); diags.HasErrors() {
if _, diags := ctx.Apply(plan, m); diags.HasErrors() {
t.Fatalf("apply errors: %s", diags.Err())
}
@ -155,13 +152,12 @@ func TestContext2Input_providerOnce(t *testing.T) {
m := testModule(t, "input-provider-once")
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
}
@ -195,7 +191,6 @@ func TestContext2Input_providerId(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -212,15 +207,14 @@ func TestContext2Input_providerId(t *testing.T) {
"provider.aws.foo": "bar",
}
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
if _, diags := ctx.Apply(); diags.HasErrors() {
if _, diags := ctx.Apply(plan, m); diags.HasErrors() {
t.Fatalf("apply errors: %s", diags.Err())
}
@ -255,16 +249,9 @@ func TestContext2Input_providerOnly(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("us-west-2"),
SourceType: ValueFromCaller,
},
},
UIInput: input,
})
@ -278,15 +265,30 @@ func TestContext2Input_providerOnly(t *testing.T) {
return
}
if err := ctx.Input(InputModeProvider); err != nil {
if err := ctx.Input(m, InputModeProvider); err != nil {
t.Fatalf("err: %s", err)
}
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
// NOTE: This is a stale test case from an older version of Terraform
// where Input was responsible for prompting for both input variables _and_
// provider configuration arguments. It was originally exercising the case
// where we were turning off the mode of prompting for input variables.
// That mode is now always disabled, and so this is essentially the same
// as the normal Input test, but we're preserving it until we have time to
// review and make sure it isn't inadvertently providing unique test
// coverage other than what it set out to test.
plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{
Mode: plans.NormalMode,
SetVariables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("us-west-2"),
SourceType: ValueFromCaller,
},
},
})
assertNoErrors(t, diags)
state, err := ctx.Apply()
state, err := ctx.Apply(plan, m)
if err != nil {
t.Fatalf("err: %s", err)
}
@ -307,16 +309,9 @@ func TestContext2Input_providerVars(t *testing.T) {
m := testModule(t, "input-provider-with-vars")
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
UIInput: input,
})
@ -329,15 +324,22 @@ func TestContext2Input_providerVars(t *testing.T) {
actual = req.Config.GetAttr("foo").AsString()
return
}
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
if _, diags := ctx.Plan(); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{
Mode: plans.NormalMode,
SetVariables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
})
assertNoErrors(t, diags)
if _, diags := ctx.Apply(); diags.HasErrors() {
if _, diags := ctx.Apply(plan, m); diags.HasErrors() {
t.Fatalf("apply errors: %s", diags.Err())
}
@ -351,14 +353,13 @@ func TestContext2Input_providerVarsModuleInherit(t *testing.T) {
m := testModule(t, "input-provider-with-vars-and-module")
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
UIInput: input,
})
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
}
@ -369,14 +370,13 @@ func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) {
m := testModule(t, "input-submodule-count")
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
UIInput: input,
})
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
}
@ -427,23 +427,25 @@ func TestContext2Input_dataSourceRequiresRefresh(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("null"): testProviderFuncFixed(p),
},
State: state,
UIInput: input,
})
if diags := ctx.Input(InputModeStd); diags.HasErrors() {
if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
t.Fatalf("input errors: %s", diags.Err())
}
// ensure that plan works after Refresh
if _, diags := ctx.Refresh(); diags.HasErrors() {
// Ensure that plan works after Refresh. This is a legacy test that no
// longer really makes sense, because Refresh is now just a thin wrapper
// around Plan, but we're keeping it until we get a chance to review and
// check whether it's giving us any additional test coverage aside from
// what it's specifically intending to test.
if _, diags := ctx.Refresh(m, state, DefaultPlanOpts); diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
if _, diags := ctx.Plan(); diags.HasErrors() {
if _, diags := ctx.Plan(m, state, DefaultPlanOpts); diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
}

View File

@ -0,0 +1,435 @@
package terraform
import (
"fmt"
"log"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/instances"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/refactoring"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// PlanOpts are the various options that affect the details of how Terraform
// will build a plan.
type PlanOpts struct {
Mode plans.Mode
SkipRefresh bool
SetVariables InputValues
Targets []addrs.Targetable
ForceReplace []addrs.AbsResourceInstance
}
// Plan generates an execution plan by comparing the given configuration
// against the given previous run state, refreshing existing objects along
// the way unless the options direct otherwise.
//
// The returned plan carries the proposed changes together with the
// refreshed prior state and the previous run state, and the caller can
// pass it (along with the same configuration) to Context.Apply to execute
// the planned changes.
func (c *Context) Plan(config *configs.Config, prevRunState *states.State, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) {
defer c.acquireRun("plan")()
var diags tfdiags.Diagnostics
// Save the downstream functions from needing to deal with these broken situations.
// No real callers should rely on these, but we have a bunch of old and
// sloppy tests that don't always populate arguments properly.
if config == nil {
config = configs.NewEmptyConfig()
}
if prevRunState == nil {
prevRunState = states.NewState()
}
if opts == nil {
opts = &PlanOpts{
Mode: plans.NormalMode,
}
}
moreDiags := CheckCoreVersionRequirements(config)
diags = diags.Append(moreDiags)
// If version constraints are not met then we'll bail early since otherwise
// we're likely to just see a bunch of other errors related to
// incompatibilities, which could be overwhelming for the user.
if diags.HasErrors() {
return nil, diags
}
switch opts.Mode {
case plans.NormalMode, plans.DestroyMode:
// OK
case plans.RefreshOnlyMode:
if opts.SkipRefresh {
// The CLI layer (and other similar callers) should prevent this
// combination of options.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Incompatible plan options",
"Cannot skip refreshing in refresh-only mode. This is a bug in Terraform.",
))
return nil, diags
}
default:
// The CLI layer (and other similar callers) should not try to
// create a context for a mode that Terraform Core doesn't support.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Unsupported plan mode",
fmt.Sprintf("Terraform Core doesn't know how to handle plan mode %s. This is a bug in Terraform.", opts.Mode),
))
return nil, diags
}
if len(opts.ForceReplace) > 0 && opts.Mode != plans.NormalMode {
// The other modes don't generate no-op or update actions that we might
// upgrade to be "replace", so it doesn't make sense to combine those.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Unsupported plan mode",
"Forcing resource instance replacement (with -replace=...) is allowed only in normal planning mode.",
))
return nil, diags
}
variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables)
// By the time we get here, we should have values defined for all of
// the root module variables, even if some of them are "unknown". It's the
// caller's responsibility to have already handled the decoding of these
// from the various ways the CLI allows them to be set and to produce
// user-friendly error messages if they are not all present, and so
// the error message from checkInputVariables should never be seen and
// includes language asking the user to report a bug.
varDiags := checkInputVariables(config.Module.Variables, variables)
diags = diags.Append(varDiags)
if len(opts.Targets) > 0 {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Resource targeting is in effect",
`You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration.
The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`,
))
}
var plan *plans.Plan
var planDiags tfdiags.Diagnostics
switch opts.Mode {
case plans.NormalMode:
plan, planDiags = c.plan(config, prevRunState, variables, opts)
case plans.DestroyMode:
plan, planDiags = c.destroyPlan(config, prevRunState, variables, opts)
case plans.RefreshOnlyMode:
plan, planDiags = c.refreshOnlyPlan(config, prevRunState, variables, opts)
default:
panic(fmt.Sprintf("unsupported plan mode %s", opts.Mode))
}
diags = diags.Append(planDiags)
if diags.HasErrors() {
return nil, diags
}
// convert the variables into the format expected for the plan
varVals := make(map[string]plans.DynamicValue, len(variables))
for k, iv := range variables {
// We use cty.DynamicPseudoType here so that we'll save both the
// value _and_ its dynamic type in the plan, so we can recover
// exactly the same value later.
dv, err := plans.NewDynamicValue(iv.Value, cty.DynamicPseudoType)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to prepare variable value for plan",
fmt.Sprintf("The value for variable %q could not be serialized to store in the plan: %s.", k, err),
))
continue
}
varVals[k] = dv
}
// insert the run-specific data from the context into the plan; variables,
// targets and provider SHAs.
if plan != nil {
plan.VariableValues = varVals
plan.TargetAddrs = opts.Targets
plan.ProviderSHA256s = c.providerSHA256s
} else if !diags.HasErrors() {
panic("nil plan but no errors")
}
return plan, diags
}
var DefaultPlanOpts = &PlanOpts{
Mode: plans.NormalMode,
}
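
As a rough illustration of how PlanOpts and the functional entry points compose, a caller might drive a plan/apply round like the sketch below. This is hypothetical caller code under assumptions: the helper name is invented, and Apply's two-argument form is taken from the test updates elsewhere in this commit:

import (
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// destroyRound is a hypothetical helper: plan a destroy without refreshing
// first, then let the caller (not the Context) carry the plan into Apply.
func destroyRound(ctx *terraform.Context, cfg *configs.Config, prevState *states.State) (*states.State, tfdiags.Diagnostics) {
	plan, diags := ctx.Plan(cfg, prevState, &terraform.PlanOpts{
		Mode:        plans.DestroyMode,
		SkipRefresh: true,
	})
	if diags.HasErrors() {
		return nil, diags
	}
	// The caller propagates the plan into the next step explicitly.
	newState, moreDiags := ctx.Apply(plan, cfg)
	return newState, diags.Append(moreDiags)
}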
func (c *Context) plan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
if opts.Mode != plans.NormalMode {
panic(fmt.Sprintf("called Context.plan with %s", opts.Mode))
}
plan, walkDiags := c.planWalk(config, prevRunState, rootVariables, opts)
diags = diags.Append(walkDiags)
if diags.HasErrors() {
return nil, diags
}
// The refreshed state ends up with some placeholder objects in it for
// objects pending creation. We only really care about those being in
// the working state, since that's what we're going to use when applying,
// so we'll prune them all here.
plan.PriorState.SyncWrapper().RemovePlannedResourceInstanceObjects()
return plan, diags
}
func (c *Context) refreshOnlyPlan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
if opts.Mode != plans.RefreshOnlyMode {
panic(fmt.Sprintf("called Context.refreshOnlyPlan with %s", opts.Mode))
}
plan, walkDiags := c.planWalk(config, prevRunState, rootVariables, opts)
diags = diags.Append(walkDiags)
if diags.HasErrors() {
return nil, diags
}
// If the graph builder and graph nodes correctly obeyed our directive
// to refresh only, the set of resource changes should always be empty.
// We'll safety-check that here so we can return a clear message about it,
// rather than probably just generating confusing output at the UI layer.
if len(plan.Changes.Resources) != 0 {
// Some extra context in the logs in case the user reports this message
// as a bug, as a starting point for debugging.
for _, rc := range plan.Changes.Resources {
if depKey := rc.DeposedKey; depKey == states.NotDeposed {
log.Printf("[DEBUG] Refresh-only plan includes %s change for %s", rc.Action, rc.Addr)
} else {
log.Printf("[DEBUG] Refresh-only plan includes %s change for %s deposed object %s", rc.Action, rc.Addr, depKey)
}
}
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid refresh-only plan",
"Terraform generated planned resource changes in a refresh-only plan. This is a bug in Terraform.",
))
}
// Prune out any placeholder objects we put in the state to represent
// objects that would need to be created.
plan.PriorState.SyncWrapper().RemovePlannedResourceInstanceObjects()
return plan, diags
}
func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
pendingPlan := &plans.Plan{}
if opts.Mode != plans.DestroyMode {
panic(fmt.Sprintf("called Context.destroyPlan with %s", opts.Mode))
}
priorState := prevRunState
// A destroy plan starts by running Refresh to read any pending data
// sources, and remove missing managed resources. This is required because
// a "destroy plan" is only creating delete changes, and is essentially a
// local operation.
//
// NOTE: if skipRefresh _is_ set then we'll rely on the destroy-plan walk
// below to upgrade the prevRunState and priorState both to the latest
// resource type schemas, so NodePlanDestroyableResourceInstance.Execute
// must coordinate with this by taking that action only when skipRefresh
// _is_ set. This coupling between the two is unfortunate but necessary
// to work within our current structure.
if !opts.SkipRefresh {
log.Printf("[TRACE] Context.destroyPlan: calling Context.plan to get the effect of refreshing the prior state")
normalOpts := *opts
normalOpts.Mode = plans.NormalMode
refreshPlan, refreshDiags := c.plan(config, prevRunState, rootVariables, &normalOpts)
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
return nil, diags
}
// insert the refreshed state into the destroy plan result, and ignore
// the changes recorded from the refresh.
pendingPlan.PriorState = refreshPlan.PriorState.DeepCopy()
pendingPlan.PrevRunState = refreshPlan.PrevRunState.DeepCopy()
log.Printf("[TRACE] Context.destroyPlan: now _really_ creating a destroy plan")
// We'll use the refreshed state -- which is the "prior state" from
// the perspective of this "pending plan" -- as the starting state
// for our destroy-plan walk, so it can take into account if we
// detected during refreshing that anything was already deleted outside
// of Terraform.
priorState = pendingPlan.PriorState
}
destroyPlan, walkDiags := c.planWalk(config, priorState, rootVariables, opts)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return nil, diags
}
if !opts.SkipRefresh {
// If we didn't skip refreshing then we want the previous run state to
// be the one we originally fed into the c.plan call above, not the
// refreshed version we used for the destroy walk.
destroyPlan.PrevRunState = pendingPlan.PrevRunState
}
return destroyPlan, diags
}
func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) {
moveStmts := refactoring.FindMoveStatements(config)
moveResults := refactoring.ApplyMoves(moveStmts, prevRunState)
if len(targets) > 0 {
for _, result := range moveResults {
matchesTarget := false
for _, targetAddr := range targets {
if targetAddr.TargetContains(result.From) {
matchesTarget = true
break
}
}
if !matchesTarget {
// TODO: Return an error stating that a targeted plan is
// only valid if it includes this address that was moved.
}
}
}
return moveStmts, moveResults
}
func (c *Context) postPlanValidateMoves(config *configs.Config, stmts []refactoring.MoveStatement, allInsts instances.Set) tfdiags.Diagnostics {
return refactoring.ValidateMoves(stmts, config, allInsts)
}
func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
log.Printf("[DEBUG] Building and walking plan graph for %s", opts.Mode)
schemas, moreDiags := c.Schemas(config, prevRunState)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return nil, diags
}
prevRunState = prevRunState.DeepCopy() // don't modify the caller's object when we process the moves
moveStmts, moveResults := c.prePlanFindAndApplyMoves(config, prevRunState, opts.Targets)
graph, walkOp, moreDiags := c.planGraph(config, prevRunState, opts, schemas, true)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return nil, diags
}
// If we get here then we should definitely have a non-nil "graph", which
// we can now walk.
changes := plans.NewChanges()
walker, walkDiags := c.walk(graph, walkOp, &graphWalkOpts{
Config: config,
Schemas: schemas,
InputState: prevRunState,
Changes: changes,
MoveResults: moveResults,
RootVariableValues: rootVariables,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
diags = diags.Append(c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances()))
plan := &plans.Plan{
UIMode: opts.Mode,
Changes: changes,
PriorState: walker.RefreshState.Close(),
PrevRunState: walker.PrevRunState.Close(),
// Other fields get populated by Context.Plan after we return
}
return plan, diags
}
func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, opts *PlanOpts, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) {
switch mode := opts.Mode; mode {
case plans.NormalMode:
graph, diags := (&PlanGraphBuilder{
Config: config,
State: prevRunState,
Components: c.components,
Schemas: schemas,
Targets: opts.Targets,
ForceReplace: opts.ForceReplace,
Validate: validate,
skipRefresh: opts.SkipRefresh,
}).Build(addrs.RootModuleInstance)
return graph, walkPlan, diags
case plans.RefreshOnlyMode:
graph, diags := (&PlanGraphBuilder{
Config: config,
State: prevRunState,
Components: c.components,
Schemas: schemas,
Targets: opts.Targets,
Validate: validate,
skipRefresh: opts.SkipRefresh,
skipPlanChanges: true, // this activates "refresh only" mode.
}).Build(addrs.RootModuleInstance)
return graph, walkPlan, diags
case plans.DestroyMode:
graph, diags := (&DestroyPlanGraphBuilder{
Config: config,
State: prevRunState,
Components: c.components,
Schemas: schemas,
Targets: opts.Targets,
Validate: validate,
skipRefresh: opts.SkipRefresh,
}).Build(addrs.RootModuleInstance)
return graph, walkPlanDestroy, diags
default:
// The above should cover all plans.Mode values
panic(fmt.Sprintf("unsupported plan mode %s", mode))
}
}
// PlanGraphForUI is a last vestige of graphs in the public interface of Context
// (as opposed to graphs as an implementation detail) intended only for use
// by the "terraform graph" command when asked to render a plan-time graph.
//
// The result of this is intended only for rendering to the user as a dot
// graph, and so may change in future in order to make the result more useful
// in that context, even if it drifts away from the physical graph that
// Terraform Core currently uses as an implementation detail of planning.
func (c *Context) PlanGraphForUI(config *configs.Config, prevRunState *states.State, mode plans.Mode) (*Graph, tfdiags.Diagnostics) {
// For now though, this really is just the internal graph, confusing
// implementation details and all.
var diags tfdiags.Diagnostics
opts := &PlanOpts{Mode: mode}
schemas, moreDiags := c.Schemas(config, prevRunState)
diags = diags.Append(moreDiags)
if diags.HasErrors() {
return nil, diags
}
graph, _, moreDiags := c.planGraph(config, prevRunState, opts, schemas, false)
diags = diags.Append(moreDiags)
return graph, diags
}
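
A caller-side sketch of this entry point might look as follows. This is hypothetical and not part of the commit; the real "terraform graph" command's dot rendering is out of scope here, so the sketch falls back to the graph's debug listing:

import (
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// renderPlanGraph is a hypothetical helper wrapping PlanGraphForUI.
func renderPlanGraph(ctx *terraform.Context, cfg *configs.Config, prevState *states.State) (string, tfdiags.Diagnostics) {
	g, diags := ctx.PlanGraphForUI(cfg, prevState, plans.NormalMode)
	if diags.HasErrors() {
		return "", diags
	}
	// String() gives a plain node listing for debugging; converting to
	// dot for display is left to the CLI layer.
	return g.String(), diags
}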

View File

@ -69,17 +69,13 @@ resource "test_object" "a" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
if !p.UpgradeResourceStateCalled {
t.Errorf("Provider's UpgradeResourceState wasn't called; should've been")
@ -184,17 +180,13 @@ data "test_data_source" "foo" {
)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
for _, res := range plan.Changes.Resources {
if res.Action != plans.NoOp {
@ -231,17 +223,13 @@ output "out" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
change, err := plan.Changes.Outputs[0].Decode()
if err != nil {
@ -300,16 +288,13 @@ resource "test_object" "a" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
_, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
_, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
assertNoErrors(t, diags)
}
func TestContext2Plan_dataReferencesResourceInModules(t *testing.T) {
@ -376,14 +361,12 @@ resource "test_resource" "b" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
plan, diags := ctx.Plan()
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
oldMod := oldDataAddr.Module
@ -466,19 +449,16 @@ resource "test_object" "a" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
PlanMode: plans.DestroyMode,
SkipRefresh: false,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.DestroyMode,
SkipRefresh: false, // the default
})
assertNoErrors(t, diags)
if !p.UpgradeResourceStateCalled {
t.Errorf("Provider's UpgradeResourceState wasn't called; should've been")
@ -569,19 +549,16 @@ resource "test_object" "a" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
PlanMode: plans.DestroyMode,
SkipRefresh: true,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.DestroyMode,
SkipRefresh: true,
})
assertNoErrors(t, diags)
if !p.UpgradeResourceStateCalled {
t.Errorf("Provider's UpgradeResourceState wasn't called; should've been")
@ -665,17 +642,13 @@ output "result" {
state := states.NewState()
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
for _, res := range plan.Changes.Resources {
if res.Action != plans.Create {
@ -716,18 +689,15 @@ provider "test" {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
PlanMode: plans.DestroyMode,
})
_, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.Err())
}
_, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.DestroyMode,
})
assertNoErrors(t, diags)
}
func TestContext2Plan_movedResourceBasic(t *testing.T) {
@ -762,17 +732,17 @@ func TestContext2Plan_movedResourceBasic(t *testing.T) {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.NormalMode,
ForceReplace: []addrs.AbsResourceInstance{
addrA,
},
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatalf("unexpected errors\n%s", diags.Err().Error())
}
@ -873,15 +843,14 @@ func TestContext2Plan_refreshOnlyMode(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
PlanMode: plans.RefreshOnlyMode,
})
plan, diags := ctx.Plan()
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.RefreshOnlyMode,
})
if diags.HasErrors() {
t.Fatalf("unexpected errors\n%s", diags.Err().Error())
}
@ -1010,15 +979,14 @@ func TestContext2Plan_refreshOnlyMode_deposed(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
PlanMode: plans.RefreshOnlyMode,
})
plan, diags := ctx.Plan()
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.RefreshOnlyMode,
})
if diags.HasErrors() {
t.Fatalf("unexpected errors\n%s", diags.Err().Error())
}
@ -1089,11 +1057,9 @@ output "root" {
}`,
})
ctx := testContext2(t, &ContextOpts{
Config: m,
})
ctx := testContext2(t, &ContextOpts{})
_, diags := ctx.Plan()
_, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1189,17 +1155,13 @@ data "test_data_source" "foo" {
)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
for _, res := range plan.Changes.Resources {
switch res.Addr.String() {
@ -1242,17 +1204,17 @@ func TestContext2Plan_forceReplace(t *testing.T) {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.NormalMode,
ForceReplace: []addrs.AbsResourceInstance{
addrA,
},
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatalf("unexpected errors\n%s", diags.Err().Error())
}
@ -1310,17 +1272,17 @@ func TestContext2Plan_forceReplaceIncompleteAddr(t *testing.T) {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
State: state,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
plan, diags := ctx.Plan(m, state, &PlanOpts{
Mode: plans.NormalMode,
ForceReplace: []addrs.AbsResourceInstance{
addrBare,
},
})
plan, diags := ctx.Plan()
if diags.HasErrors() {
t.Fatalf("unexpected errors\n%s", diags.Err().Error())
}

File diff suppressed because it is too large

View File

@ -0,0 +1,37 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// Refresh is a vestigial operation that is equivalent to calling Plan and
// then taking the prior state of the resulting plan.
//
// We retain this only as a measure of semi-backward-compatibility for
// automation relying on the "terraform refresh" subcommand. The modern way
// to get this effect is to create and then apply a plan in the refresh-only
// mode.
func (c *Context) Refresh(config *configs.Config, prevRunState *states.State, opts *PlanOpts) (*states.State, tfdiags.Diagnostics) {
if opts == nil {
// This fallback is only here for tests, not for real code.
opts = &PlanOpts{
Mode: plans.NormalMode,
}
}
if opts.Mode != plans.NormalMode {
panic("can only Refresh in the normal planning mode")
}
log.Printf("[DEBUG] Refresh is really just plan now, so creating a %s plan", opts.Mode)
p, diags := c.Plan(config, prevRunState, opts)
if diags.HasErrors() {
return nil, diags
}
return p.PriorState, diags
}
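
For comparison with the vestigial method above, the "modern way" mentioned in the doc comment amounts to the following sketch: create a refresh-only plan and take its prior state. The helper name is hypothetical:

import (
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// refreshOnly is a hypothetical helper: a refresh-only plan records no
// resource changes, so its prior state is the refreshed snapshot.
func refreshOnly(ctx *terraform.Context, cfg *configs.Config, prevState *states.State) (*states.State, tfdiags.Diagnostics) {
	plan, diags := ctx.Plan(cfg, prevState, &terraform.PlanOpts{
		Mode: plans.RefreshOnlyMode,
	})
	if diags.HasErrors() {
		return nil, diags
	}
	return plan.PriorState, diags
}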

View File

@ -14,6 +14,7 @@ import (
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/configs/hcl2shim"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/states"
)
@ -34,11 +35,9 @@ func TestContext2Refresh(t *testing.T) {
)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block
@ -52,7 +51,7 @@ func TestContext2Refresh(t *testing.T) {
NewState: readState,
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -123,17 +122,15 @@ func TestContext2Refresh_dynamicAttr(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: startingState,
})
schema := p.GetProviderSchemaResponse.ResourceTypes["test_instance"].Block
ty := schema.ImpliedType()
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, startingState, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -200,13 +197,12 @@ func TestContext2Refresh_dataComputedModuleVar(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -261,16 +257,9 @@ func TestContext2Refresh_targeted(t *testing.T) {
m := testModule(t, "refresh-targeted")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Resource(
addrs.ManagedResourceMode, "aws_instance", "me",
),
},
})
refreshedResources := make([]string, 0, 2)
@ -281,7 +270,14 @@ func TestContext2Refresh_targeted(t *testing.T) {
}
}
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, state, &PlanOpts{
Mode: plans.NormalMode,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Resource(
addrs.ManagedResourceMode, "aws_instance", "me",
),
},
})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -339,16 +335,9 @@ func TestContext2Refresh_targetedCount(t *testing.T) {
m := testModule(t, "refresh-targeted-count")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Resource(
addrs.ManagedResourceMode, "aws_instance", "me",
),
},
})
refreshedResources := make([]string, 0, 2)
@ -359,7 +348,14 @@ func TestContext2Refresh_targetedCount(t *testing.T) {
}
}
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, state, &PlanOpts{
Mode: plans.NormalMode,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Resource(
addrs.ManagedResourceMode, "aws_instance", "me",
),
},
})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -425,16 +421,9 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) {
m := testModule(t, "refresh-targeted-count")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "me", addrs.IntKey(0),
),
},
})
refreshedResources := make([]string, 0, 2)
@ -445,7 +434,14 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) {
}
}
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, state, &PlanOpts{
Mode: plans.NormalMode,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.ResourceInstance(
addrs.ManagedResourceMode, "aws_instance", "me", addrs.IntKey(0),
),
},
})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -478,7 +474,6 @@ func TestContext2Refresh_moduleComputedVar(t *testing.T) {
m := testModule(t, "refresh-module-computed-var")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -486,7 +481,7 @@ func TestContext2Refresh_moduleComputedVar(t *testing.T) {
// This was failing (see GH-2188) at some point, so this test just
// verifies that the failure goes away.
if _, diags := ctx.Refresh(); diags.HasErrors() {
if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() {
t.Fatalf("refresh errs: %s", diags.Err())
}
}
@ -500,18 +495,16 @@ func TestContext2Refresh_delete(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
p.ReadResourceResponse = &providers.ReadResourceResponse{
NewState: cty.NullVal(p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block.ImpliedType()),
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -526,11 +519,9 @@ func TestContext2Refresh_ignoreUncreated(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "refresh-basic")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: nil,
})
p.ReadResourceResponse = &providers.ReadResourceResponse{
@ -539,7 +530,7 @@ func TestContext2Refresh_ignoreUncreated(t *testing.T) {
}),
}
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -558,15 +549,13 @@ func TestContext2Refresh_hook(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Hooks: []Hook{h},
Hooks: []Hook{h},
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
if _, diags := ctx.Refresh(); diags.HasErrors() {
if _, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() {
t.Fatalf("refresh errs: %s", diags.Err())
}
if !h.PreRefreshCalled {
@ -588,11 +577,9 @@ func TestContext2Refresh_modules(t *testing.T) {
testSetResourceInstanceCurrent(child, "aws_instance.web", `{"id":"baz"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse {
@ -613,7 +600,7 @@ func TestContext2Refresh_modules(t *testing.T) {
}
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -648,13 +635,12 @@ func TestContext2Refresh_moduleInputComputedOutput(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
if _, diags := ctx.Refresh(); diags.HasErrors() {
if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() {
t.Fatalf("refresh errs: %s", diags.Err())
}
}
@ -663,13 +649,12 @@ func TestContext2Refresh_moduleVarModule(t *testing.T) {
m := testModule(t, "refresh-module-var-module")
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
if _, diags := ctx.Refresh(); diags.HasErrors() {
if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() {
t.Fatalf("refresh errs: %s", diags.Err())
}
}
@ -679,7 +664,6 @@ func TestContext2Refresh_noState(t *testing.T) {
p := testProvider("aws")
m := testModule(t, "refresh-no-state")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -691,7 +675,7 @@ func TestContext2Refresh_noState(t *testing.T) {
}),
}
if _, diags := ctx.Refresh(); diags.HasErrors() {
if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() {
t.Fatalf("refresh errs: %s", diags.Err())
}
}
@ -726,14 +710,12 @@ func TestContext2Refresh_output(t *testing.T) {
root.SetOutputValue("foo", cty.StringVal("foo"), false)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -776,14 +758,12 @@ func TestContext2Refresh_outputPartial(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.foo", `{}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -804,11 +784,9 @@ func TestContext2Refresh_stateBasic(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block
@ -825,7 +803,7 @@ func TestContext2Refresh_stateBasic(t *testing.T) {
NewState: readStateVal,
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -879,10 +857,9 @@ func TestContext2Refresh_dataCount(t *testing.T) {
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
Config: m,
})
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
@ -912,11 +889,9 @@ func TestContext2Refresh_dataState(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("null"): testProviderFuncFixed(p),
},
State: state,
})
var readStateVal cty.Value
@ -930,7 +905,7 @@ func TestContext2Refresh_dataState(t *testing.T) {
}
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -978,11 +953,9 @@ func TestContext2Refresh_dataStateRefData(t *testing.T) {
m := testModule(t, "refresh-data-ref-data")
state := states.NewState()
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("null"): testProviderFuncFixed(p),
},
State: state,
})
p.ReadDataSourceFn = func(req providers.ReadDataSourceRequest) providers.ReadDataSourceResponse {
@ -995,7 +968,7 @@ func TestContext2Refresh_dataStateRefData(t *testing.T) {
}
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -1016,11 +989,9 @@ func TestContext2Refresh_tainted(t *testing.T) {
testSetResourceInstanceTainted(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse {
// add the required id
@ -1032,7 +1003,7 @@ func TestContext2Refresh_tainted(t *testing.T) {
}
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -1058,14 +1029,14 @@ func TestContext2Refresh_unknownProvider(t *testing.T) {
root := state.EnsureModule(addrs.RootModuleInstance)
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
_, diags := NewContext(&ContextOpts{
Config: m,
c, diags := NewContext(&ContextOpts{
Providers: map[addrs.Provider]providers.Factory{},
State: state,
})
assertNoDiagnostics(t, diags)
_, diags = c.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode})
if !diags.HasErrors() {
t.Fatal("successfully created context; want error")
t.Fatal("successfully refreshed; want error")
}
if !regexp.MustCompile(`failed to instantiate provider ".+"`).MatchString(diags.Err().Error()) {
@ -1100,11 +1071,9 @@ func TestContext2Refresh_vars(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
readStateVal, err := schema.CoerceValue(cty.ObjectVal(map[string]cty.Value{
@ -1124,7 +1093,7 @@ func TestContext2Refresh_vars(t *testing.T) {
}
}
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("refresh errors: %s", diags.Err())
}
@ -1176,8 +1145,8 @@ func TestContext2Refresh_orphanModule(t *testing.T) {
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"i-abc123"}`),
Dependencies: []addrs.ConfigResource{
addrs.ConfigResource{Module: addrs.Module{"module.child"}},
addrs.ConfigResource{Module: addrs.Module{"module.child"}},
{Module: addrs.Module{"module.child"}},
{Module: addrs.Module{"module.child"}},
},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
@ -1188,7 +1157,7 @@ func TestContext2Refresh_orphanModule(t *testing.T) {
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"i-bcd23"}`),
Dependencies: []addrs.ConfigResource{addrs.ConfigResource{Module: addrs.Module{"module.grandchild"}}},
Dependencies: []addrs.ConfigResource{{Module: addrs.Module{"module.grandchild"}}},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
)
@ -1196,15 +1165,13 @@ func TestContext2Refresh_orphanModule(t *testing.T) {
testSetResourceInstanceCurrent(grandchild, "aws_instance.baz", `{"id":"i-cde345"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
testCheckDeadlock(t, func() {
_, err := ctx.Refresh()
_, err := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if err != nil {
t.Fatalf("err: %s", err.Err())
}
@ -1239,13 +1206,12 @@ func TestContext2Validate(t *testing.T) {
m := testModule(t, "validate-good")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if len(diags) != 0 {
t.Fatalf("unexpected error: %#v", diags.ErrWithWarnings())
}
@ -1260,11 +1226,9 @@ func TestContext2Refresh_updateProviderInState(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.bar", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"].baz`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
expected := strings.TrimSpace(`
@ -1272,7 +1236,7 @@ aws_instance.bar:
ID = foo
provider = provider["registry.terraform.io/hashicorp/aws"].foo`)
s, diags := ctx.Refresh()
s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1329,14 +1293,12 @@ func TestContext2Refresh_schemaUpgradeFlatmap(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: s,
})
state, diags := ctx.Refresh()
state, diags := ctx.Refresh(m, s, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1413,14 +1375,12 @@ func TestContext2Refresh_schemaUpgradeJSON(t *testing.T) {
})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: s,
})
state, diags := ctx.Refresh()
state, diags := ctx.Refresh(m, s, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1471,13 +1431,12 @@ data "aws_data_source" "foo" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
// Should get this error:
// Unsupported attribute: This object does not have an attribute named "missing"
@ -1520,14 +1479,12 @@ func TestContext2Refresh_dataResourceDependsOn(t *testing.T) {
testSetResourceInstanceCurrent(root, "test_resource.a", `{"id":"a"}`, `provider["registry.terraform.io/hashicorp/test"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("unexpected errors: %s", diags.Err())
}
@ -1566,14 +1523,12 @@ resource "aws_instance" "bar" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
state, diags := ctx.Refresh()
state, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatalf("plan errors: %s", diags.Err())
}
@ -1614,14 +1569,12 @@ func TestContext2Refresh_dataSourceOrphan(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
State: state,
})
_, diags := ctx.Refresh()
_, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode})
if diags.HasErrors() {
t.Fatal(diags.Err())
}


@ -109,9 +109,12 @@ func TestNewContextRequiredVersion(t *testing.T) {
Required: constraint,
})
}
_, diags := NewContext(&ContextOpts{
Config: mod,
})
c, diags := NewContext(&ContextOpts{})
if diags.HasErrors() {
t.Fatalf("unexpected NewContext errors: %s", diags.Err())
}
diags = c.Validate(mod)
if diags.HasErrors() != tc.Err {
t.Fatalf("err: %s", diags.Err())
}
@ -262,9 +265,6 @@ Please run "terraform init".`,
devProviders[provider] = struct{}{}
}
opts := &ContextOpts{
Config: testModuleInline(t, map[string]string{
"main.tf": tc.Config,
}),
LockedDependencies: locks,
ProvidersInDevelopment: devProviders,
Providers: map[addrs.Provider]providers.Factory{
@ -274,7 +274,16 @@ Please run "terraform init".`,
},
}
ctx, diags := NewContext(opts)
m := testModuleInline(t, map[string]string{
"main.tf": tc.Config,
})
c, diags := NewContext(opts)
if diags.HasErrors() {
t.Fatalf("unexpected NewContext error: %s", diags.Err())
}
diags = c.Validate(m)
if tc.WantErr != "" {
if len(diags) == 0 {
t.Fatal("expected diags but none returned")
@ -286,9 +295,6 @@ Please run "terraform init".`,
if len(diags) > 0 {
t.Errorf("unexpected diags: %s", diags.Err())
}
if ctx == nil {
t.Error("ctx is nil")
}
}
})
}
@ -717,10 +723,10 @@ func testProviderSchema(name string) *providers.GetProviderSchemaResponse {
// our context tests try to exercise lots of stuff at once and so having them
// round-trip things through on-disk files is often an important part of
// fully representing an old bug in a regression test.
func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan) (*ContextOpts, error) {
func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan) (*ContextOpts, *configs.Config, *plans.Plan, error) {
dir, err := ioutil.TempDir("", "terraform-contextForPlanViaFile")
if err != nil {
return nil, err
return nil, nil, nil, err
}
defer os.RemoveAll(dir)
@ -751,49 +757,27 @@ func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan
filename := filepath.Join(dir, "tfplan")
err = planfile.Create(filename, configSnap, prevStateFile, stateFile, plan)
if err != nil {
return nil, err
return nil, nil, nil, err
}
pr, err := planfile.Open(filename)
if err != nil {
return nil, err
return nil, nil, nil, err
}
config, diags := pr.ReadConfig()
if diags.HasErrors() {
return nil, diags.Err()
}
stateFile, err = pr.ReadStateFile()
if err != nil {
return nil, err
return nil, nil, nil, diags.Err()
}
plan, err = pr.ReadPlan()
if err != nil {
return nil, err
}
vars := make(InputValues)
for name, vv := range plan.VariableValues {
val, err := vv.Decode(cty.DynamicPseudoType)
if err != nil {
return nil, fmt.Errorf("can't decode value for variable %q: %s", name, err)
}
vars[name] = &InputValue{
Value: val,
SourceType: ValueFromPlan,
}
return nil, nil, nil, err
}
return &ContextOpts{
Config: config,
State: stateFile.State,
Changes: plan.Changes,
Variables: vars,
Targets: plan.TargetAddrs,
ProviderSHA256s: plan.ProviderSHA256s,
}, nil
}, config, plan, nil
}
// legacyPlanComparisonString produces a string representation of the changes

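Because the helper no longer bakes the config, state, and changes into ContextOpts, its callers now receive the config and plan back and must pass them into the operation themselves. A minimal sketch of the new calling pattern, assuming the usual fixtures (snap, plan, testContext2) are in scope and that Apply takes the plan and config in the same functional style as Plan and Validate:

    ctxOpts, config, plan, err := contextOptsForPlanViaFile(snap, plan)
    if err != nil {
        t.Fatalf("failed to round-trip the plan through a file: %s", err)
    }
    ctx := testContext2(t, ctxOpts)
    // The plan and config are passed in explicitly instead of living in the Context.
    state, diags := ctx.Apply(plan, config) // assumed signature; see lead-in
    if diags.HasErrors() {
        t.Fatalf("apply errors: %s", diags.Err())
    }
    if state == nil {
        t.Fatal("nil state after apply")
    }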

@ -0,0 +1,88 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
)
// Validate performs semantic validation of a configuration, and returns
// any warnings or errors.
//
// Syntax and structural checks are performed by the configuration loader,
// and so are not repeated here.
//
// Validate considers only the configuration, and so it won't catch errors
// that depend on current values in the state or on other external
// information such as root module input variables. However, the Plan
// function includes all of the same checks as Validate, in addition to the
// other work it does to consider the previous run state and the planning
// options.
func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics {
defer c.acquireRun("validate")()
var diags tfdiags.Diagnostics
moreDiags := CheckCoreVersionRequirements(config)
diags = diags.Append(moreDiags)
// If version constraints are not met then we'll bail early since otherwise
// we're likely to just see a bunch of other errors related to
// incompatibilities, which could be overwhelming for the user.
if diags.HasErrors() {
return diags
}
schemas, moreDiags := c.Schemas(config, nil)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return diags
}
log.Printf("[DEBUG] Building and walking validate graph")
graph, moreDiags := ValidateGraphBuilder(&PlanGraphBuilder{
Config: config,
Components: c.components,
Schemas: schemas,
Validate: true,
State: states.NewState(),
}).Build(addrs.RootModuleInstance)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
return diags
}
// Validate checks whether the given module is valid regardless of
// input values, current state, etc. Therefore we populate all of the
// input values with unknown values of the expected type, allowing us
// to perform a type check without assuming any particular values.
varValues := make(InputValues)
for name, variable := range config.Module.Variables {
ty := variable.Type
if ty == cty.NilType {
// Can't predict the type at all, so we'll just mark it as
// cty.DynamicVal (unknown value of cty.DynamicPseudoType).
ty = cty.DynamicPseudoType
}
varValues[name] = &InputValue{
Value: cty.UnknownVal(ty),
SourceType: ValueFromUnknown,
}
}
walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{
Config: config,
Schemas: schemas,
RootVariableValues: varValues,
})
diags = diags.Append(walker.NonFatalDiagnostics)
diags = diags.Append(walkDiags)
if walkDiags.HasErrors() {
return diags
}
return diags
}
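Since the configuration now arrives as an argument rather than as a field of ContextOpts, a caller builds a Context once from its unchanging options and then passes a config at call time. A minimal sketch mirroring the tests updated in this commit (p and testProviderFuncFixed are the usual test fixtures):

    ctx, diags := NewContext(&ContextOpts{
        Providers: map[addrs.Provider]providers.Factory{
            addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
        },
    })
    if diags.HasErrors() {
        t.Fatalf("unexpected NewContext errors: %s", diags.Err())
    }
    diags = ctx.Validate(config) // the config travels with the call, not the Context
    if diags.HasErrors() {
        t.Fatalf("validate errors: %s", diags.Err())
    }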


@ -10,7 +10,6 @@ import (
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/providers"
"github.com/hashicorp/terraform/internal/provisioners"
"github.com/hashicorp/terraform/internal/states"
@ -29,13 +28,12 @@ func TestContext2Validate_badCount(t *testing.T) {
m := testModule(t, "validate-bad-count")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -53,13 +51,12 @@ func TestContext2Validate_badResource_reference(t *testing.T) {
m := testModule(t, "validate-bad-resource-count")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -80,52 +77,33 @@ func TestContext2Validate_badVar(t *testing.T) {
m := testModule(t, "validate-bad-var")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
}
func TestContext2Validate_varMapOverrideOld(t *testing.T) {
m := testModule(t, "validate-module-pc-vars")
p := testProvider("aws")
p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{
Provider: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {Type: cty.String, Optional: true},
},
},
ResourceTypes: map[string]*configschema.Block{
"aws_instance": {
Attributes: map[string]*configschema.Attribute{},
},
},
})
_, diags := NewContext(&ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{},
})
if !diags.HasErrors() {
// Error should be: The input variable "provider_var" has not been assigned a value.
t.Fatalf("succeeded; want error")
}
}
func TestContext2Validate_varNoDefaultExplicitType(t *testing.T) {
m := testModule(t, "validate-var-no-default-explicit-type")
_, diags := NewContext(&ContextOpts{
Config: m,
})
c, diags := NewContext(&ContextOpts{})
if diags.HasErrors() {
t.Fatalf("unexpected NewContext errors: %s", diags.Err())
}
// NOTE: This test has grown idiosyncratic because originally Terraform
// would (optionally) check variables during validation, and then in
// Terraform v0.12 we switched to checking variables during NewContext,
// and now most recently we've switched to checking variables only during
// planning because root variables are a plan option. Therefore this has
// grown into a plan test rather than a validate test, but it lives on
// here in order to make it easier to navigate through that history in
// version control.
_, diags = c.Plan(m, states.NewState(), DefaultPlanOpts)
if !diags.HasErrors() {
// Error should be: The input variable "maybe_a_map" has not been assigned a value.
t.Fatalf("succeeded; want error")
@ -166,7 +144,6 @@ func TestContext2Validate_computedVar(t *testing.T) {
m := testModule(t, "validate-computed-var")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
addrs.NewDefaultProvider("test"): testProviderFuncFixed(pt),
@ -182,7 +159,7 @@ func TestContext2Validate_computedVar(t *testing.T) {
return
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -217,13 +194,12 @@ func TestContext2Validate_computedInFunction(t *testing.T) {
m := testModule(t, "validate-computed-in-function")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -256,13 +232,12 @@ func TestContext2Validate_countComputed(t *testing.T) {
m := testModule(t, "validate-count-computed")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -281,13 +256,12 @@ func TestContext2Validate_countNegative(t *testing.T) {
}
m := testModule(t, "validate-count-negative")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -308,13 +282,12 @@ func TestContext2Validate_countVariable(t *testing.T) {
}
m := testModule(t, "apply-count-variable")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -334,12 +307,14 @@ func TestContext2Validate_countVariableNoDefault(t *testing.T) {
},
},
}
_, diags := NewContext(&ContextOpts{
Config: m,
c, diags := NewContext(&ContextOpts{
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
assertNoDiagnostics(t, diags)
_, diags = c.Plan(m, nil, &PlanOpts{})
if !diags.HasErrors() {
// Error should be: The input variable "foo" has not been assigned a value.
t.Fatalf("succeeded; want error")
@ -361,13 +336,12 @@ func TestContext2Validate_moduleBadOutput(t *testing.T) {
}
m := testModule(t, "validate-bad-module-output")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -388,13 +362,12 @@ func TestContext2Validate_moduleGood(t *testing.T) {
}
m := testModule(t, "validate-good-module")
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -414,7 +387,6 @@ func TestContext2Validate_moduleBadResource(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -424,7 +396,7 @@ func TestContext2Validate_moduleBadResource(t *testing.T) {
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")),
}
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -446,13 +418,12 @@ func TestContext2Validate_moduleDepsShouldNotCycle(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -481,16 +452,9 @@ func TestContext2Validate_moduleProviderVar(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"provider_var": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
})
p.ValidateProviderConfigFn = func(req providers.ValidateProviderConfigRequest) (resp providers.ValidateProviderConfigResponse) {
@ -500,7 +464,7 @@ func TestContext2Validate_moduleProviderVar(t *testing.T) {
return
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -529,7 +493,6 @@ func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -542,7 +505,7 @@ func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) {
return
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -565,16 +528,10 @@ func TestContext2Validate_orphans(t *testing.T) {
m := testModule(t, "validate-good")
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
p.ValidateResourceConfigFn = func(req providers.ValidateResourceConfigRequest) providers.ValidateResourceConfigResponse {
@ -587,7 +544,7 @@ func TestContext2Validate_orphans(t *testing.T) {
}
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -614,7 +571,6 @@ func TestContext2Validate_providerConfig_bad(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -624,7 +580,7 @@ func TestContext2Validate_providerConfig_bad(t *testing.T) {
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")),
}
diags := c.Validate()
diags := c.Validate(m)
if len(diags) != 1 {
t.Fatalf("wrong number of diagnostics %d; want %d", len(diags), 1)
}
@ -654,7 +610,6 @@ func TestContext2Validate_providerConfig_skippedEmpty(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -664,7 +619,7 @@ func TestContext2Validate_providerConfig_skippedEmpty(t *testing.T) {
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("should not be called")),
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -691,13 +646,12 @@ func TestContext2Validate_providerConfig_good(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -727,13 +681,12 @@ func TestContext2Validate_requiredProviderConfig(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -757,7 +710,6 @@ func TestContext2Validate_provisionerConfig_bad(t *testing.T) {
pr := simpleMockProvisioner()
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -770,7 +722,7 @@ func TestContext2Validate_provisionerConfig_bad(t *testing.T) {
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")),
}
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -794,7 +746,6 @@ func TestContext2Validate_badResourceConnection(t *testing.T) {
pr := simpleMockProvisioner()
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -803,7 +754,7 @@ func TestContext2Validate_badResourceConnection(t *testing.T) {
},
})
diags := c.Validate()
diags := c.Validate(m)
t.Log(diags.Err())
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
@ -828,7 +779,6 @@ func TestContext2Validate_badProvisionerConnection(t *testing.T) {
pr := simpleMockProvisioner()
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -837,7 +787,7 @@ func TestContext2Validate_badProvisionerConnection(t *testing.T) {
},
})
diags := c.Validate()
diags := c.Validate(m)
t.Log(diags.Err())
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
@ -878,7 +828,6 @@ func TestContext2Validate_provisionerConfig_good(t *testing.T) {
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -887,7 +836,7 @@ func TestContext2Validate_provisionerConfig_good(t *testing.T) {
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -907,12 +856,22 @@ func TestContext2Validate_requiredVar(t *testing.T) {
},
},
}
_, diags := NewContext(&ContextOpts{
Config: m,
c, diags := NewContext(&ContextOpts{
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
assertNoDiagnostics(t, diags)
// NOTE: This test has grown idiosyncratic because originally Terraform
// would (optionally) check variables during validation, and then in
// Terraform v0.12 we switched to checking variables during NewContext,
// and now most recently we've switched to checking variables only during
// planning because root variables are a plan option. Therefore this has
// grown into a plan test rather than a validate test, but it lives on
// here in order to make it easier to navigate through that history in
// version control.
_, diags = c.Plan(m, states.NewState(), DefaultPlanOpts)
if !diags.HasErrors() {
// Error should be: The input variable "foo" has not been assigned a value.
t.Fatalf("succeeded; want error")
@ -934,7 +893,6 @@ func TestContext2Validate_resourceConfig_bad(t *testing.T) {
},
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -944,7 +902,7 @@ func TestContext2Validate_resourceConfig_bad(t *testing.T) {
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")),
}
diags := c.Validate()
diags := c.Validate(m)
if !diags.HasErrors() {
t.Fatalf("succeeded; want error")
}
@ -965,13 +923,12 @@ func TestContext2Validate_resourceConfig_good(t *testing.T) {
},
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -993,16 +950,10 @@ func TestContext2Validate_tainted(t *testing.T) {
}
m := testModule(t, "validate-good")
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
testSetResourceInstanceTainted(root, "aws_instance.foo", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
State: state,
})
p.ValidateResourceConfigFn = func(req providers.ValidateResourceConfigRequest) providers.ValidateResourceConfigResponse {
@ -1015,7 +966,7 @@ func TestContext2Validate_tainted(t *testing.T) {
}
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -1044,23 +995,15 @@ func TestContext2Validate_targetedDestroy(t *testing.T) {
testSetResourceInstanceCurrent(root, "aws_instance.bar", `{"id":"i-abc123"}`, `provider["registry.terraform.io/hashicorp/aws"]`)
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Provisioners: map[string]provisioners.Factory{
"shell": testProvisionerFuncFixed(pr),
},
State: state,
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Resource(
addrs.ManagedResourceMode, "aws_instance", "foo",
),
},
PlanMode: plans.DestroyMode,
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -1081,16 +1024,9 @@ func TestContext2Validate_varRefUnknown(t *testing.T) {
},
}
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"foo": &InputValue{
Value: cty.StringVal("bar"),
SourceType: ValueFromCaller,
},
},
})
var value cty.Value
@ -1099,7 +1035,7 @@ func TestContext2Validate_varRefUnknown(t *testing.T) {
return providers.ValidateResourceConfigResponse{}
}
c.Validate()
c.Validate(m)
// Input variables are always unknown during the validate walk, because
// we're checking for validity of all possible input values. Validity
@ -1129,14 +1065,13 @@ func TestContext2Validate_interpolateVar(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("template"): testProviderFuncFixed(p),
},
UIInput: input,
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -1162,14 +1097,13 @@ func TestContext2Validate_interpolateComputedModuleVarDef(t *testing.T) {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
UIInput: input,
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -1183,14 +1117,13 @@ func TestContext2Validate_interpolateMap(t *testing.T) {
p := testProvider("template")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("template"): testProviderFuncFixed(p),
},
UIInput: input,
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -1235,19 +1168,12 @@ resource "aws_instance" "foo" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
Variables: InputValues{
"bar": &InputValue{
Value: cty.StringVal("boop"),
SourceType: ValueFromCaller,
},
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1265,47 +1191,26 @@ resource "aws_instance" "foo" {
func TestContext2Validate_PlanGraphBuilder(t *testing.T) {
fixture := contextFixtureApplyVars(t)
opts := fixture.ContextOpts()
opts.Variables = InputValues{
"foo": &InputValue{
Value: cty.StringVal("us-east-1"),
SourceType: ValueFromCaller,
},
"test_list": &InputValue{
Value: cty.ListVal([]cty.Value{
cty.StringVal("Hello"),
cty.StringVal("World"),
}),
SourceType: ValueFromCaller,
},
"test_map": &InputValue{
Value: cty.MapVal(map[string]cty.Value{
"Hello": cty.StringVal("World"),
"Foo": cty.StringVal("Bar"),
"Baz": cty.StringVal("Foo"),
}),
SourceType: ValueFromCaller,
},
"amis": &InputValue{
Value: cty.MapVal(map[string]cty.Value{
"us-east-1": cty.StringVal("override"),
}),
SourceType: ValueFromCaller,
},
}
c := testContext2(t, opts)
graph, diags := (&PlanGraphBuilder{
Config: c.config,
state := states.NewState()
schemas, diags := c.Schemas(fixture.Config, state)
assertNoDiagnostics(t, diags)
graph, diags := ValidateGraphBuilder(&PlanGraphBuilder{
Config: fixture.Config,
State: states.NewState(),
Components: c.components,
Schemas: c.schemas,
Targets: c.targets,
Schemas: schemas,
}).Build(addrs.RootModuleInstance)
if diags.HasErrors() {
t.Fatalf("errors from PlanGraphBuilder: %s", diags.Err())
}
defer c.acquireRun("validate-test")()
walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{})
walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{
Config: fixture.Config,
Schemas: schemas,
})
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1326,13 +1231,12 @@ output "out" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1363,13 +1267,12 @@ resource "aws_instance" "foo" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1402,11 +1305,9 @@ output "root" {
}`,
})
ctx := testContext2(t, &ContextOpts{
Config: m,
})
ctx := testContext2(t, &ContextOpts{})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1424,13 +1325,12 @@ output "out" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1455,13 +1355,12 @@ output "out" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1486,13 +1385,12 @@ output "out" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1516,13 +1414,12 @@ resource "test_instance" "bar" {
p := testProvider("test")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1549,13 +1446,12 @@ resource "test_instance" "bar" {
p := testProvider("test")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1574,13 +1470,12 @@ func TestContext2Validate_variableCustomValidationsFail(t *testing.T) {
p := testProvider("test")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1609,19 +1504,12 @@ variable "test" {
p := testProvider("test")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
Variables: InputValues{
"test": &InputValue{
Value: cty.UnknownVal(cty.String),
SourceType: ValueFromCLIArg,
},
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error\ngot: %s", diags.Err().Error())
}
@ -1677,13 +1565,12 @@ resource "aws_instance" "foo" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
@ -1705,13 +1592,12 @@ resource "aws_instance" "foo" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1736,13 +1622,12 @@ resource "aws_instance" "foo" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1818,13 +1703,12 @@ output "out" {
p := testProvider("aws")
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
@ -1851,9 +1735,7 @@ output "out" {
`,
})
diags := testContext2(t, &ContextOpts{
Config: m,
}).Validate()
diags := testContext2(t, &ContextOpts{}).Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1891,9 +1773,7 @@ output "out" {
`,
})
diags := testContext2(t, &ContextOpts{
Config: m,
}).Validate()
diags := testContext2(t, &ContextOpts{}).Validate(m)
if !diags.HasErrors() {
t.Fatal("succeeded; want errors")
}
@ -1937,12 +1817,11 @@ resource "test_instance" "a" {
}
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.Err())
}
@ -1977,7 +1856,6 @@ func TestContext2Validate_sensitiveProvisionerConfig(t *testing.T) {
pr := simpleMockProvisioner()
c := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p),
},
@ -1993,7 +1871,7 @@ func TestContext2Validate_sensitiveProvisionerConfig(t *testing.T) {
return pr.ValidateProvisionerConfigResponse
}
diags := c.Validate()
diags := c.Validate(m)
if diags.HasErrors() {
t.Fatalf("unexpected error: %s", diags.Err())
}
@ -2082,13 +1960,12 @@ resource "test_instance" "c" {
`})
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}
@ -2150,13 +2027,12 @@ resource "test_object" "t" {
p := simpleMockProvider()
ctx := testContext2(t, &ContextOpts{
Config: m,
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
diags := ctx.Validate()
diags := ctx.Validate(m)
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}

View File

@ -0,0 +1,122 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/instances"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/refactoring"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// graphWalkOpts captures some transient values we use (and possibly mutate)
// during a graph walk.
//
// The way these options get used unfortunately varies between the different
// walkOperation types. This is a historical design wart that dates back to
// us using the same graph structure for all operations; hopefully we'll
// make the necessary differences between the walk types more explicit someday.
type graphWalkOpts struct {
InputState *states.State
Changes *plans.Changes
Config *configs.Config
Schemas *Schemas
RootVariableValues InputValues
MoveResults map[addrs.UniqueKey]refactoring.MoveResult
}
func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) {
log.Printf("[DEBUG] Starting graph walk: %s", operation.String())
walker := c.graphWalker(operation, opts)
// Watch for a stop so we can call the provider Stop() API.
watchStop, watchWait := c.watchStop(walker)
// Walk the real graph, this will block until it completes
diags := graph.Walk(walker)
// Close the channel so the watcher stops, and wait for it to return.
close(watchStop)
<-watchWait
return walker, diags
}
func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *ContextGraphWalker {
var state *states.SyncState
var refreshState *states.SyncState
var prevRunState *states.SyncState
// NOTE: None of the SyncState objects may directly wrap opts.InputState,
// because we use them to mutate the state during the walk, while
// opts.InputState belongs to our caller and so must be treated as immutable.
//
// To account for that, most of our SyncState values created below end up
// wrapping a _deep copy_ of opts.InputState instead.
inputState := opts.InputState
if inputState == nil {
// Lots of callers use nil to represent the "empty" case where we've
// not run Apply yet, so we tolerate that.
inputState = states.NewState()
}
switch operation {
case walkValidate:
// validate should not use any state
state = states.NewState().SyncWrapper()
// validate currently uses the plan graph, so we have to populate the
// refreshState and the prevRunState.
refreshState = states.NewState().SyncWrapper()
prevRunState = states.NewState().SyncWrapper()
case walkPlan, walkPlanDestroy:
state = inputState.DeepCopy().SyncWrapper()
refreshState = inputState.DeepCopy().SyncWrapper()
prevRunState = inputState.DeepCopy().SyncWrapper()
default:
state = inputState.DeepCopy().SyncWrapper()
// Only plan-like walks use refreshState and prevRunState
}
changes := opts.Changes
if changes == nil {
// Several of our non-plan walks end up sharing codepaths with the
// plan walk and thus expect to generate planned changes even though
// we don't care about them. To avoid those crashing, we'll just
// insert a placeholder changes object that will be discarded
// afterwards.
changes = plans.NewChanges()
}
if opts.Schemas == nil {
// Should never happen: caller must always set this one.
// (We catch this here, rather than later, to get a more intelligible
// stack trace when it _does_ panic.)
panic("Context.graphWalker call without Schemas")
}
if opts.Config == nil {
panic("Context.graphWalker call without Config")
}
return &ContextGraphWalker{
Context: c,
State: state,
Config: opts.Config,
Schemas: opts.Schemas,
RefreshState: refreshState,
PrevRunState: prevRunState,
Changes: changes.SyncWrapper(),
InstanceExpander: instances.NewExpander(),
MoveResults: opts.MoveResults,
Operation: operation,
StopContext: c.runContext,
RootVariableValues: opts.RootVariableValues,
}
}
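For illustration, a plan-like walk would thread its inputs through these options roughly as below; because graphWalker deep-copies InputState, the caller's state is never mutated in place. The field values here are assumptions modeled on the Validate call shown earlier:

    walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{
        Config:             config,
        Schemas:            schemas,
        InputState:         prevRunState, // deep-copied inside graphWalker
        RootVariableValues: rootVariableValues,
    })
    diags = diags.Append(walker.NonFatalDiagnostics)
    diags = diags.Append(walkDiags)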


@ -42,7 +42,17 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics {
log.Printf("[TRACE] vertex %q: starting visit (%T)", dag.VertexName(v), v)
defer func() {
log.Printf("[TRACE] vertex %q: visit complete", dag.VertexName(v))
if diags.HasErrors() {
for _, diag := range diags {
if diag.Severity() == tfdiags.Error {
desc := diag.Description()
log.Printf("[ERROR] vertex %q error: %s", dag.VertexName(v), desc.Summary)
}
}
log.Printf("[TRACE] vertex %q: visit complete, with errors", dag.VertexName(v))
} else {
log.Printf("[TRACE] vertex %q: visit complete", dag.VertexName(v))
}
}()
// vertexCtx is the context that we use when evaluating. This

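Based on the format strings above, a failing vertex now produces trace-log output like the following (the vertex name and diagnostic summary here are hypothetical):

    [TRACE] vertex "aws_instance.foo": starting visit (*terraform.NodeValidatableResource)
    [ERROR] vertex "aws_instance.foo" error: Unsupported attribute
    [TRACE] vertex "aws_instance.foo": visit complete, with errors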

@ -522,7 +522,7 @@ func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) {
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
addrs.ConfigResource{
{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
@ -626,7 +626,7 @@ func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) {
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
addrs.ConfigResource{
{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",


@ -7,6 +7,7 @@ import (
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/instances"
"github.com/hashicorp/terraform/internal/plans"
@ -33,6 +34,8 @@ type ContextGraphWalker struct {
Operation walkOperation
StopContext context.Context
RootVariableValues InputValues
Schemas *Schemas
Config *configs.Config
// This is an output. Do not set this, nor read it while a graph walk
// is in progress.
@ -74,11 +77,11 @@ func (w *ContextGraphWalker) EvalContext() EvalContext {
// different modules.
evaluator := &Evaluator{
Meta: w.Context.meta,
Config: w.Context.config,
Config: w.Config,
Operation: w.Operation,
State: w.State,
Changes: w.Changes,
Schemas: w.Context.schemas,
Schemas: w.Schemas,
VariableValues: w.variableValues,
VariableValuesLock: &w.variableValuesLock,
}
@ -89,7 +92,7 @@ func (w *ContextGraphWalker) EvalContext() EvalContext {
InputValue: w.Context.uiInput,
InstanceExpanderValue: w.InstanceExpander,
Components: w.Context.components,
Schemas: w.Context.schemas,
Schemas: w.Schemas,
MoveResultsValue: w.MoveResults,
ProviderCache: w.providerCache,
ProviderInputConfig: w.Context.providerInputConfig,


@ -1,29 +0,0 @@
// Code generated by "stringer -type=GraphType context_graph_type.go"; DO NOT EDIT.
package terraform
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[GraphTypeInvalid-0]
_ = x[GraphTypePlan-1]
_ = x[GraphTypePlanDestroy-2]
_ = x[GraphTypePlanRefreshOnly-3]
_ = x[GraphTypeApply-4]
_ = x[GraphTypeValidate-5]
_ = x[GraphTypeEval-6]
}
const _GraphType_name = "GraphTypeInvalidGraphTypePlanGraphTypePlanDestroyGraphTypePlanRefreshOnlyGraphTypeApplyGraphTypeValidateGraphTypeEval"
var _GraphType_index = [...]uint8{0, 16, 29, 49, 73, 87, 104, 117}
func (i GraphType) String() string {
if i >= GraphType(len(_GraphType_index)-1) {
return "GraphType(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _GraphType_name[_GraphType_index[i]:_GraphType_index[i+1]]
}


@ -35,10 +35,13 @@ func (n *NodeApplyableProvider) Execute(ctx EvalContext, op walkOperation) (diag
switch op {
case walkValidate:
log.Printf("[TRACE] NodeApplyableProvider: validating configuration for %s", n.Addr)
return diags.Append(n.ValidateProvider(ctx, provider))
case walkPlan, walkApply, walkDestroy:
log.Printf("[TRACE] NodeApplyableProvider: configuring %s", n.Addr)
return diags.Append(n.ConfigureProvider(ctx, provider, false))
case walkImport:
log.Printf("[TRACE] NodeApplyableProvider: configuring %s (requiring that configuration is wholly known)", n.Addr)
return diags.Append(n.ConfigureProvider(ctx, provider, true))
}
return diags


@ -150,6 +150,7 @@ func (n *NodeAbstractResourceInstance) AttachResourceState(s *states.Resource) {
log.Printf("[WARN] attaching nil state to %s", n.Addr)
return
}
log.Printf("[TRACE] NodeAbstractResourceInstance.AttachResourceState for %s", n.Addr)
n.instanceState = s.Instance(n.Addr.Resource.Key)
n.storedProviderConfig = s.ProviderConfig
}


@ -227,6 +227,19 @@ func (vv InputValues) Identical(other InputValues) bool {
return true
}
func mergeDefaultInputVariableValues(setVals InputValues, rootVarsConfig map[string]*configs.Variable) InputValues {
var variables InputValues
// Default variables from the configuration seed our map.
variables = DefaultVariableValues(rootVarsConfig)
// Variables provided by the caller (from CLI, environment, etc) can
// override the defaults.
variables = variables.Override(setVals)
return variables
}
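A quick illustration of the merge order, under the assumption of a hypothetical root variable named "region" that has a default in configuration and is also set by the caller:

    defaults := DefaultVariableValues(config.Module.Variables)
    // Suppose defaults["region"] carries cty.StringVal("us-east-1") from the config.
    set := InputValues{
        "region": &InputValue{
            Value:      cty.StringVal("eu-west-1"),
            SourceType: ValueFromCLIArg,
        },
    }
    merged := defaults.Override(set)
    // merged["region"].Value is cty.StringVal("eu-west-1"): caller-set values win.
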
// checkInputVariables ensures that variable values supplied at the UI conform
// to their corresponding declarations in configuration.
//