terraform/internal/terraform/eval_variable.go

package terraform

import (
	"fmt"
	"log"
	"strings"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/lang/marks"
	"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
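
// prepareFinalInputVariableValue produces the "final" value for an input
// variable from the raw value provided by the caller: it substitutes the
// declared default when the variable isn't set, converts the result to the
// declared type constraint, and applies the "nullable" rules, returning
// error diagnostics (and a placeholder unknown value) when the given value
// is unsuitable.
//
// For illustration, given a hypothetical declaration like the one below,
// a caller-supplied null would be replaced by "fallback", because the
// variable is declared as non-nullable and has a default:
//
//	variable "example" {
//	  type     = string
//	  nullable = false
//	  default  = "fallback"
//	}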
func prepareFinalInputVariableValue(addr addrs.AbsInputVariableInstance, raw *InputValue, cfg *configs.Variable) (cty.Value, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics

	convertTy := cfg.ConstraintType
	log.Printf("[TRACE] prepareFinalInputVariableValue: preparing %s", addr)

	var defaultVal cty.Value
	if cfg.Default != cty.NilVal {
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s has a default value", addr)
		var err error
		defaultVal, err = convert.Convert(cfg.Default, convertTy)
		if err != nil {
			// Validation of the declaration should typically catch this,
			// but we'll check it here too to be robust.
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid default value for module argument",
				Detail: fmt.Sprintf(
					"The default value for variable %q is incompatible with its type constraint: %s.",
					cfg.Name, err,
				),
				Subject: &cfg.DeclRange,
			})
			// We'll return a placeholder unknown value to avoid producing
			// redundant downstream errors.
			return cty.UnknownVal(cfg.Type), diags
		}
	}
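
	// Figure out where the value came from, so that later error messages
	// can either point at a file-based source location or describe the
	// non-file mechanism (CLI argument, environment variable, or prompt)
	// that was used instead.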
	var sourceRange tfdiags.SourceRange
	var nonFileSource string
	if raw.HasSourceRange() {
		sourceRange = raw.SourceRange
	} else {
		// If the value came from a place that isn't a file and thus doesn't
		// have its own source range, we'll use the declaration range as
		// our source range and generate some slightly different error
		// messages.
		sourceRange = tfdiags.SourceRangeFromHCL(cfg.DeclRange)
		switch raw.SourceType {
		case ValueFromCLIArg:
			nonFileSource = fmt.Sprintf("set using -var=\"%s=...\"", addr.Variable.Name)
		case ValueFromEnvVar:
			nonFileSource = fmt.Sprintf("set using the TF_VAR_%s environment variable", addr.Variable.Name)
		case ValueFromInput:
			nonFileSource = "set using an interactive prompt"
		default:
			nonFileSource = "set from outside of the configuration"
		}
	}

	given := raw.Value
	if given == cty.NilVal { // The variable wasn't set at all (even to null)
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s has no defined value", addr)
		if cfg.Required() {
			// NOTE: The CLI layer typically checks for itself whether all of
			// the required _root_ module variables are set, which would
			// mask this error with a more specific one that refers to the
			// CLI features for setting such variables. We can get here for
			// child module variables, though.
			log.Printf("[ERROR] prepareFinalInputVariableValue: %s is required but is not set", addr)
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  `Required variable not set`,
				Detail:   fmt.Sprintf(`The variable %q is required, but is not set.`, addr.Variable.Name),
				Subject:  cfg.DeclRange.Ptr(),
			})
			// We'll return a placeholder unknown value to avoid producing
			// redundant downstream errors.
			return cty.UnknownVal(cfg.Type), diags
		}
		given = defaultVal // must be set, because we checked above that the variable isn't required
	}
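
	// Apply the declared type constraint to whichever value we ended up
	// with, whether it was given by the caller or taken from the default.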
	val, err := convert.Convert(given, convertTy)
	if err != nil {
log.Printf("[ERROR] prepareFinalInputVariableValue: %s has unsuitable type\n got: %s\n want: %s", addr, given.Type(), convertTy)
if nonFileSource != "" {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid value for input variable",
Detail: fmt.Sprintf(
"Unsuitable value for %s %s: %s.",
addr, nonFileSource, err,
),
Subject: cfg.DeclRange.Ptr(),
})
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid value for input variable",
Detail: fmt.Sprintf(
"The given value is not suitable for %s declared at %s: %s.",
addr, cfg.DeclRange.String(), err,
),
Subject: sourceRange.ToHCL().Ptr(),
})
}
		// We'll return a placeholder unknown value to avoid producing
		// redundant downstream errors.
		return cty.UnknownVal(cfg.Type), diags
	}

	// By the time we get here, we know:
	// - val matches the variable's type constraint
	// - val is definitely not cty.NilVal, but might be a null value if the
	//   given value was already null.
	//
	// That means we just need to handle the case where the value is null,
	// which might mean we need to use the default value, or produce an error.
	//
	// For historical reasons we do this only for a "non-nullable" variable.
	// Nullable variables just appear as null if they were set to null,
	// regardless of any default value.
	if val.IsNull() && !cfg.Nullable {
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s is defined as null", addr)
		if defaultVal != cty.NilVal {
			val = defaultVal
		} else {
log.Printf("[ERROR] prepareFinalInputVariableValue: %s is non-nullable but set to null, and is required", addr)
if nonFileSource != "" {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: `Required variable not set`,
Detail: fmt.Sprintf(
"Unsuitable value for %s %s: required variable may not be set to null.",
addr, nonFileSource,
),
Subject: cfg.DeclRange.Ptr(),
})
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: `Required variable not set`,
Detail: fmt.Sprintf(
"The given value is not suitable for %s defined at %s: required variable may not be set to null.",
addr, cfg.DeclRange.String(),
),
Subject: sourceRange.ToHCL().Ptr(),
})
}
			// Stub out our return value so that the semantic checker doesn't
			// produce redundant downstream errors.
			val = cty.UnknownVal(cfg.Type)
		}
	}

	return val, diags
}
// evalVariableValidations ensures that all of the configured custom validations
// for a variable are passing.
//
// This must be used only after any side-effects that make the value of the
// variable available for use in expression evaluation, such as
// EvalModuleCallArgument for variables in descendent modules.
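//
// For illustration, each rule checked here comes from a "validation"
// block in a (hypothetical) variable declaration such as:
//
//	variable "example" {
//	  type = number
//	  validation {
//	    condition     = var.example > 0
//	    error_message = "The example value must be greater than zero."
//	  }
//	}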
func evalVariableValidations(addr addrs.AbsInputVariableInstance, config *configs.Variable, expr hcl.Expression, ctx EvalContext) (diags tfdiags.Diagnostics) {
	if config == nil || len(config.Validations) == 0 {
log.Printf("[TRACE] evalVariableValidations: no validation rules declared for %s, so skipping", addr)
return nil
}
log.Printf("[TRACE] evalVariableValidations: validating %s", addr)
// Variable nodes evaluate in the parent module to where they were declared
// because the value expression (n.Expr, if set) comes from the calling
// "module" block in the parent module.
//
// Validation expressions are statically validated (during configuration
// loading) to refer only to the variable being validated, so we can
// bypass our usual evaluation machinery here and just produce a minimal
// evaluation context containing just the required value, and thus avoid
// the problem that ctx's evaluation functions refer to the wrong module.
val := ctx.GetVariableValue(addr)
	if val == cty.NilVal {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "No final value for variable",
			Detail:   fmt.Sprintf("Terraform doesn't have a final value for %s during validation. This is a bug in Terraform; please report it!", addr),
		})
		return diags
	}

	hclCtx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"var": cty.ObjectVal(map[string]cty.Value{
				config.Name: val,
			}),
		},
		Functions: ctx.EvaluationScope(nil, EvalDataForNoInstanceKey).Functions(),
	}
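
	// Each rule below is evaluated independently, so that one failing rule
	// doesn't prevent the remaining rules from also being checked and
	// reported.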
	for _, validation := range config.Validations {
		const errInvalidCondition = "Invalid variable validation result"
		const errInvalidValue = "Invalid value for variable"
		var ruleDiags tfdiags.Diagnostics

		result, moreDiags := validation.Condition.Value(hclCtx)
		ruleDiags = ruleDiags.Append(moreDiags)

		errorValue, errorDiags := validation.ErrorMessage.Value(hclCtx)

		// The following error handling is a workaround to preserve backwards
		// compatibility. Due to an implementation quirk, all prior versions of
		// Terraform would treat error messages specified using JSON
		// configuration syntax (.tf.json) as string literals, even if they
		// contained the "${" template expression operator. This behaviour did
		// not match that of HCL configuration syntax, where a template
		// expression would result in a validation error.
		//
		// As a result, users writing or generating JSON configuration syntax
		// may have specified error messages which are invalid template
		// expressions. As we add support for error message expressions, we are
		// unable to perfectly distinguish between these two cases.
		//
		// To ensure that we don't break backwards compatibility, we have the
		// below fallback logic if the error message fails to evaluate. This
		// should only have any effect for JSON configurations. The gohcl
		// DecodeExpression function behaves differently when the source of the
		// expression is a JSON configuration file and a nil context is passed.
		if errorDiags.HasErrors() {
			// Attempt to decode the expression as a string literal. Passing
			// nil as the context forces a JSON syntax string value to be
			// interpreted as a string literal.
			var errorString string
			moreErrorDiags := gohcl.DecodeExpression(validation.ErrorMessage, nil, &errorString)
			if !moreErrorDiags.HasErrors() {
				// Decoding succeeded, meaning that this is a JSON syntax
				// string value. We rewrap that as a cty value to allow later
				// decoding to succeed.
				errorValue = cty.StringVal(errorString)

				// This warning diagnostic explains this odd behaviour, while
				// giving us an escape hatch to change this to a hard failure
				// in some future Terraform 1.x version.
				errorDiags = hcl.Diagnostics{
					&hcl.Diagnostic{
						Severity:    hcl.DiagWarning,
						Summary:     "Validation error message expression is invalid",
						Detail:      fmt.Sprintf("The error message provided could not be evaluated as an expression, so Terraform is interpreting it as a string literal.\n\nIn future versions of Terraform, this will be considered an error. Please file a GitHub issue if this would break your workflow.\n\n%s", errorDiags.Error()),
						Subject:     validation.ErrorMessage.Range().Ptr(),
						Context:     validation.DeclRange.Ptr(),
						Expression:  validation.ErrorMessage,
						EvalContext: hclCtx,
					},
				}
			}

			// We want to either report the original diagnostics if the
			// fallback failed, or the warning generated above if it succeeded.
			ruleDiags = ruleDiags.Append(errorDiags)
		}
		diags = diags.Append(ruleDiags)

		if ruleDiags.HasErrors() {
			log.Printf("[TRACE] evalVariableValidations: %s rule %s check rule evaluation failed: %s", addr, validation.DeclRange, ruleDiags.Err().Error())
		}
		if !result.IsKnown() {
			log.Printf("[TRACE] evalVariableValidations: %s rule %s condition value is unknown, so skipping validation for now", addr, validation.DeclRange)
			continue // We'll wait until we've learned more, then.
		}
		if result.IsNull() {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidCondition,
				Detail:      "Validation condition expression must return either true or false, not null.",
				Subject:     validation.Condition.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
			continue
		}
		var err error
		result, err = convert.Convert(result, cty.Bool)
		if err != nil {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidCondition,
				Detail:      fmt.Sprintf("Invalid validation condition result value: %s.", tfdiags.FormatError(err)),
				Subject:     validation.Condition.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
			continue
		}

		// Validation condition may be marked if the input variable is bound to
		// a sensitive value. This is irrelevant to the validation process, so
		// we discard the marks now.
		result, _ = result.Unmark()

		if result.True() {
			continue
		}
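
		// The condition is definitely false, so derive the error message to
		// report, falling back to a generic message if the declared
		// error_message can't be used.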
		var errorMessage string
		if !errorDiags.HasErrors() && errorValue.IsKnown() && !errorValue.IsNull() {
			var err error
			errorValue, err = convert.Convert(errorValue, cty.String)
			if err != nil {
				diags = diags.Append(&hcl.Diagnostic{
					Severity:    hcl.DiagError,
					Summary:     "Invalid error message",
					Detail:      fmt.Sprintf("Unsuitable value for error message: %s.", tfdiags.FormatError(err)),
					Subject:     validation.ErrorMessage.Range().Ptr(),
					Expression:  validation.ErrorMessage,
					EvalContext: hclCtx,
				})
			} else {
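				// If the message itself derives from sensitive values, we
				// can't safely display it, so report a generic message and
				// a separate error explaining why.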
				if marks.Has(errorValue, marks.Sensitive) {
					diags = diags.Append(&hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Error message refers to sensitive values",
						Detail: `The error expression used to explain this condition refers to sensitive values. Terraform will not display the resulting message.

You can correct this by removing references to sensitive values, or by carefully using the nonsensitive() function if the expression will not reveal the sensitive data.`,
						Subject:     validation.ErrorMessage.Range().Ptr(),
						Expression:  validation.ErrorMessage,
						EvalContext: hclCtx,
					})
					errorMessage = "The error message included a sensitive value, so it will not be displayed."
				} else {
					errorMessage = strings.TrimSpace(errorValue.AsString())
				}
			}
		}
		if errorMessage == "" {
			errorMessage = "Failed to evaluate condition error message."
		}
		if expr != nil {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidValue,
				Detail:      fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", errorMessage, validation.DeclRange.String()),
				Subject:     expr.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
		} else {
			// Since we don't have a source expression for a root module
			// variable, we'll just report the error from the perspective
			// of the variable declaration itself.
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidValue,
				Detail:      fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", errorMessage, validation.DeclRange.String()),
				Subject:     config.DeclRange.Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
		}
	}

	return diags
}