package terraform

import (
	"fmt"
	"log"
	"strings"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/lang/marks"
	"github.com/hashicorp/terraform/internal/tfdiags"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)
func prepareFinalInputVariableValue(addr addrs.AbsInputVariableInstance, raw *InputValue, cfg *configs.Variable) (cty.Value, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	convertTy := cfg.ConstraintType
	log.Printf("[TRACE] prepareFinalInputVariableValue: preparing %s", addr)

	var defaultVal cty.Value
	if cfg.Default != cty.NilVal {
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s has a default value", addr)
		var err error
		defaultVal, err = convert.Convert(cfg.Default, convertTy)
		if err != nil {
			// Validation of the declaration should typically catch this,
			// but we'll check it here too to be robust.
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid default value for module argument",
				Detail: fmt.Sprintf(
					"The default value for variable %q is incompatible with its type constraint: %s.",
					cfg.Name, err,
				),
				Subject: &cfg.DeclRange,
			})
			// We'll return a placeholder unknown value to avoid producing
			// redundant downstream errors.
			return cty.UnknownVal(cfg.Type), diags
		}
	}
	var sourceRange tfdiags.SourceRange
	var nonFileSource string
	if raw.HasSourceRange() {
		sourceRange = raw.SourceRange
	} else {
		// If the value came from a place that isn't a file and thus doesn't
		// have its own source range, we'll use the declaration range as
		// our source range and generate some slightly different error
		// messages.
		sourceRange = tfdiags.SourceRangeFromHCL(cfg.DeclRange)
		switch raw.SourceType {
		case ValueFromCLIArg:
			nonFileSource = fmt.Sprintf("set using -var=\"%s=...\"", addr.Variable.Name)
		case ValueFromEnvVar:
			nonFileSource = fmt.Sprintf("set using the TF_VAR_%s environment variable", addr.Variable.Name)
		case ValueFromInput:
			nonFileSource = "set using an interactive prompt"
		default:
			nonFileSource = "set from outside of the configuration"
		}
	}

	given := raw.Value
	if given == cty.NilVal { // The variable wasn't set at all (even to null)
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s has no defined value", addr)
		if cfg.Required() {
			// NOTE: The CLI layer typically checks for itself whether all of
			// the required _root_ module variables are set, which would
			// mask this error with a more specific one that refers to the
			// CLI features for setting such variables. We can get here for
			// child module variables, though.
			log.Printf("[ERROR] prepareFinalInputVariableValue: %s is required but is not set", addr)
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  `Required variable not set`,
				Detail:   fmt.Sprintf(`The variable %q is required, but is not set.`, addr.Variable.Name),
				Subject:  cfg.DeclRange.Ptr(),
			})
			// We'll return a placeholder unknown value to avoid producing
			// redundant downstream errors.
			return cty.UnknownVal(cfg.Type), diags
		}
		given = defaultVal // must be set, because we checked above that the variable isn't required
	}

	val, err := convert.Convert(given, convertTy)
	if err != nil {
		log.Printf("[ERROR] prepareFinalInputVariableValue: %s has unsuitable type\n got: %s\n want: %s", addr, given.Type(), convertTy)
		if nonFileSource != "" {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid value for input variable",
				Detail: fmt.Sprintf(
					"Unsuitable value for %s %s: %s.",
					addr, nonFileSource, err,
				),
				Subject: cfg.DeclRange.Ptr(),
			})
		} else {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid value for input variable",
				Detail: fmt.Sprintf(
					"The given value is not suitable for %s declared at %s: %s.",
					addr, cfg.DeclRange.String(), err,
				),
				Subject: sourceRange.ToHCL().Ptr(),
			})
		}
		// We'll return a placeholder unknown value to avoid producing
		// redundant downstream errors.
		return cty.UnknownVal(cfg.Type), diags
	}

	// By the time we get here, we know:
	// - val matches the variable's type constraint
	// - val is definitely not cty.NilVal, but might be a null value if the
	//   given value was already null.
	//
	// That means we just need to handle the case where the value is null,
	// which might mean we need to use the default value, or produce an error.
	//
	// For historical reasons we do this only for a "non-nullable" variable.
	// Nullable variables just appear as null if they were set to null,
	// regardless of any default value.
	if val.IsNull() && !cfg.Nullable {
		log.Printf("[TRACE] prepareFinalInputVariableValue: %s is defined as null", addr)
		if defaultVal != cty.NilVal {
			val = defaultVal
		} else {
			log.Printf("[ERROR] prepareFinalInputVariableValue: %s is non-nullable but set to null, and is required", addr)
			if nonFileSource != "" {
				diags = diags.Append(&hcl.Diagnostic{
					Severity: hcl.DiagError,
					Summary:  `Required variable not set`,
					Detail: fmt.Sprintf(
						"Unsuitable value for %s %s: required variable may not be set to null.",
						addr, nonFileSource,
					),
					Subject: cfg.DeclRange.Ptr(),
				})
			} else {
				diags = diags.Append(&hcl.Diagnostic{
					Severity: hcl.DiagError,
					Summary:  `Required variable not set`,
					Detail: fmt.Sprintf(
						"The given value is not suitable for %s defined at %s: required variable may not be set to null.",
						addr, cfg.DeclRange.String(),
					),
					Subject: sourceRange.ToHCL().Ptr(),
				})
			}
			// Stub out our return value so that the semantic checker doesn't
			// produce redundant downstream errors.
			val = cty.UnknownVal(cfg.Type)
		}
	}

	return val, diags
}
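The precedence implemented by prepareFinalInputVariableValue above — an explicitly given value wins, an unset variable falls back to its default (or errors if required), and a null value on a non-nullable variable also falls back to the default — can be sketched independently of cty and HCL. The `val` type and `finalValue` function below are hypothetical stand-ins for illustration, not part of this package:

```go
package main

import (
	"errors"
	"fmt"
)

// val is a toy stand-in for cty.Value: Set=false plays the role of
// cty.NilVal (no value at all), and Null=true plays the role of a
// null value. Hypothetical type, for illustration only.
type val struct {
	Set, Null bool
	S         string
}

// finalValue mirrors the resolution order sketched above: unset falls
// back to the default (or errors if the variable is required), and a
// null value on a non-nullable variable also falls back to the default
// (or errors if there is none).
func finalValue(given, def val, required, nullable bool) (val, error) {
	if !given.Set { // the variable wasn't set at all (even to null)
		if required {
			return val{}, errors.New("required variable not set")
		}
		given = def // the default must exist, since the variable isn't required
	}
	if given.Null && !nullable {
		if def.Set {
			return def, nil
		}
		return val{}, errors.New("required variable may not be set to null")
	}
	return given, nil
}

func main() {
	def := val{Set: true, S: "fallback"}
	v, err := finalValue(val{Set: true, Null: true}, def, false, false)
	fmt.Println(v.S, err) // null on a non-nullable variable falls back to the default
}
```

The real implementation additionally converts both the default and the given value to the variable's type constraint before making these decisions, which this sketch omits.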

// evalVariableValidations ensures that all of the configured custom validations
// for a variable are passing.
//
// This must be used only after any side-effects that make the value of the
// variable available for use in expression evaluation, such as
// EvalModuleCallArgument for variables in descendant modules.
func evalVariableValidations(addr addrs.AbsInputVariableInstance, config *configs.Variable, expr hcl.Expression, ctx EvalContext) (diags tfdiags.Diagnostics) {
	if config == nil || len(config.Validations) == 0 {
		log.Printf("[TRACE] evalVariableValidations: no validation rules declared for %s, so skipping", addr)
		return nil
	}
	log.Printf("[TRACE] evalVariableValidations: validating %s", addr)

	// Variable nodes evaluate in the parent module to where they were declared
	// because the value expression (n.Expr, if set) comes from the calling
	// "module" block in the parent module.
	//
	// Validation expressions are statically validated (during configuration
	// loading) to refer only to the variable being validated, so we can
	// bypass our usual evaluation machinery here and just produce a minimal
	// evaluation context containing just the required value, and thus avoid
	// the problem that ctx's evaluation functions refer to the wrong module.
	val := ctx.GetVariableValue(addr)
	if val == cty.NilVal {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "No final value for variable",
			Detail:   fmt.Sprintf("Terraform doesn't have a final value for %s during validation. This is a bug in Terraform; please report it!", addr),
		})
		return diags
	}

	hclCtx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"var": cty.ObjectVal(map[string]cty.Value{
				config.Name: val,
			}),
		},
		Functions: ctx.EvaluationScope(nil, EvalDataForNoInstanceKey).Functions(),
	}

	for _, validation := range config.Validations {
		const errInvalidCondition = "Invalid variable validation result"
		const errInvalidValue = "Invalid value for variable"
		var ruleDiags tfdiags.Diagnostics

		result, moreDiags := validation.Condition.Value(hclCtx)
		ruleDiags = ruleDiags.Append(moreDiags)
		errorValue, errorDiags := validation.ErrorMessage.Value(hclCtx)
		// The following error handling is a workaround to preserve backwards
		// compatibility. Due to an implementation quirk, all prior versions of
		// Terraform would treat error messages specified using JSON
		// configuration syntax (.tf.json) as string literals, even if they
		// contained the "${" template expression operator. This behaviour did
		// not match that of HCL configuration syntax, where a template
		// expression would result in a validation error.
		//
		// As a result, users writing or generating JSON configuration syntax
		// may have specified error messages which are invalid template
		// expressions. As we add support for error message expressions, we are
		// unable to perfectly distinguish between these two cases.
		//
		// To ensure that we don't break backwards compatibility, we have the
		// below fallback logic if the error message fails to evaluate. This
		// should only have any effect for JSON configurations. The gohcl
		// DecodeExpression function behaves differently when the source of the
		// expression is a JSON configuration file and a nil context is passed.
		if errorDiags.HasErrors() {
			// Attempt to decode the expression as a string literal. Passing
			// nil as the context forces a JSON syntax string value to be
			// interpreted as a string literal.
			var errorString string
			moreErrorDiags := gohcl.DecodeExpression(validation.ErrorMessage, nil, &errorString)
			if !moreErrorDiags.HasErrors() {
				// Decoding succeeded, meaning that this is a JSON syntax
				// string value. We rewrap that as a cty value to allow later
				// decoding to succeed.
				errorValue = cty.StringVal(errorString)

				// This warning diagnostic explains this odd behaviour, while
				// giving us an escape hatch to change this to a hard failure
				// in some future Terraform 1.x version.
				errorDiags = hcl.Diagnostics{
					&hcl.Diagnostic{
						Severity:    hcl.DiagWarning,
						Summary:     "Validation error message expression is invalid",
						Detail:      fmt.Sprintf("The error message provided could not be evaluated as an expression, so Terraform is interpreting it as a string literal.\n\nIn future versions of Terraform, this will be considered an error. Please file a GitHub issue if this would break your workflow.\n\n%s", errorDiags.Error()),
						Subject:     validation.ErrorMessage.Range().Ptr(),
						Context:     validation.DeclRange.Ptr(),
						Expression:  validation.ErrorMessage,
						EvalContext: hclCtx,
					},
				}
			}

			// We want to either report the original diagnostics if the
			// fallback failed, or the warning generated above if it succeeded.
			ruleDiags = ruleDiags.Append(errorDiags)
		}
		diags = diags.Append(ruleDiags)
		if ruleDiags.HasErrors() {
			log.Printf("[TRACE] evalVariableValidations: %s rule %s check rule evaluation failed: %s", addr, validation.DeclRange, ruleDiags.Err().Error())
		}
		if !result.IsKnown() {
			log.Printf("[TRACE] evalVariableValidations: %s rule %s condition value is unknown, so skipping validation for now", addr, validation.DeclRange)
			continue // We'll wait until we've learned more, then.
		}
		if result.IsNull() {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidCondition,
				Detail:      "Validation condition expression must return either true or false, not null.",
				Subject:     validation.Condition.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
			continue
		}
		var err error
		result, err = convert.Convert(result, cty.Bool)
		if err != nil {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidCondition,
				Detail:      fmt.Sprintf("Invalid validation condition result value: %s.", tfdiags.FormatError(err)),
				Subject:     validation.Condition.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
			continue
		}
		// Validation condition may be marked if the input variable is bound to
		// a sensitive value. This is irrelevant to the validation process, so
		// we discard the marks now.
		result, _ = result.Unmark()
		if result.True() {
			continue
		}

		var errorMessage string
		if !errorDiags.HasErrors() && errorValue.IsKnown() && !errorValue.IsNull() {
			var err error
			errorValue, err = convert.Convert(errorValue, cty.String)
			if err != nil {
				diags = diags.Append(&hcl.Diagnostic{
					Severity:    hcl.DiagError,
					Summary:     "Invalid error message",
					Detail:      fmt.Sprintf("Unsuitable value for error message: %s.", tfdiags.FormatError(err)),
					Subject:     validation.ErrorMessage.Range().Ptr(),
					Expression:  validation.ErrorMessage,
					EvalContext: hclCtx,
				})
			} else {
				if marks.Has(errorValue, marks.Sensitive) {
					diags = diags.Append(&hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Error message refers to sensitive values",
						Detail: `The error expression used to explain this condition refers to sensitive values. Terraform will not display the resulting message.

You can correct this by removing references to sensitive values, or by carefully using the nonsensitive() function if the expression will not reveal the sensitive data.`,
						Subject:     validation.ErrorMessage.Range().Ptr(),
						Expression:  validation.ErrorMessage,
						EvalContext: hclCtx,
					})
					errorMessage = "The error message included a sensitive value, so it will not be displayed."
				} else {
					errorMessage = strings.TrimSpace(errorValue.AsString())
				}
			}
		}
		if errorMessage == "" {
			errorMessage = "Failed to evaluate condition error message."
		}
		if expr != nil {
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidValue,
				Detail:      fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", errorMessage, validation.DeclRange.String()),
				Subject:     expr.Range().Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
		} else {
			// Since we don't have a source expression for a root module
			// variable, we'll just report the error from the perspective
			// of the variable declaration itself.
			diags = diags.Append(&hcl.Diagnostic{
				Severity:    hcl.DiagError,
				Summary:     errInvalidValue,
				Detail:      fmt.Sprintf("%s\n\nThis was checked by the validation rule at %s.", errorMessage, validation.DeclRange.String()),
				Subject:     config.DeclRange.Ptr(),
				Expression:  validation.Condition,
				EvalContext: hclCtx,
			})
		}
	}
	return diags
}