terraform/configs/parser_config.go

package configs

import (
	"github.com/hashicorp/hcl/v2"
)

// LoadConfigFile reads the file at the given path and parses it as a config
// file.
//
// If the file cannot be read -- for example, if it does not exist -- then
// a nil *File will be returned along with error diagnostics. Callers may wish
// to disregard the returned diagnostics in this case and instead generate
// their own error message(s) with additional context.
//
// If the returned diagnostics has errors when a non-nil *File is returned
// then the file may be incomplete but should be valid enough for careful
// static analysis.
//
// This method wraps LoadHCLFile, and so it inherits the syntax selection
// behaviors documented for that method.
func (p *Parser) LoadConfigFile(path string) (*File, hcl.Diagnostics) {
	return p.loadConfigFile(path, false)
}
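
// loadMainConfigSketch is a minimal illustrative sketch (not part of the
// package API) of how a caller might use LoadConfigFile and interpret its
// results; the function name and the "main.tf" path are placeholders only.
func loadMainConfigSketch(p *Parser) (*File, hcl.Diagnostics) {
	file, diags := p.LoadConfigFile("main.tf")
	if file == nil {
		// The file could not be read at all. Callers may discard diags here
		// and produce their own error message with more context.
		return nil, diags
	}
	// Even if diags contains errors, the non-nil *File is usually complete
	// enough for careful static analysis.
	return file, diags
}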

// LoadConfigFileOverride is the same as LoadConfigFile except that it relaxes
// certain required attribute constraints in order to interpret the given
// file as an overrides file.
func (p *Parser) LoadConfigFileOverride(path string) (*File, hcl.Diagnostics) {
	return p.loadConfigFile(path, true)
}
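
// exampleOverrideHCL is an illustrative fragment (not referenced elsewhere in
// the package) of the kind of file LoadConfigFileOverride is meant for: an
// override file may redefine only part of an existing block, omitting
// arguments that would normally be required. The block contents shown here
// are placeholders.
const exampleOverrideHCL = `
output "instance_ip" {
  # No "value" argument here; the base configuration already defines it and
  # this override only changes the sensitivity.
  sensitive = true
}
`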

func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnostics) {
	body, diags := p.LoadHCLFile(path)
	if body == nil {
		return nil, diags
	}

	file := &File{}

	var reqDiags hcl.Diagnostics
	file.CoreVersionConstraints, reqDiags = sniffCoreVersionRequirements(body)
	diags = append(diags, reqDiags...)

	// We'll load the experiments first because other decoding logic in the
	// loop below might depend on these experiments.
	var expDiags hcl.Diagnostics
	file.ActiveExperiments, expDiags = sniffActiveExperiments(body)
	diags = append(diags, expDiags...)

	content, contentDiags := body.Content(configFileSchema)
	diags = append(diags, contentDiags...)

	for _, block := range content.Blocks {
		switch block.Type {

		case "terraform":
			content, contentDiags := block.Body.Content(terraformBlockSchema)
			diags = append(diags, contentDiags...)

			// We ignore the "required_version", "language" and "experiments"
			// attributes here because sniffCoreVersionRequirements and
			// sniffActiveExperiments already dealt with those above.

			for _, innerBlock := range content.Blocks {
				switch innerBlock.Type {

				case "backend":
					backendCfg, cfgDiags := decodeBackendBlock(innerBlock)
					diags = append(diags, cfgDiags...)
					if backendCfg != nil {
						file.Backends = append(file.Backends, backendCfg)
					}

				case "required_providers":
					reqs, reqsDiags := decodeRequiredProvidersBlock(innerBlock)
					diags = append(diags, reqsDiags...)
					file.RequiredProviders = append(file.RequiredProviders, reqs)

				case "provider_meta":
					providerCfg, cfgDiags := decodeProviderMetaBlock(innerBlock)
					diags = append(diags, cfgDiags...)
					if providerCfg != nil {
						file.ProviderMetas = append(file.ProviderMetas, providerCfg)
					}

				default:
					// Should never happen because the above cases should be exhaustive
					// for all block type names in our schema.
					continue
				}
			}

		case "required_providers":
			// required_providers should be nested inside a "terraform" block
			diags = append(diags, &hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid required_providers block",
				Detail:   "A \"required_providers\" block must be nested inside a \"terraform\" block.",
				Subject:  block.TypeRange.Ptr(),
			})

		case "provider":
			cfg, cfgDiags := decodeProviderBlock(block)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.ProviderConfigs = append(file.ProviderConfigs, cfg)
			}

		case "variable":
			cfg, cfgDiags := decodeVariableBlock(block, override)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.Variables = append(file.Variables, cfg)
			}

		case "locals":
			defs, defsDiags := decodeLocalsBlock(block)
			diags = append(diags, defsDiags...)
			file.Locals = append(file.Locals, defs...)

		case "output":
			cfg, cfgDiags := decodeOutputBlock(block, override)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.Outputs = append(file.Outputs, cfg)
			}

		case "module":
			cfg, cfgDiags := decodeModuleBlock(block, override)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.ModuleCalls = append(file.ModuleCalls, cfg)
			}

		case "resource":
			cfg, cfgDiags := decodeResourceBlock(block)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.ManagedResources = append(file.ManagedResources, cfg)
			}

		case "data":
			cfg, cfgDiags := decodeDataBlock(block)
			diags = append(diags, cfgDiags...)
			if cfg != nil {
				file.DataResources = append(file.DataResources, cfg)
			}

		default:
			// Should never happen because the above cases should be exhaustive
			// for all block type names in our schema.
			continue
		}
	}

	return file, diags
}
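
// exampleConfigFileHCL is an illustrative sketch (not used by the package) of
// a config file containing the top-level block types that the dispatch loop
// in loadConfigFile recognizes; all names and arguments are placeholders.
const exampleConfigFileHCL = `
terraform {
  required_version = ">= 0.12"

  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "example" {}

variable "region" {
  type = string
}

locals {
  prefix = "demo"
}

resource "example_thing" "main" {
  name = "${local.prefix}-${var.region}"
}

data "example_lookup" "current" {}

module "child" {
  source = "./child"
}

output "thing_name" {
  value = example_thing.main.name
}
`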

// sniffCoreVersionRequirements does minimal parsing of the given body for
// "terraform" blocks with "required_version" attributes, returning the
// requirements found.
//
// This is intended to maximize the chance that we'll be able to read the
// requirements (syntax errors notwithstanding) even if the config file contains
// constructs that might've been added in future Terraform versions.
//
// This is a "best effort" sort of method which will return constraints it is
// able to find, but may return no constraints at all if the given body is
// so invalid that it cannot be decoded at all.
func sniffCoreVersionRequirements(body hcl.Body) ([]VersionConstraint, hcl.Diagnostics) {
	rootContent, _, diags := body.PartialContent(configFileTerraformBlockSniffRootSchema)

	var constraints []VersionConstraint

	for _, block := range rootContent.Blocks {
		content, _, blockDiags := block.Body.PartialContent(configFileVersionSniffBlockSchema)
		diags = append(diags, blockDiags...)

		attr, exists := content.Attributes["required_version"]
		if !exists {
			continue
		}

		constraint, constraintDiags := decodeVersionConstraint(attr)
		diags = append(diags, constraintDiags...)
		if !constraintDiags.HasErrors() {
			constraints = append(constraints, constraint)
		}
	}

	return constraints, diags
}
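
// exampleVersionSniffHCL is an illustrative sketch (not used by the package)
// of the input sniffCoreVersionRequirements cares about: it extracts only the
// "required_version" attribute from "terraform" blocks, ignoring other
// constructs, including ones added by newer Terraform versions. The block
// contents shown here are placeholders.
const exampleVersionSniffHCL = `
terraform {
  required_version = ">= 0.12.0"

  some_future_feature {
    # Unknown constructs are skipped by the partial-content sniffing schemas,
    # so the version constraint above can still be reported to the user.
  }
}
`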

// configFileSchema is the schema for the top-level of a config file. We use
// the low-level HCL API for this level so we can easily deal with each
// block type separately with its own decoding logic.
var configFileSchema = &hcl.BodySchema{
	Blocks: []hcl.BlockHeaderSchema{
		{
			Type: "terraform",
		},
		{
			// This one is not really valid, but we include it here so we
			// can create a specialized error message hinting the user to
			// nest it inside a "terraform" block.
			Type: "required_providers",
		},
		{
			Type:       "provider",
			LabelNames: []string{"name"},
		},
		{
			Type:       "variable",
			LabelNames: []string{"name"},
		},
		{
			Type: "locals",
		},
		{
			Type:       "output",
			LabelNames: []string{"name"},
		},
		{
			Type:       "module",
			LabelNames: []string{"name"},
		},
		{
			Type:       "resource",
			LabelNames: []string{"type", "name"},
		},
		{
			Type:       "data",
			LabelNames: []string{"type", "name"},
		},
	},
}

// terraformBlockSchema is the schema for a top-level "terraform" block in
// a configuration file.
var terraformBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
{Name: "required_version"},
{Name: "experiments"},
{Name: "language"},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "backend",
LabelNames: []string{"type"},
},
{
Type: "required_providers",
},
{
Type: "provider_meta",
LabelNames: []string{"provider"},
},
},
}

// configFileTerraformBlockSniffRootSchema is a schema for
// sniffCoreVersionRequirements and sniffActiveExperiments.
var configFileTerraformBlockSniffRootSchema = &hcl.BodySchema{
	Blocks: []hcl.BlockHeaderSchema{
		{
			Type: "terraform",
		},
	},
}

// configFileVersionSniffBlockSchema is a schema for sniffCoreVersionRequirements,
// to decode the "required_version" attribute from inside a "terraform" block.
var configFileVersionSniffBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
		{
			Name: "required_version",
		},
	},
}

// configFileExperimentsSniffBlockSchema is a schema for sniffActiveExperiments,
// to decode the experiment-related attributes from inside a "terraform" block.
var configFileExperimentsSniffBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
		{Name: "experiments"},
		{Name: "language"},
	},
}
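
// exampleExperimentsHCL is an illustrative sketch (not used by the package)
// of the attributes this schema sniffs for: an opt-in list of experiment
// keywords, plus the "language" setting. The experiment name shown here is a
// placeholder; the available experiments vary by Terraform release.
const exampleExperimentsHCL = `
terraform {
  experiments = [example_experiment]
}
`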