terraform/internal/command/providers_schema.go

package command

import (
	"fmt"
	"os"

	"github.com/hashicorp/terraform/internal/backend"
	"github.com/hashicorp/terraform/internal/command/jsonprovider"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// ProvidersSchemaCommand is a Command implementation that prints out
// information about the schemas of the providers used in the current
// configuration/state.
type ProvidersSchemaCommand struct {
	Meta
}

func (c *ProvidersSchemaCommand) Help() string {
	return providersSchemaCommandHelp
}

func (c *ProvidersSchemaCommand) Synopsis() string {
	return "Show schemas for the providers used in the configuration"
}

func (c *ProvidersSchemaCommand) Run(args []string) int {
	args = c.Meta.process(args)
	cmdFlags := c.Meta.defaultFlagSet("providers schema")
	var jsonOutput bool
	cmdFlags.BoolVar(&jsonOutput, "json", false, "produce JSON output")
	cmdFlags.Usage = func() { c.Ui.Error(c.Help()) }
	if err := cmdFlags.Parse(args); err != nil {
		c.Ui.Error(fmt.Sprintf("Error parsing command-line flags: %s\n", err.Error()))
		return 1
	}
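
	// The schema output is only available as machine-readable JSON, so the
	// -json flag is required.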
	if !jsonOutput {
		c.Ui.Error(
			"The `terraform providers schema` command requires the `-json` flag.\n")
		cmdFlags.Usage()
		return 1
	}

	// Check for user-supplied plugin path
	var err error
	if c.pluginPath, err = c.loadPluginPath(); err != nil {
		c.Ui.Error(fmt.Sprintf("Error loading plugin path: %s", err))
		return 1
	}

	var diags tfdiags.Diagnostics

	// Load the backend
	b, backendDiags := c.Backend(nil)
	diags = diags.Append(backendDiags)
	if backendDiags.HasErrors() {
		c.showDiagnostics(diags)
		return 1
	}

	// We require a local backend
	local, ok := b.(backend.Local)
	if !ok {
		c.showDiagnostics(diags) // in case of any warnings in here
		c.Ui.Error(ErrUnsupportedLocalOp)
		return 1
	}

	// This is a read-only command that never writes state, so it can skip the
	// remote backend Terraform version compatibility check.
	c.ignoreRemoteBackendVersionConflict(b)

	// we expect that the config dir is the cwd
	cwd, err := os.Getwd()
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error getting cwd: %s", err))
		return 1
	}

	// Build the operation
	opReq := c.Operation(b)
	opReq.ConfigDir = cwd
	opReq.ConfigLoader, err = c.initConfigLoader()
	opReq.AllowUnsetVariables = true
	if err != nil {
		diags = diags.Append(err)
		c.showDiagnostics(diags)
		return 1
	}

	// Get the context
	lr, _, ctxDiags := local.LocalRun(opReq)
	diags = diags.Append(ctxDiags)
	if ctxDiags.HasErrors() {
		c.showDiagnostics(diags)
		return 1
	}
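
	// Collect the schemas for every provider required by the configuration
	// and the current state.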
	schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
	diags = diags.Append(moreDiags)
	if moreDiags.HasErrors() {
		c.showDiagnostics(diags)
		return 1
	}
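
	// Render the collected schemas as machine-readable JSON and print the
	// result to stdout.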
	jsonSchemas, err := jsonprovider.Marshal(schemas)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Failed to marshal provider schemas to json: %s", err))
		return 1
	}
	c.Ui.Output(string(jsonSchemas))

	return 0
}

const providersSchemaCommandHelp = `
Usage: terraform [global options] providers schema -json

  Prints out a json representation of the schemas for all providers used
  in the current configuration.
`