terraform/command/meta.go

package command
import (
"bytes"
"context"
"errors"
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"path/filepath"
"strconv"
"strings"
"time"
plugin "github.com/hashicorp/go-plugin"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/backend/local"
"github.com/hashicorp/terraform/command/arguments"
"github.com/hashicorp/terraform/command/format"
"github.com/hashicorp/terraform/command/views"
"github.com/hashicorp/terraform/command/webbrowser"
"github.com/hashicorp/terraform/configs/configload"
"github.com/hashicorp/terraform/internal/getproviders"
"github.com/hashicorp/terraform/internal/terminal"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/provisioners"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
"github.com/mitchellh/cli"
"github.com/mitchellh/colorstring"
legacy "github.com/hashicorp/terraform/internal/legacy/terraform"
)
// Meta are the meta-options that are available on all or most commands.
type Meta struct {
// The exported fields below should be set by anyone using a
// command with a Meta field. These are expected to be set externally
// (not from within the command itself).
// OriginalWorkingDir, if set, is the actual working directory where
// Terraform was run from. This might not be the _actual_ current working
// directory, because users can add the -chdir=... option to the beginning
// of their command line to ask Terraform to switch.
//
// Most things should just use the current working directory in order to
// respect the user's override, but we retain this for exceptional
// situations where we need to refer back to the original working directory
// for some reason.
OriginalWorkingDir string
// Streams tracks the raw Stdout, Stderr, and Stdin handles along with
// some basic metadata about them, such as whether each is connected to
// a terminal, how wide the possible terminal is, etc.
//
// For historical reasons this might not be set in unit test code, and
// so functions working with this field must check if it's nil and
// do some default behavior instead if so, rather than panicking.
Streams *terminal.Streams
View *views.View
Color bool // True if output should be colored
GlobalPluginDirs []string // Additional paths to search for plugins
Ui cli.Ui // Ui for output
// Services provides access to remote endpoint information for
// "terraform-native' services running at a specific user-facing hostname.
Services *disco.Disco
// RunningInAutomation indicates that commands are being run by an
// automated system rather than directly at a command prompt.
//
// This is a hint to various command routines that it may be confusing
// to print out messages that suggest running specific follow-up
// commands, since the user consuming the output will not be
// in a position to run such commands.
//
// The intended use-case of this flag is when Terraform is running in
// some sort of workflow orchestration tool which is abstracting away
// the specific commands being run.
RunningInAutomation bool
// CLIConfigDir is the directory from which CLI configuration files were
// read by the caller and the directory where any changes to CLI
// configuration files by commands should be made.
//
// If this is empty then no configuration directory is available and
// commands which require one cannot proceed.
CLIConfigDir string
// PluginCacheDir, if non-empty, enables caching of downloaded plugins
// into the given directory.
PluginCacheDir string
// ProviderSource allows determining the available versions of a provider
// and determines where a distribution package for a particular
// provider version can be obtained.
ProviderSource getproviders.Source
// OverrideDataDir, if non-empty, overrides the return value of the
// DataDir method for situations where the local .terraform/ directory
// is not suitable, e.g. because of a read-only filesystem.
OverrideDataDir string
// BrowserLauncher is used by commands that need to open a URL in a
// web browser.
BrowserLauncher webbrowser.Launcher
// When this channel is closed, the command will be cancelled.
ShutdownCh <-chan struct{}
// ProviderDevOverrides are providers where we ignore the lock file, the
// configured version constraints, and the local cache directory and just
// always use exactly the path specified. This is intended to allow
// provider developers to easily test local builds without worrying about
// what version number they might eventually be released as, or what
// checksums they have.
ProviderDevOverrides map[addrs.Provider]getproviders.PackageLocalDir
// UnmanagedProviders are a set of providers that exist as processes
// predating Terraform, which Terraform should use but not worry about the
// lifecycle of.
//
// This is essentially a more extreme version of ProviderDevOverrides where
// Terraform doesn't even worry about how the provider server gets launched,
// just trusting that someone else did it before running Terraform.
UnmanagedProviders map[addrs.Provider]*plugin.ReattachConfig
//----------------------------------------------------------
// Protected: commands can set these
//----------------------------------------------------------
// Modify the data directory location. This should be accessed through the
// DataDir method.
dataDir string
// pluginPath is a user-defined set of directories to look for plugins.
// This is set during init with the `-plugin-dir` flag and saved to a file in
// the data directory.
// This overrides all other search paths when discovering plugins.
pluginPath []string
// Override certain behavior for tests within this package
testingOverrides *testingOverrides
//----------------------------------------------------------
// Private: do not set these
//----------------------------------------------------------
// configLoader is a shared configuration loader that is used by
// LoadConfig and other commands that access configuration files.
// It is initialized on first use.
configLoader *configload.Loader
// backendState is the currently active backend state
backendState *legacy.BackendState
// Variables for the context (private)
variableArgs rawFlags
input bool
// Targets for this context (private)
targets []addrs.Targetable
targetFlags []string
// Internal fields
color bool
oldUi cli.Ui
// The fields below are expected to be set by the command via
// command line flags. See the Apply command for an example.
//
// statePath is the path to the state file. If this is empty, then
// no state will be loaded. It is also okay for this to be a path to
// a file that doesn't exist; it is assumed that this means that there
// is simply no state.
//
// stateOutPath is used to override the output path for the state.
// If not provided, the statePath is used, causing the old state to
// be overwritten.
//
// backupPath is used to backup the state file before writing a modified
// version. It defaults to stateOutPath + DefaultBackupExtension
//
// parallelism is used to control the number of concurrent operations
// allowed when walking the graph
//
// provider is to specify specific resource providers
//
// stateLock is set to false to disable state locking
//
// stateLockTimeout is the optional duration to retry acquiring a state
// lock when it is already held by another process.
//
// forceInitCopy suppresses confirmation for copying state data during
// init.
//
// reconfigure forces init to ignore any stored configuration.
//
// compactWarnings (-compact-warnings) selects a more compact presentation
// of warnings in the output when they are not accompanied by errors.
statePath string
stateOutPath string
backupPath string
parallelism int
stateLock bool
stateLockTimeout time.Duration
forceInitCopy bool
reconfigure bool
compactWarnings bool
// Used with the import command to allow import of state when no matching config exists.
allowMissingConfig bool
// Used with commands which write state to allow users to write remote
// state even if the remote and local Terraform versions don't match.
ignoreRemoteVersion bool
}
type testingOverrides struct {
Providers map[addrs.Provider]providers.Factory
Provisioners map[string]provisioners.Factory
}
// initStatePaths is used to initialize the default values for
// statePath, stateOutPath, and backupPath
func (m *Meta) initStatePaths() {
if m.statePath == "" {
m.statePath = DefaultStateFilename
}
if m.stateOutPath == "" {
m.stateOutPath = m.statePath
}
if m.backupPath == "" {
m.backupPath = m.stateOutPath + DefaultBackupExtension
}
}
// StateOutPath returns the true output path for the state file
func (m *Meta) StateOutPath() string {
return m.stateOutPath
}
// Colorize returns the colorization structure for a command.
func (m *Meta) Colorize() *colorstring.Colorize {
colors := make(map[string]string)
for k, v := range colorstring.DefaultColors {
colors[k] = v
}
colors["purple"] = "38;5;57"
return &colorstring.Colorize{
Colors: colors,
Disable: !m.color,
Reset: true,
}
}
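// A minimal caller sketch (not part of this file) showing how the Colorize
// result is typically used, including the custom "purple" code registered above:
//
//	c := m.Colorize()
//	m.Ui.Output(c.Color("[bold][purple]Terraform will perform the following actions:[reset]"))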
// DataDir returns the directory where local data will be stored.
// Defaults to DefaultDataDir in the current working directory.
func (m *Meta) DataDir() string {
if m.OverrideDataDir != "" {
return m.OverrideDataDir
}
return DefaultDataDir
}
const (
// InputModeEnvVar is the environment variable that, if set to "false" or
// "0", causes terraform commands to behave as if the `-input=false` flag was
// specified.
InputModeEnvVar = "TF_INPUT"
)
// InputMode returns the type of input we should ask for in the form of
// terraform.InputMode which is passed directly to Context.Input.
func (m *Meta) InputMode() terraform.InputMode {
if test || !m.input {
return 0
}
if envVar := os.Getenv(InputModeEnvVar); envVar != "" {
if v, err := strconv.ParseBool(envVar); err == nil {
if !v {
return 0
}
}
}
var mode terraform.InputMode
mode |= terraform.InputModeProvider
return mode
}
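// Hypothetical caller sketch: a command deciding whether it may prompt for
// provider input. Setting TF_INPUT=0 (or passing -input=false) makes
// InputMode return 0, so the branch below would be skipped.
//
//	if m.InputMode()&terraform.InputModeProvider != 0 {
//		// interactive prompting for missing provider configuration is allowed
//	}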
// UIInput returns a UIInput object to be used for asking for input.
func (m *Meta) UIInput() terraform.UIInput {
return &UIInput{
Colorize: m.Colorize(),
}
}
// OutputColumns returns the number of columns that normal (non-error) UI
// output should be wrapped to fill.
//
// This is the column count to use if you'll be printing your message via
// the Output or Info methods of m.Ui.
func (m *Meta) OutputColumns() int {
if m.Streams == nil {
// A default for unit tests that don't populate Meta fully.
return 78
}
return m.Streams.Stdout.Columns()
}
// ErrorColumns returns the number of columns that error UI output should be
// wrapped to fill.
//
// This is the column count to use if you'll be printing your message via
// the Error or Warn methods of m.Ui.
func (m *Meta) ErrorColumns() int {
if m.Streams == nil {
// A default for unit tests that don't populate Meta fully.
return 78
}
return m.Streams.Stderr.Columns()
}
// StdinPiped returns true if the input is piped.
func (m *Meta) StdinPiped() bool {
if m.Streams == nil {
// If we don't have m.Streams populated then we're presumably in a unit
// test that doesn't properly populate Meta, so we'll just say the
// output _isn't_ piped because that's the common case and so most likely
// to be useful to a unit test.
return false
}
return !m.Streams.Stdin.IsTerminal()
}
// InterruptibleContext returns a context.Context that will be cancelled
// if the process is interrupted by a platform-specific interrupt signal.
//
// As usual with cancelable contexts, the caller must always call the given
// cancel function once all operations are complete in order to make sure
// that the context resources will still be freed even if there is no
// interruption.
func (m *Meta) InterruptibleContext() (context.Context, context.CancelFunc) {
base := context.Background()
if m.ShutdownCh == nil {
// If we're running in a unit testing context without a shutdown
// channel populated then we'll return an uncancelable context.
return base, func() {}
}
ctx, cancel := context.WithCancel(base)
go func() {
select {
case <-m.ShutdownCh:
cancel()
case <-ctx.Done():
// finished without being interrupted
}
}()
return ctx, cancel
}
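// Hypothetical caller sketch: as noted above, the cancel function must always
// be called, typically via defer, so the goroutine and context resources are
// released even when no interrupt arrives. doSomethingCancelable is a
// placeholder for any context-aware operation.
//
//	ctx, cancel := m.InterruptibleContext()
//	defer cancel()
//	if err := doSomethingCancelable(ctx); err != nil {
//		// handle err, which may be context.Canceled after an interrupt
//	}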
// RunOperation executes the given operation on the given backend, blocking
// until that operation completes or is interrupted, and then returns
// the RunningOperation object representing the completed or
// aborted operation that is, despite the name, no longer running.
//
// An error is returned if the operation either fails to start or is cancelled.
// If the operation runs to completion then no error is returned even if the
// operation itself is unsuccessful. Use the "Result" field of the
// returned operation object to recognize operation-level failure.
func (m *Meta) RunOperation(b backend.Enhanced, opReq *backend.Operation) (*backend.RunningOperation, error) {
if opReq.View == nil {
panic("RunOperation called with nil View")
}
if opReq.ConfigDir != "" {
opReq.ConfigDir = m.normalizePath(opReq.ConfigDir)
}
op, err := b.Operation(context.Background(), opReq)
if err != nil {
return nil, fmt.Errorf("error starting operation: %s", err)
}
// Wait for the operation to complete or an interrupt to occur
select {
case <-m.ShutdownCh:
// gracefully stop the operation
op.Stop()
// Notify the user
opReq.View.Interrupted()
// Still wait for the result, since the stopped operation will still produce one
select {
case <-m.ShutdownCh:
opReq.View.FatalInterrupt()
// cancel the operation completely
op.Cancel()
// the operation should return asap
// but timeout just in case
select {
case <-op.Done():
case <-time.After(5 * time.Second):
}
return nil, errors.New("operation canceled")
case <-op.Done():
// operation completed after Stop
}
case <-op.Done():
// operation completed normally
}
return op, nil
}
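// Hypothetical caller sketch (the fields shown are illustrative only): a
// command constructs a backend.Operation with at least a View, runs it, and
// then checks the operation-level result separately from the returned error.
//
//	opReq := &backend.Operation{
//		Type: backend.OperationTypePlan,
//		View: opView, // must be non-nil; RunOperation panics otherwise
//	}
//	op, err := m.RunOperation(b, opReq)
//	if err != nil {
//		return err // failed to start, or was cancelled
//	}
//	if op.Result != backend.OperationSuccess {
//		// the operation ran but did not succeed
//	}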
// contextOpts returns the options to use to initialize a Terraform
// context with the settings from this Meta.
func (m *Meta) contextOpts() (*terraform.ContextOpts, error) {
workspace, err := m.Workspace()
if err != nil {
return nil, err
}
var opts terraform.ContextOpts
opts.Targets = m.targets
opts.UIInput = m.UIInput()
opts.Parallelism = m.parallelism
// If testingOverrides are set, we'll skip the plugin discovery process
// and just work with what we've been given, thus allowing the tests
// to provide mock providers and provisioners.
if m.testingOverrides != nil {
opts.Providers = m.testingOverrides.Providers
opts.Provisioners = m.testingOverrides.Provisioners
} else {
providerFactories, err := m.providerFactories()
if err != nil {
// providerFactories can fail if the plugin selections file is
// invalid in some way, but we don't have any way to report that
// from here so we'll just behave as if no providers are available
// in that case. However, we will produce a warning in case this
// shows up unexpectedly and prompts a bug report.
// This situation shouldn't arise commonly in practice because
// the selections file is generated programmatically.
log.Printf("[WARN] Failed to determine selected providers: %s", err)
// variable providerFactories may now be incomplete, which could
// lead to errors reported downstream from here. providerFactories
// tries to populate as many providers as possible even in an
// error case, so that operations not using problematic providers
// can still succeed.
}
opts.Providers = providerFactories
opts.Provisioners = m.provisionerFactories()
// Read the dependency locks so that they can be verified against the
// provider requirements in the configuration
lockedDependencies, diags := m.lockedDependencies()
// If the locks file is invalid, we should fail early rather than
// ignore it. A missing locks file will return no error.
if diags.HasErrors() {
return nil, diags.Err()
}
opts.LockedDependencies = lockedDependencies
// If any unmanaged providers or dev overrides are enabled, they must
// be listed in the context so that they can be ignored when verifying
// the locks against the configuration
opts.ProvidersInDevelopment = make(map[addrs.Provider]struct{})
for provider := range m.UnmanagedProviders {
opts.ProvidersInDevelopment[provider] = struct{}{}
}
for provider := range m.ProviderDevOverrides {
opts.ProvidersInDevelopment[provider] = struct{}{}
}
}
opts.ProviderSHA256s = m.providerPluginsLock().Read()
opts.Meta = &terraform.ContextMeta{
Env: workspace,
OriginalWorkingDir: m.OriginalWorkingDir,
}
return &opts, nil
}
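// Hypothetical caller sketch, assuming terraform.NewContext still accepts a
// *terraform.ContextOpts and returns diagnostics alongside the context:
//
//	opts, err := m.contextOpts()
//	if err != nil {
//		return err
//	}
//	tfCtx, diags := terraform.NewContext(opts)
//	if diags.HasErrors() {
//		m.showDiagnostics(diags)
//		return diags.Err()
//	}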
// defaultFlagSet creates a default flag set for commands.
// See also command/arguments/default.go
func (m *Meta) defaultFlagSet(n string) *flag.FlagSet {
f := flag.NewFlagSet(n, flag.ContinueOnError)
f.SetOutput(ioutil.Discard)
// Set the default Usage to empty
f.Usage = func() {}
return f
}
// ignoreRemoteVersionFlagSet adds the -ignore-remote-version flag to suppress
// the error when the configured Terraform version on the remote workspace
// does not match the local Terraform version.
func (m *Meta) ignoreRemoteVersionFlagSet(n string) *flag.FlagSet {
f := m.defaultFlagSet(n)
f.BoolVar(&m.ignoreRemoteVersion, "ignore-remote-version", false, "continue even if remote and local Terraform versions are incompatible")
return f
}
// extendedFlagSet adds custom flags that are mostly used by commands
// that run an operation, such as plan or apply.
func (m *Meta) extendedFlagSet(n string) *flag.FlagSet {
f := m.defaultFlagSet(n)
f.BoolVar(&m.input, "input", true, "input")
f.Var((*FlagStringSlice)(&m.targetFlags), "target", "resource to target")
f.BoolVar(&m.compactWarnings, "compact-warnings", false, "use compact warnings")
if m.variableArgs.items == nil {
m.variableArgs = newRawFlags("-var")
}
varValues := m.variableArgs.Alias("-var")
varFiles := m.variableArgs.Alias("-var-file")
f.Var(varValues, "var", "variables")
f.Var(varFiles, "var-file", "variable file")
// commands that bypass locking will supply their own flag on this var,
// but set the initial meta value to true as a failsafe.
m.stateLock = true
return f
}
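// Hypothetical command sketch showing the intended flow: take the shared flag
// set, add any command-specific flags, then parse the (already processed) args.
//
//	cmdFlags := m.extendedFlagSet("plan")
//	cmdFlags.IntVar(&m.parallelism, "parallelism", 10, "parallelism")
//	if err := cmdFlags.Parse(args); err != nil {
//		return 1
//	}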
// parseTargetFlags must be called for any commands supporting -target
// arguments. This method attempts to parse each -target flag into an
// addrs.Target, storing the result in the Meta.targets slice.
//
// If any flags cannot be parsed, we rewrap the first error diagnostic with a
// custom title to clarify the source of the error. The normal approach of
// directly returning the diags from HCL or the addrs package results in
// confusing incorrect "source" results when presented.
func (m *Meta) parseTargetFlags() tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
m.targets = nil
for _, tf := range m.targetFlags {
traversal, syntaxDiags := hclsyntax.ParseTraversalAbs([]byte(tf), "", hcl.Pos{Line: 1, Column: 1})
if syntaxDiags.HasErrors() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
fmt.Sprintf("Invalid target %q", tf),
syntaxDiags[0].Detail,
))
continue
}
target, targetDiags := addrs.ParseTarget(traversal)
if targetDiags.HasErrors() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
fmt.Sprintf("Invalid target %q", tf),
targetDiags[0].Description().Detail,
))
continue
}
m.targets = append(m.targets, target.Subject)
}
return diags
}
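// Hypothetical usage sketch: after flag parsing has filled m.targetFlags (for
// example from -target=module.net.aws_vpc.main), a command converts them and
// surfaces any parse errors as diagnostics.
//
//	if diags := m.parseTargetFlags(); diags.HasErrors() {
//		m.showDiagnostics(diags)
//		return 1
//	}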
// process will process any -no-color entries out of the arguments. This
// will potentially modify the args in-place. It will return the resulting
// slice, and update the Meta and Ui.
func (m *Meta) process(args []string) []string {
// We do this so that we retain the ability to technically call
// process multiple times, even if we have no plans to do so
if m.oldUi != nil {
m.Ui = m.oldUi
}
// Set colorization
m.color = m.Color
i := 0 // output index
for _, v := range args {
if v == "-no-color" {
m.color = false
m.Color = false
} else {
// copy and increment index
args[i] = v
i++
}
}
args = args[:i]
// Set the UI
m.oldUi = m.Ui
m.Ui = &cli.ConcurrentUi{
Ui: &ColorizeUi{
Colorize: m.Colorize(),
ErrorColor: "[red]",
WarnColor: "[yellow]",
Ui: m.oldUi,
},
}
// Reconfigure the view. This is necessary for commands which use both
// views.View and cli.Ui during the migration phase.
if m.View != nil {
m.View.Configure(&arguments.View{
CompactWarnings: m.compactWarnings,
NoColor: !m.Color,
})
}
return args
}
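// Hypothetical sketch of the conventional call at the top of a command's Run
// method, before any flag parsing. SomeCommand is a placeholder name for any
// command that embeds Meta.
//
//	func (c *SomeCommand) Run(args []string) int {
//		args = c.Meta.process(args)
//		// ... parse remaining flags and run the command ...
//	}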
// uiHook returns the UiHook to use with the context.
func (m *Meta) uiHook() *views.UiHook {
return views.NewUiHook(m.View)
}
// confirm asks a yes/no confirmation.
func (m *Meta) confirm(opts *terraform.InputOpts) (bool, error) {
if !m.Input() {
return false, errors.New("input is disabled")
}
for i := 0; i < 2; i++ {
v, err := m.UIInput().Input(context.Background(), opts)
if err != nil {
return false, fmt.Errorf(
"Error asking for confirmation: %s", err)
}
switch strings.ToLower(v) {
case "no":
return false, nil
case "yes":
return true, nil
}
}
return false, nil
}
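// Hypothetical usage sketch (the Id, Query, and Description text is
// illustrative only):
//
//	ok, err := m.confirm(&terraform.InputOpts{
//		Id:          "approve",
//		Query:       "Do you want to perform these actions?",
//		Description: `Only "yes" will be accepted to approve.`,
//	})
//	if err != nil || !ok {
//		return 1
//	}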
// showDiagnostics displays error and warning messages in the UI.
//
// "Diagnostics" here means the Diagnostics type from the tfdiag package,
// though as a convenience this function accepts anything that could be
// passed to the "Append" method on that type, converting it to Diagnostics
// before displaying it.
//
// Internally this function uses Diagnostics.Append, and so it will panic
// if given unsupported value types, just as Append does.
func (m *Meta) showDiagnostics(vals ...interface{}) {
var diags tfdiags.Diagnostics
diags = diags.Append(vals...)
diags.Sort()
if len(diags) == 0 {
return
}
outputWidth := m.ErrorColumns()
diags = diags.ConsolidateWarnings(1)
// Since warning messages are generally competing
if m.compactWarnings {
// If the user selected compact warnings and all of the diagnostics are
// warnings then we'll use a more compact representation of the warnings
// that only includes their summaries.
// We show full warnings if there are also errors, because a warning
// can sometimes serve as good context for a subsequent error.
useCompact := true
for _, diag := range diags {
if diag.Severity() != tfdiags.Warning {
useCompact = false
break
}
}
if useCompact {
msg := format.DiagnosticWarningsCompact(diags, m.Colorize())
msg = "\n" + msg + "\nTo see the full warning notes, run Terraform without -compact-warnings.\n"
m.Ui.Warn(msg)
return
}
}
for _, diag := range diags {
var msg string
if m.Color {
msg = format.Diagnostic(diag, m.configSources(), m.Colorize(), outputWidth)
} else {
msg = format.DiagnosticPlain(diag, m.configSources(), outputWidth)
}
switch diag.Severity() {
case tfdiags.Error:
m.Ui.Error(msg)
case tfdiags.Warning:
m.Ui.Warn(msg)
default:
m.Ui.Output(msg)
}
}
}
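// Hypothetical usage sketch: because showDiagnostics appends its arguments to
// a fresh Diagnostics value, it accepts plain errors, single diagnostics, or
// whole Diagnostics collections.
//
//	var diags tfdiags.Diagnostics
//	diags = diags.Append(err)       // a plain error becomes an error diagnostic
//	diags = diags.Append(moreDiags) // existing diagnostics are merged in
//	m.showDiagnostics(diags)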
// WorkspaceNameEnvVar is the name of the environment variable that can be used
// to set the name of the Terraform workspace, overriding the workspace chosen
// by `terraform workspace select`.
//
// Note that this environment variable is ignored by `terraform workspace new`
// and `terraform workspace delete`.
const WorkspaceNameEnvVar = "TF_WORKSPACE"
var errInvalidWorkspaceNameEnvVar = fmt.Errorf("Invalid workspace name set using %s", WorkspaceNameEnvVar)
// Workspace returns the name of the currently configured workspace, corresponding
// to the desired named state.
func (m *Meta) Workspace() (string, error) {
current, overridden := m.WorkspaceOverridden()
if overridden && !validWorkspaceName(current) {
return "", errInvalidWorkspaceNameEnvVar
}
return current, nil
}
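// Hypothetical usage sketch: TF_WORKSPACE overrides the locally selected
// workspace, so callers must handle the validation error it can introduce.
//
//	workspace, err := m.Workspace()
//	if err != nil {
//		m.Ui.Error(err.Error())
//		return 1
//	}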
// WorkspaceOverridden returns the name of the currently configured workspace,
// corresponding to the desired named state, as well as a bool saying whether
// this was set via the TF_WORKSPACE environment variable.
func (m *Meta) WorkspaceOverridden() (string, bool) {
if envVar := os.Getenv(WorkspaceNameEnvVar); envVar != "" {
return envVar, true
}
envData, err := ioutil.ReadFile(filepath.Join(m.DataDir(), local.DefaultWorkspaceFile))
current := string(bytes.TrimSpace(envData))
if current == "" {
current = backend.DefaultStateName
}
if err != nil && !os.IsNotExist(err) {
// always return the default if we can't get a workspace name
log.Printf("[ERROR] failed to read current workspace: %s", err)
}
return current, false
}
// SetWorkspace saves the given name as the current workspace in the local
// filesystem.
func (m *Meta) SetWorkspace(name string) error {
err := os.MkdirAll(m.DataDir(), 0755)
if err != nil {
return err
}
err = ioutil.WriteFile(filepath.Join(m.DataDir(), local.DefaultWorkspaceFile), []byte(name), 0644)
if err != nil {
return err
}
return nil
}
// isAutoVarFile determines if the file ends with .auto.tfvars or .auto.tfvars.json
func isAutoVarFile(path string) bool {
return strings.HasSuffix(path, ".auto.tfvars") ||
strings.HasSuffix(path, ".auto.tfvars.json")
}
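// For example (illustrative file names):
//
//	isAutoVarFile("foo.auto.tfvars")      // true
//	isAutoVarFile("foo.auto.tfvars.json") // true
//	isAutoVarFile("terraform.tfvars")     // false: not an "auto" file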
// FIXME: as an interim refactoring step, we apply the contents of the state
// arguments directly to the Meta object. Future work would ideally update the
// code paths which use these arguments so that the values are passed in directly, for clarity.
func (m *Meta) applyStateArguments(args *arguments.State) {
m.stateLock = args.Lock
m.stateLockTimeout = args.LockTimeout
m.statePath = args.StatePath
m.stateOutPath = args.StateOutPath
m.backupPath = args.BackupPath
}