Commit Graph

29414 Commits

Author SHA1 Message Date
Aurel Pintea a37d47e955 Improve the help.go docs: typo and a more explicit comment. 2022-01-24 10:52:37 +00:00
Alisdair McDiarmid 5a9bc76d1a
Merge pull request #30316 from superkooks/main
Fix autocomplete for workspace subcommands
2022-01-21 12:01:41 -05:00
Craig Wright f9603d3b14
Merge pull request #30371 from giannakopoulosj/patch-2
Merging docs correction, looks good to me!
2022-01-20 17:18:42 -08:00
Michele Degges 2f465e6047 Update packaging action name 2022-01-20 13:15:19 -08:00
James Bardin 5f2f90faf8
Merge pull request #30363 from ychulovskyy/fix/add-jakarta-region
Update aws-sdk-go dependency to support new region (Jakarta)
2022-01-20 13:21:19 -05:00
Brian Flad a477d10bd1
Introduce Terraform Plugin Protocol 6.2 with legacy_type_system fields from Protocol 5 (#30375)
Reference: https://github.com/hashicorp/terraform/issues/30373

This change forward ports the `legacy_type_system` boolean fields in the `ApplyResourceChange.Response` and `PlanResourceChange.Response` messages that existed in protocol version 5, so that existing terraform-plugin-sdk/v2 providers can be muxed with protocol version 6 providers (e.g. terraform-plugin-framework) while also taking advantage of the newer protocol features. This functionality should not be used by any providers or SDKs except those built with terraform-plugin-sdk.

Updated via:

```shell
cp docs/plugin-protocol/tfplugin6.1.proto docs/plugin-protocol/tfplugin6.2.proto
# Copy legacy_type_system fields from tfplugin5.2.proto into ApplyResourceChange.Response and PlanResourceChange.Response
rm internal/tfplugin6/tfplugin6.proto
ln -s ../../docs/plugin-protocol/tfplugin6.2.proto internal/tfplugin6/tfplugin6.proto
go run tools/protobuf-compile/protobuf-compile.go `pwd`
# Updates to internal/plugin6/grpc_provider.go
```
2022-01-20 09:57:42 -05:00
GiannakopoulosJ 13907690a2
Typo in zipmap documentation
zipmap does not produce a map with commas in the Terraform CLI.
Tested with Terraform v1.1.3
2022-01-19 16:02:24 +02:00
yuriy d4ebed4c09 Updates from 'go mod tidy' 2022-01-17 23:08:39 +01:00
yuriy 08fb9078ea Update aws-sdk-go dependency to support new region (Jakarta) 2022-01-17 09:09:02 +01:00
Krista LaFentres (she/her) 6dcf00aefc
Merge pull request #30344 from hashicorp/lafentres/refactor-show-command
cli: Refactor show command & migrate to command arguments and views
2022-01-13 13:58:53 -06:00
Krista LaFentres 64e1241ae3 backend/local: Remove unused DisablePlanFileStateLineageChecks flag
Now that the show command has been refactored to remove its dependence
on a local backend and local run, this flag is no longer needed to
fix #30195.
2022-01-13 11:00:10 -06:00
Krista LaFentres fea8f6cfa2 cli: Migrate show command to use command arguments and views 2022-01-13 11:00:03 -06:00
Krista LaFentres 8d1bced812 cli: Refactor show command to remove dependence on local run and only load the backend when we need it
See https://github.com/hashicorp/terraform/pull/30205#issuecomment-997113175 for more context
2022-01-12 13:47:59 -06:00
Martin Atkins e95f29bf9d lang/funcs: fileexists slightly better "not a file" error message
Previously we were just returning a string representation of the file mode,
which spends more characters on the irrelevant permission bits than it
does on the directory entry type, and is presented in a Unix-centric
format that likely won't be familiar to the user of a Windows system.

Instead, we'll recognize a few specific directory entry types that seem
worth mentioning by name, and then use a generic message for the rest.
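
A hedged sketch of that idea (not the actual lang/funcs implementation; describeNotAFile is a hypothetical helper): derive the message from Go's os.FileMode type bits, naming a few common directory entry types and using a generic wording for everything else.

```go
package main

import (
	"fmt"
	"os"
)

// describeNotAFile names a few directory entry types explicitly and uses a
// generic message for the rest, instead of echoing the permission-heavy
// os.FileMode string.
func describeNotAFile(path string) (string, error) {
	fi, err := os.Lstat(path)
	if err != nil {
		return "", err
	}
	switch mode := fi.Mode(); {
	case mode.IsDir():
		return fmt.Sprintf("%s is a directory, not a file", path), nil
	case mode&os.ModeSymlink != 0:
		return fmt.Sprintf("%s is a symbolic link, not a file", path), nil
	case mode&os.ModeNamedPipe != 0:
		return fmt.Sprintf("%s is a named pipe, not a file", path), nil
	case mode&os.ModeSocket != 0:
		return fmt.Sprintf("%s is a socket, not a file", path), nil
	case mode.IsRegular():
		return "", nil // it really is an ordinary file
	default:
		return fmt.Sprintf("%s is not a regular file", path), nil
	}
}

func main() {
	msg, err := describeNotAFile(".")
	fmt.Println(msg, err)
}
```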

The original motivation here was actually to deal with the fact that our
tests for this function were previously not portable due to the error
message leaking system-specific permission details that are not relevant
to the test. Rather than just directly addressing that portability
problem, I took the opportunity to improve the error messages at the same
time.

However, because of that initial focus there are only actually tests here
for the directory case. A test that tries to test any of these other file
modes would not be portable and in some cases would require superuser
access, so we'll just leave those cases untested for the moment since they
weren't tested before anyway, and so we've not _lost_ any test coverage
here.
2022-01-11 08:46:29 -08:00
Martin Atkins bdc5f152d7 refactoring: Implied move statements can be cross-package
Terraform uses "implied" move statements to represent the situation where
it automatically handles a switch from count to no-count on a resource.
Because that situation requires targeting only a specific resource
instance inside a specific module instance, implied move statements are
always presented as if they had been declared in the root module and then
traversed through the exact module instance path to reach the target
resource.

However, that means they can potentially cross a module package boundary,
if the changed resource belongs to an external module. Normally we
prohibit that to avoid the root module depending on implementation details
of the called module, but Terraform generates these implied statements
based only on information in the called module and so there's no need to
apply that same restriction to implied move statements, which will always
have source and destination addresses belonging to the same module
instance.

This change therefore fixes a misbehavior where Terraform would reject
an attempt to switch from no-count to count in a called module, where
previously the author of the calling configuration had no recourse to fix
it because the change has actually happened upstream.
2022-01-11 08:43:57 -08:00
Laura Pacilio 5f61140655
Merge pull request #30332 from hashicorp/fix-broken-links-1-10
Fix broken links to external docs
2022-01-11 10:40:12 -05:00
Martin Atkins beb9432155 website: CLI state manipulation section de-emphasize old features
Previously the "terraform state ..." subcommands were the only way to
perform various manipulations of the state, but in recent Terraform
versions we have replaced these with better options.

Since these pages seem to already have pretty good search engine
optimization for the use-cases they are describing, we'll prioritize
mentioning the new approaches and only mention the now-deprecated or
de-emphasized features as a secondary approach.

Specifically:
 - Describe the -replace=... planning option in preference to
   "terraform taint", and present taint as primarily a mechanism for
   Terraform to use itself, as opposed to something end-users should
   typically use directly.
 - Introduce the config-based refactoring features before describing
   "terraform state mv".

The older features here are still applicable in some situations and are
required for those still using older versions of Terraform, so we will
retain the information about them for now while aiming to be clearer in
each case about which is our preferred, modern approach.
2022-01-11 07:36:20 -08:00
Laura Pacilio 1b7e7b967d Fix broken links to external docs 2022-01-10 17:49:49 -05:00
James Bardin 28671f9e46
Merge pull request #30329 from hashicorp/jbardin/module-variable-default
remove synthetic default expression for variables
2022-01-10 16:57:38 -05:00
James Bardin 684ed7505d remove synthetic default expression for variables
Now that variable evaluation checks for a nil expression the graph
transformer does not need to generate a synthetic expression for
variable defaults. This means that all default handling is now located
in one place, and we are not surprised by a configuration expression
showing up which doesn't actually exist in the configuration.

Rename nodeModuleVariable.evalModuleCallArgument to evalModuleVariable.
This method is no longer handling only the module call argument; it is
also dealing with the variable declaration defaults and validation
statements.

Add an additional test for validation with a non-nullable variable.
2022-01-10 16:22:33 -05:00
Martin Atkins dabd7567af
Update CHANGELOG.md 2022-01-10 12:29:10 -08:00
Martin Atkins 9ebc3e1cd2 core: More accurate error message for invalid variable values
In earlier Terraform versions we had an extra validation step prior to
the graph walk which tried to partially validate root module input
variable values (just checking their type constraints) and then return
error messages which specified as accurately as possible where the value
had originally come from.

We're now handling that sort of validation exclusively during the graph
walk so that we can share the main logic between both root module and
child module variable values, but previously that shared code wasn't
able to generate such specific information about where the values had
originated, because it was adapted from code originally written to only
deal with child module variables.

Here then we restore a similar level of detail as before, when we're
processing root module variables. For child module variables, we use
synthetic InputValue objects which state that the value was declared
in the configuration, thus causing us to produce a similar sort of error
message as we would've before which includes a source range covering
the argument expression in the calling module block.
2022-01-10 12:26:54 -08:00
Martin Atkins 36c4d4c241 core and backend: remove redundant handling of default variable values
Previously we had three different layers all thinking they were
responsible for substituting a default value for an unset root module
variable:
 - the local backend, via logic in backend.ParseVariableValues
 - the context.Plan function (and other similar functions) trying to
   preprocess the input variables using
   terraform.mergeDefaultInputVariableValues .
 - the newer prepareFinalInputVariableValue, which aims to centralize all
   of the variable preparation logic so it can be common to both root and
   child module variables.

The second of these was also trying to handle type constraint checking,
which is also the responsibility of the central function and not something
we need to handle so early.

Only the last of these consistently handles both root and child module
variables, and so is the one we ought to keep. The others are now
redundant and are causing prepareFinalInputVariableValue to get a slightly
corrupted view of the caller's chosen variable values.

To rectify that, here we remove the two redundant layers altogether and
have unset root variables pass through as cty.NilVal all the way to the
central prepareFinalInputVariableValue function, which will then handle
them in a suitable way which properly respects the "nullable" setting.
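
A minimal sketch of that contract, assuming a hypothetical finalizeVariableValue helper rather than the real prepareFinalInputVariableValue: cty.NilVal marks "not set at all", which is distinct from an explicit null and is the only case where the default unconditionally applies.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// finalizeVariableValue is a simplified stand-in for the centralized variable
// preparation logic: cty.NilVal means "the caller set nothing at all", which
// is distinct from an explicit null value.
func finalizeVariableValue(given, defaultVal cty.Value, nullable bool) cty.Value {
	if given == cty.NilVal {
		// Unset: the declared default applies.
		return defaultVal
	}
	if given.IsNull() && !nullable {
		// Explicit null on a non-nullable variable also falls back to the default.
		return defaultVal
	}
	return given
}

func main() {
	def := cty.StringVal("fallback")

	fmt.Printf("%#v\n", finalizeVariableValue(cty.NilVal, def, true))                // default applies (unset)
	fmt.Printf("%#v\n", finalizeVariableValue(cty.NullVal(cty.String), def, false))  // default applies (null, not nullable)
	fmt.Printf("%#v\n", finalizeVariableValue(cty.NullVal(cty.String), def, true))   // stays null (nullable)
	fmt.Printf("%#v\n", finalizeVariableValue(cty.StringVal("explicit"), def, true)) // stays "explicit"
}
```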

This commit includes some test changes in the terraform package to make
those tests no longer rely on the mergeDefaultInputVariableValues logic
we've removed, and to instead explicitly set cty.NilVal for all unset
variables to comply with our intended contract for PlanOpts.SetVariables,
and similar. (This is so that we can more easily catch bugs in callers
where they _don't_ correctly handle input variables; it allows us to
distinguish between the caller explicitly marking a variable as unset vs.
not describing it at all, where the latter is a bug in the caller.)
2022-01-10 12:26:54 -08:00
Martin Atkins 37b1413ab3 core: Handle root and child module input variables consistently
Previously we had a significant discrepancy between these two situations:
we wrote the raw root module variables directly into the EvalContext and
then applied type conversions only at expression evaluation time, while
for child modules we converted and validated the values while visiting
the variable graph node and wrote only the _final_ value into the
EvalContext.

This confusion seems to have been the root cause for #29899, where
validation rules for root module variables were being applied at the wrong
point in the process, prior to type conversion.

To fix that bug and also make similar mistakes less likely in the future,
I've made the root module variable handling more like the child module
variable handling in the following ways:
 - The "raw value" (exactly as given by the user) lives only in the graph
   node representing the variable, which mirrors how the _expression_
   for a child module variable lives in its graph node. This means that
   the flow for the two is the same except that there's no expression
   evaluation step for root module variables, because they arrive as
   constant values from the caller.
 - The set of variable values in the EvalContext is always only "final"
   values, after type conversion is complete. That in turn means we no
   longer need to do "just in time" conversion in
   evaluationStateData.GetInputVariable, and can just return the value
   exactly as stored, which is consistent with how we handle all other
   references between objects.

This diff is noisier than I'd like because of how much it takes to wire
a new argument (the raw variable values) through to the plan graph builder,
but those changes are pretty mechanical and the interesting logic lives
inside the plan graph builder itself, in NodeRootVariable, and
the shared helper functions in eval_variable.go.

While here I also took the opportunity to fix a historical API wart in
EvalContext, where SetModuleCallArguments was built to take a set of
variable values all at once but our current caller always calls with only
one at a time. That is now just SetModuleCallArgument singular, to match
with the new SetRootModuleArgument to deal with root module variables.
2022-01-10 12:26:54 -08:00
Martin Atkins 483c38aca1 core: Remove TestContext2Validate_PlanGraphBuilder
This test seems to be a holdover from the many-moons-ago switch from one
graph for all operations to separate graphs for plan and apply. It is
effectively just a copy of a subset of the content of the Context.Validate
function and is a maintainability hazard because it tends to lag behind
updates to that function unless changes there happen to make it fail.

This test doesn't cover anything that the other validate context tests
don't exercise as an implementation detail of calling Context.Validate,
so I've just removed it with no replacement.
2022-01-10 12:26:54 -08:00
Martin Atkins a579ae15b0
Update CHANGELOG.md 2022-01-10 12:24:53 -08:00
Martin Atkins 171e7ef6d9 core: Invalid for_each argument messaging improvements
Our original messaging here was largely just lifted from the equivalent
message for unknown values in "count", and it didn't really include any
specific advice on how to update a configuration to make for_each valid,
instead focusing only on the workaround of using the -target planning
option.

It's tough to pack in a fully-actionable suggestion here since unknown
values in for_each keys tend to be a gnarly architectural problem rather
than a local quirk -- when data flows between modules it can sometimes be
unclear whether it'll end up being used in a context which allows unknown
values.

I did my best to summarize the advice we've been giving in the community forum
though, in the hope that more people will be able to address this for
themselves without asking for help, until we're one day able to smooth
this out better with a mechanism such as "partial apply".
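
For context, the constraint can be sketched with cty (a hypothetical check, not Terraform's actual validation code): every for_each key must be known at plan time so each instance can be addressed, which an unknown value cannot guarantee.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	known := cty.SetVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	unknown := cty.UnknownVal(cty.Set(cty.String))

	for _, v := range []cty.Value{known, unknown} {
		// Every element must be known up front to build instance keys; an
		// unknown collection (or unknown elements) can't be iterated at plan time.
		if !v.IsWhollyKnown() {
			fmt.Println("for_each value is not fully known at plan time")
			continue
		}
		fmt.Println("instance keys:", v.LengthInt())
	}
}
```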
2022-01-10 12:23:13 -08:00
SuperKooks 5171c89e1e Fix autocomplete for workspace subcommands 2022-01-08 14:41:20 +11:00
Nick Fagerlund 1e9075b4fa
Merge pull request #30226 from hashicorp/nf/dec21-derandomize-dependencies
Sort dependencies when encoding `ResourceInstanceObject`
2022-01-07 15:26:07 -08:00
Laura Pacilio f4eb0ed24d
Merge pull request #30280 from Mukesh05/patch-2
Update local values definition
2022-01-07 10:22:57 -05:00
Mukesh Kumar 3ac334f267
Update website/docs/language/values/locals.mdx
Thank you. Your suggestion to use "instead of" makes the sentence even easier to understand.

Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com>
2022-01-07 05:52:12 +05:30
Martin Atkins 4ec1feaa3d build: CGO_ENABLED when building for macOS
Normally when we cross-compile we do so without CGo, because we don't have
suitable C headers available for systems other than the host.

However, building for macOS on macOS is special because there are
sufficient headers available on darwin_amd64 to build for both
darwin_amd64 _and_ darwin_arm64. Also, we _must_ use CGo on macOS because
the system resolver is only available via darwin's libc, and so building
without CGo produces executables that don't resolve hostnames correctly.

This is a conditional in bash to avoid having to duplicate the entire
step. Perhaps later we'll find a more general version of this which can
avoid the special case, but this is sufficient for the moment.
2022-01-06 14:51:09 -08:00
Martin Atkins 087c2f06ee website: Documentation of how provisioners upload files
We recently made a change to how provisioners upload files in order to
address an unintended remote code execution vector when using SSH, which
revealed that we had not previously documented well enough the expected
contract for how provisioners upload files to remote systems, and so some
users were depending on unintended consequences of the bug now fixed.

We are retaining the fix on security-related grounds, but this is a good
prompt to be clearer in the docs about what exactly Terraform is doing
when asked to upload files over SSH and WinRM, so users can understand
what is supported and write their configurations accordingly.

This also includes an additional section to the v1.1 upgrade guide, since
we apparently neglected to document this intentional breaking change in
the first draft of that page.

Of course, provisioners as a whole remain a last resort, and so we're
documenting this as hopefully a helpful aid to those who have no other
option, and not meaning in any way to recommend their use for any new
use-cases.
2022-01-06 13:04:55 -08:00
Laura Pacilio 6ef9cf652e
Merge pull request #30270 from addison-grant/patch-1
Update variables.mdx to fix typo
2022-01-06 11:14:46 -05:00
Laura Pacilio 662301610d
Merge pull request #30264 from minamijoyo/fix-typo-in-doc
docs: Fix typo in docs/plugin-protocol/releasing-new-version.md
2022-01-06 11:13:36 -05:00
Nick Fagerlund 05d0febf7f Relax test to focus on the behavior we care about (encoded == encoded)
The specific output order is meaningless, but it should always be the same after
two Encode() calls with identical (ignoring in-memory order) dependency sets.
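
The general technique can be sketched as follows (a simplified stand-in for the states package code, using plain strings rather than resource addresses): copy and sort the dependency list before encoding, so in-memory order never affects the output.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// encodeDependencies copies the slice, sorts it by string form, then encodes,
// so the in-memory ordering of the dependency set never changes the result.
func encodeDependencies(deps []string) ([]byte, error) {
	sorted := append([]string(nil), deps...)
	sort.Strings(sorted)
	return json.Marshal(sorted)
}

func main() {
	a, _ := encodeDependencies([]string{"module.b.aws_instance.y", "aws_instance.x"})
	b, _ := encodeDependencies([]string{"aws_instance.x", "module.b.aws_instance.y"})
	fmt.Println(string(a) == string(b)) // true: identical sets encode identically
}
```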
2022-01-05 14:38:53 -08:00
Martin Atkins 28a6036cf2 build: Separate Linux distro package builds into separate job
In our build workflow we'll treat Linux distribution packaging (currently
.deb and .rpm packages) as a separate job, instead of embedding it into
the "build" job, so that this step can happen concurrently with the other
derived actions like the docker image build, and the e2etest runs.
2022-01-05 14:31:04 -08:00
Martin Atkins c1699ea80c build: Constrain permissions for the "build" workflow steps
This workflow only generates artifacts and doesn't need to modify anything
about the repository.
2022-01-05 14:31:04 -08:00
Martin Atkins 3bf758eaf5 build: Consolidate build-darwin job into just "build"
We can use an extra matrix dimension to select which execution environment
we'll use for each GOOS/GOARCH pair, and thus avoid duplicating the
job definition for darwin just to set runs-on: macos-latest for it.

This is not really an intended use of a matrix dimension because it's
directly related to the existing "goos" one, rather than being an
independent third dimension, but it doesn't matter in practice because
we're using the "include" option to specify exact combinations, and thus
we're not relying on the built-in functionality to generate all possible
matrix combinations.
2022-01-05 14:31:04 -08:00
Martin Atkins 218e55b23c build: Build docs source package as part of the release pipeline
This should eventually grow to be a step that actually verifies the
validity of the docs source prior to publishing the artifact that a
downstream publishing pipeline can consume, but for the moment it's really
just a placeholder since we have no such validation step and no downstream
pipeline consuming this artifact.

The general idea here is that the artifacts from this workflow should be
sufficient for all downstream release steps to occur without any direct
access to the Terraform CLI repository, and so this is intended to
eventually meet that ideal but as of this commit the website docs
publishing step _does_ still depend on direct access to this repository.
2022-01-05 14:31:04 -08:00
Martin Atkins b802db75d7 build: Build and run e2etest as part of the release build pipeline
This uses the decoupled build and run strategy to run the e2etests so that
we can arrange to run the tests against the real release packages produced
elsewhere in this workflow, rather than ones generated just in time by
the test harness.

The modifications to make-archive.sh here make it more consistent with its
originally-intended purpose of producing a harness for testing "real"
release executables. Our earlier compromise of making it include its own
terraform executable came from a desire to use that script as part of
manual cross-platform testing when we weren't yet set up to support
automation of those tests as we're doing here. That does mean, however,
that the terraform-e2etest package content must be combined with content
from a terraform release package in order to produce a valid context for
running the tests.

We use a single job to cross-compile the test harness for all of the
supported platforms, because that build is relatively fast and so not
worth the overhead of a matrix build, but then use a matrix build to
actually run the tests so that we can run them in a worker matching the
target platform.

We currently have access only to amd64 (x64) runners in GitHub Actions
and so for the moment this process is limited only to the subset of our
supported platforms which use that architecture.
2022-01-05 14:31:04 -08:00
Martin Atkins 6704f8c795 Initial work on a new package build process
For the moment this is just an experimental additional sidecar package
build process, separate from the one we really use for releases, so that
we can get some experience building in the GitHub Actions environment
before hopefully eventually switching to using the artifacts from this
process as the packages we'll release through the official release
channels.

It will react to any push to one of our release branches or to a release
tag by building official-release-like .zip, .deb, and .rpm packages, along
with Docker images, based on the content of the corresponding commit.

For the moment this doesn't actually produce _shippable_ packages because
in particular it doesn't know how to update our version/version.go file
to hard-code the correct version number. Once Go 1.18 is released and we've
upgraded to it we'll switch to using debug.ReadBuildInfo to determine
our version number at runtime and so no longer need to directly update
a source file for each release, but that functionality isn't yet available
in our current Go 1.17 release.
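
For reference, the runtime approach mentioned above can be sketched with the standard library alone; this is an assumption about the eventual shape, not Terraform's actual implementation.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// ReadBuildInfo is available at runtime for module-built binaries; under
	// newer Go releases the main module version comes from module/VCS
	// metadata, so no per-release edit to a version source file is needed.
	if info, ok := debug.ReadBuildInfo(); ok {
		fmt.Println("version:", info.Main.Version)
	} else {
		fmt.Println("build info unavailable (not built in module mode)")
	}
}
```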
2022-01-05 14:31:04 -08:00
Alisdair McDiarmid df36a03be1 states: Add failing test for ordered dependencies 2022-01-05 14:24:03 -08:00
Alisdair McDiarmid 535da4ebc7
Merge pull request #30205 from hashicorp/alisdair/fix-show-plan-against-non-default-state
command/show: Disable plan state lineage checks
2022-01-05 12:03:28 -05:00
kmoe 73f5c7779f
Merge pull request #30292 from hashicorp/kmoe/dag-basicedge-hashcode
dag: fix BasicEdge pointer issue
2022-01-05 16:49:31 +00:00
Katy Moe f8fdb6de3f
do not use pointer addr strings as map keys in set
When creating a Set of BasicEdges, the Hashcode function is used to determine
map keys for the underlying set data structure.

The string hex representation of the two vertices' pointers is unsafe to use
as a map key, since these addresses may change between the time they are added
to the set and the time the set is operated on.

Instead we modify the Hashcode function to maintain the references to the
underlying vertices so they cannot be garbage collected during the lifetime
of the Set.
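
A simplified sketch of the idea (not the dag package's exact code): return a comparable composite value containing the vertices themselves, rather than a string built from their pointer addresses, so the set's keys keep real references to the vertices.

```go
package main

import "fmt"

type Vertex interface{}

// hashcode returns a comparable value usable as a map key. Keeping the
// vertices themselves in the key (instead of fmt.Sprintf("%p-%p", ...))
// means the set holds real references to them for its whole lifetime.
func hashcode(src, dst Vertex) interface{} {
	return [2]interface{}{src, dst}
}

func main() {
	set := map[interface{}]bool{}

	type node struct{ name string }
	a, b := &node{"a"}, &node{"b"}

	set[hashcode(a, b)] = true
	fmt.Println(set[hashcode(a, b)]) // true: same vertices, same key
	fmt.Println(set[hashcode(b, a)]) // false: direction matters
}
```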
2022-01-05 11:28:47 +00:00
Dylan Staley a8bc7a0170
Merge pull request #30237 from hashicorp/ds.submodule-nav-main
Store website nav files (main)
2022-01-04 11:45:49 -08:00
James Bardin 9272ff2c29
Merge pull request #30286 from hashicorp/jbardin/dag
dag: minor cleanup
2022-01-04 12:51:21 -05:00
James Bardin 8bbba22f8c
Merge pull request #30253 from hashicorp/jbardin/move-graph
cleanup some move graph handling
2022-01-04 12:51:12 -05:00
Alisdair McDiarmid ef01d5d134
Merge pull request #30067 from hashicorp/alisdair/redact-sensitive-values-from-function-errors
lang/funcs: Redact sensitive values from function errors
2022-01-04 11:34:04 -05:00