Merge branch 'master' into patch-1

Chris Griggs 2020-03-04 10:11:40 -08:00 committed by GitHub
commit 0782ca8c38
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1093 changed files with 64382 additions and 60100 deletions

.github/CONTRIBUTING.md

@ -1,265 +1,270 @@
# Contributing to Terraform
**First:** if you're unsure or afraid of _anything_, just ask
or submit the issue or pull request anyways. You won't be yelled at for
giving your best effort. The worst that can happen is that you'll be
politely asked to change something. We appreciate any sort of contributions,
and don't want a wall of rules to get in the way of that.
---
However, for those individuals who want a bit more guidance on the
best way to contribute to the project, read on. This document will cover
what we're looking for. By addressing all the points we're looking for,
it raises the chances we can quickly merge or address your contributions.
This repository contains only Terraform core, which includes the command line
interface and the main graph engine. Providers are implemented as plugins that
each have their own repository in
[the `terraform-providers` organization](https://github.com/terraform-providers)
on GitHub. Instructions for developing each provider are in the associated
README file. For more information, see
[the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
Specifically, we have provided checklists below for each type of issue and pull
request that can happen on the project. These checklists represent everything
we need to be able to review and respond quickly.
---
## HashiCorp, Partner, and Community Providers
Terraform is an open source project and we appreciate contributions of various
kinds, including bug reports and fixes, enhancement proposals, documentation
updates, and user experience feedback.
We separate providers out into what we call "HashiCorp Providers", "Partner Providers" and "Community Providers".
To record a bug report, enhancement proposal, or give any other product
feedback, please [open a GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose)
using the most appropriate issue template. Please do fill in all of the
information the issue templates request, because we've seen from experience that
this will maximize the chance that we'll be able to act on your feedback.
HashiCorp providers are providers that we dedicate full time engineers to
improving, supporting the latest features, and fixing bugs. These are providers
we understand deeply and are confident we have the resources to manage
ourselves.
Please note that we _don't_ use GitHub issues for usage questions. If you have
a question about how to use Terraform in general or how to solve a specific
problem with Terraform, please start a topic in
[the Terraform community forum](https://discuss.hashicorp.com/c/terraform-core),
where both Terraform team members and community members participate in
discussions.
Partner providers are providers where we depend on our partners to
contribute fixes and enhancements to improve. HashiCorp will run automated
tests and ensure these providers continue to work, but will not dedicate full
time engineers to add new features to these providers. These providers are
available in official Terraform releases, but the functionality is primarily
contributed.
**All communication on GitHub, the community forum, and other HashiCorp-provided
communication channels is subject to
[the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).**
All HashiCorp and Partner providers can be found in [the terraform-providers GitHub organization](https://github.com/terraform-providers).
Any provider issues should be opened in the provider's repository.
## Terraform CLI/Core Development Environment
Our testing standards are the same for both HashiCorp and Partner providers,
and HashiCorp runs full acceptance test suites for every provider nightly to
ensure Terraform remains stable.
This repository contains the source code for Terraform CLI, which is the main
component of Terraform that contains the core Terraform engine.
Community Providers are providers that are neither maintained nor tested by
HashiCorp. We can make no promises that these providers will work with any given
version of Terraform. These providers are not automatically installed by
`terraform init` and instead require manual installation.
The HashiCorp-maintained Terraform providers are also open source but are not
in this repository; instead, they are each in their own repository in
[the `terraform-providers` organization](https://github.com/terraform-providers)
on GitHub.
We make the distinction between these types of providers to help
highlight the vast amount of community effort that goes into making Terraform
great, and to help contributors better understand the role HashiCorp employees
play in the various areas of the code base.
This repository also does not include the source code for some other parts of
the Terraform product including Terraform Cloud, Terraform Enterprise, and the
Terraform Registry. Those components are not open source, though if you have
feedback about them (including bug reports) please do feel free to
[open a GitHub issue on this repository](https://github.com/hashicorp/terraform/issues/new/choose).
## Issues
---
### Issue Reporting Checklists
If you wish to work on the Terraform CLI source code, you'll first need to
install the [Go](https://golang.org/) compiler and the version control system
[Git](https://git-scm.com/).
We welcome feature requests and bug reports. Below you'll find checklists with
guidelines for well-formed issues of each type.
At this time the Terraform development environment is targeting only Linux and
Mac OS X systems. While Terraform itself is compatible with Windows,
unfortunately the unit test suite currently contains Unix-specific assumptions
around maximum path lengths, path separators, etc.
#### Bug Reports
Refer to the file [`.go-version`](.go-version) to see which version of Go
Terraform is currently built with. Other versions will often work, but if you
run into any build or testing problems please try with the specific Go version
indicated. You can optionally simplify the installation of multiple specific
versions of Go on your system by installing
[`goenv`](https://github.com/syndbg/goenv), which reads `.go-version` and
automatically selects the correct Go version.
- [ ] __Test against latest release__: Make sure you test against the latest
released version. It is possible we already fixed the bug you're experiencing.
Use Git to clone this repository into a location of your choice. Terraform is
using [Go Modules](https://blog.golang.org/using-go-modules), and so you
should _not_ clone it inside your `GOPATH`.
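For example, a minimal sketch of the clone step (clone your own fork instead if you prefer):

```
git clone https://github.com/hashicorp/terraform.git
```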
- [ ] __Search for possible duplicate reports__: It's helpful to keep bug
reports consolidated to one thread, so do a quick search on existing bug
reports to check if anybody else has reported the same thing. You can scope
searches by the label "bug" to help narrow things down.
Switch into the root directory of the cloned repository and build Terraform
using the Go toolchain in the standard way:
- [ ] __Include steps to reproduce__: Provide steps to reproduce the issue,
along with your `.tf` files, with secrets removed, so we can try to
reproduce it. Without this, it is much harder for us to fix the issue.
```
cd terraform
go install .
```
- [ ] __For panics, include `crash.log`__: If you experienced a panic, please
create a [gist](https://gist.github.com) of the *entire* generated crash log
for us to look at. Double check no sensitive items were in the log.
The first time you run the `go install` command, the Go toolchain will download
any library dependencies that you don't already have in your Go modules cache.
Subsequent builds will be faster because these dependencies will already be
available on your local disk.
#### Feature Requests
Once the compilation process succeeds, you can find a `terraform` executable in
the Go executable directory. If you haven't overridden it with the `GOBIN`
environment variable, the executable directory is the `bin` directory inside
the directory returned by the following command:
- [ ] __Search for possible duplicate requests__: It's helpful to keep requests
consolidated to one thread, so do a quick search on existing requests to
check if anybody else has reported the same thing. You can scope searches by
the label "enhancement" to help narrow things down.
```
go env GOPATH
```
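For example, assuming you have not set `GOBIN`, you could confirm the build by running the freshly installed executable directly from that location (a quick sanity check, not a required step):

```
$(go env GOPATH)/bin/terraform version
```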
- [ ] __Include a use case description__: In addition to describing the
behavior of the feature you'd like to see added, it's helpful to also lay
out the reason why the feature would be important and how it would benefit
Terraform users.
If you are planning to make changes to the Terraform source code, you should
run the unit test suite before you start to make sure everything is initially
passing:
#### Questions
```
go test ./...
```
Please do not use GitHub to ask questions! Instead:
As you make your changes, you can re-run the above command to ensure that the
tests are _still_ passing. If you are working only on a specific Go package,
you can speed up your testing cycle by testing only that single package, or
packages under a particular package prefix:
* __Search for answers in Terraform documentation__
```
go test ./command/...
go test ./addrs
```
* __Ask in the Community Forum__: Use [the community forum](https://discuss.hashicorp.com/c/terraform-core) for questions not answered by the documentation.
## Acceptance Tests: Testing interactions with external services
* __Request an update to the documentation__: If you find that the
documentation is confusing or incorrect, open an issue (or a pull request) and
let us know.
Terraform's unit test suite is self-contained, using mocks and local files
to help ensure that it can run offline and is unlikely to be broken by changes
to outside systems.
### Issue Lifecycle
However, several Terraform components interact with external services, such
as the automatic provider installation mechanism, the Terraform Registry,
Terraform Cloud, etc.
1. The issue is reported.
There are some optional tests in the Terraform CLI codebase that _do_ interact
with external services, which we collectively refer to as "acceptance tests".
You can enable these by setting the environment variable `TF_ACC=1` when
running the tests. We recommend focusing only on the specific package you
are working on when enabling acceptance tests, both because it can help the
test run to complete faster and because you are less likely to encounter
failures due to drift in systems unrelated to your current goal:
2. The issue is verified and categorized by a Terraform collaborator.
Categorization is done via GitHub labels. We generally use a two-label
system of (1) issue/PR type, and (2) section of the codebase. Type is
usually "bug", "enhancement", "documentation", or "question", and section
can be any of the providers or provisioners or "core".
```
TF_ACC=1 go test ./internal/initwd
```
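You can also pass the usual `go test` flags alongside `TF_ACC=1` to narrow the run down to individual tests and see verbose output; the test name below is only a placeholder:

```
# -run accepts a regular expression matched against test names
TF_ACC=1 go test ./internal/initwd -run TestPlaceholder -v
```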
3. Unless it is critical, the issue is left for a period of time (sometimes
many weeks), giving outside contributors a chance to address the issue.
Because the acceptance tests depend on services outside of the Terraform
codebase, and because the acceptance tests are usually used only when making
changes to the systems they cover, it is common and expected that drift in
those external systems will cause test failures. Because of this, prior to
working on a system covered by acceptance tests it's important to run the
existing tests for that system in an _unchanged_ work tree first and respond
to any test failures that preexist, to avoid misinterpreting such failures as
bugs in your new changes.
4. The issue is addressed in a pull request or commit. The issue will be
referenced in the commit message so that the code that fixes it is clearly
linked.
## Generated Code
5. The issue is closed. Sometimes, valid issues will be closed to keep
the issue tracker clean. The issue is still indexed and available for
future viewers, or can be re-opened if necessary.
Some files in the Terraform CLI codebase are generated. In most cases, we
update these using `go generate`, which is the standard way to encapsulate
code generation steps in a Go codebase.
## Pull Requests
```
go generate ./...
```
Thank you for contributing! Here you'll find information on what to include in
your Pull Request to ensure it is accepted quickly.
Use `git diff` afterwards to inspect the changes and ensure that they are what
you expected.
* Pull requests that don't follow the guidelines will be annotated with what
they're missing. A community or core team member may be able to swing around
and help finish up the work, but these PRs will generally hang out much
longer until they can be completed and merged.
Terraform includes generated Go stub code for the Terraform provider plugin
protocol, which is defined using Protocol Buffers. Because the Protocol Buffers
tools are not written in Go and thus cannot be automatically installed using
`go get`, we follow a different process for generating these, which requires
that you've already installed a suitable version of `protoc`:
### Pull Request Lifecycle
```
make protobuf
```
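If you are unsure whether a suitable `protoc` is installed, a quick check such as the following can help before running the make target (the required version may vary, so treat this only as a sanity check):

```
protoc --version
```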
1. You are welcome to submit your pull request for commentary or review before
it is fully completed. Please prefix the title of your pull request with
"[WIP]" to indicate this. It's also a good idea to include specific
questions or items you'd like feedback on.
## External Dependencies
2. Once you believe your pull request is ready to be merged, you can remove any
"[WIP]" prefix from the title and a core team member will review. Follow
[the checklists below](#checklists-for-contribution) to help ensure that
your contribution will be merged quickly.
Terraform uses Go Modules for dependency management, but currently uses
"vendoring" to include copies of all of the external library dependencies
in the Terraform repository to allow builds to complete even if third-party
dependency sources are unavailable.
3. One of Terraform's core team members will look over your contribution and
either approve it or provide comments letting you know if there is anything left to do. We
do our best to provide feedback in a timely manner, but it may take some
time for us to respond.
Our dependency licensing policy for Terraform excludes proprietary licenses
and "copyleft"-style licenses. We accept the common Mozilla Public License v2,
MIT License, and BSD licenses. We will consider other open source licenses
in similar spirit to those three, but if you plan to include such a dependency
in a contribution we'd recommend opening a GitHub issue first to discuss what
you intend to implement and what dependencies it will require so that the
Terraform team can review the relevant licenses to determine whether they meet our
licensing needs.
4. Once all outstanding comments and checklist items have been addressed, your
contribution will be merged! Merged PRs will be included in the next
Terraform release. The core team takes care of updating the CHANGELOG as
they merge.
If you need to add a new dependency to Terraform or update the selected version
for an existing one, use `go get` from the root of the Terraform repository
as follows:
5. In rare cases, we might decide that a PR should be closed. We'll make sure
to provide clear reasoning when this happens.
```
go get github.com/hashicorp/hcl/v2@2.0.0
```
### Checklists for Contribution
This command will download the requested version (2.0.0 in the above example)
and record that version selection in the `go.mod` file. It will also record
checksums for the module in the `go.sum`.
There are several different kinds of contribution, each of which has its own
standards for a speedy review. The following sections describe guidelines for
each type of contribution.
To complete the dependency change, clean up any redundancy in the module
metadata files and resynchronize the `vendor` directory with the new package
selections by running the following commands:
#### Documentation Update
```
go mod tidy
go mod vendor
```
Because [Terraform's website][website] is in the same repo as the code, it's
easy for anybody to help us improve our docs.
To ensure that the vendoring has worked correctly, be sure to run the unit
test suite at least once in _vendoring_ mode, where Go will use the vendored
dependencies to build the test programs:
- [ ] __Reasoning for docs update__: Including a quick explanation of why the
update is needed is helpful for reviewers.
- [ ] __Relevant Terraform version__: Is this update worth deploying to the
site immediately, or is it referencing an upcoming version of Terraform and
should get pushed out with the next release?
```
go test -mod=vendor ./...
```
#### New Provider
Because dependency changes affect a shared, top-level file, they are more likely
than some other change types to become conflicted with other proposed changes
during the code review process. For that reason, and to make dependency changes
more visible in the change history, we prefer to record dependency changes as
separate commits that include only the results of the above commands and the
minimal set of changes to Terraform's own code for compatibility with the
new version:
Implementing a new provider gives Terraform the ability to manage resources in
a whole new API. It's a larger undertaking, but brings major new functionality
into Terraform.
```
git add go.mod go.sum vendor
git commit -m "vendor: go get github.com/hashicorp/hcl/v2@2.0.0"
```
Terraform Providers are external plugins, not in the Terraform codebase. Please
see the [Provider Development Program](https://www.terraform.io/guides/terraform-provider-development-program.html) documentation if you are interested in
submitting a new provider.
You can then make use of the new or updated dependency in new code added in
subsequent commits.
#### Core Bugfix/Enhancement
## Proposing a Change
We are always happy when any developer is interested in diving into Terraform's
core to help out! Here's what we look for in smaller Core PRs.
If you'd like to contribute a code change to Terraform, we'd love to review
a GitHub pull request.
- [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
several different layers of abstraction. Generally the best place to start
is with a "Context Test". These are higher level test that interact
end-to-end with most of Terraform's core. They are divided into test files
for each major action (plan, apply, etc.). Getting a failing test is a great
way to prove out a bug report or a new enhancement. With a context test in
place, you can work on implementation and lower level unit tests. Lower
level tests are largely context dependent, but the Context Tests are almost
always part of core work.
- [ ] __Documentation updates__: If the core change involves anything that
needs to be reflected in our documentation, you can make those changes in
the same PR. The [Terraform website][website] source is in this repo and
includes instructions for getting a local copy of the site up and running if
you'd like to preview your changes.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
In order to be respectful of the time of community contributors, we prefer to
discuss potential changes in GitHub issues prior to implementation. That will
allow us to give design feedback up front and set expectations about the scope
of the change, and, for larger changes, how best to approach the work such that
the Terraform team can review it and merge it along with other concurrent work.
#### Core Feature
If the bug you wish to fix or enhancement you wish to implement isn't already
covered by a GitHub issue that contains feedback from the Terraform team,
please do start a discussion (either in
[a new GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose)
or an existing one, as appropriate) before you invest significant development
time. If you mention your intent to implement the change described in your
issue, the Terraform team can prioritize including implementation-related
feedback in the subsequent discussion.
If you're interested in taking on a larger core feature, it's a good idea to
get feedback early and often on the effort.
At this time, we do not have a formal process for reviewing outside proposals
that significantly change Terraform's workflow, its primary usage patterns,
and its language. While we do hope to put such a thing in place in the future,
we wish to be up front with potential contributors that unfortunately we are
unlikely to be able to give prompt feedback for large proposals that could
entail a significant design phase, though we are still interested to hear about
your use-cases so that we can consider ways to meet them as part of other
larger projects.
- [ ] __Early validation of idea and implementation plan__: Terraform's core
is complicated enough that there are often several ways to implement
something, each of which has different implications and tradeoffs. Working
through a plan of attack with the team before you dive into implementation
will help ensure that you're working in the right direction. Opening a GitHub
issue, or commenting on an existing issue, is a great way to get these
conversations started.
- [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
several different layers of abstraction. Generally the best place to start
is with a "Context Test". These are higher level test that interact
end-to-end with most of Terraform's core. They are divided into test files
for each major action (plan, apply, etc.). Getting a failing test is a great
way to prove out a bug report or a new enhancement. With a context test in
place, you can work on implementation and lower level unit tests. Lower
level tests are largely context dependent, but the Context Tests are almost
always part of core work.
- [ ] __Documentation updates__: If the core change involves anything that
needs to be reflected in our documentation, you can make those changes in
the same PR. The [Terraform website][website] source is in this repo and
includes instructions for getting a local copy of the site up and running if
you'd like to preview your changes.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
Most changes will involve updates to the test suite, and changes to Terraform's
documentation. The Terraform team can advise on different testing strategies
for specific scenarios, and may ask you to revise the specific phrasing of
your proposed documentation prose to match better with the standard "voice" of
Terraform's documentation.
### Writing Acceptance Tests
#### Acceptance Tests Often Cost Money to Run
Because acceptance tests create real resources, they often cost money to run.
Because the resources only exist for a short period of time, the total amount
of money required is usually relatively small. Nevertheless, we don't want
financial limitations to be a barrier to contribution, so if you are unable to
pay to run acceptance tests for your contribution, simply mention this in your
pull request. We will happily accept "best effort" implementations of
acceptance tests and run them for you on our side. This might mean that your PR
takes a bit longer to merge, but it most definitely is not a blocker for
contributions.
#### Running an Acceptance Test
Acceptance tests can be run using the `testacc` target in the Terraform
`Makefile`. The individual tests to run can be controlled using a regular
expression. Prior to running the tests, provider configuration details such as
access keys must be made available as environment variables.
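For example, an invocation might look something like the following sketch, which assumes the `Makefile` accepts `TEST` and `TESTARGS` variables; the credentials, package path, and test pattern shown are placeholders:

```
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
make testacc TEST=./path/to/package TESTARGS='-run=TestAccExample'
```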
[website]: https://github.com/hashicorp/terraform/tree/master/website
[acctests]: https://github.com/hashicorp/terraform#acceptance-tests
[community forum]: https://discuss.hashicorp.com/c/terraform-core
[ml]: https://groups.google.com/group/terraform-tool
This repository is primarily maintained by a small team at HashiCorp along with
their other responsibilities, so unfortunately we cannot always respond
promptly to pull requests, particularly if they do not relate to an existing
GitHub issue where the Terraform team has already participated. We _are_
grateful for all contributions however, and will give feedback on pull requests
as soon as we're able.

.github/ISSUE_TEMPLATE/config.yml (new file)

@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: Provider-related Feedback and Questions
url: https://github.com/terraform-providers
about: Each provider (e.g. AWS, Azure, GCP, Oracle, K8S, etc.) has its own repository; any provider-related issues or questions should be directed to the appropriate provider repository.
- name: Provider Development Feedback and Questions
url: https://github.com/hashicorp/terraform-plugin-sdk/issues/new/choose
about: The Plugin SDK has its own repository; any SDK or provider development related issues or questions should be directed there.
- name: Terraform Language or Workflow Questions
url: https://discuss.hashicorp.com/c/terraform-core
about: Please ask and answer language- or workflow-related questions through the Terraform Core Community Forum.


@ -1,18 +0,0 @@
---
name: Provider issue (AWS, Azure, GCP, Oracle, Kubernetes, etc.)
about: Do you have a bug, feature request, or other issue with a provider (not Terraform core or the HCL language itself)?
---
Hi there,
Each provider has its own repository, and issues should be opened there, not on the main Terraform repository.
Here are some of the most common:
* [AWS](https://github.com/terraform-providers/terraform-provider-aws)
* [Azure](https://github.com/terraform-providers/terraform-provider-azurerm)
* [Google](https://github.com/terraform-providers/terraform-provider-google)
* [Oracle](https://github.com/terraform-providers/terraform-provider-oci)
* [Kubernetes](https://github.com/terraform-providers/terraform-provider-kubernetes)
See the [terraform-providers](https://github.com/terraform-providers) GitHub organization for many others.

.github/SECURITY.md (new file)

@ -0,0 +1,4 @@
# Vulnerability Reporting
Please disclose security vulnerabilities responsibly by following the procedure
described at https://www.hashicorp.com/security#vulnerability-reporting

.github/SUPPORT.md

@ -1,5 +1,4 @@
# Support
Terraform is a mature project with a growing community. There are active, dedicated people willing to help you through various mediums.
Take a look at the mediums listed at https://www.terraform.io/community.html
If you have questions about Terraform usage, please feel free to create a topic
on [the official community forum](https://discuss.hashicorp.com/c/terraform-core).

.gitignore

@ -29,3 +29,6 @@ website/vendor
# Test exclusions
!command/testdata/**/*.tfstate
!command/testdata/**/.terraform/
# Coverage
coverage.txt

.go-version

@ -1 +1 @@
1.12.9
1.12.13

.travis.yml

@ -4,7 +4,7 @@ services:
- docker
language: go
go:
- "1.12.9"
- "1.12.13"
# add TF_CONSUL_TEST=1 to run consul tests
# they were causing timeouts in travis
@ -33,17 +33,22 @@ before_script:
- git config --global url.https://github.com/.insteadOf ssh://git@github.com/
script:
- make test
- make fmtcheck generate
- bash scripts/travis.sh
- go mod verify
- make e2etest
- GOOS=windows go build -mod=vendor
# website-test is temporarily disabled while we get the website build back in shape after the v0.12 reorganization
#- make website-test
after_success:
- bash <(curl -s https://codecov.io/bash)
branches:
only:
- master
- v0.11
- v0.12
notifications:
irc:
channels:


@ -1,43 +1,65 @@
# Building Terraform
This document contains details about the process for building binaries for
Terraform.
This document contains details about the process for building release-style
binaries for Terraform.
(If you are intending instead to make changes to Terraform and build binaries
only for your local testing, see
[the contributing guide](.github/CONTRIBUTING.md).)
## Versioning
As a pre-1.0 project, we use the MINOR and PATCH versions as follows:
Until Terraform v1.0, Terraform's versioning scheme is as follows:
* a `MINOR` version increment indicates a release that may contain backwards
incompatible changes
* a `PATCH` version increment indicates a release that may contain bugfixes as
well as additive (backwards compatible) features and enhancements
* Full version strings start with a zero in the initial position.
* The second position increments for _major_ releases, which may contain
backwards incompatible changes.
* The third and final position increments for _minor_ releases, which
we aim to keep backwards compatible with prior releases for the same major
version.
Although the Terraform team takes care to preserve compatibility between
major releases, major release upgrades will often require a subset of users
to take specific upgrade actions. This issue will persist while the product
design is refined in preparation for more specific backward-compatibility promises
in a later Terraform 1.0 release.
## Process
If you only need to build binaries for the platform you're running (Windows, Linux,
Mac OS X, etc.), you can follow the instructions in the README for [Developing
Terraform][1].
Terraform release binaries are built via cross-compilation on a Linux
system, using [gox](https://github.com/mitchellh/gox).
The guide below outlines the steps HashiCorp takes to build the official release
binaries for Terraform. This process will generate a set of binaries for each supported
platform, using the [gox](https://github.com/mitchellh/gox) tool.
The steps below are a subset of the steps HashiCorp uses to prepare the
official distribution packages available from
[the download page](https://www.terraform.io/downloads.html). This
process will generate an executable for each of the supported target platforms.
HashiCorp prepares release binaries on Linux amd64 systems. This build process
may need to be adjusted for other host platforms.
```sh
# clone the repository if needed
git clone https://github.com/hashicorp/terraform.git
cd terraform
# Verify unit tests pass
# Verify that the unit tests are passing
make test
# Build the release
# Run preparation steps and then build the executable for each target platform
# in the subdirectory "pkg".
# This generates binaries for each platform and places them in the pkg folder
make bin
```
After running these commands, you should have binaries for all supported
platforms in the `pkg` folder.
Official releases are then packaged, hashed, and signed before being uploaded
to [the HashiCorp releases service](https://releases.hashicorp.com/terraform/).
Those final packaging steps are not fully reproducible using the contents
of this repository due to the use of HashiCorp's private signing key. However,
you can place the generated executables in `.zip` archives to produce a
similar result without the checksums and digital signature.
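For instance, something like the following would produce a zip archive per platform; the directory layout under `pkg` is illustrative and may differ depending on the build configuration:

```sh
cd pkg/linux_amd64
zip ../terraform_linux_amd64.zip terraform
```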
## Release Bundles for use in Terraform Enterprise
[1]: https://github.com/hashicorp/terraform#developing-terraform
If you wish to build distribution archives that blend official Terraform
release executables with a mixture of official and third-party provider builds,
see [the `terraform-bundle` tool](tools/terraform-bundle).


@ -1,287 +1,22 @@
## 0.12.11 (Unreleased)
## 0.13.0 (Unreleased)
BUG FIXES:
BREAKING CHANGES:
* config: Clean up orphan modules in the presence of -target [GH-21313]
## 0.12.10 (October 07, 2019)
* command/import: remove the deprecated `-provider` command line argument [GH-24090]
#22862 fixed a bug where the `import` command was not properly attaching the configured provider for a resource to be imported, making the `-provider` command line argument unnecessary.
* config: Inside `provisioner` blocks that have `when = destroy` set, and inside any `connection` blocks that are used by such `provisioner` blocks, it is now an error to refer to any objects other than `self`, `count`, or `each` [GH-24083]
* config: The `merge` function now returns more precise type information, making it usable for values passed to `for_each` [GH-24032]
ENHANCEMENTS:
* config: `templatefile` function will now return a helpful error message if a given variable has an invalid name, rather than relying on a syntax error in the template parsing itself. [GH-24184]
* `terraform plan` and `terraform apply` will now warn when the `-target` option is used, to draw attention to the fact that the result of applying the plan is likely to be incomplete, and to remind users to re-run `terraform plan` with no targets afterwards to ensure that the configuration has converged. ([#22783](https://github.com/hashicorp/terraform/issues/22783))
* config: New function `parseint` for parsing strings containing digits as integers in various bases. ([#22747](https://github.com/hashicorp/terraform/issues/22747))
* config: New function `cidrsubnets`, which is a companion to the existing function `cidrsubnet` which can allocate multiple consecutive subnet prefixes (possibly of different prefix lengths) in a single call. ([#22858](https://github.com/hashicorp/terraform/issues/22858))
* backend/google: The GCS backend now supports OAuth2 token authentication. ([#21772](https://github.com/hashicorp/terraform/issues/21772))
* provisioner/habitat: Multiple updates and fixes, see PR for details ([#22705](https://github.com/hashicorp/terraform/issues/22705))
BUG FIXES:
* backend/manta: fix panic when `insecure_skip_tls_verify` was not set ([#22918](https://github.com/hashicorp/terraform/issues/22918))
## 0.12.9 (September 17, 2019)
NOTES:
* core: `ignore_changes` is now processed (in addition to existing behaviors) before the provider plan is run. This means that users may see fewer planned changes when using `ignore_changes`, as before this change, changes to ignored attributes were still being sent to CustomizeDiff in providers (which could mean cascading changes for some resources). This should be indicative that providers are no longer getting changes that were marked as ignored, but if unexpected plans are seen while using `ignore_changes`, investigate the settings in the `ignore_changes` block to ensure the appropriate attributes are set. ([#22520](https://github.com/hashicorp/terraform/issues/22520))
ENHANCEMENTS:
* provisioners/habitat: `accept_license` argument available to automate accepting the EULA, now required by this client ([#22745](https://github.com/hashicorp/terraform/issues/22745))
* config: add source addressing to unknown value errors in `for_each` ([#22760](https://github.com/hashicorp/terraform/issues/22760))
BUG FIXES:
* command/console: support -var and -var-file flags ([#22145](https://github.com/hashicorp/terraform/issues/22145))
* command/show: Fixed bug with wrong errors being returned or swallowed. ([#22772](https://github.com/hashicorp/terraform/issues/22772))
* config: The `cidrhost`, `cidrsubnet`, and `cidrnetmask` functions now behave correctly with IPv6 prefixes that are short enough for the host portion to be greater than 64-bit or 32-bit (depending on the target architecture). ([#22505](https://github.com/hashicorp/terraform/issues/22505))
* config: Fixed bug on empty sets with `for_each` ([#22281](https://github.com/hashicorp/terraform/issues/22281))
## 0.12.8 (September 04, 2019)
NEW FEATURES:
* lang/funcs: New `fileset` function, for finding static local files that match a glob pattern. ([#22523](https://github.com/hashicorp/terraform/issues/22523))
ENHANCEMENTS:
* remote-state/pg: add option to skip schema creation ([#21607](https://github.com/hashicorp/terraform/issues/21607))
BUG FIXES:
* command/console: use user-supplied `-plugin-dir` ([#22616](https://github.com/hashicorp/terraform/issues/22616))
* config: ensure sets are appropriately known for `for_each` ([#22597](https://github.com/hashicorp/terraform/issues/22597))
## 0.12.7 (August 22, 2019)
NEW FEATURES:
* New functions `regex` and `regexall` allow applying a regular expression pattern to a string and retrieving any matching substring(s) ([#22353](https://github.com/hashicorp/terraform/issues/22353))
ENHANCEMENTS:
* lang/funcs: `lookup()` can work with maps of lists, maps and objects ([#22269](https://github.com/hashicorp/terraform/issues/22269))
* SDK: helper/acctest: Add function to return random IP address ([#22312](https://github.com/hashicorp/terraform/issues/22312))
* SDK: httpclient: Introduce composable `UserAgent(version)` ([#22272](https://github.com/hashicorp/terraform/issues/22272))
* connection/ssh: Support certificate authentication ([#22156](https://github.com/hashicorp/terraform/issues/22156))
BUG FIXES:
* config: reduce MinItems and MaxItems validation during decoding, to allow for use of dynamic blocks ([#22530](https://github.com/hashicorp/terraform/issues/22530))
* config: don't validate MinItems and MaxItems in CoerceValue, allowing providers to set incomplete values ([#22478](https://github.com/hashicorp/terraform/issues/22478))
* config: fix panic on tuples with `for_each` ([#22279](https://github.com/hashicorp/terraform/issues/22279))
* config: fix references to `each` of `for_each` ([#22289](https://github.com/hashicorp/terraform/issues/22289))
* config: fix panic when using nested dynamic blocks ([#22314](https://github.com/hashicorp/terraform/issues/22314))
* config: ensure consistent evaluation when moving between single resources and `for_each` in addressing ([#22454](https://github.com/hashicorp/terraform/issues/22454))
* core: only start a single instance of each required provisioner ([#22553](https://github.com/hashicorp/terraform/issues/22553))
* command: fix issue where commands occasionally exited before the error message printed ([#22373](https://github.com/hashicorp/terraform/issues/22373))
* command/0.12upgrade: use user-supplied plugin-dir ([#22306](https://github.com/hashicorp/terraform/issues/22306))
* command/hook_ui: Truncate the ID considering multibyte characters ([#18823](https://github.com/hashicorp/terraform/issues/18823))
* command/fmt: Terraform fmt no longer inserts spaces after % ([#22356](https://github.com/hashicorp/terraform/issues/22356))
* command/state: Allow moving resources to modules not yet in state ([#22299](https://github.com/hashicorp/terraform/issues/22299))
* backend/google: Now using the OAuth2 token endpoint on `googleapis.com` instead of `google.com`. These endpoints are equivalent in functionality but `googleapis.com` hosts are resolvable from private Google Cloud Platform VPCs where other connectivity is restricted. ([#22451](https://github.com/hashicorp/terraform/issues/22451))
## 0.12.6 (July 31, 2019)
NOTES:
* backend/s3: After this update, the AWS Go SDK will prefer credentials found via the `AWS_PROFILE` environment variable when both the `AWS_PROFILE` environment variable and the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables are statically defined. Previously the SDK would ignore the `AWS_PROFILE` environment variable, if static environment credentials were also specified. This is listed as a bug fix in the AWS Go SDK release notes. ([#22253](https://github.com/hashicorp/terraform/issues/22253))
NEW FEATURES:
* backend/oss: added support for assume role config ([#22186](https://github.com/hashicorp/terraform/issues/22186))
* config: Resources can now use a for_each meta-argument ([#17179](https://github.com/hashicorp/terraform/issues/17179))
ENHANCEMENTS:
* backend/s3: Add support for assuming role via web identity token via the `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` environment variables ([#22253](https://github.com/hashicorp/terraform/issues/22253))
* backend/s3: Support automatic region validation for `me-south-1`. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) ([#22253](https://github.com/hashicorp/terraform/issues/22253))
* connection/ssh: Improve connection debug messages ([#22097](https://github.com/hashicorp/terraform/issues/22097))
BUG FIXES:
* backend/remote: remove misleading contents from error message ([#22148](https://github.com/hashicorp/terraform/issues/22148))
* backend/s3: Load credentials via the `AWS_PROFILE` environment variable (if available) when `AWS_PROFILE` is defined along with `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` ([#22253](https://github.com/hashicorp/terraform/issues/22253))
* config: Improve conditionals to return the correct type when dynamic values are present but unevaluated ([#22137](https://github.com/hashicorp/terraform/issues/22137))
* config: Fix panic when mistakenly using `dynamic` on an attribute ([#22169](https://github.com/hashicorp/terraform/issues/22169))
* cli: Fix crash with reset connection during init ([#22146](https://github.com/hashicorp/terraform/issues/22146))
* cli: show all deposed instances and prevent crash in `show` command ([#22149](https://github.com/hashicorp/terraform/issues/22149))
* configs/configupgrade: Fix crash with nil hilNode ([#22181](https://github.com/hashicorp/terraform/issues/22181))
* command/fmt: now formats correctly in presence of here-docs ([#21434](https://github.com/hashicorp/terraform/issues/21434))
* helper/schema: don't skip deprecation check during validation when attribute value is unknown ([#22262](https://github.com/hashicorp/terraform/issues/22262))
* plugin/sdk: allow MinItems > 1 when dynamic blocks ([#22221](https://github.com/hashicorp/terraform/issues/22221))
* plugin/sdk: fix reflect panics in helper/schema validation ([#22236](https://github.com/hashicorp/terraform/issues/22236))
## 0.12.5 (July 18, 2019)
ENHANCEMENTS:
* command/format: No longer show no-ops in `terraform show`, since nothing will change ([#21907](https://github.com/hashicorp/terraform/issues/21907))
* backend/s3: Support for assuming role using credential process from the shared AWS configuration file (support profile containing both `credential_process` and `role_arn` configurations) ([#21908](https://github.com/hashicorp/terraform/issues/21908))
* connection/ssh: Abort ssh connections when the server is no longer responding ([#22037](https://github.com/hashicorp/terraform/issues/22037))
* connection/ssh: Support ssh diffie-hellman-group-exchange-sha256 key exchange ([#22037](https://github.com/hashicorp/terraform/issues/22037))
BUG FIXES:
* backend/remote: fix conflict with normalized config dir and vcs root working directory ([#22096](https://github.com/hashicorp/terraform/issues/22096))
* backend/remote: be transparent about what filesystem prefix Terraform is uploading to the remote system, and why it's doing that ([#22121](https://github.com/hashicorp/terraform/issues/22121))
* configs: Ensure diagnostics are properly recorded from nested modules ([#22098](https://github.com/hashicorp/terraform/issues/22098))
* core: Prevent inconsistent final plan error when using dynamic in a set-type block ([#22057](https://github.com/hashicorp/terraform/issues/22057))
* lang/funcs: Allow null values in `compact` function ([#22044](https://github.com/hashicorp/terraform/issues/22044))
* lang/funcs: Pass through empty list in `chunklist` ([#22119](https://github.com/hashicorp/terraform/issues/22119))
## 0.12.4 (July 11, 2019)
NEW FEATURES:
* lang/funcs: new `abspath` function returns the absolute path to a given file ([#21409](https://github.com/hashicorp/terraform/issues/21409))
* backend/swift: support for user configured state object names in swift containers ([#17465](https://github.com/hashicorp/terraform/issues/17465))
BUG FIXES:
* core: Prevent crash when a resource has no current valid instance ([#21979](https://github.com/hashicorp/terraform/issues/21979))
* plugin/sdk: Prevent empty strings from being replaced with default values ([#21806](https://github.com/hashicorp/terraform/issues/21806))
* plugin/sdk: Ensure resource timeouts are not lost when there is an empty plan ([#21814](https://github.com/hashicorp/terraform/issues/21814))
* plugin/sdk: Don't add null elements to diagnostic paths when validating config ([#21884](https://github.com/hashicorp/terraform/issues/21884))
* lang/funcs: Add missing map of bool support for `lookup` ([#21863](https://github.com/hashicorp/terraform/issues/21863))
* config: Fix issue with downloading BitBucket modules from deprecated V1 API by updating go-getter dependency ([#21948](https://github.com/hashicorp/terraform/issues/21948))
* config: Fix conditionals to evaluate to the correct type when using null ([#21957](https://github.com/hashicorp/terraform/issues/21957))
## 0.12.3 (June 24, 2019)
ENHANCEMENTS:
* config: add GCS source support for modules ([#21254](https://github.com/hashicorp/terraform/issues/21254))
* command/format: Reduce extra whitespaces & new lines ([#21334](https://github.com/hashicorp/terraform/issues/21334))
* backend/s3: Support for chaining assume IAM role from AWS shared configuration files ([#21815](https://github.com/hashicorp/terraform/issues/21815))
BUG FIXES:
* configs: Can now use references like `tags["foo"]` in `ignore_changes` to ignore in-place updates to specific keys in a map ([#21788](https://github.com/hashicorp/terraform/issues/21788))
* configs: Fix panic on missing value for `version` attribute in `provider` blocks. ([#21825](https://github.com/hashicorp/terraform/issues/21825))
* lang/funcs: Fix `merge` panic on null values. Now will give an error if null used ([#21695](https://github.com/hashicorp/terraform/issues/21695))
* backend/remote: Fix "Conflict" error if the first state snapshot written after a Terraform CLI upgrade has the same content as the prior state. ([#21811](https://github.com/hashicorp/terraform/issues/21811))
* backend/s3: Fix AWS shared configuration file credential source not assuming a role with environment and ECS credentials ([#21815](https://github.com/hashicorp/terraform/issues/21815))
## 0.12.2 (June 12, 2019)
NEW FEATURES:
* provisioners: new provisioner: `puppet` ([#18851](https://github.com/hashicorp/terraform/issues/18851))
* `range` function for generating a sequence of numbers as a list ([#21461](https://github.com/hashicorp/terraform/issues/21461))
* `yamldecode` and *experimental* `yamlencode` functions for working with YAML-serialized data ([#21459](https://github.com/hashicorp/terraform/issues/21459))
* `uuidv5` function for generating name-based (as opposed to pseudorandom) UUIDs ([#21244](https://github.com/hashicorp/terraform/issues/21244))
* backend/oss: Add support for Alibaba OSS remote state ([#16927](https://github.com/hashicorp/terraform/issues/16927))
ENHANCEMENTS:
* config: consider build metadata when interpreting module versions ([#21640](https://github.com/hashicorp/terraform/issues/21640))
* backend/http: implement retries for the http backend ([#19702](https://github.com/hashicorp/terraform/issues/19702))
* backend/swift: authentication mechanisms now more consistent with other OpenStack-compatible tools ([#18671](https://github.com/hashicorp/terraform/issues/18671))
* backend/swift: add application credential support ([#20914](https://github.com/hashicorp/terraform/pull/20914))
BUG FIXES:
* command/show: use the state snapshot included in the planfile when rendering a plan to json ([#21597](https://github.com/hashicorp/terraform/issues/21597))
* config: Fix issue with empty dynamic blocks failing when using ConfigModeAttr ([#21549](https://github.com/hashicorp/terraform/issues/21549))
* core: Re-validate resource config during final plan ([#21555](https://github.com/hashicorp/terraform/issues/21555))
* core: Fix missing resource timeouts during destroy ([#21611](https://github.com/hashicorp/terraform/issues/21611))
* core: Don't panic when encountering an invalid `depends_on` ([#21590](https://github.com/hashicorp/terraform/issues/21590))
* backend: Fix panic when upgrading from a state with a hash value greater than MaxInt ([#21484](https://github.com/hashicorp/terraform/issues/21484))
## 0.12.1 (June 3, 2019)
BUG FIXES:
* core: Always try to select a workspace after initialization ([#21234](https://github.com/hashicorp/terraform/issues/21234))
* command/show: fix inconsistent json output causing a panic ([#21541](https://github.com/hashicorp/terraform/issues/21541))
* config: `distinct` function no longer panics when given an empty list ([#21538](https://github.com/hashicorp/terraform/issues/21538))
* config: Don't panic when a `version` constraint is added to a module that was previously initialized without one ([#21542](https://github.com/hashicorp/terraform/issues/21542))
* config: `matchkeys` function argument type checking will no longer fail incorrectly during validation ([#21576](https://github.com/hashicorp/terraform/issues/21576))
* backend/local: Don't panic if an instance in the state only has deposed instances, and no current instance ([#21575](https://github.com/hashicorp/terraform/issues/21575))
## 0.12.0 (May 22, 2019)
BUG FIXES:
* cli: Fix `terraform state mv` to correctly set the resource each mode based on the target address [GH-24254]
* cli: The `terraform plan` command (and the implied plan run by `terraform apply` with no arguments) will now print any warnings that were generated even if there are no changes to be made. [GH-24095]
* core: Instances are now destroyed only using their stored state, removing many cycle errors [GH-24083]
---
For information on prior major releases, see their changelogs:
This is the aggregated summary of changes compared to v0.11.14. If you'd like to see the incremental changelog through each of the v0.12.0 prereleases, please refer to [the v0.12.0-rc1 changelog](https://github.com/hashicorp/terraform/blob/v0.12.0-rc1/CHANGELOG.md).
---
The focus of v0.12.0 was on improvements to the Terraform language made in response to all of the feedback and experience gathered on prior versions. We hope that these language improvements will help to make configurations for more complex situations more readable, and improve the usability of re-usable modules.
However, an overhaul of this kind inevitably means that 100% compatibility is not possible. The updated language is designed to be broadly compatible with the 0.11 language as documented, but some of the improvements required a slightly stricter parser and language model in order to resolve ambiguity or to give better feedback in error messages.
If you are upgrading to v0.12.0, we strongly recommend reading [the upgrade guide](https://www.terraform.io/upgrade-guides/0-12.html) to learn the recommended upgrade process, which includes a tool to automatically upgrade many improved language constructs and to indicate situations where human intuition is required to complete the upgrade.
### Incompatibilities and Notes
* As noted above, the language overhaul means that several aspects of the language are now parsed or evaluated more strictly than before, so configurations that employ workarounds for prior version limitations or that followed conventions other than what was shown in documentation may require some updates. For more information, please refer to [the upgrade guide](https://www.terraform.io/upgrade-guides/0-12.html).
* In order to give better feedback about mistakes, Terraform now validates that all variable names set via `-var` and `-var-file` options correspond to declared variables, generating errors or warnings if not. In situations where automation is providing a fixed set of variables to all configurations (whether they are using them or not), use [`TF_VAR_` environment variables](https://www.terraform.io/docs/commands/environment-variables.html#tf_var_name) instead, which are ignored if they do not correspond to a declared variable.
* The wire protocol for provider and provisioner plugins has changed, so plugins built against prior versions of Terraform are not compatible with Terraform v0.12. The most commonly-downloaded providers already had v0.12-compatible releases at the time of v0.12.0 release, but some other providers (particularly those distributed independently of the `terraform init` installation mechanism) will need to make new releases before they can be used with Terraform v0.12 or later.
* The index API for automatic provider installation in `terraform init` is now provided by the Terraform Registry at `registry.terraform.io`, rather than the indexes directly on `releases.hashicorp.com`. The "releases" server is still currently the distribution source for the release archives themselves at the time of writing, but that may change over time.
* The serialization formats for persisted state snapshots and saved plans have changed. Third-party tools that parse these artifacts will need to be updated to support these new serialization formats.
For most use-cases, we recommend instead using [`terraform show -json`](https://www.terraform.io/docs/commands/show.html#json-output) to read the content of state or plan, in a form that is less likely to see significant breaking changes in future releases.
* [`terraform validate`](https://www.terraform.io/docs/commands/validate.html) now has a slightly smaller scope than before, focusing only on configuration syntax and type/value checking. This makes it safe to run in unattended scenarios, such as on save in a text editor.
### New Features
The full set of language improvements is too large to list them all out exhaustively, so the list below covers some highlights:
* **First-class expressions:** Prior to v0.12, expressions could be used only via string interpolation, like `"${var.foo}"`. Expressions are now fully integrated into the language, allowing them to be used directly as argument values, like `ami = var.ami`.
* **`for` expressions:** This new expression construct allows the construction of a list or map by transforming and filtering elements from another list or map. For more information, refer to [the _`for` expressions_ documentation](https://www.terraform.io/docs/configuration/expressions.html#for-expressions).
* **Dynamic configuration blocks:** For nested configuration blocks accepted as part of a resource configuration, it is now possible to dynamically generate zero or more blocks corresponding to items in a list or map using the special new `dynamic` block construct. This is the official replacement for the common (but buggy) unofficial workaround of treating a block type name as if it were an attribute expecting a list of maps value, which worked sometimes before as a result of some unintended coincidences in the implementation.
* **Generalised "splat" operator:** The `aws_instance.foo.*.id` syntax was previously a special case only for resources with `count` set. It is now an operator within the expression language that can be applied to any list value. There is also an optional new splat variant that allows both index and attribute access operations on each item in the list. For more information, refer to [the _Splat Expressions_ documentation](https://www.terraform.io/docs/configuration/expressions.html#splat-expressions).
* **Nullable argument values:** It is now possible to use a conditional expression like `var.foo != "" ? var.foo : null` to conditionally leave an argument value unset, whereas before Terraform required the configuration author to provide a specific default value in this case. Assigning `null` to an argument is equivalent to omitting that argument entirely.
* **Rich types in module input variables and output values:** Terraform v0.7 added support for returning flat lists and maps of strings, but this is now generalized to allow returning arbitrary nested data structures with mixed types. Module authors can specify an expected [type constraint](https://www.terraform.io/docs/configuration/types.html) for each input variable to allow early type checking of arguments.
* **Resource and module object values:** An entire resource or module can now be treated as an object value within expressions, including passing them through input variables and output values to other modules, using an attribute-less reference syntax, like `aws_instance.foo`.
* **Extended template syntax:** The simple interpolation syntax from prior versions is extended to become a simple template language, with support for conditional interpolations and repeated interpolations through iteration. For more information, see [the _String Templates_ documentation](https://www.terraform.io/docs/configuration/expressions.html#string-templates).
* **`jsondecode` and `csvdecode` interpolation functions:** Due to the richer type system in the new configuration language implementation, we can now offer functions for decoding serialization formats. [`jsondecode`](https://www.terraform.io/docs/configuration/functions/jsondecode.html) is the opposite of [`jsonencode`](https://www.terraform.io/docs/configuration/functions/jsonencode.html), while [`csvdecode`](https://www.terraform.io/docs/configuration/functions/csvdecode.html) provides a way to load in lists of maps from a compact tabular representation.
* **Revamped error messages:** Error messages relating to configuration now always include information about where in the configuration the problem was found, along with other contextual information. We have also revisited many of the most common error messages to reword them for clarity, consistency, and actionability.
* **Structural plan output:** When Terraform renders the set of changes it plans to make, it will now use formatting designed to be similar to the input configuration language, including nested rendering of individual changes within multi-line strings, JSON strings, and nested collections.
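To illustrate how several of these features combine in practice, here is a minimal configuration sketch. It uses the AWS provider purely for familiarity; the resource types, arguments, and variable names are illustrative only and are not taken from the release notes.

```hcl
variable "ami" {
  type = string
}

variable "instance_count" {
  type    = number
  default = 2
}

variable "key_name" {
  type    = string
  default = ""
}

variable "ingress_ports" {
  type    = list(number)
  default = [22, 443]
}

resource "aws_instance" "example" {
  count = var.instance_count

  # First-class expressions: no "${...}" wrapping required.
  ami           = var.ami
  instance_type = "t2.micro"

  # Conditionally leave an argument unset by assigning null.
  key_name = var.key_name != "" ? var.key_name : null
}

resource "aws_security_group" "example" {
  name = "example"

  # dynamic blocks: one ingress block per element of var.ingress_ports.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

output "first_instance" {
  # A whole resource instance can now be used as an object value.
  value = aws_instance.example[0]
}

output "instance_ids" {
  # Generalized splat operator applied to the list of instances.
  value = aws_instance.example[*].id
}

output "private_dns_by_id" {
  # for expression building a map from one collection to another.
  value = { for inst in aws_instance.example : inst.id => inst.private_dns }
}
```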
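A second sketch shows the richer type system together with the new decoding functions. The file names and the structure of the variable are hypothetical, chosen only for demonstration.

```hcl
variable "regions" {
  # A rich type constraint: a list of objects rather than flat strings.
  type = list(object({
    name = string
    cidr = string
  }))
}

locals {
  # jsondecode converts a JSON document into a native Terraform value.
  settings = jsondecode(file("${path.module}/settings.json"))

  # csvdecode produces a list of maps, one element per data row.
  users = csvdecode(file("${path.module}/users.csv"))
}

output "region_names" {
  # The for expression and rich types work together on structured values.
  value = [for r in var.regions : r.name]
}
```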
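The extended template syntax supports conditional and repeated directives directly inside strings, as in this small sketch (the variable names and values are hypothetical).

```hcl
variable "enabled" {
  type    = bool
  default = true
}

variable "names" {
  type    = list(string)
  default = ["alice", "bob"]
}

locals {
  # Conditional directive inside a quoted string template.
  status = "scaling is %{ if var.enabled }enabled%{ else }disabled%{ endif }"

  # Repetition directive inside a heredoc template.
  name_list = <<-EOT
    %{ for n in var.names ~}
    - ${n}
    %{ endfor ~}
  EOT
}
```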
### Other Improvements
* `terraform validate` now accepts a `-json` argument, which produces machine-readable output. Please refer to the documentation for this command for details on the format and some caveats that consumers must consider when using this interface. ([#17539](https://github.com/hashicorp/terraform/issues/17539))
* The JSON-based variant of the Terraform language now has a more tightly-specified and reliable mapping to the native syntax variant. In prior versions, certain Terraform configuration features did not function as expected or were not usable via the JSON-based forms. For more information, see [the _JSON Configuration Syntax_ documentation](https://www.terraform.io/docs/configuration/syntax-json.html).
* The new built-in function [`templatefile`](https://www.terraform.io/docs/configuration/functions/templatefile.html) allows rendering a template from a file directly in the language, without installing the separate Template provider and using the `template_file` data source.
* The new built-in function [`formatdate`](https://www.terraform.io/docs/configuration/functions/formatdate.html) is a specialized string formatting function for creating machine-oriented timestamp strings in various formats.
* The new built-in functions [`reverse`](https://www.terraform.io/docs/configuration/functions/reverse.html) and [`strrev`](https://www.terraform.io/docs/configuration/functions/strrev.html) reverse the order of items in a list and the order of Unicode characters in a string, respectively. A short sketch of these and the other new functions follows this list.
* A new `pg` state storage backend allows storing state in a PostgreSQL database; a minimal configuration sketch also follows this list.
* The `azurerm` state storage backend supports new authentication mechanisms, custom resource manager endpoints, and HTTP proxies.
* The `s3` state storage backend now supports `credential_source` in AWS configuration files and the new AWS regions `eu-north-1` and `ap-east-1`, and includes several other improvements previously made in the `aws` provider.
* The `swift` state storage backend now supports locking and workspaces.
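A brief sketch of the new functions mentioned above; the template file path, its variables, and the timestamp format string are illustrative only.

```hcl
locals {
  # templatefile renders an on-disk template without the template provider.
  haproxy_config = templatefile("${path.module}/haproxy.cfg.tpl", {
    port  = 8080
    hosts = ["a.example.com", "b.example.com"]
  })

  # formatdate builds machine-oriented timestamp strings.
  build_date = formatdate("YYYY-MM-DD", timestamp())

  # reverse operates on lists; strrev on the characters of a string.
  reversed_list   = reverse([1, 2, 3])   # [3, 2, 1]
  reversed_string = strrev("abc")        # "cba"
}
```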
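And a minimal sketch of the new `pg` backend block; the connection string is a placeholder, and the full set of supported arguments is described in the backend documentation.

```hcl
terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/terraform_backend"
  }
}
```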
### Bug Fixes
Quite a few bugs were fixed indirectly as a result of improvements to the underlying language engine, so a fully comprehensive list is not possible, but some of the more commonly encountered bugs fixed in this release include:
* config: The conditional operator `... ? ... : ...` now works with result values of any type and only returns evaluation errors for the chosen result expression, as those familiar with this operator in other languages might expect. (A short sketch follows this list.)
* config: Accept and ignore UTF-8 byte-order mark for configuration files ([#19715](https://github.com/hashicorp/terraform/issues/19715))
* config: When using a splat expression like `aws_instance.foo.*.id`, the addition of a new instance to the set (whose `id` is therefore not known until after apply) will no longer cause all of the other ids in the resulting list to appear unknown.
* config: The `jsonencode` function now preserves the types of values passed to it, even inside nested structures, whereas before it had a tendency to convert primitive-typed values to string representations.
* config: The `format` and `formatlist` functions now attempt automatic type conversions when the given values do not match the "verbs" in the format string, rather than producing a result with error placeholders in it.
* config: Assigning a list containing one or more unknown values to an argument expecting a list no longer produces the incorrect error message "should be a list", because Terraform is now able to track the individual elements as being unknown rather than the list as a whole, and to track the type of each unknown value. (This also avoids any need to place seemingly-redundant list brackets around values that are already lists, which would now be interpreted as a list of lists.)
* cli: When `create_before_destroy` is enabled for a resource, replacement actions are reflected correctly in rendered plans as `+/-` rather than `-/+`, and described as such in the UI messages.
* core: Various root causes of the "diffs didn't match during apply" class of error are now checked at their source, allowing Terraform to either avoid the problem occurring altogether (ideally) or to provide a more actionable error message to help with reporting, finding, and fixing the bug.
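As a small sketch of the improved conditional operator (variable names and defaults are hypothetical), both result arms may now be non-string values such as lists:

```hcl
variable "use_existing" {
  type    = bool
  default = false
}

variable "existing_subnet_ids" {
  type    = list(string)
  default = []
}

variable "new_subnet_ids" {
  type    = list(string)
  default = ["subnet-aaaa", "subnet-bbbb"]
}

locals {
  # Both result arms may now be of any type, such as lists.
  subnet_ids = var.use_existing ? var.existing_subnet_ids : var.new_subnet_ids
}
```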
---
For information on v0.11 and prior releases, please see [the v0.11 branch changelog](https://github.com/hashicorp/terraform/blob/v0.11/CHANGELOG.md).
* [v0.12](https://github.com/hashicorp/terraform/blob/v0.12/CHANGELOG.md)
* [v0.11 and earlier](https://github.com/hashicorp/terraform/blob/v0.11/CHANGELOG.md)


@ -1,4 +1,8 @@
# remote-state backends.
/backend/remote-state/azure @terraform-azure
/backend/remote-state/gcs @terraform-google
/backend/remote-state/s3 @terraform-aws
# Each line is a file pattern followed by one or more owners.
# More on CODEOWNERS files: https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
# Remote-state backends
/backend/remote-state/azure @hashicorp/terraform-azure
/backend/remote-state/gcs @hashicorp/terraform-google
/backend/remote-state/s3 @hashicorp/terraform-aws
/backend/remote-state/s3 @likexian


@ -11,7 +11,7 @@
FROM golang:alpine
LABEL maintainer="HashiCorp Terraform Team <terraform@hashicorp.com>"
RUN apk add --update git bash openssh
RUN apk add --no-cache git bash openssh
ENV TF_DEV=true
ENV TF_RELEASE=1


@ -5,11 +5,6 @@ WEBSITE_REPO=github.com/hashicorp/terraform-website
default: test
tools:
GO111MODULE=off go get -u golang.org/x/tools/cmd/stringer
GO111MODULE=off go get -u golang.org/x/tools/cmd/cover
GO111MODULE=off go get -u github.com/golang/mock/mockgen
# bin generates the releaseable binaries for Terraform
bin: fmtcheck generate
@TF_RELEASE=1 sh -c "'$(CURDIR)/scripts/build.sh'"
@ -44,10 +39,12 @@ testacc: fmtcheck generate
TF_ACC=1 go test $(TEST) -v $(TESTARGS) -mod=vendor -timeout 120m
# e2etest runs the end-to-end tests against a generated Terraform binary
# and a generated terraform-bundle binary.
# The TF_ACC here allows network access, but does not require any special
# credentials since the e2etests use local-only providers such as "null".
# credentials.
e2etest: generate
TF_ACC=1 go test -mod=vendor -v ./command/e2etest
TF_ACC=1 go test -mod=vendor -v ./tools/terraform-bundle/e2etest
test-compile: fmtcheck generate
@if [ "$(TEST)" = "./..." ]; then \
@ -62,9 +59,6 @@ testrace: fmtcheck generate
TF_ACC= go test -mod=vendor -race $(TEST) $(TESTARGS)
cover:
@go tool cover 2>/dev/null; if [ $$? -eq 3 ]; then \
go get -u golang.org/x/tools/cmd/cover; \
fi
go test $(TEST) -coverprofile=coverage.out
go tool cover -html=coverage.out
rm coverage.out
@ -72,7 +66,7 @@ cover:
# generate runs `go generate` to build the dynamically generated
# source files, except the protobuf stubs which are built instead with
# "make protobuf".
generate: tools
generate:
GOFLAGS=-mod=vendor go generate ./...
# go fmt doesn't support -mod=vendor but it still wants to populate the
# module cache with everything in go.mod even though formatting requires
@ -147,4 +141,4 @@ endif
# under parallel conditions.
.NOTPARALLEL:
.PHONY: bin cover default dev e2etest fmt fmtcheck generate protobuf plugin-dev quickdev test-compile test testacc testrace tools vendor-status website website-test
.PHONY: bin cover default dev e2etest fmt fmtcheck generate protobuf plugin-dev quickdev test-compile test testacc testrace vendor-status website website-test

README.md

@ -2,10 +2,9 @@ Terraform
=========
- Website: https://www.terraform.io
- [![Gitter chat](https://badges.gitter.im/hashicorp-terraform/Lobby.png)](https://gitter.im/hashicorp-terraform/Lobby)
- Mailing list: [Google Groups](http://groups.google.com/group/terraform-tool)
- Forums: [HashiCorp Discuss](https://discuss.hashicorp.com/c/terraform-core)
<img alt="Terraform" src="https://cdn.rawgit.com/hashicorp/terraform-website/master/content/source/assets/images/logo-hashicorp.svg" width="600px">
<img alt="Terraform" src="https://www.terraform.io/assets/images/logo-hashicorp-3f10732f.svg" width="600px">
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
@ -34,134 +33,9 @@ All documentation is available on the [Terraform website](http://www.terraform.i
Developing Terraform
--------------------
If you wish to work on Terraform itself or any of its built-in providers, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*).
This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository in [the `terraform-providers` organization](https://github.com/terraform-providers) on GitHub. Instructions for developing each provider are in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
For local development of Terraform core, first make sure Go is properly installed and that a
[GOPATH](http://golang.org/doc/code.html#GOPATH) has been set. You will also need to add `$GOPATH/bin` to your `$PATH`.
Next, using [Git](https://git-scm.com/), clone this repository into `$GOPATH/src/github.com/hashicorp/terraform`.
You'll need to run `make tools` to install some required tools, then `make`. This will compile the code and then run the tests. If this exits with exit status 0, then everything is working!
You only need to run `make tools` once (or when the tools change).
```sh
$ cd "$GOPATH/src/github.com/hashicorp/terraform"
$ make tools
$ make
```
To compile a development version of Terraform and the built-in plugins, run `make dev`. This will build everything using [gox](https://github.com/mitchellh/gox) and put Terraform binaries in the `bin` and `$GOPATH/bin` folders:
```sh
$ make dev
...
$ bin/terraform
...
```
If you're developing a specific package, you can run tests for just that package by specifying the `TEST` variable. For example below, only `terraform` package tests will be run.
```sh
$ make test TEST=./terraform
...
```
If you're working on a specific provider which has not been separated into an individual repository and only wish to rebuild that provider, you can use the `plugin-dev` target. For example, to build only the Test provider:
```sh
$ make plugin-dev PLUGIN=provider-test
```
### Dependencies
Terraform uses Go Modules for dependency management, but for the moment is
continuing to use Go 1.6-style vendoring for compatibility with tools that
have not yet been updated for full Go Modules support.
If you're developing Terraform, there are a few tasks you might need to perform.
#### Adding a dependency
If you're adding a dependency, you'll need to vendor it in the same Pull Request as the code that depends on it. You should do this in a separate commit from your code, as makes PR review easier and Git history simpler to read in the future.
To add a dependency:
Assuming your work is on a branch called `my-feature-branch`, the steps look like this:
1. Add an `import` statement to a suitable package in the Terraform code.
2. Run `go mod vendor` to download the latest version of the module containing
the imported package into the `vendor/` directory, and update the `go.mod`
and `go.sum` files.
3. Review the changes in git and commit them.
#### Updating a dependency
To update a dependency:
1. Run `go get -u module-path@version-number`, such as `go get -u github.com/hashicorp/hcl@2.0.0`
2. Run `go mod vendor` to update the vendored copy in the `vendor/` directory.
3. Review the changes in git and commit them.
### Acceptance Tests
Terraform has a comprehensive [acceptance
test](http://en.wikipedia.org/wiki/Acceptance_testing) suite covering the
built-in providers.
### Cross Compilation and Building for Distribution
If you wish to cross-compile Terraform for another architecture, you can set the `XC_OS` and `XC_ARCH` environment variables to values representing the target operating system and architecture before calling `make`. The output is placed in the `pkg` subdirectory tree both expanded in a directory representing the OS/architecture combination and as a ZIP archive.
For example, to compile 64-bit Linux binaries on Mac OS X, you can run:
```sh
$ XC_OS=linux XC_ARCH=amd64 make bin
...
$ file pkg/linux_amd64/terraform
terraform: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
```
`XC_OS` and `XC_ARCH` can be space separated lists representing different combinations of operating system and architecture. For example, to compile for both Linux and Mac OS X, targeting both 32- and 64-bit architectures, you can run:
```sh
$ XC_OS="linux darwin" XC_ARCH="386 amd64" make bin
...
$ tree ./pkg/ -P "terraform|*.zip"
./pkg/
├── darwin_386
│   └── terraform
├── darwin_386.zip
├── darwin_amd64
│   └── terraform
├── darwin_amd64.zip
├── linux_386
│   └── terraform
├── linux_386.zip
├── linux_amd64
│   └── terraform
└── linux_amd64.zip
4 directories, 8 files
```
_Note: Cross-compilation uses [gox](https://github.com/mitchellh/gox), which requires toolchains to be built with versions of Go prior to 1.5. In order to successfully cross-compile with older versions of Go, you will need to run `gox -build-toolchain` before running the commands detailed above._
#### Docker
When using docker you don't need to have any of the Go development tools installed and you can clone terraform to any location on disk (doesn't have to be in your $GOPATH). This is useful for users who want to build `master` or a specific branch for testing without setting up a proper Go environment.
For example, run the following command to install the required tools and build terraform in a linux-based container for macOS.
```sh
docker run --rm -v $(pwd):/go/src/github.com/hashicorp/terraform -w /go/src/github.com/hashicorp/terraform -e XC_OS=darwin -e XC_ARCH=amd64 golang:latest bash -c "apt-get update && apt-get install -y zip && make tools bin"
```
To learn more about compiling Terraform and contributing suggested changes, please refer to [the contributing guide](.github/CONTRIBUTING.md).
## License
[Mozilla Public License v2.0](https://github.com/hashicorp/terraform/blob/master/LICENSE)


@ -14,6 +14,15 @@ func (v InputVariable) String() string {
return "var." + v.Name
}
// Absolute converts the receiver into an absolute address within the given
// module instance.
func (v InputVariable) Absolute(m ModuleInstance) AbsInputVariableInstance {
return AbsInputVariableInstance{
Module: m,
Variable: v,
}
}
// AbsInputVariableInstance is the address of an input variable within a
// particular module instance.
type AbsInputVariableInstance struct {
@ -34,7 +43,7 @@ func (m ModuleInstance) InputVariable(name string) AbsInputVariableInstance {
func (v AbsInputVariableInstance) String() string {
if len(v.Module) == 0 {
return v.String()
return v.Variable.String()
}
return fmt.Sprintf("%s.%s", v.Module.String(), v.Variable.String())


@ -18,6 +18,10 @@ import (
type InstanceKey interface {
instanceKeySigil()
String() string
// Value returns the cty.Value of the appropriate type for the InstanceKey
// value.
Value() cty.Value
}
// ParseInstanceKey returns the instance key corresponding to the given value,
@ -56,6 +60,10 @@ func (k IntKey) String() string {
return fmt.Sprintf("[%d]", int(k))
}
func (k IntKey) Value() cty.Value {
return cty.NumberIntVal(int64(k))
}
// StringKey is the InstanceKey representation representing string indices, as
// used when the "for_each" argument is specified with a map or object type.
type StringKey string
@ -69,6 +77,10 @@ func (k StringKey) String() string {
return fmt.Sprintf("[%q]", string(k))
}
func (k StringKey) Value() cty.Value {
return cty.StringVal(string(k))
}
// InstanceKeyLess returns true if the first given instance key i should sort
// before the second key j, and false otherwise.
func InstanceKeyLess(i, j InstanceKey) bool {


@ -33,7 +33,11 @@ func (m Module) String() string {
if len(m) == 0 {
return ""
}
return strings.Join([]string(m), ".")
var steps []string
for _, s := range m {
steps = append(steps, "module", s)
}
return strings.Join(steps, ".")
}
// Child returns the address of a child call in the receiver, identified by the


@ -57,7 +57,7 @@ func ParseModuleInstance(traversal hcl.Traversal) (ModuleInstance, tfdiags.Diagn
// If a reference string is coming from a source that should be identified in
// error messages then the caller should instead parse it directly using a
// suitable function from the HCL API and pass the traversal itself to
// ParseProviderConfigCompact.
// ParseModuleInstance.
//
// Error diagnostics are returned if either the parsing fails or the analysis
// of the traversal fails. There is no way for the caller to distinguish the
@ -410,6 +410,26 @@ func (m ModuleInstance) TargetContains(other Targetable) bool {
}
}
// Module returns the address of the module that this instance is an instance
// of.
func (m ModuleInstance) Module() Module {
if len(m) == 0 {
return nil
}
ret := make(Module, len(m))
for i, step := range m {
ret[i] = step.Name
}
return ret
}
func (m ModuleInstance) targetableSigil() {
// ModuleInstance is targetable
}
func (s ModuleInstanceStep) String() string {
if s.InstanceKey != NoKey {
return s.Name + s.InstanceKey.String()
}
return s.Name
}

addrs/provider.go

@ -0,0 +1,302 @@
package addrs
import (
"fmt"
"strings"
"golang.org/x/net/idna"
"github.com/hashicorp/hcl/v2"
svchost "github.com/hashicorp/terraform-svchost"
"github.com/hashicorp/terraform/tfdiags"
)
// Provider encapsulates a single provider type. In the future this will be
// extended to include additional fields including Namespace and SourceHost
type Provider struct {
Type string
Namespace string
Hostname svchost.Hostname
}
// DefaultRegistryHost is the hostname used for provider addresses that do
// not have an explicit hostname.
const DefaultRegistryHost = svchost.Hostname("registry.terraform.io")
// LegacyProviderNamespace is the special string used in the Namespace field
// of type Provider to mark a legacy provider address. This special namespace
// value would normally be invalid, and can be used only when the hostname is
// DefaultRegistryHost because that host owns the mapping from legacy name to
// FQN.
const LegacyProviderNamespace = "-"
// String returns an FQN string, intended for use in output.
func (pt Provider) String() string {
if pt.IsZero() {
panic("called String on zero-value addrs.Provider")
}
return pt.Hostname.ForDisplay() + "/" + pt.Namespace + "/" + pt.Type
}
// NewProvider constructs a provider address from its parts, and normalizes
// the namespace and type parts to lowercase using unicode case folding rules
// so that resulting addrs.Provider values can be compared using standard
// Go equality rules (==).
//
// The hostname is given as a svchost.Hostname, which is required by the
// contract of that type to have already been normalized for equality testing.
//
// This function will panic if the given namespace or type name are not valid.
// When accepting namespace or type values from outside the program, use
// ParseProviderPart first to check that the given value is valid.
func NewProvider(hostname svchost.Hostname, namespace, typeName string) Provider {
if namespace == LegacyProviderNamespace {
// Legacy provider addresses must always be created via
// NewLegacyProvider so that we can use static analysis to find
// codepaths still working with those.
panic("attempt to create legacy provider address using NewProvider; use NewLegacyProvider instead")
}
return Provider{
Type: MustParseProviderPart(typeName),
Namespace: MustParseProviderPart(namespace),
Hostname: hostname,
}
}
// NewDefaultProvider returns the default address of a HashiCorp-maintained,
// Registry-hosted provider.
func NewDefaultProvider(name string) Provider {
return Provider{
Type: MustParseProviderPart(name),
Namespace: "hashicorp",
Hostname: DefaultRegistryHost,
}
}
// NewLegacyProvider returns a mock address for a provider.
// This will be removed when ProviderType is fully integrated.
func NewLegacyProvider(name string) Provider {
return Provider{
// We intentionally don't normalize and validate the legacy names,
// because existing code expects legacy provider names to pass through
// verbatim, even if not compliant with our new naming rules.
Type: name,
Namespace: LegacyProviderNamespace,
Hostname: DefaultRegistryHost,
}
}
// LegacyString returns the provider type, which is frequently used
// interchangeably with provider name. This function can and should be removed
// when provider type is fully integrated. As a safeguard for future
// refactoring, this function panics if the Provider is not a legacy provider.
func (pt Provider) LegacyString() string {
if pt.IsZero() {
panic("called LegacyString on zero-value addrs.Provider")
}
if pt.Namespace != LegacyProviderNamespace {
panic(pt.String() + " is not a legacy addrs.Provider")
}
return pt.Type
}
// IsZero returns true if the receiver is the zero value of addrs.Provider.
//
// The zero value is not a valid addrs.Provider and calling other methods on
// such a value is likely to either panic or otherwise misbehave.
func (pt Provider) IsZero() bool {
return pt == Provider{}
}
// LessThan returns true if the receiver should sort before the other given
// address in an ordered list of provider addresses.
//
// This ordering is an arbitrary one just to allow deterministic results from
// functions that would otherwise have no natural ordering. It's subject
// to change in future.
func (pt Provider) LessThan(other Provider) bool {
switch {
case pt.Hostname != other.Hostname:
return pt.Hostname < other.Hostname
case pt.Namespace != other.Namespace:
return pt.Namespace < other.Namespace
default:
return pt.Type < other.Type
}
}
// ParseProviderSourceString parses the source attribute and returns a provider.
// This is intended primarily to parse the FQN-like strings returned by
// terraform-config-inspect.
//
// The following are valid source string formats:
// name
// namespace/name
// hostname/namespace/name
func ParseProviderSourceString(str string) (Provider, tfdiags.Diagnostics) {
var ret Provider
var diags tfdiags.Diagnostics
// split the source string into individual components
parts := strings.Split(str, "/")
if len(parts) == 0 || len(parts) > 3 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider source string",
Detail: `The "source" attribute must be in the format "[hostname/][namespace/]name"`,
})
return ret, diags
}
// check for an invalid empty string in any part
for i := range parts {
if parts[i] == "" {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider source string",
Detail: `The "source" attribute must be in the format "[hostname/][namespace/]name"`,
})
return ret, diags
}
}
// check the 'name' portion, which is always the last part
givenName := parts[len(parts)-1]
name, err := ParseProviderPart(givenName)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider type",
Detail: fmt.Sprintf(`Invalid provider type %q in source %q: %s"`, name, str, err),
})
return ret, diags
}
ret.Type = name
ret.Hostname = DefaultRegistryHost
if len(parts) == 1 {
// FIXME: update this to NewDefaultProvider in the provider source release
return NewLegacyProvider(parts[0]), diags
}
if len(parts) >= 2 {
// the namespace is always the second-to-last part
givenNamespace := parts[len(parts)-2]
if givenNamespace == LegacyProviderNamespace {
// For now we're tolerating legacy provider addresses until we've
// finished updating the rest of the codebase to no longer use them,
// or else we'd get errors round-tripping through legacy subsystems.
ret.Namespace = LegacyProviderNamespace
} else {
namespace, err := ParseProviderPart(givenNamespace)
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider namespace",
Detail: fmt.Sprintf(`Invalid provider namespace %q in source %q: %s"`, namespace, str, err),
})
return Provider{}, diags
}
ret.Namespace = namespace
}
}
// Final Case: 3 parts
if len(parts) == 3 {
// the namespace is always the first part in a three-part source string
hn, err := svchost.ForComparison(parts[0])
if err != nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider source hostname",
Detail: fmt.Sprintf(`Invalid provider source hostname namespace %q in source %q: %s"`, hn, str, err),
})
return Provider{}, diags
}
ret.Hostname = hn
}
if ret.Namespace == LegacyProviderNamespace && ret.Hostname != DefaultRegistryHost {
// Legacy provider addresses must always be on the default registry
// host, because the default registry host decides what actual FQN
// each one maps to.
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider namespace",
Detail: "The legacy provider namespace \"-\" can be used only with hostname " + DefaultRegistryHost.ForDisplay() + ".",
})
return Provider{}, diags
}
return ret, diags
}
// ParseProviderPart processes an addrs.Provider namespace or type string
// provided by an end-user, producing a normalized version if possible or
// an error if the string contains invalid characters.
//
// A provider part is processed in the same way as an individual label in a DNS
// domain name: it is transformed to lowercase per the usual DNS case mapping
// and normalization rules and may contain only letters, digits, and dashes.
// Additionally, dashes may not appear at the start or end of the string.
//
// These restrictions are intended to allow these names to appear in fussy
// contexts such as directory/file names on case-insensitive filesystems,
// repository names on GitHub, etc. We're using the DNS rules in particular,
// rather than some similar rules defined locally, because the hostname part
// of an addrs.Provider is already a hostname and it's ideal to use exactly
// the same case folding and normalization rules for all of the parts.
//
// In practice a provider type string conventionally does not contain dashes
// either. Such names are permitted, but providers with such type names will be
// hard to use because their resource type names will not be able to contain
// the provider type name and thus each resource will need an explicit provider
// address specified. (A real-world example of such a provider is the
// "google-beta" variant of the GCP provider, which has resource types that
// start with the "google_" prefix instead.)
//
// It's valid to pass the result of this function as the argument to a
// subsequent call, in which case the result will be identical.
func ParseProviderPart(given string) (string, error) {
if len(given) == 0 {
return "", fmt.Errorf("must have at least one character")
}
// We're going to process the given name using the same "IDNA" library we
// use for the hostname portion, since it already implements the case
// folding rules we want.
//
// The idna library doesn't expose individual label parsing directly, but
// once we've verified it doesn't contain any dots we can just treat it
// like a top-level domain for this library's purposes.
if strings.ContainsRune(given, '.') {
return "", fmt.Errorf("dots are not allowed")
}
// We don't allow names containing multiple consecutive dashes, just as
// a matter of preference: they look weird, confusing, or incorrect.
// This also, as a side-effect, prevents the use of the "punycode"
// indicator prefix "xn--" that would cause the IDNA library to interpret
// the given name as punycode, because that would be weird and unexpected.
if strings.Contains(given, "--") {
return "", fmt.Errorf("cannot use multiple consecutive dashes")
}
result, err := idna.Lookup.ToUnicode(given)
if err != nil {
return "", fmt.Errorf("must contain only letters, digits, and dashes, and may not use leading or trailing dashes")
}
return result, nil
}
// MustParseProviderPart is a wrapper around ParseProviderPart that panics if
// it returns an error.
func MustParseProviderPart(given string) string {
result, err := ParseProviderPart(given)
if err != nil {
panic(err.Error())
}
return result
}


@ -4,153 +4,104 @@ import (
"fmt"
"github.com/hashicorp/terraform/tfdiags"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
)
// ProviderConfig is the address of a provider configuration.
type ProviderConfig struct {
Type string
// ProviderConfig is an interface type whose dynamic type can be either
// LocalProviderConfig or AbsProviderConfig, in order to represent situations
// where a value might either be module-local or absolute but the decision
// cannot be made until runtime.
//
// Where possible, use either LocalProviderConfig or AbsProviderConfig directly
// instead, to make intent more clear. ProviderConfig can be used only in
// situations where the recipient of the value has some out-of-band way to
// determine a "current module" to use if the value turns out to be
// a LocalProviderConfig.
//
// Recipients of non-nil ProviderConfig values that actually need
// AbsProviderConfig values should call ResolveAbsProviderAddr on the
// *configs.Config value representing the root module configuration, which
// handles the translation from local to fully-qualified using mapping tables
// defined in the configuration.
//
// Recipients of a ProviderConfig value can assume it can contain only a
// LocalProviderConfig value, an AbsProviderConfig value, or nil to represent
// the absence of a provider config in situations where that is meaningful.
type ProviderConfig interface {
providerConfig()
}
// LocalProviderConfig is the address of a provider configuration from the
// perspective of references in a particular module.
//
// Finding the corresponding AbsProviderConfig will require looking up the
// LocalName in the providers table in the module's configuration; there is
// no syntax-only translation between these types.
type LocalProviderConfig struct {
LocalName string
// If not empty, Alias identifies which non-default (aliased) provider
// configuration this address refers to.
Alias string
}
// NewDefaultProviderConfig returns the address of the default (un-aliased)
// configuration for the provider with the given type name.
func NewDefaultProviderConfig(typeName string) ProviderConfig {
return ProviderConfig{
Type: typeName,
var _ ProviderConfig = LocalProviderConfig{}
// NewDefaultLocalProviderConfig returns the address of the default (un-aliased)
// configuration for the provider with the given local type name.
func NewDefaultLocalProviderConfig(LocalNameName string) LocalProviderConfig {
return LocalProviderConfig{
LocalName: LocalNameName,
}
}
// ParseProviderConfigCompact parses the given absolute traversal as a relative
// provider address in compact form. The following are examples of traversals
// that can be successfully parsed as compact relative provider configuration
// addresses:
//
// aws
// aws.foo
//
// This function will panic if given a relative traversal.
//
// If the returned diagnostics contains errors then the result value is invalid
// and must not be used.
func ParseProviderConfigCompact(traversal hcl.Traversal) (ProviderConfig, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
ret := ProviderConfig{
Type: traversal.RootName(),
}
// providerConfig Implements addrs.ProviderConfig.
func (pc LocalProviderConfig) providerConfig() {}
if len(traversal) < 2 {
// Just a type name, then.
return ret, diags
}
aliasStep := traversal[1]
switch ts := aliasStep.(type) {
case hcl.TraverseAttr:
ret.Alias = ts.Name
return ret, diags
default:
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "The provider type name must either stand alone or be followed by an alias name separated with a dot.",
Subject: aliasStep.SourceRange().Ptr(),
})
}
if len(traversal) > 2 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "Extraneous extra operators after provider configuration address.",
Subject: traversal[2:].SourceRange().Ptr(),
})
}
return ret, diags
}
// ParseProviderConfigCompactStr is a helper wrapper around ParseProviderConfigCompact
// that takes a string and parses it with the HCL native syntax traversal parser
// before interpreting it.
//
// This should be used only in specialized situations since it will cause the
// created references to not have any meaningful source location information.
// If a reference string is coming from a source that should be identified in
// error messages then the caller should instead parse it directly using a
// suitable function from the HCL API and pass the traversal itself to
// ParseProviderConfigCompact.
//
// Error diagnostics are returned if either the parsing fails or the analysis
// of the traversal fails. There is no way for the caller to distinguish the
// two kinds of diagnostics programmatically. If error diagnostics are returned
// then the returned address is invalid.
func ParseProviderConfigCompactStr(str string) (ProviderConfig, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
diags = diags.Append(parseDiags)
if parseDiags.HasErrors() {
return ProviderConfig{}, diags
}
addr, addrDiags := ParseProviderConfigCompact(traversal)
diags = diags.Append(addrDiags)
return addr, diags
}
// Absolute returns an AbsProviderConfig from the receiver and the given module
// instance address.
func (pc ProviderConfig) Absolute(module ModuleInstance) AbsProviderConfig {
return AbsProviderConfig{
Module: module,
ProviderConfig: pc,
}
}
func (pc ProviderConfig) String() string {
if pc.Type == "" {
func (pc LocalProviderConfig) String() string {
if pc.LocalName == "" {
// Should never happen; always indicates a bug
return "provider.<invalid>"
}
if pc.Alias != "" {
return fmt.Sprintf("provider.%s.%s", pc.Type, pc.Alias)
return fmt.Sprintf("provider.%s.%s", pc.LocalName, pc.Alias)
}
return "provider." + pc.Type
return "provider." + pc.LocalName
}
// StringCompact is an alternative to String that returns the form that can
// be parsed by ParseProviderConfigCompact, without the "provider." prefix.
func (pc ProviderConfig) StringCompact() string {
func (pc LocalProviderConfig) StringCompact() string {
if pc.Alias != "" {
return fmt.Sprintf("%s.%s", pc.Type, pc.Alias)
return fmt.Sprintf("%s.%s", pc.LocalName, pc.Alias)
}
return pc.Type
return pc.LocalName
}
// AbsProviderConfig is the absolute address of a provider configuration
// within a particular module instance.
type AbsProviderConfig struct {
Module ModuleInstance
ProviderConfig ProviderConfig
Module ModuleInstance
Provider Provider
Alias string
}
var _ ProviderConfig = AbsProviderConfig{}
// ParseAbsProviderConfig parses the given traversal as an absolute provider
// address. The following are examples of traversals that can be successfully
// parsed as absolute provider configuration addresses:
//
// provider.aws
// provider.aws.foo
// module.bar.provider.aws
// module.bar.module.baz.provider.aws.foo
// module.foo[1].provider.aws.foo
// provider["registry.terraform.io/hashicorp/aws"]
// provider["registry.terraform.io/hashicorp/aws"].foo
// module.bar.provider["registry.terraform.io/hashicorp/aws"]
// module.bar.module.baz.provider["registry.terraform.io/hashicorp/aws"].foo
// module.foo[1].provider["registry.terraform.io/hashicorp/aws"].foo
//
// This type of address is used, for example, to record the relationships
// between resources and provider configurations in the state structure.
@ -180,8 +131,22 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags
return ret, diags
}
if tt, ok := remain[1].(hcl.TraverseAttr); ok {
ret.ProviderConfig.Type = tt.Name
if tt, ok := remain[1].(hcl.TraverseIndex); ok {
if !tt.Key.Type().Equals(cty.String) {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "The prefix \"provider.\" must be followed by a provider type name.",
Subject: remain[1].SourceRange().Ptr(),
})
return ret, diags
}
p, sourceDiags := ParseProviderSourceString(tt.Key.AsString())
ret.Provider = p
if sourceDiags.HasErrors() {
diags = diags.Append(sourceDiags)
return ret, diags
}
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
@ -194,7 +159,7 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags
if len(remain) == 3 {
if tt, ok := remain[2].(hcl.TraverseAttr); ok {
ret.ProviderConfig.Alias = tt.Name
ret.Alias = tt.Name
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
@ -226,6 +191,18 @@ func ParseAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags
// the returned address is invalid.
func ParseAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
diags = diags.Append(parseDiags)
if parseDiags.HasErrors() {
return AbsProviderConfig{}, diags
}
addr, addrDiags := ParseAbsProviderConfig(traversal)
diags = diags.Append(addrDiags)
return addr, diags
}
func ParseLegacyAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
diags = diags.Append(parseDiags)
@ -233,34 +210,101 @@ func ParseAbsProviderConfigStr(str string) (AbsProviderConfig, tfdiags.Diagnosti
return AbsProviderConfig{}, diags
}
addr, addrDiags := ParseAbsProviderConfig(traversal)
addr, addrDiags := ParseLegacyAbsProviderConfig(traversal)
diags = diags.Append(addrDiags)
return addr, diags
}
// ProviderConfigDefault returns the address of the default provider config
// of the given type inside the recieving module instance.
func (m ModuleInstance) ProviderConfigDefault(name string) AbsProviderConfig {
// ParseLegacyAbsProviderConfig parses the given traversal as an absolute
// provider address. The following are examples of traversals that can be
// successfully parsed as legacy absolute provider configuration addresses:
//
// provider.aws
// provider.aws.foo
// module.bar.provider.aws
// module.bar.module.baz.provider.aws.foo
// module.foo[1].provider.aws.foo
//
// This type of address is used in legacy state and may appear in state v4 if
// the provider config addresses have not been normalized to include provider
// FQN.
func ParseLegacyAbsProviderConfig(traversal hcl.Traversal) (AbsProviderConfig, tfdiags.Diagnostics) {
modInst, remain, diags := parseModuleInstancePrefix(traversal)
ret := AbsProviderConfig{
Module: modInst,
}
if len(remain) < 2 || remain.RootName() != "provider" {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "Provider address must begin with \"provider.\", followed by a provider type name.",
Subject: remain.SourceRange().Ptr(),
})
return ret, diags
}
if len(remain) > 3 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "Extraneous operators after provider configuration alias.",
Subject: hcl.Traversal(remain[3:]).SourceRange().Ptr(),
})
return ret, diags
}
// We always assume legacy-style providers in legacy state
if tt, ok := remain[1].(hcl.TraverseAttr); ok {
ret.Provider = NewLegacyProvider(tt.Name)
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "The prefix \"provider.\" must be followed by a provider type name.",
Subject: remain[1].SourceRange().Ptr(),
})
return ret, diags
}
if len(remain) == 3 {
if tt, ok := remain[2].(hcl.TraverseAttr); ok {
ret.Alias = tt.Name
} else {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid provider configuration address",
Detail: "Provider type name must be followed by a configuration alias name.",
Subject: remain[2].SourceRange().Ptr(),
})
return ret, diags
}
}
return ret, diags
}
// ProviderConfigDefault returns the address of the default provider config of
// the given type inside the receiving module instance.
func (m ModuleInstance) ProviderConfigDefault(provider Provider) AbsProviderConfig {
return AbsProviderConfig{
Module: m,
ProviderConfig: ProviderConfig{
Type: name,
},
Module: m,
Provider: provider,
}
}
// ProviderConfigAliased returns the address of an aliased provider config
// of with given type and alias inside the recieving module instance.
func (m ModuleInstance) ProviderConfigAliased(name, alias string) AbsProviderConfig {
// ProviderConfigAliased returns the address of an aliased provider config of
// the given type and alias inside the receiving module instance.
func (m ModuleInstance) ProviderConfigAliased(provider Provider, alias string) AbsProviderConfig {
return AbsProviderConfig{
Module: m,
ProviderConfig: ProviderConfig{
Type: name,
Alias: alias,
},
Module: m,
Provider: provider,
Alias: alias,
}
}
// providerConfig Implements addrs.ProviderConfig.
func (pc AbsProviderConfig) providerConfig() {}
// Inherited returns an address that the receiving configuration address might
// inherit from in a parent module. The second bool return value indicates if
// such inheritance is possible, and thus whether the returned address is valid.
@ -269,9 +313,9 @@ func (m ModuleInstance) ProviderConfigAliased(name, alias string) AbsProviderCon
// other than the root module. Even if a valid address is returned, inheritance
// may not be performed for other reasons, such as if the calling module
// provided explicit provider configurations within the call for this module.
// The ProviderTransformer graph transform in the main terraform module has
// the authoritative logic for provider inheritance, and this method is here
// mainly just for its benefit.
// The ProviderTransformer graph transform in the main terraform module has the
// authoritative logic for provider inheritance, and this method is here mainly
// just for its benefit.
func (pc AbsProviderConfig) Inherited() (AbsProviderConfig, bool) {
// Can't inherit if we're already in the root.
if len(pc.Module) == 0 {
@ -279,19 +323,52 @@ func (pc AbsProviderConfig) Inherited() (AbsProviderConfig, bool) {
}
// Can't inherit if we have an alias.
if pc.ProviderConfig.Alias != "" {
if pc.Alias != "" {
return AbsProviderConfig{}, false
}
// Otherwise, we might inherit from a configuration with the same
// provider name in the parent module instance.
// provider type in the parent module instance.
parentMod := pc.Module.Parent()
return pc.ProviderConfig.Absolute(parentMod), true
return AbsProviderConfig{
Module: parentMod,
Provider: pc.Provider,
}, true
}
func (pc AbsProviderConfig) String() string {
if len(pc.Module) == 0 {
return pc.ProviderConfig.String()
// LegacyString() returns a legacy-style AbsProviderConfig string and should only be used for legacy state shimming.
func (pc AbsProviderConfig) LegacyString() string {
if pc.Alias != "" {
if len(pc.Module) == 0 {
return fmt.Sprintf("%s.%s.%s", "provider", pc.Provider.LegacyString(), pc.Alias)
} else {
return fmt.Sprintf("%s.%s.%s.%s", pc.Module.String(), "provider", pc.Provider.LegacyString(), pc.Alias)
}
}
return fmt.Sprintf("%s.%s", pc.Module.String(), pc.ProviderConfig.String())
if len(pc.Module) == 0 {
return fmt.Sprintf("%s.%s", "provider", pc.Provider.LegacyString())
}
return fmt.Sprintf("%s.%s.%s", pc.Module.String(), "provider", pc.Provider.LegacyString())
}
// String() returns a string representation of an AbsProviderConfig in the following format:
//
// provider["example.com/namespace/name"]
// provider["example.com/namespace/name"].alias
// module.module-name.provider["example.com/namespace/name"]
// module.module-name.provider["example.com/namespace/name"].alias
func (pc AbsProviderConfig) String() string {
if pc.Alias != "" {
if len(pc.Module) == 0 {
return fmt.Sprintf("%s[%q].%s", "provider", pc.Provider.String(), pc.Alias)
} else {
return fmt.Sprintf("%s.%s[%q].%s", pc.Module.String(), "provider", pc.Provider.String(), pc.Alias)
}
}
if len(pc.Module) == 0 {
return fmt.Sprintf("%s[%q]", "provider", pc.Provider.String())
}
return fmt.Sprintf("%s.%s[%q]", pc.Module.String(), "provider", pc.Provider.String())
}


@ -9,68 +9,6 @@ import (
"github.com/hashicorp/hcl/v2/hclsyntax"
)
func TestParseProviderConfigCompact(t *testing.T) {
tests := []struct {
Input string
Want ProviderConfig
WantDiag string
}{
{
`aws`,
ProviderConfig{
Type: "aws",
},
``,
},
{
`aws.foo`,
ProviderConfig{
Type: "aws",
Alias: "foo",
},
``,
},
{
`aws["foo"]`,
ProviderConfig{},
`The provider type name must either stand alone or be followed by an alias name separated with a dot.`,
},
}
for _, test := range tests {
t.Run(test.Input, func(t *testing.T) {
traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(test.Input), "", hcl.Pos{})
if len(parseDiags) != 0 {
t.Errorf("unexpected diagnostics during parse")
for _, diag := range parseDiags {
t.Logf("- %s", diag)
}
return
}
got, diags := ParseProviderConfigCompact(traversal)
if test.WantDiag != "" {
if len(diags) != 1 {
t.Fatalf("got %d diagnostics; want 1", len(diags))
}
gotDetail := diags[0].Description().Detail
if gotDetail != test.WantDiag {
t.Fatalf("wrong diagnostic detail\ngot: %s\nwant: %s", gotDetail, test.WantDiag)
}
return
} else {
if len(diags) != 0 {
t.Fatalf("got %d diagnostics; want 0", len(diags))
}
}
for _, problem := range deep.Equal(got, test.Want) {
t.Error(problem)
}
})
}
}
func TestParseAbsProviderConfig(t *testing.T) {
tests := []struct {
Input string
@ -78,57 +16,65 @@ func TestParseAbsProviderConfig(t *testing.T) {
WantDiag string
}{
{
`provider.aws`,
`provider["registry.terraform.io/hashicorp/aws"]`,
AbsProviderConfig{
Module: RootModuleInstance,
ProviderConfig: ProviderConfig{
Type: "aws",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
},
``,
},
{
`provider.aws.foo`,
`provider["registry.terraform.io/hashicorp/aws"].foo`,
AbsProviderConfig{
Module: RootModuleInstance,
ProviderConfig: ProviderConfig{
Type: "aws",
Alias: "foo",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
Alias: "foo",
},
``,
},
{
`module.baz.provider.aws`,
`module.baz.provider["registry.terraform.io/hashicorp/aws"]`,
AbsProviderConfig{
Module: ModuleInstance{
{
Name: "baz",
},
},
ProviderConfig: ProviderConfig{
Type: "aws",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
},
``,
},
{
`module.baz.provider.aws.foo`,
`module.baz.provider["registry.terraform.io/hashicorp/aws"].foo`,
AbsProviderConfig{
Module: ModuleInstance{
{
Name: "baz",
},
},
ProviderConfig: ProviderConfig{
Type: "aws",
Alias: "foo",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
Alias: "foo",
},
``,
},
{
`module.baz["foo"].provider.aws`,
`module.baz["foo"].provider["registry.terraform.io/hashicorp/aws"]`,
AbsProviderConfig{
Module: ModuleInstance{
{
@ -136,14 +82,16 @@ func TestParseAbsProviderConfig(t *testing.T) {
InstanceKey: StringKey("foo"),
},
},
ProviderConfig: ProviderConfig{
Type: "aws",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
},
``,
},
{
`module.baz[1].provider.aws`,
`module.baz[1].provider["registry.terraform.io/hashicorp/aws"]`,
AbsProviderConfig{
Module: ModuleInstance{
{
@ -151,14 +99,16 @@ func TestParseAbsProviderConfig(t *testing.T) {
InstanceKey: IntKey(1),
},
},
ProviderConfig: ProviderConfig{
Type: "aws",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
},
``,
},
{
`module.baz[1].module.bar.provider.aws`,
`module.baz[1].module.bar.provider["registry.terraform.io/hashicorp/aws"]`,
AbsProviderConfig{
Module: ModuleInstance{
{
@ -169,8 +119,10 @@ func TestParseAbsProviderConfig(t *testing.T) {
Name: "bar",
},
},
ProviderConfig: ProviderConfig{
Type: "aws",
Provider: Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: "registry.terraform.io",
},
},
``,
@ -196,12 +148,7 @@ func TestParseAbsProviderConfig(t *testing.T) {
`Extraneous operators after provider configuration alias.`,
},
{
`provider["aws"]`,
AbsProviderConfig{},
`The prefix "provider." must be followed by a provider type name.`,
},
{
`provider.aws["foo"]`,
`provider["aws"]["foo"]`,
AbsProviderConfig{},
`Provider type name must be followed by a configuration alias name.`,
},
@ -211,9 +158,9 @@ func TestParseAbsProviderConfig(t *testing.T) {
`Provider address must begin with "provider.", followed by a provider type name.`,
},
{
`module.foo["provider"]`,
`provider[0]`,
AbsProviderConfig{},
`Provider address must begin with "provider.", followed by a provider type name.`,
`The prefix "provider." must be followed by a provider type name.`,
},
}
@ -251,3 +198,93 @@ func TestParseAbsProviderConfig(t *testing.T) {
})
}
}
func TestAbsProviderConfigString(t *testing.T) {
tests := []struct {
Config AbsProviderConfig
Want string
}{
{
AbsProviderConfig{
Module: RootModuleInstance,
Provider: NewLegacyProvider("foo"),
},
`provider["registry.terraform.io/-/foo"]`,
},
{
AbsProviderConfig{
Module: RootModuleInstance.Child("child_module", NoKey),
Provider: NewLegacyProvider("foo"),
},
`module.child_module.provider["registry.terraform.io/-/foo"]`,
},
{
AbsProviderConfig{
Module: RootModuleInstance,
Alias: "bar",
Provider: NewLegacyProvider("foo"),
},
`provider["registry.terraform.io/-/foo"].bar`,
},
{
AbsProviderConfig{
Module: RootModuleInstance.Child("child_module", NoKey),
Alias: "bar",
Provider: NewLegacyProvider("foo"),
},
`module.child_module.provider["registry.terraform.io/-/foo"].bar`,
},
}
for _, test := range tests {
got := test.Config.String()
if got != test.Want {
t.Errorf("wrong result. Got %s, want %s\n", got, test.Want)
}
}
}
func TestAbsProviderConfigLegacyString(t *testing.T) {
tests := []struct {
Config AbsProviderConfig
Want string
}{
{
AbsProviderConfig{
Module: RootModuleInstance,
Provider: NewLegacyProvider("foo"),
},
`provider.foo`,
},
{
AbsProviderConfig{
Module: RootModuleInstance.Child("child_module", NoKey),
Provider: NewLegacyProvider("foo"),
},
`module.child_module.provider.foo`,
},
{
AbsProviderConfig{
Module: RootModuleInstance,
Alias: "bar",
Provider: NewLegacyProvider("foo"),
},
`provider.foo.bar`,
},
{
AbsProviderConfig{
Module: RootModuleInstance.Child("child_module", NoKey),
Alias: "bar",
Provider: NewLegacyProvider("foo"),
},
`module.child_module.provider.foo.bar`,
},
}
for _, test := range tests {
got := test.Config.LegacyString()
if got != test.Want {
t.Errorf("wrong result. Got %s, want %s\n", got, test.Want)
}
}
}

addrs/provider_test.go

@ -0,0 +1,245 @@
package addrs
import (
"testing"
"github.com/go-test/deep"
svchost "github.com/hashicorp/terraform-svchost"
)
func TestParseProviderSourceStr(t *testing.T) {
tests := map[string]struct {
Want Provider
Err bool
}{
"registry.terraform.io/hashicorp/aws": {
Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: DefaultRegistryHost,
},
false,
},
"registry.Terraform.io/HashiCorp/AWS": {
Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: DefaultRegistryHost,
},
false,
},
"hashicorp/aws": {
Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: DefaultRegistryHost,
},
false,
},
"HashiCorp/AWS": {
Provider{
Type: "aws",
Namespace: "hashicorp",
Hostname: DefaultRegistryHost,
},
false,
},
"aws": {
Provider{
Type: "aws",
Namespace: "-",
Hostname: DefaultRegistryHost,
},
false,
},
"AWS": {
Provider{
// No case folding here because we're currently handling this
// as a legacy one. When this changes to be a _default_
// address in future (registry.terraform.io/hashicorp/aws)
// then we should start applying case folding to it, making
// Type appear as "aws" here instead.
Type: "AWS",
Namespace: "-",
Hostname: DefaultRegistryHost,
},
false,
},
"example.com/foo-bar/baz-boop": {
Provider{
Type: "baz-boop",
Namespace: "foo-bar",
Hostname: svchost.Hostname("example.com"),
},
false,
},
"foo-bar/baz-boop": {
Provider{
Type: "baz-boop",
Namespace: "foo-bar",
Hostname: DefaultRegistryHost,
},
false,
},
"localhost:8080/foo/bar": {
Provider{
Type: "bar",
Namespace: "foo",
Hostname: svchost.Hostname("localhost:8080"),
},
false,
},
"example.com/too/many/parts/here": {
Provider{},
true,
},
"/too///many//slashes": {
Provider{},
true,
},
"///": {
Provider{},
true,
},
"badhost!/hashicorp/aws": {
Provider{},
true,
},
"example.com/badnamespace!/aws": {
Provider{},
true,
},
"example.com/bad--namespace/aws": {
Provider{},
true,
},
"example.com/-badnamespace/aws": {
Provider{},
true,
},
"example.com/badnamespace-/aws": {
Provider{},
true,
},
"example.com/bad.namespace/aws": {
Provider{},
true,
},
"example.com/hashicorp/badtype!": {
Provider{},
true,
},
"example.com/hashicorp/bad--type": {
Provider{},
true,
},
"example.com/hashicorp/-badtype": {
Provider{},
true,
},
"example.com/hashicorp/badtype-": {
Provider{},
true,
},
"example.com/hashicorp/bad.type": {
Provider{},
true,
},
}
for name, test := range tests {
got, diags := ParseProviderSourceString(name)
for _, problem := range deep.Equal(got, test.Want) {
t.Errorf(problem)
}
if len(diags) > 0 {
if test.Err == false {
t.Errorf("got error, expected success")
}
} else {
if test.Err {
t.Errorf("got success, expected error")
}
}
}
}
func TestParseProviderPart(t *testing.T) {
tests := map[string]struct {
Want string
Error string
}{
`foo`: {
`foo`,
``,
},
`FOO`: {
`foo`,
``,
},
`Foo`: {
`foo`,
``,
},
`abc-123`: {
`abc-123`,
``,
},
`Испытание`: {
`испытание`,
``,
},
`münchen`: { // this is a precomposed u with diaeresis
`münchen`, // this is a precomposed u with diaeresis
``,
},
`münchen`: { // this is a separate u and combining diaeresis
`münchen`, // this is a precomposed u with diaeresis
``,
},
`abc--123`: {
``,
`cannot use multiple consecutive dashes`,
},
`xn--80akhbyknj4f`: { // this is the punycode form of "испытание", but we don't accept punycode here
``,
`cannot use multiple consecutive dashes`,
},
`abc.123`: {
``,
`dots are not allowed`,
},
`-abc123`: {
``,
`must contain only letters, digits, and dashes, and may not use leading or trailing dashes`,
},
`abc123-`: {
``,
`must contain only letters, digits, and dashes, and may not use leading or trailing dashes`,
},
``: {
``,
`must have at least one character`,
},
}
for given, test := range tests {
t.Run(given, func(t *testing.T) {
got, err := ParseProviderPart(given)
if test.Error != "" {
if err == nil {
t.Errorf("unexpected success\ngot: %s\nwant: %s", err, test.Error)
} else if got := err.Error(); got != test.Error {
t.Errorf("wrong error\ngot: %s\nwant: %s", got, test.Error)
}
} else {
if err != nil {
t.Errorf("unexpected error\ngot: %s\nwant: <nil>", err)
} else if got != test.Want {
t.Errorf("wrong result\ngot: %s\nwant: %s", got, test.Want)
}
}
})
}
}


@ -1,7 +0,0 @@
package addrs
// ProviderType encapsulates a single provider type. In the future this will be
// extended to include additional fields including Namespace and SourceHost
type ProviderType struct {
Name string
}


@ -50,9 +50,9 @@ func (r Resource) Absolute(module ModuleInstance) AbsResource {
}
}
// DefaultProviderConfig returns the address of the provider configuration
// that should be used for the resource identified by the receiver if it
// does not have a provider configuration address explicitly set in
// DefaultProvider returns the address of the provider whose default
// configuration should be used for the resource identified by the receiver if
// it does not have a provider configuration address explicitly set in
// configuration.
//
// This method is not able to verify that such a configuration exists, nor
@ -60,14 +60,18 @@ func (r Resource) Absolute(module ModuleInstance) AbsResource {
// configurations from parent modules. It just does a static analysis of the
// receiving address and returns an address to start from, relative to the
// same module that contains the resource.
func (r Resource) DefaultProviderConfig() ProviderConfig {
func (r Resource) DefaultProvider() Provider {
typeName := r.Type
if under := strings.Index(typeName, "_"); under != -1 {
typeName = typeName[:under]
}
return ProviderConfig{
Type: typeName,
}
// TODO: For now we're returning a _legacy_ provider address here
// because the rest of Terraform isn't yet prepared to deal with
// non-legacy ones. Once we phase out legacy addresses this should
// switch to being a _default_ provider address, i.e. one in the
// releases.hashicorp.com/hashicorp/... namespace.
return NewLegacyProvider(typeName)
}
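// Illustrative sketch only (hypothetical helper): DefaultProvider derives the
// provider from the prefix of the resource type name, so a resource of type
// "aws_instance" maps to the legacy "aws" provider address while legacy
// addresses remain in use, per the TODO above.
func exampleDefaultProviderForResource() Provider {
	r := Resource{
		Mode: ManagedResourceMode,
		Type: "aws_instance",
		Name: "example",
	}
	return r.DefaultProvider() // equivalent to NewLegacyProvider("aws") for now
}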
// ResourceInstance is an address for a specific instance of a resource.
@ -253,7 +257,7 @@ func (r AbsResourceInstance) Less(o AbsResourceInstance) bool {
// resource lifecycle has a slightly different address format.
type ResourceMode rune
//go:generate stringer -type ResourceMode
//go:generate go run golang.org/x/tools/cmd/stringer -type ResourceMode
const (
// InvalidResourceMode is the zero value of ResourceMode and is not

View File

@ -7,6 +7,7 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"net/http"
"net/http/httptest"
"net/url"
@ -216,17 +217,20 @@ func TestStateClient_UnresolvableConflict(t *testing.T) {
if err := terraform.WriteState(state, &stateJson); err != nil {
t.Fatalf("err: %s", err)
}
doneCh := make(chan struct{})
errCh := make(chan error)
go func() {
defer close(doneCh)
defer close(errCh)
if err := client.Put(stateJson.Bytes()); err == nil {
t.Fatal("Expected error from state conflict, got none.")
errCh <- errors.New("expected error from state conflict, got none.")
return
}
}()
select {
case <-doneCh:
// OK
case err := <-errCh:
if err != nil {
t.Fatalf("error from anonymous test goroutine: %s", err)
}
case <-time.After(500 * time.Millisecond):
t.Fatalf("Timed out after 500ms, probably because retrying infinitely.")
}
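// Illustrative sketch only (hypothetical helper): the change above applies the
// general rule that t.Fatal and t.Fatalf may only be called from the test
// goroutine. A worker goroutine reports over a channel instead, and the test
// goroutine decides whether to fail or time out.
func exampleGoroutineErrorReporting(t *testing.T, do func() error) {
	errCh := make(chan error, 1)
	go func() {
		errCh <- do() // send the result (possibly nil) instead of calling t.Fatal here
	}()
	select {
	case err := <-errCh:
		if err != nil {
			t.Fatalf("error from test goroutine: %s", err)
		}
	case <-time.After(500 * time.Millisecond):
		t.Fatal("timed out waiting for test goroutine")
	}
}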

View File

@ -196,6 +196,16 @@ type Operation struct {
Targets []addrs.Targetable
Variables map[string]UnparsedVariableValue
// Some operations use root module variables only opportunistically or
// don't need them at all. If this flag is set, the backend must treat
// all variables as optional and provide an unknown value for any required
// variables that aren't set in order to allow partial evaluation against
// the resulting incomplete context.
//
// This flag is honored only if PlanFile isn't set. If PlanFile is set then
// the variables set in the plan are used instead, and they must be valid.
AllowUnsetVariables bool
// Input/output/control options.
UIIn terraform.UIInput
UIOut terraform.UIOutput

View File

@ -5,8 +5,8 @@ package init
import (
"sync"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/svchost/disco"
"github.com/hashicorp/terraform/tfdiags"
"github.com/zclconf/go-cty/cty"
@ -16,6 +16,7 @@ import (
backendArtifactory "github.com/hashicorp/terraform/backend/remote-state/artifactory"
backendAzure "github.com/hashicorp/terraform/backend/remote-state/azure"
backendConsul "github.com/hashicorp/terraform/backend/remote-state/consul"
backendCos "github.com/hashicorp/terraform/backend/remote-state/cos"
backendEtcdv2 "github.com/hashicorp/terraform/backend/remote-state/etcdv2"
backendEtcdv3 "github.com/hashicorp/terraform/backend/remote-state/etcdv3"
backendGCS "github.com/hashicorp/terraform/backend/remote-state/gcs"
@ -57,6 +58,7 @@ func Init(services *disco.Disco) {
"atlas": func() backend.Backend { return backendAtlas.New() },
"azurerm": func() backend.Backend { return backendAzure.New() },
"consul": func() backend.Backend { return backendConsul.New() },
"cos": func() backend.Backend { return backendCos.New() },
"etcd": func() backend.Backend { return backendEtcdv2.New() },
"etcdv3": func() backend.Backend { return backendEtcdv3.New() },
"gcs": func() backend.Backend { return backendGCS.New() },

View File

@ -18,6 +18,7 @@ func TestInit_backend(t *testing.T) {
{"atlas", "*atlas.Backend"},
{"azurerm", "*azure.Backend"},
{"consul", "*consul.Backend"},
{"cos", "*cos.Backend"},
{"etcdv3", "*etcd.Backend"},
{"gcs", "*gcs.Backend"},
{"inmem", "*inmem.Backend"},

View File

@ -14,7 +14,6 @@ import (
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/command/clistate"
"github.com/hashicorp/terraform/configs/configschema"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/states/statemgr"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
@ -180,11 +179,6 @@ func (b *Local) Configure(obj cty.Value) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
type Config struct {
Path string `hcl:"path,optional"`
WorkspaceDir string `hcl:"workspace_dir,optional"`
}
if val := obj.GetAttr("path"); !val.IsNull() {
p := val.AsString()
b.StatePath = p
@ -456,39 +450,6 @@ func (b *Local) Colorize() *colorstring.Colorize {
}
}
func (b *Local) schemaConfigure(ctx context.Context) error {
d := schema.FromContextBackendConfig(ctx)
// Set the path if it is set
pathRaw, ok := d.GetOk("path")
if ok {
path := pathRaw.(string)
if path == "" {
return fmt.Errorf("configured path is empty")
}
b.StatePath = path
b.StateOutPath = path
}
if raw, ok := d.GetOk("workspace_dir"); ok {
path := raw.(string)
if path != "" {
b.StateWorkspaceDir = path
}
}
// Legacy name, which ConflictsWith workspace_dir
if raw, ok := d.GetOk("environment_dir"); ok {
path := raw.(string)
if path != "" {
b.StateWorkspaceDir = path
}
}
return nil
}
// StatePaths returns the StatePath, StateOutPath, and StateBackupPath as
// configured from the CLI.
func (b *Local) StatePaths(name string) (stateIn, stateOut, backupOut string) {

View File

@ -65,9 +65,9 @@ func (b *Local) opApply(
// If we're refreshing before apply, perform that
if op.PlanRefresh {
log.Printf("[INFO] backend/local: apply calling Refresh")
_, err := tfCtx.Refresh()
if err != nil {
diags = diags.Append(err)
_, refreshDiags := tfCtx.Refresh()
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
runningOp.Result = backend.OperationFailure
b.ShowDiagnostics(diags)
return

View File

@ -59,7 +59,7 @@ func TestLocal_applyBasic(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
ami = bar
`)
}
@ -176,7 +176,7 @@ func TestLocal_applyError(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = foo
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
ami = bar
`)
}
@ -226,7 +226,7 @@ func TestLocal_applyBackendFail(t *testing.T) {
checkState(t, "errored.tfstate", `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
ami = bar
`)
}
@ -261,26 +261,6 @@ func testOperationApply(t *testing.T, configDir string) (*backend.Operation, fun
}, configCleanup
}
// testApplyState is just a common state that we use for testing refresh.
func testApplyState() *terraform.State {
return &terraform.State{
Version: 2,
Modules: []*terraform.ModuleState{
&terraform.ModuleState{
Path: []string{"root"},
Resources: map[string]*terraform.ResourceState{
"test_instance.foo": &terraform.ResourceState{
Type: "test_instance",
Primary: &terraform.InstanceState{
ID: "bar",
},
},
},
},
},
}
}
// applyFixtureSchema returns a schema suitable for processing the
// configuration in testdata/apply . This schema should be
// assigned to a mock provider named "test".

View File

@ -4,10 +4,12 @@ import (
"context"
"fmt"
"log"
"sort"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/command/clistate"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/configs/configload"
"github.com/hashicorp/terraform/plans/planfile"
"github.com/hashicorp/terraform/states/statemgr"
@ -103,8 +105,6 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
// If input asking is enabled, then do that
if op.PlanFile == nil && b.OpInput {
mode := terraform.InputModeProvider
mode |= terraform.InputModeVar
mode |= terraform.InputModeVarUnset
log.Printf("[TRACE] backend/local: requesting interactive input, if necessary")
inputDiags := tfCtx.Input(mode)
@ -136,14 +136,27 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
}
opts.Config = config
variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables)
var rawVariables map[string]backend.UnparsedVariableValue
if op.AllowUnsetVariables {
// Rather than prompting for input, we'll just stub out the required
// but unset variables with unknown values to represent that they are
// placeholders for values the user would need to provide for other
// operations.
rawVariables = b.stubUnsetRequiredVariables(op.Variables, config.Module.Variables)
} else {
// If interactive input is enabled, we might gather some more variable
// values through interactive prompts.
// TODO: Need to route the operation context through into here, so that
// the interactive prompts can be sensitive to its timeouts/etc.
rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, opts.UIInput)
}
variables, varDiags := backend.ParseVariableValues(rawVariables, config.Module.Variables)
diags = diags.Append(varDiags)
if diags.HasErrors() {
return nil, nil, diags
}
if op.Variables != nil {
opts.Variables = variables
}
opts.Variables = variables
tfCtx, ctxDiags := terraform.NewContext(&opts)
diags = diags.Append(ctxDiags)
@ -245,10 +258,155 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
return tfCtx, snap, diags
}
const validateWarnHeader = `
There are warnings related to your configuration. If no errors occurred,
Terraform will continue despite these warnings. It is a good idea to resolve
these warnings in the near future.

Warnings:
`
// interactiveCollectVariables attempts to complete the given existing
// map of variables by interactively prompting for any variables that are
// declared as required but not yet present.
//
// If interactive input is disabled for this backend instance then this is
// a no-op. If input is enabled but fails for some reason, the resulting
// map will be incomplete. For these reasons, the caller must still validate
// that the result is complete and valid.
//
// This function does not modify the map given in "existing", but may return
// it unchanged if no modifications are required. If modifications are required,
// the result is a new map with all of the elements from "existing" plus
// additional elements as appropriate.
//
// Interactive prompting is a "best effort" thing for first-time user UX and
// not something we expect folks to be relying on for routine use. Terraform
// is primarily a non-interactive tool and so we prefer to report in error
// messages that variables are not set rather than reporting that input failed:
// the primary resolution to missing variables is to provide them by some other
// means.
func (b *Local) interactiveCollectVariables(ctx context.Context, existing map[string]backend.UnparsedVariableValue, vcs map[string]*configs.Variable, uiInput terraform.UIInput) map[string]backend.UnparsedVariableValue {
var needed []string
if b.OpInput && uiInput != nil {
for name, vc := range vcs {
if !vc.Required() {
continue // We only prompt for required variables
}
if _, exists := existing[name]; !exists {
needed = append(needed, name)
}
}
} else {
log.Print("[DEBUG] backend/local: Skipping interactive prompts for variables because input is disabled")
}
if len(needed) == 0 {
return existing
}
log.Printf("[DEBUG] backend/local: will prompt for input of unset required variables %s", needed)
// If we get here then we're planning to prompt for at least one additional
// variable's value.
sort.Strings(needed) // prompt in lexical order
ret := make(map[string]backend.UnparsedVariableValue, len(vcs))
for k, v := range existing {
ret[k] = v
}
for _, name := range needed {
vc := vcs[name]
rawValue, err := uiInput.Input(ctx, &terraform.InputOpts{
Id: fmt.Sprintf("var.%s", name),
Query: fmt.Sprintf("var.%s", name),
Description: vc.Description,
})
if err != nil {
// Since interactive prompts are best-effort, we'll just continue
// here and let subsequent validation report this as a variable
// not specified.
log.Printf("[WARN] backend/local: Failed to request user input for variable %q: %s", name, err)
continue
}
ret[name] = unparsedInteractiveVariableValue{Name: name, RawValue: rawValue}
}
return ret
}
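// Illustrative sketch only (hypothetical type): interactiveCollectVariables
// needs just the Input method of terraform.UIInput, so a canned implementation
// like this is enough to drive it, answering each prompt by its InputOpts.Id
// (for example "var.region").
type cannedUIInput struct {
	answers map[string]string
}

func (i cannedUIInput) Input(ctx context.Context, opts *terraform.InputOpts) (string, error) {
	if v, ok := i.answers[opts.Id]; ok {
		return v, nil
	}
	// An error here is treated by the best-effort prompting above as "variable
	// not provided", not as a fatal failure.
	return "", fmt.Errorf("no canned answer for %q", opts.Id)
}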
// stubUnsetRequiredVariables ensures that all required variables defined in the
// configuration exist in the resulting map, by adding new elements as necessary.
//
// The stubbed value of any additions will be an unknown value conforming
// to the variable's configured type constraint, meaning that no particular
// value is known and that one must be provided by the user in order to get
// a complete result.
//
// Unset optional variables (those with default values) will not be populated
// by this function, under the assumption that a later step will handle those.
// In this sense, stubUnsetRequiredVariables is essentially a non-interactive,
// non-error-producing variant of interactiveCollectVariables that creates
// placeholders for values the user would be prompted for interactively on
// other operations.
//
// This function should be used only in situations where variable values
// will not be directly used and the variables map is being constructed only
// to produce a complete Terraform context for some ancillary functionality
// like "terraform console", "terraform state ...", etc.
//
// This function is guaranteed not to modify the given map, but it may return
// the given map unchanged if no additions are required. If additions are
// required then the result will be a new map containing everything in the
// given map plus additional elements.
func (b *Local) stubUnsetRequiredVariables(existing map[string]backend.UnparsedVariableValue, vcs map[string]*configs.Variable) map[string]backend.UnparsedVariableValue {
var missing bool // Do we need to add anything?
for name, vc := range vcs {
if !vc.Required() {
continue // We only stub required variables
}
if _, exists := existing[name]; !exists {
missing = true
}
}
if !missing {
return existing
}
// If we get down here then there's at least one variable value to add.
ret := make(map[string]backend.UnparsedVariableValue, len(vcs))
for k, v := range existing {
ret[k] = v
}
for name, vc := range vcs {
if !vc.Required() {
continue
}
if _, exists := existing[name]; !exists {
ret[name] = unparsedUnknownVariableValue{Name: name, WantType: vc.Type}
}
}
return ret
}
type unparsedInteractiveVariableValue struct {
Name, RawValue string
}
var _ backend.UnparsedVariableValue = unparsedInteractiveVariableValue{}
func (v unparsedInteractiveVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
val, valDiags := mode.Parse(v.Name, v.RawValue)
diags = diags.Append(valDiags)
if diags.HasErrors() {
return nil, diags
}
return &terraform.InputValue{
Value: val,
SourceType: terraform.ValueFromInput,
}, diags
}
type unparsedUnknownVariableValue struct {
Name string
WantType cty.Type
}
var _ backend.UnparsedVariableValue = unparsedUnknownVariableValue{}
func (v unparsedUnknownVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
return &terraform.InputValue{
Value: cty.UnknownVal(v.WantType),
SourceType: terraform.ValueFromInput,
}, nil
}
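// Illustrative sketch only (hypothetical helper): for an unset required
// variable of type string, the stub above parses to an unknown value of that
// type, which is what allows partial evaluation to proceed without an error.
func exampleStubbedVariableValue() cty.Value {
	stub := unparsedUnknownVariableValue{Name: "region", WantType: cty.String}
	val, _ := stub.ParseVariableValue(configs.VariableParseLiteral)
	return val.Value // cty.UnknownVal(cty.String)
}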

View File

@ -8,6 +8,9 @@ import (
"sort"
"strings"
"github.com/mitchellh/cli"
"github.com/mitchellh/colorstring"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/command/format"
@ -83,9 +86,9 @@ func (b *Local) opPlan(
b.CLI.Output(b.Colorize().Color(strings.TrimSpace(planRefreshing) + "\n"))
}
refreshedState, err := tfCtx.Refresh()
if err != nil {
diags = diags.Append(err)
refreshedState, refreshDiags := tfCtx.Refresh()
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
b.ReportResult(runningOp, diags)
return
}
@ -159,6 +162,8 @@ func (b *Local) opPlan(
if plan.Changes.Empty() {
b.CLI.Output("\n" + b.Colorize().Color(strings.TrimSpace(planNoChanges)))
// Even if there are no changes, there still could be some warnings
b.ShowDiagnostics(diags)
return
}
@ -190,6 +195,21 @@ func (b *Local) opPlan(
}
func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terraform.Schemas) {
RenderPlan(plan, state, schemas, b.CLI, b.Colorize())
}
// RenderPlan renders the given plan to the given UI.
//
// This is exported only so that the "terraform show" command can re-use it.
// Ideally it would be somewhere outside of this backend code so that both
// can call into it, but we're leaving it here for now in order to avoid
// disruptive refactoring.
//
// If you find yourself wanting to call this function from a third callsite,
// please consider whether it's time to do the more disruptive refactoring
// so that something other than the local backend package is offering this
// functionality.
func RenderPlan(plan *plans.Plan, state *states.State, schemas *terraform.Schemas, ui cli.Ui, colorize *colorstring.Colorize) {
counts := map[plans.Action]int{}
var rChanges []*plans.ResourceInstanceChangeSrc
for _, change := range plan.Changes.Resources {
@ -223,9 +243,9 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra
fmt.Fprintf(headerBuf, "%s read (data resources)\n", format.DiffActionSymbol(plans.Read))
}
b.CLI.Output(b.Colorize().Color(headerBuf.String()))
ui.Output(colorize.Color(headerBuf.String()))
b.CLI.Output("Terraform will perform the following actions:\n")
ui.Output("Terraform will perform the following actions:\n")
// Note: we're modifying the backing slice of this plan object in-place
// here. The ordering of resource changes in a plan is not significant,
@ -244,16 +264,17 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra
if rcs.Action == plans.NoOp {
continue
}
providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.ProviderConfig.Type)
providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.Provider)
if providerSchema == nil {
// Should never happen
b.CLI.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.ProviderAddr))
ui.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.ProviderAddr))
continue
}
rSchema, _ := providerSchema.SchemaForResourceAddr(rcs.Addr.Resource.Resource)
if rSchema == nil {
// Should never happen
b.CLI.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.Addr))
ui.Output(fmt.Sprintf("(schema missing for %s)\n", rcs.Addr))
continue
}
@ -267,11 +288,11 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra
}
}
b.CLI.Output(format.ResourceChange(
ui.Output(format.ResourceChange(
rcs,
tainted,
rSchema,
b.CLIColor,
colorize,
))
}
@ -288,23 +309,13 @@ func (b *Local) renderPlan(plan *plans.Plan, state *states.State, schemas *terra
stats[change.Action]++
}
}
b.CLI.Output(b.Colorize().Color(fmt.Sprintf(
ui.Output(colorize.Color(fmt.Sprintf(
"[reset][bold]Plan:[reset] "+
"%d to add, %d to change, %d to destroy.",
stats[plans.Create], stats[plans.Update], stats[plans.Delete],
)))
}
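// Illustrative sketch only (hypothetical caller): code outside this package,
// such as the "terraform show" command mentioned above, could render a plan
// with its own UI and colorization. The plan, state, and schemas values are
// assumed to come from wherever that caller loaded them.
func exampleRenderPlanCaller(plan *plans.Plan, state *states.State, schemas *terraform.Schemas, ui cli.Ui) {
	colorize := &colorstring.Colorize{
		Colors: colorstring.DefaultColors,
		Reset:  true,
	}
	RenderPlan(plan, state, schemas, ui, colorize)
}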
const planErrNoConfig = `
No configuration files found!
Plan requires configuration to be present. Planning without a configuration
would mark everything for destruction, which is normally not what is desired.
If you would like to destroy everything, please run plan with the "-destroy"
flag or create a single empty configuration file. Otherwise, please create
a Terraform configuration file in the path being executed and try again.
`
const planHeaderIntro = `
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

View File

@ -215,9 +215,10 @@ func TestLocal_planDeposedOnly(t *testing.T) {
}]
}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
}))
b.CLI = cli.NewMockUi()
@ -658,9 +659,10 @@ func testPlanState() *states.State {
}]
}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -684,9 +686,10 @@ func testPlanState_withDataSource() *states.State {
}]
}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
rootModule.SetResourceInstanceCurrent(
addrs.Resource{
@ -700,9 +703,10 @@ func testPlanState_withDataSource() *states.State {
"filter": "foo"
}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -726,9 +730,10 @@ func testPlanState_tainted() *states.State {
}]
}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}

View File

@ -42,7 +42,7 @@ func TestLocal_refresh(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`)
}
@ -72,7 +72,7 @@ func TestLocal_refreshNoConfig(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`)
}
@ -105,7 +105,7 @@ func TestLocal_refreshNilModuleWithInput(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`)
}
@ -163,7 +163,7 @@ func TestLocal_refreshInput(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`)
}
@ -196,7 +196,7 @@ func TestLocal_refreshValidate(t *testing.T) {
checkState(t, b.StateOutPath, `
test_instance.foo:
ID = yes
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`)
}

View File

@ -1,6 +1,6 @@
package local
//go:generate stringer -type=countHookAction hook_count_action.go
//go:generate go run golang.org/x/tools/cmd/stringer -type=countHookAction hook_count_action.go
type countHookAction byte

View File

@ -113,8 +113,8 @@ func TestLocalProvider(t *testing.T, b *Local, name string, schema *terraform.Pr
// Setup our provider
b.ContextOpts.ProviderResolver = providers.ResolverFixed(
map[string]providers.Factory{
name: providers.FactoryFixed(p),
map[addrs.Provider]providers.Factory{
addrs.NewLegacyProvider(name): providers.FactoryFixed(p),
},
)

View File

@ -1,6 +1,6 @@
package backend
//go:generate stringer -type=OperationType operation_type.go
//go:generate go run golang.org/x/tools/cmd/stringer -type=OperationType operation_type.go
// OperationType is an enum used with Operation to specify the operation
// type to perform for Terraform.

View File

@ -3,6 +3,7 @@ package artifactory
import (
"context"
cleanhttp "github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/state"
@ -65,9 +66,10 @@ func (b *Backend) configure(ctx context.Context) error {
subpath := data.Get("subpath").(string)
clientConf := &artifactory.ClientConfig{
BaseURL: url,
Username: userName,
Password: password,
BaseURL: url,
Username: userName,
Password: password,
Transport: cleanhttp.DefaultPooledTransport(),
}
nativeClient := artifactory.NewClient(clientConf)

View File

@ -13,9 +13,9 @@ import (
armStorage "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/hashicorp/go-azure-helpers/authentication"
"github.com/hashicorp/go-azure-helpers/sender"
"github.com/hashicorp/terraform/httpclient"
)
@ -75,12 +75,12 @@ func buildArmClient(config BackendConfig) (*ArmClient, error) {
return nil, fmt.Errorf("Error building ARM Config: %+v", err)
}
oauthConfig, err := adal.NewOAuthConfig(env.ActiveDirectoryEndpoint, armConfig.TenantID)
oauthConfig, err := armConfig.BuildOAuthConfig(env.ActiveDirectoryEndpoint)
if err != nil {
return nil, err
}
auth, err := armConfig.GetAuthorizationToken(oauthConfig, env.TokenAudience)
auth, err := armConfig.GetAuthorizationToken(sender.BuildSender("backend/remote-state/azure"), oauthConfig, env.TokenAudience)
if err != nil {
return nil, err
}

View File

@ -21,7 +21,7 @@ func withRequestLogging() autorest.SendDecorator {
return func(s autorest.Sender) autorest.Sender {
return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) {
// only log if logging's enabled
logLevel := logging.LogLevel()
logLevel := logging.CurrentLogLevel()
if logLevel == "" {
return s.Do(r)
}

View File

@ -0,0 +1,169 @@
package cos
import (
"context"
"fmt"
"net/http"
"net/url"
"strings"
"time"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/helper/schema"
"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common"
"github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common/profile"
tag "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813"
"github.com/tencentyun/cos-go-sdk-v5"
)
// Default values from environment variables
const (
PROVIDER_SECRET_ID = "TENCENTCLOUD_SECRET_ID"
PROVIDER_SECRET_KEY = "TENCENTCLOUD_SECRET_KEY"
PROVIDER_REGION = "TENCENTCLOUD_REGION"
)
// Backend implements "backend".Backend for TencentCloud COS
type Backend struct {
*schema.Backend
cosContext context.Context
cosClient *cos.Client
tagClient *tag.Client
region string
bucket string
prefix string
key string
encrypt bool
acl string
}
// New creates a new backend for TencentCloud cos remote state.
func New() backend.Backend {
s := &schema.Backend{
Schema: map[string]*schema.Schema{
"secret_id": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc(PROVIDER_SECRET_ID, nil),
Description: "Secret id of Tencent Cloud",
},
"secret_key": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc(PROVIDER_SECRET_KEY, nil),
Description: "Secret key of Tencent Cloud",
Sensitive: true,
},
"region": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc(PROVIDER_REGION, nil),
Description: "The region of the COS bucket",
InputDefault: "ap-guangzhou",
},
"bucket": {
Type: schema.TypeString,
Required: true,
Description: "The name of the COS bucket",
},
"prefix": {
Type: schema.TypeString,
Optional: true,
Description: "The directory for saving the state file in bucket",
ValidateFunc: func(v interface{}, s string) ([]string, []error) {
prefix := v.(string)
if strings.HasPrefix(prefix, "/") || strings.HasPrefix(prefix, "./") {
return nil, []error{fmt.Errorf("prefix must not start with '/' or './'")}
}
return nil, nil
},
},
"key": {
Type: schema.TypeString,
Optional: true,
Description: "The path for saving the state file in bucket",
Default: "terraform.tfstate",
ValidateFunc: func(v interface{}, s string) ([]string, []error) {
if strings.HasPrefix(v.(string), "/") || strings.HasSuffix(v.(string), "/") {
return nil, []error{fmt.Errorf("key can not start and end with '/'")}
}
return nil, nil
},
},
"encrypt": {
Type: schema.TypeBool,
Optional: true,
Description: "Whether to enable server side encryption of the state file",
Default: true,
},
"acl": {
Type: schema.TypeString,
Optional: true,
Description: "Object ACL to be applied to the state file",
Default: "private",
ValidateFunc: func(v interface{}, s string) ([]string, []error) {
value := v.(string)
if value != "private" && value != "public-read" {
return nil, []error{fmt.Errorf(
"acl value invalid, expected %s or %s, got %s",
"private", "public-read", value)}
}
return nil, nil
},
},
},
}
result := &Backend{Backend: s}
result.Backend.ConfigureFunc = result.configure
return result
}
// configure initializes the cos client
func (b *Backend) configure(ctx context.Context) error {
if b.cosClient != nil {
return nil
}
b.cosContext = ctx
data := schema.FromContextBackendConfig(b.cosContext)
b.region = data.Get("region").(string)
b.bucket = data.Get("bucket").(string)
b.prefix = data.Get("prefix").(string)
b.key = data.Get("key").(string)
b.encrypt = data.Get("encrypt").(bool)
b.acl = data.Get("acl").(string)
u, err := url.Parse(fmt.Sprintf("https://%s.cos.%s.myqcloud.com", b.bucket, b.region))
if err != nil {
return err
}
b.cosClient = cos.NewClient(
&cos.BaseURL{BucketURL: u},
&http.Client{
Timeout: 60 * time.Second,
Transport: &cos.AuthorizationTransport{
SecretID: data.Get("secret_id").(string),
SecretKey: data.Get("secret_key").(string),
},
},
)
credential := common.NewCredential(
data.Get("secret_id").(string),
data.Get("secret_key").(string),
)
cpf := profile.NewClientProfile()
cpf.HttpProfile.ReqMethod = "POST"
cpf.HttpProfile.ReqTimeout = 300
cpf.Language = "en-US"
b.tagClient, err = tag.NewClient(credential, b.region, cpf)
return err
}

View File

@ -0,0 +1,178 @@
package cos
import (
"fmt"
"log"
"path"
"sort"
"strings"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/state/remote"
"github.com/hashicorp/terraform/states"
"github.com/likexian/gokit/assert"
)
// Define file suffix
const (
stateFileSuffix = ".tfstate"
lockFileSuffix = ".tflock"
)
// Workspaces returns a list of names for the workspaces
func (b *Backend) Workspaces() ([]string, error) {
c, err := b.client("tencentcloud")
if err != nil {
return nil, err
}
obs, err := c.getBucket(b.prefix)
log.Printf("[DEBUG] list all workspaces, objects: %v, error: %v", obs, err)
if err != nil {
return nil, err
}
ws := []string{backend.DefaultStateName}
for _, vv := range obs {
// <name>.tfstate
if !strings.HasSuffix(vv.Key, stateFileSuffix) {
continue
}
// default workspace
if path.Join(b.prefix, b.key) == vv.Key {
continue
}
// <prefix>/<workspace>/<key>
prefix := strings.TrimRight(b.prefix, "/") + "/"
parts := strings.Split(strings.TrimPrefix(vv.Key, prefix), "/")
if len(parts) > 0 && parts[0] != "" {
ws = append(ws, parts[0])
}
}
sort.Strings(ws[1:])
log.Printf("[DEBUG] list all workspaces, workspaces: %v", ws)
return ws, nil
}
// DeleteWorkspace deletes the named workspace. The "default" state cannot be deleted.
func (b *Backend) DeleteWorkspace(name string) error {
log.Printf("[DEBUG] delete workspace, workspace: %v", name)
if name == backend.DefaultStateName || name == "" {
return fmt.Errorf("default state is not allow to delete")
}
c, err := b.client(name)
if err != nil {
return err
}
return c.Delete()
}
// StateMgr manages the state; if the named state does not exist, a new file will be created
func (b *Backend) StateMgr(name string) (state.State, error) {
log.Printf("[DEBUG] state manager, current workspace: %v", name)
c, err := b.client(name)
if err != nil {
return nil, err
}
stateMgr := &remote.State{Client: c}
ws, err := b.Workspaces()
if err != nil {
return nil, err
}
if !assert.IsContains(ws, name) {
log.Printf("[DEBUG] workspace %v not exists", name)
// take a lock on this state while we write it
lockInfo := state.NewLockInfo()
lockInfo.Operation = "init"
lockId, err := c.Lock(lockInfo)
if err != nil {
return nil, fmt.Errorf("Failed to lock cos state: %s", err)
}
// Local helper function so we can call it multiple places
lockUnlock := func(e error) error {
if err := stateMgr.Unlock(lockId); err != nil {
return fmt.Errorf(unlockErrMsg, err, lockId)
}
return e
}
// Grab the value
if err := stateMgr.RefreshState(); err != nil {
err = lockUnlock(err)
return nil, err
}
// If we have no state, we have to create an empty state
if v := stateMgr.State(); v == nil {
if err := stateMgr.WriteState(states.NewState()); err != nil {
err = lockUnlock(err)
return nil, err
}
if err := stateMgr.PersistState(); err != nil {
err = lockUnlock(err)
return nil, err
}
}
// Unlock, the state should now be initialized
if err := lockUnlock(nil); err != nil {
return nil, err
}
}
return stateMgr, nil
}
// client returns a remoteClient for the named state.
func (b *Backend) client(name string) (*remoteClient, error) {
if strings.TrimSpace(name) == "" {
return nil, fmt.Errorf("state name not allow to be empty")
}
return &remoteClient{
cosContext: b.cosContext,
cosClient: b.cosClient,
tagClient: b.tagClient,
bucket: b.bucket,
stateFile: b.stateFile(name),
lockFile: b.lockFile(name),
encrypt: b.encrypt,
acl: b.acl,
}, nil
}
// stateFile returns state file path by name
func (b *Backend) stateFile(name string) string {
if name == backend.DefaultStateName {
return path.Join(b.prefix, b.key)
}
return path.Join(b.prefix, name, b.key)
}
// lockFile returns lock file path by name
func (b *Backend) lockFile(name string) string {
return b.stateFile(name) + lockFileSuffix
}
// unlockErrMsg is the error message for a failed unlock
const unlockErrMsg = `
Unlocking the state file on TencentCloud cos backend failed:
Error message: %v
Lock ID (gen): %s
You may have to force-unlock this state in order to use it again.
The TencentCloud backend acquires a lock during initialization
to ensure the initial state file is created.
`

View File

@ -0,0 +1,227 @@
package cos
import (
"crypto/md5"
"fmt"
"os"
"testing"
"time"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/state/remote"
"github.com/likexian/gokit/assert"
)
const (
defaultPrefix = ""
defaultKey = "terraform.tfstate"
)
// Tests adapted from the GCS backend tests
func TestStateFile(t *testing.T) {
t.Parallel()
cases := []struct {
prefix string
stateName string
key string
wantStateFile string
wantLockFile string
}{
{"", "default", "default.tfstate", "default.tfstate", "default.tfstate.tflock"},
{"", "default", "test.tfstate", "test.tfstate", "test.tfstate.tflock"},
{"", "dev", "test.tfstate", "dev/test.tfstate", "dev/test.tfstate.tflock"},
{"terraform/test", "default", "default.tfstate", "terraform/test/default.tfstate", "terraform/test/default.tfstate.tflock"},
{"terraform/test", "default", "test.tfstate", "terraform/test/test.tfstate", "terraform/test/test.tfstate.tflock"},
{"terraform/test", "dev", "test.tfstate", "terraform/test/dev/test.tfstate", "terraform/test/dev/test.tfstate.tflock"},
}
for _, c := range cases {
b := &Backend{
prefix: c.prefix,
key: c.key,
}
assert.Equal(t, b.stateFile(c.stateName), c.wantStateFile)
assert.Equal(t, b.lockFile(c.stateName), c.wantLockFile)
}
}
func TestRemoteClient(t *testing.T) {
t.Parallel()
bucket := bucketName(t)
be := setupBackend(t, bucket, defaultPrefix, defaultKey, false)
defer teardownBackend(t, be)
ss, err := be.StateMgr(backend.DefaultStateName)
assert.Nil(t, err)
rs, ok := ss.(*remote.State)
assert.True(t, ok)
remote.TestClient(t, rs.Client)
}
func TestRemoteClientWithPrefix(t *testing.T) {
t.Parallel()
prefix := "prefix/test"
bucket := bucketName(t)
be := setupBackend(t, bucket, prefix, defaultKey, false)
defer teardownBackend(t, be)
ss, err := be.StateMgr(backend.DefaultStateName)
assert.Nil(t, err)
rs, ok := ss.(*remote.State)
assert.True(t, ok)
remote.TestClient(t, rs.Client)
}
func TestRemoteClientWithEncryption(t *testing.T) {
t.Parallel()
bucket := bucketName(t)
be := setupBackend(t, bucket, defaultPrefix, defaultKey, true)
defer teardownBackend(t, be)
ss, err := be.StateMgr(backend.DefaultStateName)
assert.Nil(t, err)
rs, ok := ss.(*remote.State)
assert.True(t, ok)
remote.TestClient(t, rs.Client)
}
func TestRemoteLocks(t *testing.T) {
t.Parallel()
bucket := bucketName(t)
be := setupBackend(t, bucket, defaultPrefix, defaultKey, false)
defer teardownBackend(t, be)
remoteClient := func() (remote.Client, error) {
ss, err := be.StateMgr(backend.DefaultStateName)
if err != nil {
return nil, err
}
rs, ok := ss.(*remote.State)
if !ok {
return nil, fmt.Errorf("be.StateMgr(): got a %T, want a *remote.State", ss)
}
return rs.Client, nil
}
c0, err := remoteClient()
assert.Nil(t, err)
c1, err := remoteClient()
assert.Nil(t, err)
remote.TestRemoteLocks(t, c0, c1)
}
func TestBackend(t *testing.T) {
t.Parallel()
bucket := bucketName(t)
be0 := setupBackend(t, bucket, defaultPrefix, defaultKey, false)
defer teardownBackend(t, be0)
be1 := setupBackend(t, bucket, defaultPrefix, defaultKey, false)
defer teardownBackend(t, be1)
backend.TestBackendStates(t, be0)
backend.TestBackendStateLocks(t, be0, be1)
backend.TestBackendStateForceUnlock(t, be0, be1)
}
func TestBackendWithPrefix(t *testing.T) {
t.Parallel()
prefix := "prefix/test"
bucket := bucketName(t)
be0 := setupBackend(t, bucket, prefix, defaultKey, false)
defer teardownBackend(t, be0)
be1 := setupBackend(t, bucket, prefix+"/", defaultKey, false)
defer teardownBackend(t, be1)
backend.TestBackendStates(t, be0)
backend.TestBackendStateLocks(t, be0, be1)
}
func TestBackendWithEncryption(t *testing.T) {
t.Parallel()
bucket := bucketName(t)
be0 := setupBackend(t, bucket, defaultPrefix, defaultKey, true)
defer teardownBackend(t, be0)
be1 := setupBackend(t, bucket, defaultPrefix, defaultKey, true)
defer teardownBackend(t, be1)
backend.TestBackendStates(t, be0)
backend.TestBackendStateLocks(t, be0, be1)
}
func setupBackend(t *testing.T, bucket, prefix, key string, encrypt bool) backend.Backend {
t.Helper()
skip := os.Getenv("TF_COS_APPID") == ""
if skip {
t.Skip("This test require setting TF_COS_APPID environment variables")
}
if os.Getenv(PROVIDER_REGION) == "" {
os.Setenv(PROVIDER_REGION, "ap-guangzhou")
}
appId := os.Getenv("TF_COS_APPID")
region := os.Getenv(PROVIDER_REGION)
config := map[string]interface{}{
"region": region,
"bucket": bucket + appId,
"prefix": prefix,
"key": key,
}
b := backend.TestBackendConfig(t, New(), backend.TestWrapConfig(config))
be := b.(*Backend)
c, err := be.client("tencentcloud")
assert.Nil(t, err)
err = c.putBucket()
assert.Nil(t, err)
return b
}
func teardownBackend(t *testing.T, b backend.Backend) {
t.Helper()
c, err := b.(*Backend).client("tencentcloud")
assert.Nil(t, err)
err = c.deleteBucket(true)
assert.Nil(t, err)
}
func bucketName(t *testing.T) string {
unique := fmt.Sprintf("%s-%x", t.Name(), time.Now().UnixNano())
return fmt.Sprintf("terraform-test-%s-%s", fmt.Sprintf("%x", md5.Sum([]byte(unique)))[:10], "")
}

View File

@ -0,0 +1,403 @@
package cos
import (
"bytes"
"context"
"crypto/md5"
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"strings"
"time"
multierror "github.com/hashicorp/go-multierror"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/state/remote"
tag "github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag/v20180813"
"github.com/tencentyun/cos-go-sdk-v5"
)
const (
lockTagKey = "tencentcloud-terraform-lock"
)
// remoteClient implements the client for remote state
type remoteClient struct {
cosContext context.Context
cosClient *cos.Client
tagClient *tag.Client
bucket string
stateFile string
lockFile string
encrypt bool
acl string
}
// Get returns the remote state file
func (c *remoteClient) Get() (*remote.Payload, error) {
log.Printf("[DEBUG] get remote state file %s", c.stateFile)
exists, data, checksum, err := c.getObject(c.stateFile)
if err != nil {
return nil, err
}
if !exists {
return nil, nil
}
payload := &remote.Payload{
Data: data,
MD5: []byte(checksum),
}
return payload, nil
}
// Put uploads the state file to the remote
func (c *remoteClient) Put(data []byte) error {
log.Printf("[DEBUG] put remote state file %s", c.stateFile)
return c.putObject(c.stateFile, data)
}
// Delete deletes the remote state file
func (c *remoteClient) Delete() error {
log.Printf("[DEBUG] delete remote state file %s", c.stateFile)
return c.deleteObject(c.stateFile)
}
// Lock locks the remote state file for writing
func (c *remoteClient) Lock(info *state.LockInfo) (string, error) {
log.Printf("[DEBUG] lock remote state file %s", c.lockFile)
err := c.cosLock(c.bucket, c.lockFile)
if err != nil {
return "", c.lockError(err)
}
defer c.cosUnlock(c.bucket, c.lockFile)
exists, _, _, err := c.getObject(c.lockFile)
if err != nil {
return "", c.lockError(err)
}
if exists {
return "", c.lockError(fmt.Errorf("lock file %s exists", c.lockFile))
}
info.Path = c.lockFile
data, err := json.Marshal(info)
if err != nil {
return "", c.lockError(err)
}
check := fmt.Sprintf("%x", md5.Sum(data))
err = c.putObject(c.lockFile, data)
if err != nil {
return "", c.lockError(err)
}
return check, nil
}
// Unlock unlocks the remote state file
func (c *remoteClient) Unlock(check string) error {
log.Printf("[DEBUG] unlock remote state file %s", c.lockFile)
info, err := c.lockInfo()
if err != nil {
return c.lockError(err)
}
if info.ID != check {
return c.lockError(fmt.Errorf("lock id mismatch, %v != %v", info.ID, check))
}
err = c.deleteObject(c.lockFile)
if err != nil {
return c.lockError(err)
}
return nil
}
// lockError returns state.LockError
func (c *remoteClient) lockError(err error) *state.LockError {
log.Printf("[DEBUG] failed to lock or unlock %s: %v", c.lockFile, err)
lockErr := &state.LockError{
Err: err,
}
info, infoErr := c.lockInfo()
if infoErr != nil {
lockErr.Err = multierror.Append(lockErr.Err, infoErr)
} else {
lockErr.Info = info
}
return lockErr
}
// lockInfo returns LockInfo from lock file
func (c *remoteClient) lockInfo() (*state.LockInfo, error) {
exists, data, checksum, err := c.getObject(c.lockFile)
if err != nil {
return nil, err
}
if !exists {
return nil, fmt.Errorf("lock file %s not exists", c.lockFile)
}
info := &state.LockInfo{}
if err := json.Unmarshal(data, info); err != nil {
return nil, err
}
info.ID = checksum
return info, nil
}
// getObject gets the remote object
func (c *remoteClient) getObject(cosFile string) (exists bool, data []byte, checksum string, err error) {
rsp, err := c.cosClient.Object.Get(c.cosContext, cosFile, nil)
if rsp == nil {
log.Printf("[DEBUG] getObject %s: error: %v", cosFile, err)
err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
return
}
defer rsp.Body.Close()
log.Printf("[DEBUG] getObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
if err != nil {
if rsp.StatusCode == 404 {
err = nil
} else {
err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
}
return
}
checksum = rsp.Header.Get("X-Cos-Meta-Md5")
log.Printf("[DEBUG] getObject %s: checksum: %s", cosFile, checksum)
if len(checksum) != 32 {
err = fmt.Errorf("failed to open file at %v: checksum %s invalid", cosFile, checksum)
return
}
exists = true
data, err = ioutil.ReadAll(rsp.Body)
log.Printf("[DEBUG] getObject %s: data length: %d", cosFile, len(data))
if err != nil {
err = fmt.Errorf("failed to open file at %v: %v", cosFile, err)
return
}
check := fmt.Sprintf("%x", md5.Sum(data))
log.Printf("[DEBUG] getObject %s: check: %s", cosFile, check)
if check != checksum {
err = fmt.Errorf("failed to open file at %v: checksum mismatch, %s != %s", cosFile, check, checksum)
return
}
return
}
// putObject puts the object to the remote
func (c *remoteClient) putObject(cosFile string, data []byte) error {
opt := &cos.ObjectPutOptions{
ObjectPutHeaderOptions: &cos.ObjectPutHeaderOptions{
XCosMetaXXX: &http.Header{
"X-Cos-Meta-Md5": []string{fmt.Sprintf("%x", md5.Sum(data))},
},
},
ACLHeaderOptions: &cos.ACLHeaderOptions{
XCosACL: c.acl,
},
}
if c.encrypt {
opt.ObjectPutHeaderOptions.XCosServerSideEncryption = "AES256"
}
r := bytes.NewReader(data)
rsp, err := c.cosClient.Object.Put(c.cosContext, cosFile, r, opt)
if rsp == nil {
log.Printf("[DEBUG] putObject %s: error: %v", cosFile, err)
return fmt.Errorf("failed to save file to %v: %v", cosFile, err)
}
defer rsp.Body.Close()
log.Printf("[DEBUG] putObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
if err != nil {
return fmt.Errorf("failed to save file to %v: %v", cosFile, err)
}
return nil
}
// deleteObject deletes the remote object
func (c *remoteClient) deleteObject(cosFile string) error {
rsp, err := c.cosClient.Object.Delete(c.cosContext, cosFile)
if rsp == nil {
log.Printf("[DEBUG] deleteObject %s: error: %v", cosFile, err)
return fmt.Errorf("failed to delete file %v: %v", cosFile, err)
}
defer rsp.Body.Close()
log.Printf("[DEBUG] deleteObject %s: code: %d, error: %v", cosFile, rsp.StatusCode, err)
if rsp.StatusCode == 404 {
return nil
}
if err != nil {
return fmt.Errorf("failed to delete file %v: %v", cosFile, err)
}
return nil
}
// getBucket lists bucket objects by prefix
func (c *remoteClient) getBucket(prefix string) (obs []cos.Object, err error) {
fs, rsp, err := c.cosClient.Bucket.Get(c.cosContext, &cos.BucketGetOptions{Prefix: prefix})
if rsp == nil {
log.Printf("[DEBUG] getBucket %s/%s: error: %v", c.bucket, prefix, err)
err = fmt.Errorf("bucket %s not exists", c.bucket)
return
}
defer rsp.Body.Close()
log.Printf("[DEBUG] getBucket %s/%s: code: %d, error: %v", c.bucket, prefix, rsp.StatusCode, err)
if rsp.StatusCode == 404 {
err = fmt.Errorf("bucket %s not exists", c.bucket)
return
}
if err != nil {
return
}
return fs.Contents, nil
}
// putBucket creates the cos bucket
func (c *remoteClient) putBucket() error {
rsp, err := c.cosClient.Bucket.Put(c.cosContext, nil)
if rsp == nil {
log.Printf("[DEBUG] putBucket %s: error: %v", c.bucket, err)
return fmt.Errorf("failed to create bucket %v: %v", c.bucket, err)
}
defer rsp.Body.Close()
log.Printf("[DEBUG] putBucket %s: code: %d, error: %v", c.bucket, rsp.StatusCode, err)
if rsp.StatusCode == 409 {
return nil
}
if err != nil {
return fmt.Errorf("failed to create bucket %v: %v", c.bucket, err)
}
return nil
}
// deleteBucket deletes the cos bucket
func (c *remoteClient) deleteBucket(recursive bool) error {
if recursive {
obs, err := c.getBucket("")
if err != nil {
if strings.Contains(err.Error(), "not exists") {
return nil
}
log.Printf("[DEBUG] deleteBucket %s: empty bucket error: %v", c.bucket, err)
return fmt.Errorf("failed to empty bucket %v: %v", c.bucket, err)
}
for _, v := range obs {
c.deleteObject(v.Key)
}
}
rsp, err := c.cosClient.Bucket.Delete(c.cosContext)
if rsp == nil {
log.Printf("[DEBUG] deleteBucket %s: error: %v", c.bucket, err)
return fmt.Errorf("failed to delete bucket %v: %v", c.bucket, err)
}
defer rsp.Body.Close()
log.Printf("[DEBUG] deleteBucket %s: code: %d, error: %v", c.bucket, rsp.StatusCode, err)
if rsp.StatusCode == 404 {
return nil
}
if err != nil {
return fmt.Errorf("failed to delete bucket %v: %v", c.bucket, err)
}
return nil
}
// cosLock locks cos for writing
func (c *remoteClient) cosLock(bucket, cosFile string) error {
log.Printf("[DEBUG] lock cos file %s:%s", bucket, cosFile)
cosPath := fmt.Sprintf("%s:%s", bucket, cosFile)
lockTagValue := fmt.Sprintf("%x", md5.Sum([]byte(cosPath)))
return c.CreateTag(lockTagKey, lockTagValue)
}
// cosUnlock unlocks cos for writing
func (c *remoteClient) cosUnlock(bucket, cosFile string) error {
log.Printf("[DEBUG] unlock cos file %s:%s", bucket, cosFile)
cosPath := fmt.Sprintf("%s:%s", bucket, cosFile)
lockTagValue := fmt.Sprintf("%x", md5.Sum([]byte(cosPath)))
var err error
for i := 0; i < 30; i++ {
err = c.DeleteTag(lockTagKey, lockTagValue)
if err == nil {
return nil
}
time.Sleep(1 * time.Second)
}
return err
}
// CreateTag creates a tag with the given key and value
func (c *remoteClient) CreateTag(key, value string) error {
request := tag.NewCreateTagRequest()
request.TagKey = &key
request.TagValue = &value
_, err := c.tagClient.CreateTag(request)
log.Printf("[DEBUG] create tag %s:%s: error: %v", key, value, err)
if err != nil {
return fmt.Errorf("failed to create tag: %s -> %s: %s", key, value, err)
}
return nil
}
// DeleteTag deletes the tag with the given key and value
func (c *remoteClient) DeleteTag(key, value string) error {
request := tag.NewDeleteTagRequest()
request.TagKey = &key
request.TagValue = &value
_, err := c.tagClient.DeleteTag(request)
log.Printf("[DEBUG] delete tag %s:%s: error: %v", key, value, err)
if err != nil {
return fmt.Errorf("failed to delete tag: %s -> %s: %s", key, value, err)
}
return nil
}
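// Illustrative sketch only (hypothetical helper): the lock is a two-step
// guard. A Tencent Cloud tag keyed by lockTagKey, whose value is the md5 of
// "<bucket>:<lockfile>", serializes writers, while the .tflock object holds
// the state.LockInfo for as long as the lock is held.
func exampleLockTagValue(bucket, lockFile string) string {
	cosPath := fmt.Sprintf("%s:%s", bucket, lockFile)
	return fmt.Sprintf("%x", md5.Sum([]byte(cosPath)))
}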

View File

@ -136,6 +136,8 @@ func (b *Backend) configure(ctx context.Context) error {
})
} else if v, ok := data.GetOk("credentials"); ok {
creds = v.(string)
} else if v := os.Getenv("GOOGLE_BACKEND_CREDENTIALS"); v != "" {
creds = v
} else {
creds = os.Getenv("GOOGLE_CREDENTIALS")
}

View File

@ -26,7 +26,7 @@ func TestHTTPClient(t *testing.T) {
url, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("err: %s", err)
t.Fatalf("Parse: %s", err)
}
// Test basic get/update
@ -73,6 +73,10 @@ func TestHTTPClient(t *testing.T) {
UpdateMethod: "PUT",
Client: retryablehttp.NewClient(),
}
if err != nil {
t.Fatalf("Parse: %s", err)
}
remote.TestClient(t, client) // first time through: 201
remote.TestClient(t, client) // second time, with identical data: 204
@ -83,18 +87,13 @@ func TestHTTPClient(t *testing.T) {
defer ts.Close()
url, err = url.Parse(ts.URL)
if err != nil {
t.Fatalf("Parse: %s", err)
}
client = &httpClient{URL: url, Client: retryablehttp.NewClient()}
remote.TestClient(t, client)
}
func assertError(t *testing.T, err error, expected string) {
if err == nil {
t.Fatalf("Expected empty config to err")
} else if err.Error() != expected {
t.Fatalf("Expected err.Error() to be \"%s\", got \"%s\"", expected, err.Error())
}
}
type testHTTPHandler struct {
Data []byte
Locked bool

View File

@ -5,11 +5,13 @@ import (
"encoding/json"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/sts"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
"github.com/jmespath/go-jmespath"
"io/ioutil"
"os"
"runtime"
@ -21,6 +23,7 @@ import (
"github.com/aliyun/aliyun-tablestore-go-sdk/tablestore"
"github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/terraform/version"
"github.com/mitchellh/go-homedir"
"log"
"net/http"
"strconv"
@ -52,6 +55,13 @@ func New() backend.Backend {
DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_SECURITY_TOKEN", ""),
},
"ecs_role_name": {
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ECS_ROLE_NAME", os.Getenv("ALICLOUD_ECS_ROLE_NAME")),
Description: "The RAM Role Name attached on a ECS instance for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console.",
},
"region": &schema.Schema{
Type: schema.TypeString,
Optional: true,
@ -140,6 +150,7 @@ func New() backend.Backend {
"shared_credentials_file": {
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_SHARED_CREDENTIALS_FILE", ""),
Description: "This is the path to the shared credentials file. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used.",
},
"profile": {
@ -276,8 +287,17 @@ func (b *Backend) configure(ctx context.Context) error {
}
}
if accessKey == "" {
ecsRoleName := getBackendConfig(d.Get("ecs_role_name").(string), "ram_role_name")
subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAuthCredentialByEcsRoleName(ecsRoleName)
if err != nil {
return err
}
accessKey, secretKey, securityToken = subAccessKeyId, subAccessKeySecret, subSecurityToken
}
if roleArn != "" {
subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy, sessionExpiration)
subAccessKeyId, subAccessKeySecret, subSecurityToken, err := getAssumeRoleAK(accessKey, secretKey, securityToken, region, roleArn, sessionName, policy, sessionExpiration)
if err != nil {
return err
}
@ -347,7 +367,7 @@ func (b *Backend) getOSSEndpointByRegion(access_key, secret_key, security_token,
return endpointsResponse, nil
}
func getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy string, sessionExpiration int) (string, string, string, error) {
func getAssumeRoleAK(accessKey, secretKey, stsToken, region, roleArn, sessionName, policy string, sessionExpiration int) (string, string, string, error) {
request := sts.CreateAssumeRoleRequest()
request.RoleArn = roleArn
request.RoleSessionName = sessionName
@ -355,7 +375,13 @@ func getAssumeRoleAK(accessKey, secretKey, region, roleArn, sessionName, policy
request.Policy = policy
request.Scheme = "https"
client, err := sts.NewClientWithAccessKey(region, accessKey, secretKey)
var client *sts.Client
var err error
if stsToken == "" {
client, err = sts.NewClientWithAccessKey(region, accessKey, secretKey)
} else {
client, err = sts.NewClientWithStsToken(region, accessKey, secretKey, stsToken)
}
if err != nil {
return "", "", "", err
}
@ -445,7 +471,11 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
return nil, nil
}
current := d.Get("profile").(string)
profilePath := d.Get("shared_credentials_file").(string)
// Set CredsFilename, expanding home directory
profilePath, err := homedir.Expand(d.Get("shared_credentials_file").(string))
if err != nil {
return nil, err
}
if profilePath == "" {
profilePath = fmt.Sprintf("%s/.aliyun/config.json", os.Getenv("HOME"))
if runtime.GOOS == "windows" {
@ -453,7 +483,7 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
}
}
providerConfig = make(map[string]interface{})
_, err := os.Stat(profilePath)
_, err = os.Stat(profilePath)
if !os.IsNotExist(err) {
data, err := ioutil.ReadFile(profilePath)
if err != nil {
@ -503,3 +533,78 @@ func getConfigFromProfile(d *schema.ResourceData, ProfileKey string) (interface{
return providerConfig[ProfileKey], nil
}
var securityCredURL = "http://100.100.100.200/latest/meta-data/ram/security-credentials/"
// getAuthCredentialByEcsRoleName accesses the ECS metadata service to get an STS credential.
// Ideally this would be done by the SDK, but currently not all resources and products support alibaba-cloud-sdk-go,
// which is the Go SDK that does support ECS role names.
// This method is a temporary solution and should be removed once all of the Go SDKs support ECS role names.
// The related PR: https://github.com/terraform-providers/terraform-provider-alicloud/pull/731
func getAuthCredentialByEcsRoleName(ecsRoleName string) (accessKey, secretKey, token string, err error) {
if ecsRoleName == "" {
return
}
requestUrl := securityCredURL + ecsRoleName
httpRequest, err := http.NewRequest(requests.GET, requestUrl, strings.NewReader(""))
if err != nil {
err = fmt.Errorf("build sts requests err: %s", err.Error())
return
}
httpClient := &http.Client{}
httpResponse, err := httpClient.Do(httpRequest)
if err != nil {
err = fmt.Errorf("get Ecs sts token err : %s", err.Error())
return
}
response := responses.NewCommonResponse()
err = responses.Unmarshal(response, httpResponse, "")
if err != nil {
err = fmt.Errorf("Unmarshal Ecs sts token response err : %s", err.Error())
return
}
if response.GetHttpStatus() != http.StatusOK {
err = fmt.Errorf("get Ecs sts token err, httpStatus: %d, message = %s", response.GetHttpStatus(), response.GetHttpContentString())
return
}
var data interface{}
err = json.Unmarshal(response.GetHttpContentBytes(), &data)
if err != nil {
err = fmt.Errorf("refresh Ecs sts token err, json.Unmarshal fail: %s", err.Error())
return
}
code, err := jmespath.Search("Code", data)
if err != nil {
err = fmt.Errorf("refresh Ecs sts token err, fail to get Code: %s", err.Error())
return
}
if code.(string) != "Success" {
err = fmt.Errorf("refresh Ecs sts token err, Code is not Success")
return
}
accessKeyId, err := jmespath.Search("AccessKeyId", data)
if err != nil {
err = fmt.Errorf("refresh Ecs sts token err, fail to get AccessKeyId: %s", err.Error())
return
}
accessKeySecret, err := jmespath.Search("AccessKeySecret", data)
if err != nil {
err = fmt.Errorf("refresh Ecs sts token err, fail to get AccessKeySecret: %s", err.Error())
return
}
securityToken, err := jmespath.Search("SecurityToken", data)
if err != nil {
err = fmt.Errorf("refresh Ecs sts token err, fail to get SecurityToken: %s", err.Error())
return
}
if accessKeyId == nil || accessKeySecret == nil || securityToken == nil {
err = fmt.Errorf("there is no any available accesskey, secret and security token for Ecs role %s", ecsRoleName)
return
}
return accessKeyId.(string), accessKeySecret.(string), securityToken.(string), nil
}
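As a rough usage sketch (not part of this change), a provider configuration step might resolve its credentials through this helper when an ECS RAM role is set. The d and config identifiers and the "ecs_role_name" key are illustrative assumptions, not names defined above.
// Hypothetical call site: fall back to instance-metadata credentials when
// the configuration names an ECS RAM role but no static keys are provided.
if roleName := d.Get("ecs_role_name").(string); roleName != "" {
	accessKey, secretKey, stsToken, err := getAuthCredentialByEcsRoleName(roleName)
	if err != nil {
		return nil, err
	}
	config.AccessKey = accessKey
	config.SecretKey = secretKey
	config.SecurityToken = stsToken
}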

View File

@ -18,6 +18,8 @@ import (
)
func (b *Backend) Workspaces() ([]string, error) {
const maxKeys = 1000
prefix := ""
if b.workspaceKeyPrefix != "" {
@ -25,24 +27,24 @@ func (b *Backend) Workspaces() ([]string, error) {
}
params := &s3.ListObjectsInput{
Bucket: &b.bucketName,
Prefix: aws.String(prefix),
}
resp, err := b.s3Client.ListObjects(params)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == s3.ErrCodeNoSuchBucket {
return nil, fmt.Errorf(errS3NoSuchBucket, err)
}
return nil, err
Bucket: &b.bucketName,
Prefix: aws.String(prefix),
MaxKeys: aws.Int64(maxKeys),
}
wss := []string{backend.DefaultStateName}
for _, obj := range resp.Contents {
ws := b.keyEnv(*obj.Key)
if ws != "" {
wss = append(wss, ws)
err := b.s3Client.ListObjectsPages(params, func(page *s3.ListObjectsOutput, lastPage bool) bool {
for _, obj := range page.Contents {
ws := b.keyEnv(*obj.Key)
if ws != "" {
wss = append(wss, ws)
}
}
return !lastPage
})
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == s3.ErrCodeNoSuchBucket {
return nil, fmt.Errorf(errS3NoSuchBucket, err)
}
sort.Strings(wss[1:])
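The change above swaps a single ListObjects call for ListObjectsPages so that workspaces beyond the first 1000 keys are still discovered. A minimal standalone sketch of the same aws-sdk-go pagination idiom, with placeholder bucket and prefix values:
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	client := s3.New(session.Must(session.NewSession()))

	params := &s3.ListObjectsInput{
		Bucket:  aws.String("example-bucket"), // placeholder
		Prefix:  aws.String("env:/"),          // placeholder workspace prefix
		MaxKeys: aws.Int64(1000),
	}

	// The callback runs once per page of up to MaxKeys objects;
	// returning !lastPage keeps fetching until the final page arrives.
	err := client.ListObjectsPages(params, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		for _, obj := range page.Contents {
			fmt.Println(*obj.Key)
		}
		return !lastPage
	})
	if err != nil {
		log.Fatal(err)
	}
}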

View File

@ -229,19 +229,24 @@ func TestBackendExtraPaths(t *testing.T) {
ddbTable: b.ddbTable,
}
// Write the first state
stateMgr := &remote.State{Client: client}
stateMgr.WriteState(s1)
if err := stateMgr.PersistState(); err != nil {
t.Fatal(err)
}
// Write the second state
// Note a new state manager - otherwise, because these
// states are equal, the state will not Put to the remote
client.path = b.path("s2")
stateMgr.WriteState(s2)
if err := stateMgr.PersistState(); err != nil {
stateMgr2 := &remote.State{Client: client}
stateMgr2.WriteState(s2)
if err := stateMgr2.PersistState(); err != nil {
t.Fatal(err)
}
s2Lineage := stateMgr.StateSnapshotMeta().Lineage
s2Lineage := stateMgr2.StateSnapshotMeta().Lineage
if err := checkStateList(b, []string{"default", "s1", "s2"}); err != nil {
t.Fatal(err)

View File

@ -14,12 +14,12 @@ import (
tfe "github.com/hashicorp/go-tfe"
version "github.com/hashicorp/go-version"
svchost "github.com/hashicorp/terraform-svchost"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/configs/configschema"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/state/remote"
"github.com/hashicorp/terraform/svchost"
"github.com/hashicorp/terraform/svchost/disco"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
tfversion "github.com/hashicorp/terraform/version"
@ -267,12 +267,17 @@ func (b *Remote) Configure(obj cty.Value) tfdiags.Diagnostics {
// Return an error if we still don't have a token at this point.
if token == "" {
loginCommand := "terraform login"
if b.hostname != defaultHostname {
loginCommand = loginCommand + " " + b.hostname
}
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Required token could not be found",
fmt.Sprintf(
"Make sure you configured a credentials block for %s in your CLI Config File.",
"Run the following command to generate a token for %s:\n %s",
b.hostname,
loginCommand,
),
))
return diags

View File

@ -75,10 +75,7 @@ func (b *Remote) opApply(stopCtx, cancelCtx context.Context, op *backend.Operati
))
}
variables, parseDiags := b.parseVariableValues(op)
diags = diags.Append(parseDiags)
if len(variables) > 0 {
if b.hasExplicitVariableValues(op) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Run variables are currently not supported",

View File

@ -14,7 +14,6 @@ import (
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
)
var (
@ -201,32 +200,42 @@ func (b *Remote) waitForRun(stopCtx, cancelCtx context.Context, op *backend.Oper
}
}
func (b *Remote) parseVariableValues(op *backend.Operation) (terraform.InputValues, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
result := make(terraform.InputValues)
// hasExplicitVariableValues is a best-effort check to determine whether the
// user has provided -var or -var-file arguments to a remote operation.
//
// The results may be inaccurate if the configuration is invalid or if
// individual variable values are invalid. That's okay because we only use this
// result to hint the user to set variables a different way. It's always the
// remote system's responsibility to do final validation of the input.
func (b *Remote) hasExplicitVariableValues(op *backend.Operation) bool {
// Load the configuration using the caller-provided configuration loader.
config, _, configDiags := op.ConfigLoader.LoadConfigWithSnapshot(op.ConfigDir)
diags = diags.Append(configDiags)
if diags.HasErrors() {
return nil, diags
if configDiags.HasErrors() {
// If we can't load the configuration then we'll assume no explicit
// variable values just to let the remote operation start and let
// the remote system return the same set of configuration errors.
return false
}
variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables)
diags = diags.Append(varDiags)
if diags.HasErrors() {
return nil, diags
}
// We're intentionally ignoring the diagnostics here because validation
// of the variable values is the responsibility of the remote system. Our
// goal here is just to make a best effort count of how many variable
// values are coming from -var or -var-file CLI arguments so that we can
// hint the user that those are not supported for remote operations.
variables, _ := backend.ParseVariableValues(op.Variables, config.Module.Variables)
// Save only the explicitly defined variables.
for k, v := range variables {
// Check for explicitly-defined (-var and -var-file) variables, which the
// remote backend does not support. All other source types are okay,
// because they are implicit from the execution context anyway and so
// their final values will come from the _remote_ execution context.
for _, v := range variables {
switch v.SourceType {
case terraform.ValueFromCLIArg, terraform.ValueFromNamedFile:
result[k] = v
return true
}
}
return result, diags
return false
}
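To make the classification above concrete, only the two CLI-driven source types count as explicit; every other source is something the remote system can reproduce on its own. A hedged sketch of that rule as a standalone helper (the function name is invented here, not part of this change):
// isExplicitVariableSource reports whether an input value originated from a
// -var or -var-file argument rather than from an implicit source such as
// environment variables, interactive input, or configuration defaults.
func isExplicitVariableSource(v *terraform.InputValue) bool {
	switch v.SourceType {
	case terraform.ValueFromCLIArg, terraform.ValueFromNamedFile:
		return true
	default:
		return false
	}
}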
func (b *Remote) costEstimate(stopCtx, cancelCtx context.Context, op *backend.Operation, r *tfe.Run) error {

View File

@ -2,16 +2,21 @@ package remote
import (
"context"
"fmt"
"log"
"strings"
"github.com/hashicorp/errwrap"
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/command/clistate"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/states/statemgr"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
"github.com/zclconf/go-cty/cty"
)
// Context implements backend.Enhanced.
@ -47,6 +52,17 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
return nil, nil, diags
}
defer func() {
// If we're returning with errors, and thus not producing a valid
// context, we'll want to avoid leaving the remote workspace locked.
if diags.HasErrors() {
err := op.StateLocker.Unlock(nil)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error unlocking state: {{err}}", err))
}
}
}()
log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", workspace)
if err := stateMgr.RefreshState(); err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
@ -88,28 +104,34 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
return nil, nil, diags
}
if tfeVariables != nil {
if op.Variables == nil {
op.Variables = make(map[string]backend.UnparsedVariableValue)
}
for _, v := range tfeVariables.Items {
if v.Sensitive {
v.Value = "<sensitive>"
if op.AllowUnsetVariables {
// If we're not going to use the variables in an operation we'll be
// more lax about them, stubbing out any unset ones as unknown.
// This gives us enough information to produce a consistent context,
// but not enough information to run a real operation (plan, apply, etc)
opts.Variables = stubAllVariables(op.Variables, config.Module.Variables)
} else {
if tfeVariables != nil {
if op.Variables == nil {
op.Variables = make(map[string]backend.UnparsedVariableValue)
}
op.Variables[v.Key] = &unparsedVariableValue{
value: v.Value,
source: terraform.ValueFromEnvVar,
for _, v := range tfeVariables.Items {
if v.Category == tfe.CategoryTerraform {
op.Variables[v.Key] = &remoteStoredVariableValue{
definition: v,
}
}
}
}
}
if op.Variables != nil {
variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables)
diags = diags.Append(varDiags)
if diags.HasErrors() {
return nil, nil, diags
if op.Variables != nil {
variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables)
diags = diags.Append(varDiags)
if diags.HasErrors() {
return nil, nil, diags
}
opts.Variables = variables
}
opts.Variables = variables
}
tfCtx, ctxDiags := terraform.NewContext(&opts)
@ -119,3 +141,114 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
return tfCtx, stateMgr, diags
}
func stubAllVariables(vv map[string]backend.UnparsedVariableValue, decls map[string]*configs.Variable) terraform.InputValues {
ret := make(terraform.InputValues, len(decls))
for name, cfg := range decls {
raw, exists := vv[name]
if !exists {
ret[name] = &terraform.InputValue{
Value: cty.UnknownVal(cfg.Type),
SourceType: terraform.ValueFromConfig,
}
continue
}
val, diags := raw.ParseVariableValue(cfg.ParsingMode)
if diags.HasErrors() {
ret[name] = &terraform.InputValue{
Value: cty.UnknownVal(cfg.Type),
SourceType: terraform.ValueFromConfig,
}
continue
}
ret[name] = val
}
return ret
}
// remoteStoredVariableValue is a backend.UnparsedVariableValue implementation
// that translates from the go-tfe representation of stored variables into
// the Terraform Core backend representation of variables.
type remoteStoredVariableValue struct {
definition *tfe.Variable
}
var _ backend.UnparsedVariableValue = (*remoteStoredVariableValue)(nil)
func (v *remoteStoredVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
var val cty.Value
switch {
case v.definition.Sensitive:
// If it's marked as sensitive then it's not available for use in
// local operations. We'll use an unknown value as a placeholder for
// it so that operations that don't need it might still work, but
// we'll also produce a warning about it to add context for any
// errors that might result here.
val = cty.DynamicVal
if !v.definition.HCL {
// If it's not marked as HCL then we at least know that the
// value must be a string, so we'll set that in case it allows
// us to do some more precise type checking.
val = cty.UnknownVal(cty.String)
}
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
fmt.Sprintf("Value for var.%s unavailable", v.definition.Key),
fmt.Sprintf("The value of variable %q is marked as sensitive in the remote workspace. This operation always runs locally, so the value for that variable is not available.", v.definition.Key),
))
case v.definition.HCL:
// If the variable value is marked as being in HCL syntax, we need to
// parse it the same way as it would be interpreted in a .tfvars
// file because that is how it would get passed to Terraform CLI for
// a remote operation and we want to mimic that result as closely as
// possible.
var exprDiags hcl.Diagnostics
expr, exprDiags := hclsyntax.ParseExpression([]byte(v.definition.Value), "<remote workspace>", hcl.Pos{Line: 1, Column: 1})
if expr != nil {
var moreDiags hcl.Diagnostics
val, moreDiags = expr.Value(nil)
exprDiags = append(exprDiags, moreDiags...)
} else {
// We'll have already put some errors in exprDiags above, so we'll
// just stub out the value here.
val = cty.DynamicVal
}
// We don't have sufficient context to return decent error messages
// for syntax errors in the remote values, so we'll just return a
// generic message instead for now.
// (More complete error messages will still result from true remote
// operations, because they'll run on the remote system where we've
// materialized the values into a tfvars file we can report from.)
if exprDiags.HasErrors() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
fmt.Sprintf("Invalid expression for var.%s", v.definition.Key),
fmt.Sprintf("The value of variable %q is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.", v.definition.Key),
))
}
default:
// A variable value _not_ marked as HCL is always a string, given
// literally.
val = cty.StringVal(v.definition.Value)
}
return &terraform.InputValue{
Value: val,
// We mark these as "from input" with the rationale that entering
// variable values into the Terraform Cloud or Enterprise UI is,
// roughly speaking, a similar idea to entering variable values at
// the interactive CLI prompts. It's not a perfect correspondence,
// but it's closer than the other options.
SourceType: terraform.ValueFromInput,
}, diags
}
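The HCL branch above is effectively the standalone parse-and-evaluate flow sketched below; because the expression is evaluated with a nil EvalContext, only literal values succeed, which is exactly the constraint the error message describes. The sample value is hypothetical.
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	// A stored variable value flagged as HCL, e.g. a list literal.
	src := `["a", "b"]`

	expr, diags := hclsyntax.ParseExpression([]byte(src), "<remote workspace>", hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		fmt.Println("parse error:", diags.Error())
		return
	}

	// A nil EvalContext provides no variables and no functions, so any
	// reference or function call in src would produce diagnostics here.
	val, moreDiags := expr.Value(nil)
	if moreDiags.HasErrors() {
		fmt.Println("eval error:", moreDiags.Error())
		return
	}
	fmt.Printf("value: %#v\n", val)
}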

View File

@ -0,0 +1,214 @@
package remote
import (
"testing"
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/internal/initwd"
"github.com/zclconf/go-cty/cty"
)
func TestRemoteStoredVariableValue(t *testing.T) {
tests := map[string]struct {
Def *tfe.Variable
Want cty.Value
WantError string
}{
"string literal": {
&tfe.Variable{
Key: "test",
Value: "foo",
HCL: false,
Sensitive: false,
},
cty.StringVal("foo"),
``,
},
"string HCL": {
&tfe.Variable{
Key: "test",
Value: `"foo"`,
HCL: true,
Sensitive: false,
},
cty.StringVal("foo"),
``,
},
"list HCL": {
&tfe.Variable{
Key: "test",
Value: `[]`,
HCL: true,
Sensitive: false,
},
cty.EmptyTupleVal,
``,
},
"null HCL": {
&tfe.Variable{
Key: "test",
Value: `null`,
HCL: true,
Sensitive: false,
},
cty.NullVal(cty.DynamicPseudoType),
``,
},
"literal sensitive": {
&tfe.Variable{
Key: "test",
HCL: false,
Sensitive: true,
},
cty.UnknownVal(cty.String),
``,
},
"HCL sensitive": {
&tfe.Variable{
Key: "test",
HCL: true,
Sensitive: true,
},
cty.DynamicVal,
``,
},
"HCL computation": {
// This (stored expressions containing computation) is not a case
// we intentionally supported, but it became possible for remote
// operations in Terraform 0.12 (due to Terraform Cloud/Enterprise
// just writing the HCL verbatim into generated `.tfvars` files).
// We support it here for consistency, and we continue to support
// it in both places for backward-compatibility. In practice,
// there's little reason to do computation in a stored variable
// value because references are not supported.
&tfe.Variable{
Key: "test",
Value: `[for v in ["a"] : v]`,
HCL: true,
Sensitive: false,
},
cty.TupleVal([]cty.Value{cty.StringVal("a")}),
``,
},
"HCL syntax error": {
&tfe.Variable{
Key: "test",
Value: `[`,
HCL: true,
Sensitive: false,
},
cty.DynamicVal,
`Invalid expression for var.test: The value of variable "test" is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.`,
},
"HCL with references": {
&tfe.Variable{
Key: "test",
Value: `foo.bar`,
HCL: true,
Sensitive: false,
},
cty.DynamicVal,
`Invalid expression for var.test: The value of variable "test" is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.`,
},
}
for name, test := range tests {
t.Run(name, func(t *testing.T) {
v := &remoteStoredVariableValue{
definition: test.Def,
}
// This ParseVariableValue implementation ignores the parsing mode,
// so we'll just always parse literal here. (The parsing mode is
// selected by the remote server, not by our local configuration.)
gotIV, diags := v.ParseVariableValue(configs.VariableParseLiteral)
if test.WantError != "" {
if !diags.HasErrors() {
t.Fatalf("missing expected error\ngot: <no error>\nwant: %s", test.WantError)
}
errStr := diags.Err().Error()
if errStr != test.WantError {
t.Fatalf("wrong error\ngot: %s\nwant: %s", errStr, test.WantError)
}
} else {
if diags.HasErrors() {
t.Fatalf("unexpected error\ngot: %s\nwant: <no error>", diags.Err().Error())
}
got := gotIV.Value
if !test.Want.RawEquals(got) {
t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want)
}
}
})
}
}
func TestRemoteContextWithVars(t *testing.T) {
catTerraform := tfe.CategoryTerraform
catEnv := tfe.CategoryEnv
tests := map[string]struct {
Opts *tfe.VariableCreateOptions
WantError string
}{
"Terraform variable": {
&tfe.VariableCreateOptions{
Category: &catTerraform,
},
`Value for undeclared variable: A variable named "key" was assigned a value, but the root module does not declare a variable of that name. To use this value, add a "variable" block to the configuration.`,
},
"environment variable": {
&tfe.VariableCreateOptions{
Category: &catEnv,
},
``,
},
}
for name, test := range tests {
t.Run(name, func(t *testing.T) {
configDir := "./testdata/empty"
b, bCleanup := testBackendDefault(t)
defer bCleanup()
_, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir)
defer configCleanup()
op := &backend.Operation{
ConfigDir: configDir,
ConfigLoader: configLoader,
Workspace: backend.DefaultStateName,
}
v := test.Opts
if v.Key == nil {
key := "key"
v.Key = &key
}
if v.Workspace == nil {
v.Workspace = &tfe.Workspace{
Name: b.workspace,
}
}
b.client.Variables.Create(nil, *v)
_, _, diags := b.Context(op)
if test.WantError != "" {
if !diags.HasErrors() {
t.Fatalf("missing expected error\ngot: <no error>\nwant: %s", test.WantError)
}
errStr := diags.Err().Error()
if errStr != test.WantError {
t.Fatalf("wrong error\ngot: %s\nwant: %s", errStr, test.WantError)
}
} else {
if diags.HasErrors() {
t.Fatalf("unexpected error\ngot: %s\nwant: <no error>", diags.Err().Error())
}
}
})
}
}

View File

@ -27,6 +27,7 @@ type mockClient struct {
PolicyChecks *mockPolicyChecks
Runs *mockRuns
StateVersions *mockStateVersions
Variables *mockVariables
Workspaces *mockWorkspaces
}
@ -40,6 +41,7 @@ func newMockClient() *mockClient {
c.PolicyChecks = newMockPolicyChecks(c)
c.Runs = newMockRuns(c)
c.StateVersions = newMockStateVersions(c)
c.Variables = newMockVariables(c)
c.Workspaces = newMockWorkspaces(c)
return c
}
@ -945,6 +947,63 @@ func (m *mockStateVersions) Download(ctx context.Context, url string) ([]byte, e
return state, nil
}
type mockVariables struct {
client *mockClient
workspaces map[string]*tfe.VariableList
}
func newMockVariables(client *mockClient) *mockVariables {
return &mockVariables{
client: client,
workspaces: make(map[string]*tfe.VariableList),
}
}
func (m *mockVariables) List(ctx context.Context, options tfe.VariableListOptions) (*tfe.VariableList, error) {
vl := m.workspaces[*options.Workspace]
return vl, nil
}
func (m *mockVariables) Create(ctx context.Context, options tfe.VariableCreateOptions) (*tfe.Variable, error) {
v := &tfe.Variable{
ID: generateID("var-"),
Key: *options.Key,
Category: *options.Category,
}
if options.Value != nil {
v.Value = *options.Value
}
if options.HCL != nil {
v.HCL = *options.HCL
}
if options.Sensitive != nil {
v.Sensitive = *options.Sensitive
}
workspace := options.Workspace.Name
if m.workspaces[workspace] == nil {
m.workspaces[workspace] = &tfe.VariableList{}
}
vl := m.workspaces[workspace]
vl.Items = append(vl.Items, v)
return v, nil
}
func (m *mockVariables) Read(ctx context.Context, variableID string) (*tfe.Variable, error) {
panic("not implemented")
}
func (m *mockVariables) Update(ctx context.Context, variableID string, options tfe.VariableUpdateOptions) (*tfe.Variable, error) {
panic("not implemented")
}
func (m *mockVariables) Delete(ctx context.Context, variableID string) error {
panic("not implemented")
}
type mockWorkspaces struct {
client *mockClient
workspaceIDs map[string]*tfe.Workspace

View File

@ -78,10 +78,7 @@ func (b *Remote) opPlan(stopCtx, cancelCtx context.Context, op *backend.Operatio
))
}
variables, parseDiags := b.parseVariableValues(op)
diags = diags.Append(parseDiags)
if len(variables) > 0 {
if b.hasExplicitVariableValues(op) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Run variables are currently not supported",
@ -164,10 +161,12 @@ func (b *Remote) plan(stopCtx, cancelCtx context.Context, op *backend.Operation,
The remote workspace is configured to work with configuration at
%s relative to the target repository.
Therefore Terraform will upload the full contents of the following directory
to capture the filesystem context the remote workspace expects:
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at %s/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
%s
`), w.WorkingDirectory, configDir) + "\n")
`), w.WorkingDirectory, configDir, configDir) + "\n")
}
}

View File

@ -5,8 +5,8 @@ import (
"strings"
"testing"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/svchost/disco"
"github.com/hashicorp/terraform/version"
"github.com/zclconf/go-cty/cty"
@ -64,6 +64,19 @@ func TestRemote_config(t *testing.T) {
}),
confErr: "Failed to request discovery document",
},
// localhost advertises TFE services, but has no token in the credentials
"without_a_token": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.StringVal("localhost"),
"organization": cty.StringVal("hashicorp"),
"token": cty.NullVal(cty.String),
"workspaces": cty.ObjectVal(map[string]cty.Value{
"name": cty.StringVal("prod"),
"prefix": cty.NullVal(cty.String),
}),
}),
confErr: "terraform login localhost",
},
"with_a_name": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.NullVal(cty.String),

View File

@ -10,16 +10,18 @@ import (
"testing"
tfe "github.com/hashicorp/go-tfe"
svchost "github.com/hashicorp/terraform-svchost"
"github.com/hashicorp/terraform-svchost/auth"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/configs/configschema"
"github.com/hashicorp/terraform/httpclient"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/state/remote"
"github.com/hashicorp/terraform/svchost"
"github.com/hashicorp/terraform/svchost/auth"
"github.com/hashicorp/terraform/svchost/disco"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
"github.com/hashicorp/terraform/version"
"github.com/mitchellh/cli"
"github.com/zclconf/go-cty/cty"
@ -121,6 +123,7 @@ func testBackend(t *testing.T, obj cty.Value) (*Remote, func()) {
b.client.PolicyChecks = mc.PolicyChecks
b.client.Runs = mc.Runs
b.client.StateVersions = mc.StateVersions
b.client.Variables = mc.Variables
b.client.Workspaces = mc.Workspaces
b.ShowDiagnostics = func(vals ...interface{}) {
@ -268,6 +271,7 @@ func testDisco(s *httptest.Server) *disco.Disco {
"versions.v1": fmt.Sprintf("%s/v1/versions/", s.URL),
}
d := disco.NewWithCredentialsSource(credsSrc)
d.SetUserAgent(httpclient.TerraformUserAgent(version.String()))
d.ForceHostServices(svchost.Hostname(defaultHostname), services)
d.ForceHostServices(svchost.Hostname("localhost"), services)

View File

@ -150,9 +150,10 @@ func TestBackendStates(t *testing.T, b Backend) {
Status: states.ObjectReady,
SchemaVersion: 0,
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
// write a distinct known state to bar

View File

@ -7,6 +7,7 @@ import (
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
"github.com/zclconf/go-cty/cty"
)
// UnparsedVariableValue represents a variable value provided by the caller
@ -24,6 +25,22 @@ type UnparsedVariableValue interface {
ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics)
}
// ParseVariableValues processes a map of unparsed variable values by
// correlating each one with the given variable declarations which should
// be from a root module.
//
// The map of unparsed variable values should include variables from all
// possible root module declaration sources so that it is as complete as
// it can possibly be for the current operation. If any declared variables
// are not included in the map, ParseVariableValues will either substitute
// a configured default value or produce an error.
//
// If this function returns without any errors in the diagnostics, the
// resulting input values map is guaranteed to be valid and ready to pass
// to terraform.NewContext. If the diagnostics contains errors, the returned
// InputValues may be incomplete but will include the subset of variables
// that were successfully processed, allowing for careful analysis of the
// partial result.
func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]*configs.Variable) (terraform.InputValues, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
ret := make(terraform.InputValues, len(vv))
@ -63,12 +80,11 @@ func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]*
// should migrate to using environment variables instead before
// this becomes an error in a future major release.
if seenUndeclaredInFile < 3 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "Value for undeclared variable",
Detail: fmt.Sprintf("The root module does not declare a variable named %q. To use this value, add a \"variable\" block to the configuration.\n\nUsing a variables file to set an undeclared variable is deprecated and will become an error in a future release. If you wish to provide certain \"global\" settings to all configurations in your organization, use TF_VAR_... environment variables to set these instead.", name),
Subject: val.SourceRange.ToHCL().Ptr(),
})
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Warning,
"Value for undeclared variable",
fmt.Sprintf("The root module does not declare a variable named %q but a value was found in file %q. To use this value, add a \"variable\" block to the configuration.\n\nUsing a variables file to set an undeclared variable is deprecated and will become an error in a future release. If you wish to provide certain \"global\" settings to all configurations in your organization, use TF_VAR_... environment variables to set these instead.", name, val.SourceRange.Filename),
))
}
seenUndeclaredInFile++
@ -107,5 +123,40 @@ func ParseVariableValues(vv map[string]UnparsedVariableValue, decls map[string]*
})
}
// By this point we should've gathered all of the required root module
// variables from one of the many possible sources. We'll now populate
// any we haven't gathered as their defaults and fail if any of the
// missing ones are required.
for name, vc := range decls {
if _, defined := ret[name]; defined {
continue
}
if vc.Required() {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "No value for required variable",
Detail: fmt.Sprintf("The root module input variable %q is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.", name),
Subject: vc.DeclRange.Ptr(),
})
// We'll include a placeholder value anyway, just so that our
// result is complete for any calling code that wants to cautiously
// analyze it for diagnostic purposes. Since our diagnostics now
// includes an error, normal processing will ignore this result.
ret[name] = &terraform.InputValue{
Value: cty.DynamicVal,
SourceType: terraform.ValueFromConfig,
SourceRange: tfdiags.SourceRangeFromHCL(vc.DeclRange),
}
} else {
ret[name] = &terraform.InputValue{
Value: vc.Default,
SourceType: terraform.ValueFromConfig,
SourceRange: tfdiags.SourceRangeFromHCL(vc.DeclRange),
}
}
}
return ret, diags
}
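A hedged sketch of the contract documented above, using a made-up UnparsedVariableValue implementation (the real ones come from the CLI, tfvars files, and environment variables): values present in the map are parsed, declared-but-missing variables fall back to their defaults, and required variables with no value yield the new error diagnostic.
package example

import (
	"github.com/hashicorp/terraform/backend"
	"github.com/hashicorp/terraform/configs"
	"github.com/hashicorp/terraform/terraform"
	"github.com/hashicorp/terraform/tfdiags"
	"github.com/zclconf/go-cty/cty"
)

// literalVariableValue is an illustrative UnparsedVariableValue wrapping a
// plain string and attributing it to a -var command line argument.
type literalVariableValue string

func (v literalVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
	return &terraform.InputValue{
		Value:      cty.StringVal(string(v)),
		SourceType: terraform.ValueFromCLIArg,
	}, nil
}

func parseExample(decls map[string]*configs.Variable) (terraform.InputValues, tfdiags.Diagnostics) {
	vv := map[string]backend.UnparsedVariableValue{
		"region": literalVariableValue("us-east-1"), // hypothetical variable name
	}
	return backend.ParseVariableValues(vv, decls)
}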

View File

@ -3,6 +3,8 @@ package backend
import (
"testing"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/configs"
@ -17,19 +19,53 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
"undeclared2": testUnparsedVariableValue("2"),
"undeclared3": testUnparsedVariableValue("3"),
"undeclared4": testUnparsedVariableValue("4"),
"declared1": testUnparsedVariableValue("5"),
}
decls := map[string]*configs.Variable{
"declared1": {
Name: "declared1",
Type: cty.String,
ParsingMode: configs.VariableParseLiteral,
DeclRange: hcl.Range{
Filename: "fake.tf",
Start: hcl.Pos{Line: 2, Column: 1, Byte: 0},
End: hcl.Pos{Line: 2, Column: 1, Byte: 0},
},
},
"missing1": {
Name: "missing1",
Type: cty.String,
ParsingMode: configs.VariableParseLiteral,
DeclRange: hcl.Range{
Filename: "fake.tf",
Start: hcl.Pos{Line: 3, Column: 1, Byte: 0},
End: hcl.Pos{Line: 3, Column: 1, Byte: 0},
},
},
"missing2": {
Name: "missing2",
Type: cty.String,
ParsingMode: configs.VariableParseLiteral,
Default: cty.StringVal("default for missing2"),
DeclRange: hcl.Range{
Filename: "fake.tf",
Start: hcl.Pos{Line: 4, Column: 1, Byte: 0},
End: hcl.Pos{Line: 4, Column: 1, Byte: 0},
},
},
}
decls := map[string]*configs.Variable{}
_, diags := ParseVariableValues(vv, decls)
gotVals, diags := ParseVariableValues(vv, decls)
for _, diag := range diags {
t.Logf("%s: %s", diag.Description().Summary, diag.Description().Detail)
}
if got, want := len(diags), 4; got != want {
if got, want := len(diags), 5; got != want {
t.Fatalf("wrong number of diagnostics %d; want %d", got, want)
}
const undeclSingular = `Value for undeclared variable`
const undeclPlural = `Values for undeclared variables`
const missingRequired = `No value for required variable`
if got, want := diags[0].Description().Summary, undeclSingular; got != want {
t.Errorf("wrong summary for diagnostic 0\ngot: %s\nwant: %s", got, want)
@ -43,6 +79,42 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
if got, want := diags[3].Description().Summary, undeclPlural; got != want {
t.Errorf("wrong summary for diagnostic 3\ngot: %s\nwant: %s", got, want)
}
if got, want := diags[4].Description().Summary, missingRequired; got != want {
t.Errorf("wrong summary for diagnostic 4\ngot: %s\nwant: %s", got, want)
}
wantVals := terraform.InputValues{
"declared1": {
Value: cty.StringVal("5"),
SourceType: terraform.ValueFromNamedFile,
SourceRange: tfdiags.SourceRange{
Filename: "fake.tfvars",
Start: tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
End: tfdiags.SourcePos{Line: 1, Column: 1, Byte: 0},
},
},
"missing1": {
Value: cty.DynamicVal,
SourceType: terraform.ValueFromConfig,
SourceRange: tfdiags.SourceRange{
Filename: "fake.tf",
Start: tfdiags.SourcePos{Line: 3, Column: 1, Byte: 0},
End: tfdiags.SourcePos{Line: 3, Column: 1, Byte: 0},
},
},
"missing2": {
Value: cty.StringVal("default for missing2"),
SourceType: terraform.ValueFromConfig,
SourceRange: tfdiags.SourceRange{
Filename: "fake.tf",
Start: tfdiags.SourcePos{Line: 4, Column: 1, Byte: 0},
End: tfdiags.SourcePos{Line: 4, Column: 1, Byte: 0},
},
},
}
if diff := cmp.Diff(wantVals, gotVals, cmp.Comparer(cty.Value.RawEquals)); diff != "" {
t.Errorf("wrong result\n%s", diff)
}
}
type testUnparsedVariableValue string

View File

@ -23,7 +23,6 @@ resource "test_resource_import_other" "foo" {
{
ImportState: true,
ResourceName: "test_resource_import_other.foo",
ImportStateCheck: func(iss []*terraform.InstanceState) error {
if got, want := len(iss), 2; got != want {
return fmt.Errorf("wrong number of resources %d; want %d", got, want)

View File

@ -128,7 +128,7 @@ func applyFn(ctx context.Context) error {
if p.OSType == "" {
switch connType := state.Ephemeral.ConnInfo["type"]; connType {
case "ssh", "":
case "ssh", "": // The default connection type is ssh, so if the type is empty assume ssh
p.OSType = "linux"
case "winrm":
p.OSType = "windows"
@ -259,16 +259,30 @@ func (p *provisioner) generateAutosignToken(certname string) (string, error) {
}
func (p *provisioner) installPuppetAgentOpenSource() error {
task := "puppet_agent::install"
connType := p.instanceState.Ephemeral.ConnInfo["type"]
if connType == "" {
connType = "ssh"
}
agentConnInfo := map[string]string{
"type": connType,
"host": p.instanceState.Ephemeral.ConnInfo["host"],
"user": p.instanceState.Ephemeral.ConnInfo["user"],
"password": p.instanceState.Ephemeral.ConnInfo["password"], // Required on Windows only
}
result, err := bolt.Task(
p.instanceState.Ephemeral.ConnInfo,
agentConnInfo,
p.BoltTimeout,
p.UseSudo,
"puppet_agent::install",
task,
nil,
)
if err != nil || result.Items[0].Status != "success" {
return fmt.Errorf("puppet_agent::install failed: %s\n%+v", err, result)
return fmt.Errorf("%s failed: %s\n%+v", task, err, result)
}
return nil

View File

@ -7,6 +7,7 @@ import (
"github.com/hashicorp/go-checkpoint"
"github.com/hashicorp/terraform/command"
"github.com/hashicorp/terraform/command/cliconfig"
)
func init() {
@ -17,7 +18,7 @@ var checkpointResult chan *checkpoint.CheckResponse
// runCheckpoint runs a HashiCorp Checkpoint request. You can read about
// Checkpoint here: https://github.com/hashicorp/go-checkpoint.
func runCheckpoint(c *Config) {
func runCheckpoint(c *cliconfig.Config) {
// If the user doesn't want checkpoint at all, then return.
if c.DisableCheckpoint {
log.Printf("[INFO] Checkpoint disabled. Not running.")
@ -25,7 +26,7 @@ func runCheckpoint(c *Config) {
return
}
configDir, err := ConfigDir()
configDir, err := cliconfig.ConfigDir()
if err != nil {
log.Printf("[ERR] Checkpoint setup error: %s", err)
checkpointResult <- nil

View File

@ -248,11 +248,15 @@ Usage: terraform apply [options] [DIR-OR-PLAN]
Options:
-auto-approve Skip interactive approval of plan before applying.
-backup=path Path to backup the existing state file before
modifying. Defaults to the "-state-out" path with
".backup" extension. Set to "-" to disable backup.
-auto-approve Skip interactive approval of plan before applying.
-compact-warnings If Terraform produces any warnings that are not
accompanied by errors, show them in a more compact
form that includes only the summary messages.
-lock=true Lock the state file when locking is supported.

View File

@ -29,7 +29,10 @@ func TestApply_destroy(t *testing.T) {
AttrsJSON: []byte(`{"id":"bar"}`),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
})
statePath := testStateFile(t, originalState)
@ -122,7 +125,10 @@ func TestApply_destroyLockedState(t *testing.T) {
AttrsJSON: []byte(`{"id":"bar"}`),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
})
statePath := testStateFile(t, originalState)
@ -189,12 +195,15 @@ func TestApply_destroyTargeted(t *testing.T) {
Mode: addrs.ManagedResourceMode,
Type: "test_instance",
Name: "foo",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
}.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance),
&states.ResourceInstanceObjectSrc{
AttrsJSON: []byte(`{"id":"i-ab123"}`),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
s.SetResourceInstanceCurrent(
addrs.Resource{
@ -203,10 +212,14 @@ func TestApply_destroyTargeted(t *testing.T) {
Name: "foo",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
&states.ResourceInstanceObjectSrc{
AttrsJSON: []byte(`{"id":"i-abc123"}`),
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"i-abc123"}`),
Dependencies: []addrs.AbsResource{mustResourceAddr("test_instance.foo")},
Status: states.ObjectReady,
},
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
)
})
statePath := testStateFile(t, originalState)

View File

@ -177,7 +177,7 @@ func TestApply_parallelism(t *testing.T) {
// to ApplyResourceChange, we need to use a number of separate providers
// here. They will all have the same mock implementation function assigned
// but crucially they will each have their own mutex.
providerFactories := map[string]providers.Factory{}
providerFactories := map[addrs.Provider]providers.Factory{}
for i := 0; i < 10; i++ {
name := fmt.Sprintf("test%d", i)
provider := &terraform.MockProvider{}
@ -203,7 +203,7 @@ func TestApply_parallelism(t *testing.T) {
NewState: cty.EmptyObjectVal,
}
}
providerFactories[name] = providers.FactoryFixed(provider)
providerFactories[addrs.NewLegacyProvider(name)] = providers.FactoryFixed(provider)
}
testingOverrides := &testingOverrides{
ProviderResolver: providers.ResolverFixed(providerFactories),
@ -423,7 +423,11 @@ func TestApply_input(t *testing.T) {
test = false
defer func() { test = true }()
// Set some default reader/writers for the inputs
// The configuration for this test includes a declaration of variable
// "foo" with no default, and we don't set it on the command line below,
// so the apply command will produce an interactive prompt for the
// value of var.foo. We'll answer "foo" here, and we expect the output
// value "result" to echo that back to us below.
defaultInputReader = bytes.NewBufferString("foo\n")
defaultInputWriter = new(bytes.Buffer)
@ -829,7 +833,10 @@ func TestApply_refresh(t *testing.T) {
AttrsJSON: []byte(`{"ami":"bar"}`),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
})
statePath := testStateFile(t, originalState)
@ -983,7 +990,10 @@ func TestApply_state(t *testing.T) {
AttrsJSON: []byte(`{"ami":"foo"}`),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
})
statePath := testStateFile(t, originalState)
@ -1347,7 +1357,10 @@ func TestApply_backup(t *testing.T) {
AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"),
Status: states.ObjectReady,
},
addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
})
statePath := testStateFile(t, originalState)
@ -1648,7 +1661,10 @@ func applyFixturePlanFile(t *testing.T) string {
Type: "test_instance",
Name: "foo",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
Before: priorValRaw,

View File

@ -17,7 +17,7 @@ import (
"github.com/hashicorp/hcl"
"github.com/hashicorp/terraform/svchost"
"github.com/hashicorp/terraform-svchost"
"github.com/hashicorp/terraform/tfdiags"
)

View File

@ -12,10 +12,10 @@ import (
"github.com/zclconf/go-cty/cty"
ctyjson "github.com/zclconf/go-cty/cty/json"
"github.com/hashicorp/terraform-svchost"
svcauth "github.com/hashicorp/terraform-svchost/auth"
"github.com/hashicorp/terraform/configs/hcl2shim"
pluginDiscovery "github.com/hashicorp/terraform/plugin/discovery"
"github.com/hashicorp/terraform/svchost"
svcauth "github.com/hashicorp/terraform/svchost/auth"
)
// credentialsConfigFile returns the path for the special configuration file

View File

@ -10,8 +10,8 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/svchost"
svcauth "github.com/hashicorp/terraform/svchost/auth"
"github.com/hashicorp/terraform-svchost"
svcauth "github.com/hashicorp/terraform-svchost/auth"
)
func TestCredentialsForHost(t *testing.T) {

View File

@ -100,6 +100,12 @@ func tempDir(t *testing.T) string {
if err != nil {
t.Fatalf("err: %s", err)
}
dir, err = filepath.EvalSymlinks(dir)
if err != nil {
t.Fatal(err)
}
if err := os.RemoveAll(dir); err != nil {
t.Fatalf("err: %s", err)
}
@ -114,8 +120,8 @@ func testFixturePath(name string) string {
func metaOverridesForProvider(p providers.Interface) *testingOverrides {
return &testingOverrides{
ProviderResolver: providers.ResolverFixed(
map[string]providers.Factory{
"test": providers.FactoryFixed(p),
map[addrs.Provider]providers.Factory{
addrs.NewLegacyProvider("test"): providers.FactoryFixed(p),
},
),
}
@ -124,8 +130,8 @@ func metaOverridesForProvider(p providers.Interface) *testingOverrides {
func metaOverridesForProviderAndProvisioner(p providers.Interface, pr provisioners.Interface) *testingOverrides {
return &testingOverrides{
ProviderResolver: providers.ResolverFixed(
map[string]providers.Factory{
"test": providers.FactoryFixed(p),
map[addrs.Provider]providers.Factory{
addrs.NewLegacyProvider("test"): providers.FactoryFixed(p),
},
),
Provisioners: map[string]provisioners.Factory{
@ -260,12 +266,15 @@ func testState() *states.State {
// The weird whitespace here is reflective of how this would
// get written out in a real state file, due to the indentation
// of all of the containing wrapping objects and arrays.
AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"),
Status: states.ObjectReady,
AttrsJSON: []byte("{\n \"id\": \"bar\"\n }"),
Status: states.ObjectReady,
Dependencies: []addrs.AbsResource{},
DependsOn: []addrs.Referenceable{},
},
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
)
// DeepCopy is used here to ensure our synthetic state matches exactly
// with a state that will have been copied during the command
@ -484,6 +493,11 @@ func testTempDir(t *testing.T) string {
t.Fatalf("err: %s", err)
}
d, err = filepath.EvalSymlinks(d)
if err != nil {
t.Fatal(err)
}
return d
}
@ -866,3 +880,11 @@ func normalizeJSON(t *testing.T, src []byte) string {
}
return buf.String()
}
func mustResourceAddr(s string) addrs.AbsResource {
addr, diags := addrs.ParseAbsResourceStr(s)
if diags.HasErrors() {
panic(diags.Err())
}
return addr
}

View File

@ -77,6 +77,7 @@ func (c *ConsoleCommand) Run(args []string) int {
opReq := c.Operation(b)
opReq.ConfigDir = configPath
opReq.ConfigLoader, err = c.initConfigLoader()
opReq.AllowUnsetVariables = true // we'll just evaluate them as unknown
if err != nil {
diags = diags.Append(err)
c.showDiagnostics(diags)
@ -95,12 +96,8 @@ func (c *ConsoleCommand) Run(args []string) int {
// Get the context
ctx, _, ctxDiags := local.Context(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Creating the context can result in a lock, so ensure we release it
defer func() {
err := opReq.StateLocker.Unlock(nil)
if err != nil {
@ -108,6 +105,12 @@ func (c *ConsoleCommand) Run(args []string) int {
}
}()
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Setup the UI so we can output directly to stdout
ui := &cli.BasicUi{
Writer: wrappedstreams.Stdout(),

View File

@ -84,3 +84,52 @@ func TestConsole_tfvars(t *testing.T) {
t.Fatalf("bad: %q", actual)
}
}
func TestConsole_unsetRequiredVars(t *testing.T) {
// This test is verifying that it's possible to run "terraform console"
// without providing values for all required variables, without
// "terraform console" producing an interactive prompt for those variables
// or producing errors. Instead, it should allow evaluation in that
// partial context but see the unset variables' values as being unknown.
tmp, cwd := testCwd(t)
defer testFixCwd(t, tmp, cwd)
p := testProvider()
ui := new(cli.MockUi)
c := &ConsoleCommand{
Meta: Meta{
testingOverrides: metaOverridesForProvider(p),
Ui: ui,
},
}
var output bytes.Buffer
defer testStdinPipe(t, strings.NewReader("var.foo\n"))()
outCloser := testStdoutCapture(t, &output)
args := []string{
// This test fixture includes variable "foo" {}, which we are
// intentionally not setting here.
testFixturePath("apply-vars"),
}
code := c.Run(args)
outCloser()
// Because we're running "terraform console" in piped input mode, we're
// expecting it to return a nonzero exit status here but the message
// must be the one indicating that it did attempt to evaluate var.foo and
// got an unknown value in return, rather than an error about var.foo
// not being set or a failure to prompt for it.
if code == 0 {
t.Fatalf("unexpected success\n%s", ui.OutputWriter.String())
}
// The error message should be the one console produces when it encounters
// an unknown value.
got := ui.ErrorWriter.String()
want := `Error: Result depends on values that cannot be determined`
if !strings.Contains(got, want) {
t.Fatalf("wrong output\ngot:\n%s\n\nwant string containing %q", got, want)
}
}

View File

@ -1,66 +0,0 @@
package command
import (
"fmt"
"os"
"strings"
"github.com/hashicorp/terraform/dag"
"github.com/mitchellh/cli"
)
// DebugJSON2DotCommand is a Command implementation that translates a json
// graph debug log to Dot format.
type DebugJSON2DotCommand struct {
Meta
}
func (c *DebugJSON2DotCommand) Run(args []string) int {
args, err := c.Meta.process(args, true)
if err != nil {
return 1
}
cmdFlags := c.Meta.extendedFlagSet("debug json2dot")
if err := cmdFlags.Parse(args); err != nil {
return cli.RunResultHelp
}
fileName := cmdFlags.Arg(0)
if fileName == "" {
return cli.RunResultHelp
}
f, err := os.Open(fileName)
if err != nil {
c.Ui.Error(fmt.Sprintf(errInvalidLog, err))
return cli.RunResultHelp
}
dot, err := dag.JSON2Dot(f)
if err != nil {
c.Ui.Error(fmt.Sprintf(errInvalidLog, err))
return cli.RunResultHelp
}
c.Ui.Output(string(dot))
return 0
}
func (c *DebugJSON2DotCommand) Help() string {
helpText := `
Usage: terraform debug json2dot input.json
Translate a graph debug file to dot format.
This command takes a single json graph log file and converts it to a single
dot graph written to stdout.
`
return strings.TrimSpace(helpText)
}
func (c *DebugJSON2DotCommand) Synopsis() string {
return "Convert json graph log to dot"
}
const errInvalidLog = `Error parsing log file: %[1]s`

View File

@ -1,53 +0,0 @@
package command
import (
"io/ioutil"
"os"
"strings"
"testing"
"github.com/hashicorp/terraform/dag"
"github.com/mitchellh/cli"
)
func TestDebugJSON2Dot(t *testing.T) {
// create the graph JSON output
logFile, err := ioutil.TempFile(testingDir, "tf")
if err != nil {
t.Fatal(err)
}
defer os.Remove(logFile.Name())
var g dag.Graph
g.SetDebugWriter(logFile)
g.Add(1)
g.Add(2)
g.Add(3)
g.Connect(dag.BasicEdge(1, 2))
g.Connect(dag.BasicEdge(2, 3))
ui := new(cli.MockUi)
c := &DebugJSON2DotCommand{
Meta: Meta{
testingOverrides: metaOverridesForProvider(testProvider()),
Ui: ui,
},
}
args := []string{
logFile.Name(),
}
if code := c.Run(args); code != 0 {
t.Fatalf("bad: \n%s", ui.ErrorWriter.String())
}
output := ui.OutputWriter.String()
if !strings.HasPrefix(output, "digraph {") {
t.Fatalf("doesn't look like digraph: %s", output)
}
if !strings.Contains(output, `subgraph "root" {`) {
t.Fatalf("doesn't contains root subgraph: %s", output)
}
}

View File

@ -177,6 +177,51 @@ func Diagnostic(diag tfdiags.Diagnostic, sources map[string][]byte, color *color
return buf.String()
}
// DiagnosticWarningsCompact is an alternative to Diagnostic for when all of
// the given diagnostics are warnings and we want to show them compactly,
// with only two lines per warning and excluding all of the detail information.
//
// The caller may optionally pre-process the given diagnostics with
// ConsolidateWarnings, in which case this function will recognize consolidated
// messages and include an indication that they are consolidated.
//
// Do not pass non-warning diagnostics to this function, or the result will
// be nonsense.
func DiagnosticWarningsCompact(diags tfdiags.Diagnostics, color *colorstring.Colorize) string {
var b strings.Builder
b.WriteString(color.Color("[bold][yellow]Warnings:[reset]\n\n"))
for _, diag := range diags {
sources := tfdiags.WarningGroupSourceRanges(diag)
b.WriteString(fmt.Sprintf("- %s\n", diag.Description().Summary))
if len(sources) > 0 {
mainSource := sources[0]
if mainSource.Subject != nil {
if len(sources) > 1 {
b.WriteString(fmt.Sprintf(
" on %s line %d (and %d more)\n",
mainSource.Subject.Filename,
mainSource.Subject.Start.Line,
len(sources)-1,
))
} else {
b.WriteString(fmt.Sprintf(
" on %s line %d\n",
mainSource.Subject.Filename,
mainSource.Subject.Start.Line,
))
}
} else if len(sources) > 1 {
b.WriteString(fmt.Sprintf(
" (%d occurences of this warning)\n",
len(sources),
))
}
}
}
return b.String()
}
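A short usage sketch, not taken from this change: consolidate repeated warnings first (matching the test below, which uses a threshold of 1) and then print the compact rendering. The warnings variable and the stderr destination are assumptions.
// warnings is an assumed tfdiags.Diagnostics value containing only warnings.
warnings = warnings.ConsolidateWarnings(1)
compact := DiagnosticWarningsCompact(warnings, &colorstring.Colorize{
	Colors:  colorstring.DefaultColors,
	Disable: true, // strip the [bold]/[yellow]/[reset] markers for plain output
})
fmt.Fprint(os.Stderr, compact)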
func parseRange(src []byte, rng hcl.Range) (*hcl.File, int) {
filename := rng.Filename
offset := rng.Start.Byte

View File

@ -0,0 +1,73 @@
package format
import (
"testing"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/hcl/v2"
"github.com/mitchellh/colorstring"
"github.com/hashicorp/terraform/tfdiags"
)
func TestDiagnosticWarningsCompact(t *testing.T) {
var diags tfdiags.Diagnostics
diags = diags.Append(tfdiags.SimpleWarning("foo"))
diags = diags.Append(tfdiags.SimpleWarning("foo"))
diags = diags.Append(tfdiags.SimpleWarning("bar"))
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "source foo",
Detail: "...",
Subject: &hcl.Range{
Filename: "source.tf",
Start: hcl.Pos{Line: 2, Column: 1, Byte: 5},
End: hcl.Pos{Line: 2, Column: 1, Byte: 5},
},
})
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "source foo",
Detail: "...",
Subject: &hcl.Range{
Filename: "source.tf",
Start: hcl.Pos{Line: 3, Column: 1, Byte: 7},
End: hcl.Pos{Line: 3, Column: 1, Byte: 7},
},
})
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "source bar",
Detail: "...",
Subject: &hcl.Range{
Filename: "source2.tf",
Start: hcl.Pos{Line: 1, Column: 1, Byte: 1},
End: hcl.Pos{Line: 1, Column: 1, Byte: 1},
},
})
// ConsolidateWarnings groups together the ones
// that have source location information and that
// have the same summary text.
diags = diags.ConsolidateWarnings(1)
// A zero-value Colorize just passes all the formatting
// codes back to us, so we can test them literally.
got := DiagnosticWarningsCompact(diags, &colorstring.Colorize{})
want := `[bold][yellow]Warnings:[reset]
- foo
- foo
- bar
- source foo
on source.tf line 2 (and 1 more)
- source bar
on source2.tf line 1
`
if got != want {
t.Errorf(
"wrong result\ngot:\n%s\n\nwant:\n%s\n\ndiff:\n%s",
got, want, cmp.Diff(want, got),
)
}
}

View File

@ -70,22 +70,7 @@ func ResourceChange(
}
buf.WriteString(color.Color("[reset]\n"))
switch change.Action {
case plans.Create:
buf.WriteString(color.Color("[green] +[reset] "))
case plans.Read:
buf.WriteString(color.Color("[cyan] <=[reset] "))
case plans.Update:
buf.WriteString(color.Color("[yellow] ~[reset] "))
case plans.DeleteThenCreate:
buf.WriteString(color.Color("[red]-[reset]/[green]+[reset] "))
case plans.CreateThenDelete:
buf.WriteString(color.Color("[green]+[reset]/[red]-[reset] "))
case plans.Delete:
buf.WriteString(color.Color("[red] -[reset] "))
default:
buf.WriteString(color.Color("??? "))
}
buf.WriteString(color.Color(DiffActionSymbol(change.Action)) + " ")
switch addr.Resource.Resource.Mode {
case addrs.ManagedResourceMode:
@ -502,7 +487,7 @@ func (p *blockBodyDiffPrinter) writeValue(val cty.Value, action plans.Action, in
ty, err := ctyjson.ImpliedType(src)
// check for the special case of "null", which decodes to nil,
// and just allow it to be printed out directly
if err == nil && !ty.IsPrimitiveType() && val.AsString() != "null" {
if err == nil && !ty.IsPrimitiveType() && strings.TrimSpace(val.AsString()) != "null" {
jv, err := ctyjson.Unmarshal(src, ty)
if err == nil {
p.buf.WriteString("jsonencode(")
@ -520,6 +505,21 @@ func (p *blockBodyDiffPrinter) writeValue(val cty.Value, action plans.Action, in
}
}
}
if strings.Contains(val.AsString(), "\n") {
// It's a multi-line string, so we want to use the multi-line
// rendering so it'll be readable. Rather than re-implement
// that here, we'll just re-use the multi-line string diff
// printer with no changes, which ends up producing the
// result we want here.
// The path argument is nil because we don't track path
// information into strings and we know that a string can't
// have any indices or attributes that might need to be marked
// as (requires replacement), which is what that argument is for.
p.writeValueDiff(val, val, indent, nil)
break
}
fmt.Fprintf(p.buf, "%q", val.AsString())
case cty.Bool:
if val.True() {
@ -1014,8 +1014,9 @@ func (p *blockBodyDiffPrinter) writeActionSymbol(action plans.Action) {
}
func (p *blockBodyDiffPrinter) pathForcesNewResource(path cty.Path) bool {
if !p.action.IsReplace() {
// "requiredReplace" only applies when the instance is being replaced
if !p.action.IsReplace() || p.requiredReplace.Empty() {
// "requiredReplace" only applies when the instance is being replaced,
// and we should only inspect that set if it is not empty
return false
}
return p.requiredReplace.Has(path)
@ -1071,8 +1072,8 @@ func ctySequenceDiff(old, new []cty.Value) []*plans.Change {
var oldI, newI, lcsI int
for oldI < len(old) || newI < len(new) || lcsI < len(lcs) {
for oldI < len(old) && (lcsI >= len(lcs) || !old[oldI].RawEquals(lcs[lcsI])) {
isObjectDiff := old[oldI].Type().IsObjectType() && (newI >= len(new) || new[newI].Type().IsObjectType())
if isObjectDiff && newI < len(new) {
isObjectDiff := old[oldI].Type().IsObjectType() && newI < len(new) && new[newI].Type().IsObjectType() && (lcsI >= len(lcs) || !new[newI].RawEquals(lcs[lcsI]))
if isObjectDiff {
ret = append(ret, &plans.Change{
Action: plans.Update,
Before: old[oldI],
@ -1190,3 +1191,26 @@ func ctyNullBlockSetAsEmpty(in cty.Value) cty.Value {
// sets, so our result here is always a set.
return cty.SetValEmpty(in.Type().ElementType())
}
// DiffActionSymbol returns a string that, once passed through a
// colorstring.Colorize, will produce a result that can be written
// to a terminal to produce a symbol made of three printable
// characters, possibly interspersed with VT100 color codes.
func DiffActionSymbol(action plans.Action) string {
switch action {
case plans.DeleteThenCreate:
return "[red]-[reset]/[green]+[reset]"
case plans.CreateThenDelete:
return "[green]+[reset]/[red]-[reset]"
case plans.Create:
return " [green]+[reset]"
case plans.Delete:
return " [red]-[reset]"
case plans.Read:
return " [cyan]<=[reset]"
case plans.Update:
return " [yellow]~[reset]"
default:
return " ?"
}
}
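For reference, a hedged sketch of using DiffActionSymbol outside the diff printer; the plans.Update choice and the literal resource header are arbitrary examples.
// Colorize expands the [yellow]/[reset] markers into VT100 escape codes;
// setting Disable instead would strip them for plain-text output.
colorize := &colorstring.Colorize{Colors: colorstring.DefaultColors, Reset: true}
fmt.Println(colorize.Color(DiffActionSymbol(plans.Update)) + ` resource "test_instance" "example" {`)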

View File

@ -50,6 +50,26 @@ func TestResourceChange_primitiveTypes(t *testing.T) {
+ resource "test_instance" "example" {
+ string = "null"
}
`,
},
"creation (null string with extra whitespace)": {
Action: plans.Create,
Mode: addrs.ManagedResourceMode,
Before: cty.NullVal(cty.EmptyObject),
After: cty.ObjectVal(map[string]cty.Value{
"string": cty.StringVal("null "),
}),
Schema: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"string": {Type: cty.String, Optional: true},
},
},
RequiredReplace: cty.NewPathSet(),
Tainted: false,
ExpectedOutput: ` # test_instance.example will be created
+ resource "test_instance" "example" {
+ string = "null "
}
`,
},
"deletion": {
@ -207,6 +227,37 @@ new line
+ new line
EOT
}
`,
},
"addition of multi-line string field": {
Action: plans.Update,
Mode: addrs.ManagedResourceMode,
Before: cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("i-02ae66f368e8518a9"),
"more_lines": cty.NullVal(cty.String),
}),
After: cty.ObjectVal(map[string]cty.Value{
"id": cty.UnknownVal(cty.String),
"more_lines": cty.StringVal(`original
new line
`),
}),
Schema: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"id": {Type: cty.String, Optional: true, Computed: true},
"more_lines": {Type: cty.String, Optional: true},
},
},
RequiredReplace: cty.NewPathSet(),
Tainted: false,
ExpectedOutput: ` # test_instance.example will be updated in-place
~ resource "test_instance" "example" {
~ id = "i-02ae66f368e8518a9" -> (known after apply)
+ more_lines = <<~EOT
original
new line
EOT
}
`,
},
"force-new update of multi-line string field": {
@ -857,11 +908,11 @@ func TestResourceChange_JSON(t *testing.T) {
Mode: addrs.ManagedResourceMode,
Before: cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("i-02ae66f368e8518a9"),
"json_field": cty.StringVal(`[{"one": "111"}, {"two": "222"}]`),
"json_field": cty.StringVal(`[{"one": "111"}, {"two": "222"}, {"three": "333"}]`),
}),
After: cty.ObjectVal(map[string]cty.Value{
"id": cty.UnknownVal(cty.String),
"json_field": cty.StringVal(`[{"one": "111"}]`),
"json_field": cty.StringVal(`[{"one": "111"}, {"three": "333"}]`),
}),
Schema: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
@ -882,6 +933,9 @@ func TestResourceChange_JSON(t *testing.T) {
- {
- two = "222"
},
{
three = "333"
},
]
)
}
@ -3103,7 +3157,10 @@ func runTestCases(t *testing.T, testCases map[string]testCase) {
Type: "test_instance",
Name: "example",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
ChangeSrc: plans.ChangeSrc{
Action: tc.Action,
Before: before,

View File

@ -1,306 +0,0 @@
package format
import (
"bytes"
"fmt"
"log"
"sort"
"strings"
"github.com/mitchellh/colorstring"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/terraform"
)
// Plan is a representation of a plan optimized for display to
// an end-user, as opposed to terraform.Plan which is for internal use.
//
// DisplayPlan excludes implementation details that may otherwise appear
// in the main plan, such as destroy actions on data sources (which are
// there only to clean up the state).
type Plan struct {
Resources []*InstanceDiff
}
// InstanceDiff is a representation of an instance diff optimized
// for display, in conjunction with DisplayPlan.
type InstanceDiff struct {
Addr *terraform.ResourceAddress
Action plans.Action
// Attributes describes changes to the attributes of the instance.
//
// For destroy diffs this is always nil.
Attributes []*AttributeDiff
Tainted bool
Deposed bool
}
// AttributeDiff is a representation of an attribute diff optimized
// for display, in conjunction with DisplayInstanceDiff.
type AttributeDiff struct {
// Path is a dot-delimited traversal through possibly many levels of list and map structure,
// intended for display purposes only.
Path string
Action plans.Action
OldValue string
NewValue string
NewComputed bool
Sensitive bool
ForcesNew bool
}
// PlanStats gives summary counts for a Plan.
type PlanStats struct {
ToAdd, ToChange, ToDestroy int
}
// NewPlan produces a display-oriented Plan from a terraform.Plan.
func NewPlan(changes *plans.Changes) *Plan {
log.Printf("[TRACE] NewPlan for %#v", changes)
ret := &Plan{}
if changes == nil {
// Nothing to do!
return ret
}
for _, rc := range changes.Resources {
addr := rc.Addr
log.Printf("[TRACE] NewPlan found %s (%s)", addr, rc.Action)
dataSource := addr.Resource.Resource.Mode == addrs.DataResourceMode
// We create "delete" actions for data resources so we can clean
// up their entries in state, but this is an implementation detail
// that users shouldn't see.
if dataSource && rc.Action == plans.Delete {
continue
}
if rc.Action == plans.NoOp {
continue
}
// For now we'll shim this to work with our old types.
// TODO: Update for the new plan types, ideally also switching over to
// a structural diff renderer instead of a flat renderer.
did := &InstanceDiff{
Addr: terraform.NewLegacyResourceInstanceAddress(addr),
Action: rc.Action,
}
if rc.DeposedKey != states.NotDeposed {
did.Deposed = true
}
// Since this is just a temporary stub implementation on the way
// to us replacing this with the structural diff renderer, we currently
// don't include any attributes here.
// FIXME: Implement the structural diff renderer to replace this
// codepath altogether.
ret.Resources = append(ret.Resources, did)
}
// Sort the instance diffs by their addresses for display.
sort.Slice(ret.Resources, func(i, j int) bool {
iAddr := ret.Resources[i].Addr
jAddr := ret.Resources[j].Addr
return iAddr.Less(jAddr)
})
return ret
}
// Format produces and returns a text representation of the receiving plan
// intended for display in a terminal.
//
// If color is not nil, it is used to colorize the output.
func (p *Plan) Format(color *colorstring.Colorize) string {
if p.Empty() {
return "This plan does nothing."
}
if color == nil {
color = &colorstring.Colorize{
Colors: colorstring.DefaultColors,
Reset: false,
}
}
// Find the longest path length of all the paths that are changing,
// so we can align them all.
keyLen := 0
for _, r := range p.Resources {
for _, attr := range r.Attributes {
key := attr.Path
if len(key) > keyLen {
keyLen = len(key)
}
}
}
buf := new(bytes.Buffer)
for _, r := range p.Resources {
formatPlanInstanceDiff(buf, r, keyLen, color)
}
return strings.TrimSpace(buf.String())
}
// Stats returns statistics about the plan
func (p *Plan) Stats() PlanStats {
var ret PlanStats
for _, r := range p.Resources {
switch r.Action {
case plans.Create:
ret.ToAdd++
case plans.Update:
ret.ToChange++
case plans.DeleteThenCreate, plans.CreateThenDelete:
ret.ToAdd++
ret.ToDestroy++
case plans.Delete:
ret.ToDestroy++
}
}
return ret
}
// ActionCounts returns the number of diffs for each action type
func (p *Plan) ActionCounts() map[plans.Action]int {
ret := map[plans.Action]int{}
for _, r := range p.Resources {
ret[r.Action]++
}
return ret
}
// Empty returns true if there are no resource diffs in the receiving plan.
func (p *Plan) Empty() bool {
return len(p.Resources) == 0
}
// DiffActionSymbol returns a string that, once passed through a
// colorstring.Colorize, will produce a result that can be written
// to a terminal to produce a symbol made of three printable
// characters, possibly interspersed with VT100 color codes.
func DiffActionSymbol(action plans.Action) string {
switch action {
case plans.DeleteThenCreate:
return "[red]-[reset]/[green]+[reset]"
case plans.CreateThenDelete:
return "[green]+[reset]/[red]-[reset]"
case plans.Create:
return " [green]+[reset]"
case plans.Delete:
return " [red]-[reset]"
case plans.Read:
return " [cyan]<=[reset]"
case plans.Update:
return " [yellow]~[reset]"
default:
return " ?"
}
}
// formatPlanInstanceDiff writes the text representation of the given instance diff
// to the given buffer, using the given colorizer.
func formatPlanInstanceDiff(buf *bytes.Buffer, r *InstanceDiff, keyLen int, colorizer *colorstring.Colorize) {
addrStr := r.Addr.String()
// Determine the color for the text (green for adding, yellow
// for change, red for delete), and symbol, and output the
// resource header.
color := "yellow"
symbol := DiffActionSymbol(r.Action)
oldValues := true
switch r.Action {
case plans.DeleteThenCreate, plans.CreateThenDelete:
color = "yellow"
case plans.Create:
color = "green"
oldValues = false
case plans.Delete:
color = "red"
case plans.Read:
color = "cyan"
oldValues = false
}
var extraStr string
if r.Tainted {
extraStr = extraStr + " (tainted)"
}
if r.Deposed {
extraStr = extraStr + " (deposed)"
}
if r.Action.IsReplace() {
extraStr = extraStr + colorizer.Color(" [red][bold](new resource required)")
}
buf.WriteString(
colorizer.Color(fmt.Sprintf(
"[%s]%s [%s]%s%s\n",
color, symbol, color, addrStr, extraStr,
)),
)
for _, attr := range r.Attributes {
v := attr.NewValue
var dispV string
switch {
case v == "" && attr.NewComputed:
dispV = "<computed>"
case attr.Sensitive:
dispV = "<sensitive>"
default:
dispV = fmt.Sprintf("%q", v)
}
updateMsg := ""
switch {
case attr.ForcesNew && r.Action.IsReplace():
updateMsg = colorizer.Color(" [red](forces new resource)")
case attr.Sensitive && oldValues:
updateMsg = colorizer.Color(" [yellow](attribute changed)")
}
if oldValues {
u := attr.OldValue
var dispU string
switch {
case attr.Sensitive:
dispU = "<sensitive>"
default:
dispU = fmt.Sprintf("%q", u)
}
buf.WriteString(fmt.Sprintf(
" %s:%s %s => %s%s\n",
attr.Path,
strings.Repeat(" ", keyLen-len(attr.Path)),
dispU, dispV,
updateMsg,
))
} else {
buf.WriteString(fmt.Sprintf(
" %s:%s %s%s\n",
attr.Path,
strings.Repeat(" ", keyLen-len(attr.Path)),
dispV,
updateMsg,
))
}
}
// Write the reset color so we don't bleed color into later text
buf.WriteString(colorizer.Color("[reset]\n"))
}

View File

@ -139,13 +139,14 @@ func formatStateModule(p blockBodyDiffPrinter, m *states.Module, schemas *terraf
}
var schema *configschema.Block
provider := m.Resources[key].ProviderConfig.ProviderConfig.StringCompact()
provider := m.Resources[key].ProviderConfig.Provider
if _, exists := schemas.Providers[provider]; !exists {
// This should never happen in normal use because we should've
// loaded all of the schemas and checked things prior to this
// point. We can't return errors here, but since this is UI code
// we will try to do _something_ reasonable.
p.buf.WriteString(fmt.Sprintf("# missing schema for provider %q\n\n", provider))
p.buf.WriteString(fmt.Sprintf("# missing schema for provider %q\n\n", provider.LegacyString()))
continue
}

View File

@ -138,8 +138,8 @@ func testProviderSchema() *terraform.ProviderSchema {
func testSchemas() *terraform.Schemas {
provider := testProvider()
return &terraform.Schemas{
Providers: map[string]*terraform.ProviderSchema{
"test": provider.GetSchemaReturn,
Providers: map[addrs.Provider]*terraform.ProviderSchema{
addrs.NewLegacyProvider("test"): provider.GetSchemaReturn,
},
}
}
@ -243,9 +243,10 @@ func basicState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles"}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
rootModule.SetResourceInstanceCurrent(
addrs.Resource{
@ -258,9 +259,10 @@ func basicState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"compute":"sure"}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -293,9 +295,10 @@ func stateWithMoreOutputs(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles"}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -319,9 +322,10 @@ func nestedState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -341,9 +345,10 @@ func deposedState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}
@ -369,9 +374,10 @@ func onlyDeposedState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
rootModule.SetResourceInstanceDeposed(
addrs.Resource{
@ -385,9 +391,10 @@ func onlyDeposedState(t *testing.T) *states.State {
SchemaVersion: 1,
AttrsJSON: []byte(`{"woozles":"confuzles","nested": [{"value": "42"}]}`),
},
addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
)
return state
}

View File

@ -96,6 +96,7 @@ func (c *GraphCommand) Run(args []string) int {
opReq.ConfigDir = configPath
opReq.ConfigLoader, err = c.initConfigLoader()
opReq.PlanFile = planFile
opReq.AllowUnsetVariables = true
if err != nil {
diags = diags.Append(err)
c.showDiagnostics(diags)
@ -110,13 +111,6 @@ func (c *GraphCommand) Run(args []string) int {
return 1
}
defer func() {
err := opReq.StateLocker.Unlock(nil)
if err != nil {
c.Ui.Error(err.Error())
}
}()
// Determine the graph type
graphType := terraform.GraphTypePlan
if plan != nil {
@ -190,12 +184,11 @@ Options:
-draw-cycles Highlight any cycles in the graph with colored edges.
This helps when diagnosing cycle errors.
-module-depth=n Specifies the depth of modules to show in the output.
By default this is -1, which will expand all.
-type=plan Type of graph to output. Can be: plan, plan-destroy, apply,
validate, input, refresh.
-module-depth=n (deprecated) In prior versions of Terraform, specified the
depth of modules to show in the output.
`
return strings.TrimSpace(helpText)
}

View File

@ -33,7 +33,7 @@ func TestGraph(t *testing.T) {
}
output := ui.OutputWriter.String()
if !strings.Contains(output, "provider.test") {
if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) {
t.Fatalf("doesn't look like digraph: %s", output)
}
}
@ -80,7 +80,7 @@ func TestGraph_noArgs(t *testing.T) {
}
output := ui.OutputWriter.String()
if !strings.Contains(output, "provider.test") {
if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) {
t.Fatalf("doesn't look like digraph: %s", output)
}
}
@ -125,7 +125,10 @@ func TestGraph_plan(t *testing.T) {
Before: plans.DynamicValue(`{}`),
After: plans.DynamicValue(`null`),
},
ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
})
emptyConfig, err := plans.NewDynamicValue(cty.EmptyObjectVal, cty.EmptyObject)
if err != nil {
@ -158,7 +161,7 @@ func TestGraph_plan(t *testing.T) {
}
output := ui.OutputWriter.String()
if !strings.Contains(output, "provider.test") {
if !strings.Contains(output, `provider["registry.terraform.io/-/test"]`) {
t.Fatalf("doesn't look like digraph: %s", output)
}
}

View File

@ -43,7 +43,6 @@ func (c *ImportCommand) Run(args []string) int {
cmdFlags.StringVar(&c.Meta.stateOutPath, "state-out", "", "path")
cmdFlags.StringVar(&c.Meta.backupPath, "backup", "", "path")
cmdFlags.StringVar(&configPath, "config", pwd, "path")
cmdFlags.StringVar(&c.Meta.provider, "provider", "", "provider")
cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state")
cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout")
cmdFlags.BoolVar(&c.Meta.allowMissingConfig, "allow-missing-config", false, "allow missing config")
@ -156,35 +155,6 @@ func (c *ImportCommand) Run(args []string) int {
return 1
}
// Also parse the user-provided provider address, if any.
var providerAddr addrs.AbsProviderConfig
if c.Meta.provider != "" {
traversal, travDiags := hclsyntax.ParseTraversalAbs([]byte(c.Meta.provider), `-provider=...`, hcl.Pos{Line: 1, Column: 1})
diags = diags.Append(travDiags)
if travDiags.HasErrors() {
c.showDiagnostics(diags)
c.Ui.Info(importCommandInvalidAddressReference)
return 1
}
relAddr, addrDiags := addrs.ParseProviderConfigCompact(traversal)
diags = diags.Append(addrDiags)
if addrDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
providerAddr = relAddr.Absolute(addrs.RootModuleInstance)
} else {
// Use a default address inferred from the resource type.
// We assume the same module as the resource address here, which
// may get resolved to an inherited provider when we construct the
// import graph inside ctx.Import, called below.
if rc != nil && rc.ProviderConfigRef != nil {
providerAddr = rc.ProviderConfigAddr().Absolute(addr.Module)
} else {
providerAddr = resourceRelAddr.DefaultProviderConfig().Absolute(addr.Module)
}
}
// Check for user-supplied plugin path
if c.pluginPath, err = c.loadPluginPath(); err != nil {
c.Ui.Error(fmt.Sprintf("Error loading plugin path: %s", err))
@ -233,13 +203,8 @@ func (c *ImportCommand) Run(args []string) int {
// Get the context
ctx, state, ctxDiags := local.Context(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Make sure to unlock the state
// Creating the context can result in a lock, so ensure we release it
defer func() {
err := opReq.StateLocker.Unlock(nil)
if err != nil {
@ -247,15 +212,20 @@ func (c *ImportCommand) Run(args []string) int {
}
}()
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
// Perform the import. Note that as you can see it is possible for this
// API to import more than one resource at once. For now, we only allow
// one while we stabilize this feature.
newState, importDiags := ctx.Import(&terraform.ImportOpts{
Targets: []*terraform.ImportTarget{
&terraform.ImportTarget{
Addr: addr,
ID: args[1],
ProviderAddr: providerAddr,
Addr: addr,
ID: args[1],
},
},
})
@ -340,11 +310,6 @@ Options:
-no-color If specified, output won't contain any color.
-provider=provider Deprecated: Override the provider configuration to use
when importing the object. By default, Terraform uses the
provider specified in the configuration for the target
resource, and that is the best behavior in most cases.
-state=PATH Path to the source state file. Defaults to the configured
backend, or "terraform.tfstate"

View File

@ -258,6 +258,75 @@ func TestImport_remoteState(t *testing.T) {
testStateOutput(t, statePath, testImportStr)
}
// early failure on import should not leave stale lock
func TestImport_initializationErrorShouldUnlock(t *testing.T) {
td := tempDir(t)
copy.CopyDir(testFixturePath("import-provider-remote-state"), td)
defer os.RemoveAll(td)
defer testChdir(t, td)()
statePath := "imported.tfstate"
// init our backend
ui := cli.NewMockUi()
m := Meta{
testingOverrides: metaOverridesForProvider(testProvider()),
Ui: ui,
}
ic := &InitCommand{
Meta: m,
providerInstaller: &mockProviderInstaller{
Providers: map[string][]string{
"test": []string{"1.2.3"},
},
Dir: m.pluginDir(),
},
}
// (Using log here rather than t.Log so that these messages interleave with other trace logs)
log.Print("[TRACE] TestImport_initializationErrorShouldUnlock running: terraform init")
if code := ic.Run([]string{}); code != 0 {
t.Fatalf("init failed\n%s", ui.ErrorWriter)
}
// overwrite the config with one including a resource from an invalid provider
copy.CopyFile(filepath.Join(testFixturePath("import-provider-invalid"), "main.tf"), filepath.Join(td, "main.tf"))
p := testProvider()
ui = new(cli.MockUi)
c := &ImportCommand{
Meta: Meta{
testingOverrides: metaOverridesForProvider(p),
Ui: ui,
},
}
args := []string{
"unknown_instance.baz",
"bar",
}
log.Printf("[TRACE] TestImport_initializationErrorShouldUnlock running: terraform import %s %s", args[0], args[1])
// this should fail
if code := c.Run(args); code != 1 {
fmt.Println(ui.OutputWriter)
t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
}
// specifically, it should fail due to a missing provider
msg := ui.ErrorWriter.String()
if want := "Could not satisfy plugin requirements"; !strings.Contains(msg, want) {
t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg)
}
// verify that the local state was unlocked after initialization error
if _, err := os.Stat(filepath.Join(td, fmt.Sprintf(".%s.lock.info", statePath))); !os.IsNotExist(err) {
t.Fatal("state left locked after import")
}
}
func TestImport_providerConfigWithVar(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-var"))()
@ -332,6 +401,63 @@ func TestImport_providerConfigWithVar(t *testing.T) {
testStateOutput(t, statePath, testImportStr)
}
func TestImport_providerConfigWithDataSource(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-datasource"))()
statePath := testTempFile(t)
p := testProvider()
ui := new(cli.MockUi)
c := &ImportCommand{
Meta: Meta{
testingOverrides: metaOverridesForProvider(p),
Ui: ui,
},
}
p.ImportResourceStateFn = nil
p.ImportResourceStateResponse = providers.ImportResourceStateResponse{
ImportedResources: []providers.ImportedResource{
{
TypeName: "test_instance",
State: cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("yay"),
}),
},
},
}
p.GetSchemaReturn = &terraform.ProviderSchema{
Provider: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {Type: cty.String, Optional: true},
},
},
ResourceTypes: map[string]*configschema.Block{
"test_instance": {
Attributes: map[string]*configschema.Attribute{
"id": {Type: cty.String, Optional: true, Computed: true},
},
},
},
DataSources: map[string]*configschema.Block{
"test_data": {
Attributes: map[string]*configschema.Attribute{
"id": {Type: cty.String, Optional: true, Computed: true},
},
},
},
}
args := []string{
"-state", statePath,
"test_instance.foo",
"bar",
}
if code := c.Run(args); code != 1 {
t.Fatalf("bad, wanted error: %d\n\n%s", code, ui.ErrorWriter.String())
}
}
func TestImport_providerConfigWithVarDefault(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-var-default"))()
@ -479,156 +605,6 @@ func TestImport_providerConfigWithVarFile(t *testing.T) {
testStateOutput(t, statePath, testImportStr)
}
func TestImport_customProvider(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-aliased"))()
statePath := testTempFile(t)
p := testProvider()
ui := new(cli.MockUi)
c := &ImportCommand{
Meta: Meta{
testingOverrides: metaOverridesForProvider(p),
Ui: ui,
},
}
p.ImportResourceStateFn = nil
p.ImportResourceStateResponse = providers.ImportResourceStateResponse{
ImportedResources: []providers.ImportedResource{
{
TypeName: "test_instance",
State: cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("yay"),
}),
},
},
}
p.GetSchemaReturn = &terraform.ProviderSchema{
Provider: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {Type: cty.String, Optional: true},
},
},
ResourceTypes: map[string]*configschema.Block{
"test_instance": {
Attributes: map[string]*configschema.Attribute{
"id": {Type: cty.String, Optional: true, Computed: true},
},
},
},
}
args := []string{
"-provider", "test.alias",
"-state", statePath,
"test_instance.foo",
"bar",
}
if code := c.Run(args); code != 0 {
t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
}
if !p.ImportResourceStateCalled {
t.Fatal("ImportResourceState should be called")
}
testStateOutput(t, statePath, testImportCustomProviderStr)
}
// This tests behavior when the provider name does not match the implied
// provider name
func TestImport_providerNameMismatch(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-mismatch"))()
statePath := testTempFile(t)
p := testProvider()
ui := new(cli.MockUi)
c := &ImportCommand{
Meta: Meta{
testingOverrides: &testingOverrides{
ProviderResolver: providers.ResolverFixed(
map[string]providers.Factory{
"test-beta": providers.FactoryFixed(p),
},
),
},
Ui: ui,
},
}
configured := false
p.ConfigureNewFn = func(req providers.ConfigureRequest) providers.ConfigureResponse {
configured = true
cfg := req.Config
if !cfg.Type().HasAttribute("foo") {
return providers.ConfigureResponse{
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("configuration has no foo argument")),
}
}
if got, want := cfg.GetAttr("foo"), cty.StringVal("baz"); !want.RawEquals(got) {
return providers.ConfigureResponse{
Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("foo argument is %#v, but want %#v", got, want)),
}
}
return providers.ConfigureResponse{}
}
p.ImportResourceStateFn = nil
p.ImportResourceStateResponse = providers.ImportResourceStateResponse{
ImportedResources: []providers.ImportedResource{
{
TypeName: "test_instance",
State: cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("yay"),
}),
},
},
}
p.GetSchemaReturn = &terraform.ProviderSchema{
Provider: &configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {Type: cty.String, Optional: true},
},
},
ResourceTypes: map[string]*configschema.Block{
"test_instance": {
Attributes: map[string]*configschema.Attribute{
"id": {Type: cty.String, Optional: true, Computed: true},
},
},
},
}
args := []string{
"-provider", "test-beta",
"-state", statePath,
"test_instance.foo",
"bar",
}
if code := c.Run(args); code != 0 {
t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
}
// Verify that the test-beta provider was configured
if !configured {
t.Fatal("Configure should be called")
}
if !p.ImportResourceStateCalled {
t.Fatal("ImportResourceState (provider 'test-beta') should be called")
}
if !p.ReadResourceCalled {
t.Fatal("ReadResource (provider 'test-beta' should be called")
}
testStateOutput(t, statePath, testImportProviderMismatchStr)
}
func TestImport_allowMissingResourceConfig(t *testing.T) {
defer testChdir(t, testFixturePath("import-missing-resource-config"))()
@ -890,17 +866,18 @@ func TestImport_pluginDir(t *testing.T) {
// Now we need to go through some plugin init.
// This discovers our fake plugin and writes the lock file.
initUi := new(cli.MockUi)
initCmd := &InitCommand{
Meta: Meta{
pluginPath: []string{"./plugins"},
Ui: cli.NewMockUi(),
Ui: initUi,
},
providerInstaller: &discovery.ProviderInstaller{
PluginProtocolVersion: discovery.PluginInstallProtocolVersion,
},
}
if code := initCmd.Run(nil); code != 0 {
t.Fatal(initCmd.Meta.Ui.(*cli.MockUi).ErrorWriter.String())
t.Fatal(initUi.ErrorWriter.String())
}
args := []string{
@ -930,17 +907,17 @@ func TestImport_pluginDir(t *testing.T) {
const testImportStr = `
test_instance.foo:
ID = yay
provider = provider.test
provider = provider["registry.terraform.io/-/test"]
`
const testImportCustomProviderStr = `
test_instance.foo:
ID = yay
provider = provider.test.alias
provider = provider["registry.terraform.io/-/test"].alias
`
const testImportProviderMismatchStr = `
test_instance.foo:
ID = yay
provider = provider.test-beta
provider = provider["registry.terraform.io/-/test-beta"]
`

View File

@ -25,6 +25,7 @@ import (
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/terraform"
"github.com/hashicorp/terraform/tfdiags"
"github.com/hashicorp/terraform/version"
)
// InitCommand is a Command implementation that takes a Terraform
@ -295,6 +296,15 @@ func (c *InitCommand) Run(args []string) int {
}
back = be
}
} else {
// load the previously-stored backend config
be, backendDiags := c.Meta.backendFromState()
diags = diags.Append(backendDiags)
if backendDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
back = be
}
if back == nil {
@ -496,7 +506,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state
configReqs := configDeps.AllPluginRequirements()
// FIXME: This is weird because ConfigTreeDependencies was written before
// we switched over to using earlyConfig as the main source of dependencies.
// In future we should clean this up to be a more reasoable API.
// In future we should clean this up to be a more reasonable API.
stateReqs := terraform.ConfigTreeDependencies(nil, state).AllPluginRequirements()
requirements := configReqs.Merge(stateReqs)
@ -517,7 +527,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state
}
for provider, reqd := range missing {
pty := addrs.ProviderType{Name: provider}
pty := addrs.NewLegacyProvider(provider)
_, providerDiags, err := c.providerInstaller.Get(pty, reqd.Versions)
diags = diags.Append(providerDiags)
@ -559,7 +569,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state
// Generic version incompatible msg
c.Ui.Error(fmt.Sprintf(errProviderIncompatible, provider, constraint))
case err == discovery.ErrorSignatureVerification:
c.Ui.Error(fmt.Sprintf(errSignatureVerification, provider))
c.Ui.Error(fmt.Sprintf(errSignatureVerification, provider, version.SemVer))
case err == discovery.ErrorChecksumVerification,
err == discovery.ErrorMissingChecksumVerification:
c.Ui.Error(fmt.Sprintf(errChecksumVerification, provider))
@ -597,7 +607,7 @@ func (c *InitCommand) getProviders(earlyConfig *earlyconfig.Config, state *state
available = c.providerPluginSet() // re-discover to see newly-installed plugins
// internal providers were already filtered out, since we don't need to get them.
chosen := choosePlugins(available, nil, requirements)
chosen := chooseProviders(available, nil, requirements)
digests := map[string][]byte{}
for name, meta := range chosen {
@ -1011,9 +1021,12 @@ were changed after this version was released to the Registry.
`
const errSignatureVerification = `
[reset][bold][red]Error verifying GPG signature for provider %[1]q[reset][red]
Terraform was unable to verify the GPG signature of the downloaded provider
files using the keys downloaded from the Terraform Registry. This may mean that
the publisher of the provider removed the key it was signed with, or that the
distributed files were changed after this version was released.
[reset][bold][red]Error:[reset][bold] Untrusted signing key for provider %[1]q[reset]
This provider package is not signed with the HashiCorp signing key, and is
therefore incompatible with Terraform v%[2]s.
A later version of Terraform may have introduced other signing keys that would
accept this provider. Alternatively, an earlier version of this provider may
be compatible with Terraform v%[2]s.
`

View File

@ -947,6 +947,56 @@ func TestInit_rcProviders(t *testing.T) {
}
}
func TestInit_providerSource(t *testing.T) {
// Create a temporary working directory that is empty
td := tempDir(t)
configDirName := "init-required-providers"
copy.CopyDir(testFixturePath(configDirName), filepath.Join(td, configDirName))
defer os.RemoveAll(td)
defer testChdir(t, td)()
ui := new(cli.MockUi)
m := Meta{
testingOverrides: metaOverridesForProvider(testProvider()),
Ui: ui,
}
c := &InitCommand{
Meta: m,
providerInstaller: &mockProviderInstaller{},
}
// make our plugin paths
if err := os.MkdirAll(c.pluginDir(), 0755); err != nil {
t.Fatal(err)
}
if err := os.MkdirAll(DefaultPluginVendorDir, 0755); err != nil {
t.Fatal(err)
}
// add some dummy providers
// the auto plugin directory
testPath := filepath.Join(c.pluginDir(), "terraform-provider-test_v1.2.3_x4")
if err := ioutil.WriteFile(testPath, []byte("test bin"), 0755); err != nil {
t.Fatal(err)
}
// the vendor path
sourcePath := filepath.Join(DefaultPluginVendorDir, "terraform-provider-source_v1.2.3_x4")
if err := ioutil.WriteFile(sourcePath, []byte("test bin"), 0755); err != nil {
t.Fatal(err)
}
args := []string{configDirName}
if code := c.Run(args); code != 0 {
t.Fatalf("bad: \n%s", ui.ErrorWriter.String())
}
if strings.Contains(ui.OutputWriter.String(), "Terraform has initialized, but configuration upgrades may be needed") {
t.Fatalf("unexpected \"configuration upgrade\" warning in output")
}
}
func TestInit_getUpgradePlugins(t *testing.T) {
// Create a temporary working directory that is empty
td := tempDir(t)
@ -1187,6 +1237,7 @@ func TestInit_providerLockFile(t *testing.T) {
func TestInit_pluginDirReset(t *testing.T) {
td := testTempDir(t)
defer os.RemoveAll(td)
defer testChdir(t, td)()
ui := new(cli.MockUi)

View File

@ -33,7 +33,24 @@ func BuildPluginCommandString(pluginType, pluginName string) (string, error) {
return strings.Join(parts, TFSPACE), nil
}
// Internal plugins do not support any CLI args, but we do receive flags that
// main.go:mergeEnvArgs has merged in from EnvCLI. Instead of making main.go
// aware of this exception, we strip all flags from our args. Flags are easily
// identified by the '-' prefix, ensured by the cli package used.
func StripArgFlags(args []string) []string {
argsNoFlags := []string{}
for i := range args {
if !strings.HasPrefix(args[i], "-") {
argsNoFlags = append(argsNoFlags, args[i])
}
}
return argsNoFlags
}
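An illustrative test sketch (not part of this diff) for the behaviour described in the comment above; it mirrors the test added further down in this diff, with an extra flag of the kind mergeEnvArgs might append. The specific flag values are hypothetical.

package command

import (
	"reflect"
	"testing"
)

// Illustrative only: flags merged in from the environment (e.g. -no-color,
// -var-file=...) are dropped, leaving just the plugin type and name.
func TestStripArgFlags_envMergedFlags(t *testing.T) {
	got := StripArgFlags([]string{"provisioner", "remote-exec", "-no-color", "-var-file=my.tfvars"})
	want := []string{"provisioner", "remote-exec"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("got %v, want %v", got, want)
	}
}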
func (c *InternalPluginCommand) Run(args []string) int {
// strip flags from args, only use subcommands.
args = StripArgFlags(args)
if len(args) != 2 {
log.Printf("Wrong number of args; expected: terraform internal-plugin pluginType pluginName")
return 1

View File

@ -1,13 +1,17 @@
package command
import "testing"
import (
"testing"
"github.com/hashicorp/terraform/addrs"
)
func TestInternalPlugin_InternalProviders(t *testing.T) {
m := new(Meta)
providers := m.internalProviders()
// terraform is the only provider moved back to internal
for _, name := range []string{"terraform"} {
pf, ok := providers[name]
pf, ok := providers[addrs.NewLegacyProvider(name)]
if !ok {
t.Errorf("Expected to find %s in InternalProviders", name)
}
@ -42,3 +46,12 @@ func TestInternalPlugin_BuildPluginCommandString(t *testing.T) {
t.Errorf("Expected command to end with %s; got:\n%s\n", expected, actual)
}
}
func TestInternalPlugin_StripArgFlags(t *testing.T) {
actual := StripArgFlags([]string{"provisioner", "remote-exec", "-var-file=my_vars.tfvars", "-flag"})
expected := []string{"provisioner", "remote-exec"}
// Must be same length and order.
if len(actual) != len(expected) || expected[0] != actual[0] || expected[1] != actual[1] {
t.Fatalf("Expected args to be exactly '%s', got '%s'", expected, actual)
}
}

View File

@ -139,7 +139,9 @@ func marshalProviderConfigs(
}
for k, pc := range c.Module.ProviderConfigs {
schema := schemas.ProviderConfig(pc.Name)
// FIXME: lookup providerFqn from config
providerFqn := addrs.NewLegacyProvider(pc.Name)
schema := schemas.ProviderConfig(providerFqn)
p := providerConfig{
Name: pc.Name,
Alias: pc.Alias,
@ -301,8 +303,10 @@ func marshalResources(resources map[string]*configs.Resource, schemas *terraform
}
}
// TODO: get actual providerFqn
providerFqn := addrs.NewLegacyProvider(v.ProviderConfigAddr().LocalName)
schema, schemaVer := schemas.ResourceTypeConfig(
v.ProviderConfigAddr().Type,
providerFqn,
v.Mode,
v.Type,
)

View File

@ -178,7 +178,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
}
schema, _ := schemas.ResourceTypeConfig(
rc.ProviderAddr.ProviderConfig.Type,
rc.ProviderAddr.Provider,
addr.Resource.Resource.Mode,
addr.Resource.Resource.Type,
)
@ -252,7 +252,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
r.ModuleAddress = addr.Module.String()
r.Name = addr.Resource.Resource.Name
r.Type = addr.Resource.Resource.Type
r.ProviderName = rc.ProviderAddr.ProviderConfig.StringCompact()
r.ProviderName = rc.ProviderAddr.Provider.LegacyString()
p.ResourceChanges = append(p.ResourceChanges, r)

View File

@ -26,7 +26,7 @@ type stateValues struct {
type attributeValues map[string]interface{}
func marshalAttributeValues(value cty.Value, schema *configschema.Block) attributeValues {
if value == cty.NilVal {
if value == cty.NilVal || value.IsNull() {
return nil
}
ret := make(attributeValues)
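For context on the new IsNull() check above, a minimal sketch (not part of this diff): a typed null value is distinct from the zero cty.Value, so comparing against cty.NilVal alone would miss it.

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	var zero cty.Value              // the zero Value, i.e. cty.NilVal
	null := cty.NullVal(cty.String) // a typed null value

	fmt.Println(zero == cty.NilVal) // true
	fmt.Println(null == cty.NilVal) // false: a typed null is not the zero Value
	fmt.Println(null.IsNull())      // true: caught by the new IsNull() check
}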
@ -96,16 +96,36 @@ func marshalPlannedValues(changes *plans.Changes, schemas *terraform.Schemas) (m
containingModule := resource.Addr.Module.String()
moduleResourceMap[containingModule] = append(moduleResourceMap[containingModule], resource.Addr)
// root has no parents.
if containingModule != "" {
// the root module has no parents
if !resource.Addr.Module.IsRoot() {
parent := resource.Addr.Module.Parent().String()
// we likely will see multiple resources in one module, so we
// we expect to see multiple resources in one module, so we
// only need to report the "parent" module for each child module
// once.
if !seenModules[containingModule] {
moduleMap[parent] = append(moduleMap[parent], resource.Addr.Module)
seenModules[containingModule] = true
}
// If any given parent module has no resources, it needs to be
// added to the moduleMap. This walks through the current
// resources' modules' ancestors, taking advantage of the fact
// that Ancestors() returns an ordered slice, and verifies that
// each one is in the map.
ancestors := resource.Addr.Module.Ancestors()
for i, ancestor := range ancestors[:len(ancestors)-1] {
aStr := ancestor.String()
// childStr here is the immediate child of the current step
childStr := ancestors[i+1].String()
// we likely will see multiple resources in one module, so we
// only need to report the "parent" module for each child module
// once.
if !seenModules[childStr] {
moduleMap[aStr] = append(moduleMap[aStr], ancestors[i+1])
seenModules[childStr] = true
}
}
}
}
}
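A sketch (not part of this diff) of the ordering the walk above relies on, using a hypothetical nested module instance; it assumes addrs.ModuleInstance.Ancestors() is ordered root-first, as the comment above states.

package main

import (
	"fmt"

	"github.com/hashicorp/terraform/addrs"
)

func main() {
	// Hypothetical address: module.child.module.grandchild
	mod := addrs.RootModuleInstance.Child("child", addrs.NoKey).Child("grandchild", addrs.NoKey)

	// Ordered from the root down to the receiver, so ancestors[i+1] is always
	// the immediate child of ancestors[i] -- the property the loop above uses
	// to register otherwise-empty parent modules exactly once.
	for _, a := range mod.Ancestors() {
		fmt.Printf("%q\n", a.String()) // "", "module.child", "module.child.module.grandchild"
	}
}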
@ -144,7 +164,7 @@ func marshalPlanResources(changes *plans.Changes, ris []addrs.AbsResourceInstanc
Address: r.Addr.String(),
Type: r.Addr.Resource.Resource.Type,
Name: r.Addr.Resource.Resource.Name,
ProviderName: r.ProviderAddr.ProviderConfig.StringCompact(),
ProviderName: r.ProviderAddr.Provider.LegacyString(),
Index: r.Addr.Resource.Key,
}
@ -161,7 +181,7 @@ func marshalPlanResources(changes *plans.Changes, ris []addrs.AbsResourceInstanc
}
schema, schemaVer := schemas.ResourceTypeConfig(
r.ProviderAddr.ProviderConfig.Type,
r.ProviderAddr.Provider,
r.Addr.Resource.Resource.Mode,
resource.Type,
)

View File

@ -30,6 +30,18 @@ func TestMarshalAttributeValues(t *testing.T) {
},
nil,
},
{
cty.NullVal(cty.String),
&configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {
Type: cty.String,
Optional: true,
},
},
},
nil,
},
{
cty.ObjectVal(map[string]cty.Value{
"foo": cty.StringVal("bar"),
@ -246,7 +258,10 @@ func TestMarshalPlanResources(t *testing.T) {
Type: "test_thing",
Name: "example",
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.ProviderConfig{Type: "test"}.Absolute(addrs.RootModuleInstance),
ProviderAddr: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
ChangeSrc: plans.ChangeSrc{
Action: test.Action,
Before: before,
@ -278,8 +293,8 @@ func TestMarshalPlanResources(t *testing.T) {
func testSchemas() *terraform.Schemas {
return &terraform.Schemas{
Providers: map[string]*terraform.ProviderSchema{
"test": &terraform.ProviderSchema{
Providers: map[addrs.Provider]*terraform.ProviderSchema{
addrs.NewLegacyProvider("test"): &terraform.ProviderSchema{
ResourceTypes: map[string]*configschema.Block{
"test_thing": {
Attributes: map[string]*configschema.Attribute{

View File

@ -35,7 +35,7 @@ func Marshal(s *terraform.Schemas) ([]byte, error) {
providers := newProviders()
for k, v := range s.Providers {
providers.Schemas[k] = marshalProvider(v)
providers.Schemas[k.LegacyString()] = marshalProvider(v)
}
ret, err := json.Marshal(providers)

View File

@ -7,6 +7,7 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs/configschema"
"github.com/hashicorp/terraform/terraform"
)
@ -117,8 +118,8 @@ func TestMarshalProvider(t *testing.T) {
func testProviders() *terraform.Schemas {
return &terraform.Schemas{
Providers: map[string]*terraform.ProviderSchema{
"test": testProvider(),
Providers: map[addrs.Provider]*terraform.ProviderSchema{
addrs.NewLegacyProvider("test"): testProvider(),
},
}
}

View File

@ -91,6 +91,9 @@ type resource struct {
// Tainted is true if the resource is tainted in terraform state.
Tainted bool `json:"tainted,omitempty"`
// Deposed is set if the resource is deposed in terraform state.
DeposedKey string `json:"deposed_key,omitempty"`
}
// attributeValues is the JSON representation of the attribute values of the
@ -98,9 +101,10 @@ type resource struct {
type attributeValues map[string]interface{}
func marshalAttributeValues(value cty.Value, schema *configschema.Block) attributeValues {
if value == cty.NilVal {
if value == cty.NilVal || value.IsNull() {
return nil
}
ret := make(attributeValues)
it := value.ElementIterator()
@ -246,18 +250,18 @@ func marshalResources(resources map[string]*states.Resource, schemas *terraform.
for _, r := range resources {
for k, ri := range r.Instances {
resource := resource{
current := resource{
Address: r.Addr.String(),
Type: r.Addr.Type,
Name: r.Addr.Name,
ProviderName: r.ProviderConfig.ProviderConfig.StringCompact(),
ProviderName: r.ProviderConfig.Provider.LegacyString(),
}
switch r.Addr.Mode {
case addrs.ManagedResourceMode:
resource.Mode = "managed"
current.Mode = "managed"
case addrs.DataResourceMode:
resource.Mode = "data"
current.Mode = "data"
default:
return ret, fmt.Errorf("resource %s has an unsupported mode %s",
r.Addr.String(),
@ -266,41 +270,76 @@ func marshalResources(resources map[string]*states.Resource, schemas *terraform.
}
if r.EachMode != states.NoEach {
resource.Index = k
current.Index = k
}
schema, _ := schemas.ResourceTypeConfig(
r.ProviderConfig.ProviderConfig.Type,
r.ProviderConfig.Provider,
r.Addr.Mode,
r.Addr.Type,
)
resource.SchemaVersion = ri.Current.SchemaVersion
if schema == nil {
return nil, fmt.Errorf("no schema found for %s", r.Addr.String())
}
riObj, err := ri.Current.Decode(schema.ImpliedType())
if err != nil {
return nil, err
}
// It is possible that the only instance is deposed
if ri.Current != nil {
current.SchemaVersion = ri.Current.SchemaVersion
resource.AttributeValues = marshalAttributeValues(riObj.Value, schema)
if len(riObj.Dependencies) > 0 {
dependencies := make([]string, len(riObj.Dependencies))
for i, v := range riObj.Dependencies {
dependencies[i] = v.String()
if schema == nil {
return nil, fmt.Errorf("no schema found for %s", r.Addr.String())
}
resource.DependsOn = dependencies
riObj, err := ri.Current.Decode(schema.ImpliedType())
if err != nil {
return nil, err
}
current.AttributeValues = marshalAttributeValues(riObj.Value, schema)
if len(riObj.Dependencies) > 0 {
dependencies := make([]string, len(riObj.Dependencies))
for i, v := range riObj.Dependencies {
dependencies[i] = v.String()
}
current.DependsOn = dependencies
}
if riObj.Status == states.ObjectTainted {
current.Tainted = true
}
ret = append(ret, current)
}
if riObj.Status == states.ObjectTainted {
resource.Tainted = true
}
for deposedKey, rios := range ri.Deposed {
// copy the base fields from the current instance
deposed := resource{
Address: current.Address,
Type: current.Type,
Name: current.Name,
ProviderName: current.ProviderName,
Mode: current.Mode,
Index: current.Index,
}
ret = append(ret, resource)
riObj, err := rios.Decode(schema.ImpliedType())
if err != nil {
return nil, err
}
deposed.AttributeValues = marshalAttributeValues(riObj.Value, schema)
if len(riObj.Dependencies) > 0 {
dependencies := make([]string, len(riObj.Dependencies))
for i, v := range riObj.Dependencies {
dependencies[i] = v.String()
}
deposed.DependsOn = dependencies
}
if riObj.Status == states.ObjectTainted {
deposed.Tainted = true
}
deposed.DeposedKey = deposedKey.String()
ret = append(ret, deposed)
}
}
}
sort.Slice(ret, func(i, j int) bool {

View File

@ -91,6 +91,18 @@ func TestMarshalAttributeValues(t *testing.T) {
},
nil,
},
{
cty.NullVal(cty.String),
&configschema.Block{
Attributes: map[string]*configschema.Attribute{
"foo": {
Type: cty.String,
Optional: true,
},
},
},
nil,
},
{
cty.ObjectVal(map[string]cty.Value{
"foo": cty.StringVal("bar"),
@ -158,19 +170,20 @@ func TestMarshalAttributeValues(t *testing.T) {
}
func TestMarshalResources(t *testing.T) {
tests := []struct {
deposedKey := states.NewDeposedKey()
tests := map[string]struct {
Resources map[string]*states.Resource
Schemas *terraform.Schemas
Want []resource
Err bool
}{
{
"nil": {
nil,
nil,
nil,
false,
},
{
"single resource": {
map[string]*states.Resource{
"test_thing.baz": {
Addr: addrs.Resource{
@ -188,9 +201,10 @@ func TestMarshalResources(t *testing.T) {
},
},
},
ProviderConfig: addrs.ProviderConfig{
Type: "test",
}.Absolute(addrs.RootModuleInstance),
ProviderConfig: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
},
},
testSchemas(),
@ -211,29 +225,137 @@ func TestMarshalResources(t *testing.T) {
},
false,
},
"deposed resource": {
map[string]*states.Resource{
"test_thing.baz": {
Addr: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_thing",
Name: "bar",
},
EachMode: states.EachList,
Instances: map[addrs.InstanceKey]*states.ResourceInstance{
addrs.IntKey(0): {
Deposed: map[states.DeposedKey]*states.ResourceInstanceObjectSrc{
states.DeposedKey(deposedKey): &states.ResourceInstanceObjectSrc{
SchemaVersion: 1,
Status: states.ObjectReady,
AttrsJSON: []byte(`{"woozles":"confuzles"}`),
},
},
},
},
ProviderConfig: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
},
},
testSchemas(),
[]resource{
resource{
Address: "test_thing.bar",
Mode: "managed",
Type: "test_thing",
Name: "bar",
Index: addrs.IntKey(0),
ProviderName: "test",
DeposedKey: deposedKey.String(),
AttributeValues: attributeValues{
"foozles": json.RawMessage(`null`),
"woozles": json.RawMessage(`"confuzles"`),
},
},
},
false,
},
"deposed and current resource": {
map[string]*states.Resource{
"test_thing.baz": {
Addr: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_thing",
Name: "bar",
},
EachMode: states.EachList,
Instances: map[addrs.InstanceKey]*states.ResourceInstance{
addrs.IntKey(0): {
Deposed: map[states.DeposedKey]*states.ResourceInstanceObjectSrc{
states.DeposedKey(deposedKey): &states.ResourceInstanceObjectSrc{
SchemaVersion: 1,
Status: states.ObjectReady,
AttrsJSON: []byte(`{"woozles":"confuzles"}`),
},
},
Current: &states.ResourceInstanceObjectSrc{
SchemaVersion: 1,
Status: states.ObjectReady,
AttrsJSON: []byte(`{"woozles":"confuzles"}`),
},
},
},
ProviderConfig: addrs.AbsProviderConfig{
Provider: addrs.NewLegacyProvider("test"),
Module: addrs.RootModuleInstance,
},
},
},
testSchemas(),
[]resource{
resource{
Address: "test_thing.bar",
Mode: "managed",
Type: "test_thing",
Name: "bar",
Index: addrs.IntKey(0),
ProviderName: "test",
SchemaVersion: 1,
AttributeValues: attributeValues{
"foozles": json.RawMessage(`null`),
"woozles": json.RawMessage(`"confuzles"`),
},
},
resource{
Address: "test_thing.bar",
Mode: "managed",
Type: "test_thing",
Name: "bar",
Index: addrs.IntKey(0),
ProviderName: "test",
DeposedKey: deposedKey.String(),
AttributeValues: attributeValues{
"foozles": json.RawMessage(`null`),
"woozles": json.RawMessage(`"confuzles"`),
},
},
},
false,
},
}
for _, test := range tests {
got, err := marshalResources(test.Resources, test.Schemas)
if test.Err {
if err == nil {
t.Fatal("succeeded; want error")
for name, test := range tests {
t.Run(name, func(t *testing.T) {
got, err := marshalResources(test.Resources, test.Schemas)
if test.Err {
if err == nil {
t.Fatal("succeeded; want error")
}
return
} else if err != nil {
t.Fatalf("unexpected error: %s", err)
}
return
} else if err != nil {
t.Fatalf("unexpected error: %s", err)
}
eq := reflect.DeepEqual(got, test.Want)
if !eq {
t.Fatalf("wrong result:\nGot: %#v\nWant: %#v\n", got, test.Want)
}
eq := reflect.DeepEqual(got, test.Want)
if !eq {
t.Fatalf("wrong result:\nGot: %#v\nWant: %#v\n", got, test.Want)
}
})
}
}
func testSchemas() *terraform.Schemas {
return &terraform.Schemas{
Providers: map[string]*terraform.ProviderSchema{
"test": &terraform.ProviderSchema{
Providers: map[addrs.Provider]*terraform.ProviderSchema{
addrs.NewLegacyProvider("test"): &terraform.ProviderSchema{
ResourceTypes: map[string]*configschema.Block{
"test_thing": {
Attributes: map[string]*configschema.Attribute{

View File

@ -10,14 +10,16 @@ import (
"math/rand"
"net"
"net/http"
"net/url"
"path/filepath"
"strings"
tfe "github.com/hashicorp/go-tfe"
svchost "github.com/hashicorp/terraform-svchost"
svcauth "github.com/hashicorp/terraform-svchost/auth"
"github.com/hashicorp/terraform-svchost/disco"
"github.com/hashicorp/terraform/command/cliconfig"
"github.com/hashicorp/terraform/httpclient"
"github.com/hashicorp/terraform/svchost"
svcauth "github.com/hashicorp/terraform/svchost/auth"
"github.com/hashicorp/terraform/svchost/disco"
"github.com/hashicorp/terraform/tfdiags"
uuid "github.com/hashicorp/go-uuid"
@ -103,21 +105,12 @@ func (c *LoginCommand) Run(args []string) int {
return 1
}
creds := c.Services.CredentialsSource()
// In normal use (i.e. without test mocks/fakes) creds will be an instance
// of the command/cliconfig.CredentialsSource type, which has some extra
// methods we can use to give the user better feedback about what we're
// going to do. credsCtx will be nil if it's any other implementation,
// though.
var credsCtx *loginCredentialsContext
if c, ok := creds.(*cliconfig.CredentialsSource); ok {
filename, _ := c.CredentialsFilePath()
credsCtx = &loginCredentialsContext{
Location: c.HostCredentialsLocation(hostname),
LocalFilename: filename, // empty in the very unlikely event that we can't select a config directory for this user
HelperType: c.CredentialsHelperType(),
}
creds := c.Services.CredentialsSource().(*cliconfig.CredentialsSource)
filename, _ := creds.CredentialsFilePath()
credsCtx := &loginCredentialsContext{
Location: creds.HostCredentialsLocation(hostname),
LocalFilename: filename, // empty in the very unlikely event that we can't select a config directory for this user
HelperType: creds.CredentialsHelperType(),
}
clientConfig, err := host.ServiceOAuthClient("login.v1")
@ -125,25 +118,49 @@ func (c *LoginCommand) Run(args []string) int {
case nil:
// Great! No problem, then.
case *disco.ErrServiceNotProvided:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform login",
fmt.Sprintf("The given hostname %q does not allow creating Terraform authorization tokens.", dispHostname),
))
// This is also fine! We'll try the manual token creation process.
case *disco.ErrVersionNotSupported:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
tfdiags.Warning,
"Host does not support Terraform login",
fmt.Sprintf("The given hostname %q allows creating Terraform authorization tokens, but requires a newer version of Terraform CLI to do so.", dispHostname),
))
default:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
tfdiags.Warning,
"Host does not support Terraform login",
fmt.Sprintf("The given hostname %q cannot support \"terraform login\": %s.", dispHostname, err),
))
}
// If login service is unavailable, check for a TFE v2 API as fallback
var service *url.URL
if clientConfig == nil {
service, err = host.ServiceURL("tfe.v2")
switch err.(type) {
case nil:
// Success!
case *disco.ErrServiceNotProvided:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform tokens API",
fmt.Sprintf("The given hostname %q does not support creating Terraform authorization tokens.", dispHostname),
))
case *disco.ErrVersionNotSupported:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform tokens API",
fmt.Sprintf("The given hostname %q allows creating Terraform authorization tokens, but requires a newer version of Terraform CLI to do so.", dispHostname),
))
default:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform tokens API",
fmt.Sprintf("The given hostname %q cannot support \"terraform login\": %s.", dispHostname, err),
))
}
}
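For context on the fallback above, an illustrative sketch (not part of this diff) of the host discovery document that drives it; the endpoint paths shown are hypothetical. The point is that a host advertising only "tfe.v2", without "login.v1", is routed to the manual token flow added in this change.

package discoveryexample

// Illustrative only: the shape of a host's discovery document, served at
// https://HOST/.well-known/terraform.json. A host whose document includes
// "tfe.v2" but not "login.v1" ends up in the manual-token branch above.
const exampleDiscoveryDoc = `{
  "login.v1": {
    "client": "terraform-cli",
    "authz": "/oauth/authorization",
    "token": "/oauth/token",
    "grant_types": ["authz_code"]
  },
  "tfe.v2": "/api/v2/"
}`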
if credsCtx.Location == cliconfig.CredentialsInOtherFile {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
@ -157,37 +174,41 @@ func (c *LoginCommand) Run(args []string) int {
return 1
}
var token *oauth2.Token
switch {
case clientConfig.SupportedGrantTypes.Has(disco.OAuthAuthzCodeGrant):
// We prefer an OAuth code grant if the server supports it.
var tokenDiags tfdiags.Diagnostics
token, tokenDiags = c.interactiveGetTokenByCode(hostname, credsCtx, clientConfig)
diags = diags.Append(tokenDiags)
if tokenDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
var token svcauth.HostCredentialsToken
var tokenDiags tfdiags.Diagnostics
// Prefer Terraform login if available
if clientConfig != nil {
var oauthToken *oauth2.Token
switch {
case clientConfig.SupportedGrantTypes.Has(disco.OAuthAuthzCodeGrant):
// We prefer an OAuth code grant if the server supports it.
oauthToken, tokenDiags = c.interactiveGetTokenByCode(hostname, credsCtx, clientConfig)
case clientConfig.SupportedGrantTypes.Has(disco.OAuthOwnerPasswordGrant) && hostname == svchost.Hostname("app.terraform.io"):
// The password grant type is allowed only for Terraform Cloud SaaS.
oauthToken, tokenDiags = c.interactiveGetTokenByPassword(hostname, credsCtx, clientConfig)
default:
tokenDiags = tokenDiags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform login",
fmt.Sprintf("The given hostname %q does not allow any OAuth grant types that are supported by this version of Terraform.", dispHostname),
))
}
case clientConfig.SupportedGrantTypes.Has(disco.OAuthOwnerPasswordGrant) && hostname == svchost.Hostname("app.terraform.io"):
// The password grant type is allowed only for Terraform Cloud SaaS.
var tokenDiags tfdiags.Diagnostics
token, tokenDiags = c.interactiveGetTokenByPassword(hostname, credsCtx, clientConfig)
diags = diags.Append(tokenDiags)
if tokenDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
if oauthToken != nil {
token = svcauth.HostCredentialsToken(oauthToken.AccessToken)
}
default:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Host does not support Terraform login",
fmt.Sprintf("The given hostname %q does not allow any OAuth grant types that are supported by this version of Terraform.", dispHostname),
))
} else if service != nil {
token, tokenDiags = c.interactiveGetTokenByUI(hostname, credsCtx, service)
}
diags = diags.Append(tokenDiags)
if diags.HasErrors() {
c.showDiagnostics(diags)
return 1
}
err = creds.StoreForHost(hostname, svcauth.HostCredentialsToken(token.AccessToken))
err = creds.StoreForHost(hostname, token)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
@ -468,10 +489,94 @@ func (c *LoginCommand) interactiveGetTokenByPassword(hostname svchost.Hostname,
return token, diags
}
func (c *LoginCommand) interactiveContextConsent(hostname svchost.Hostname, grantType disco.OAuthGrantType, credsCtx *loginCredentialsContext) (bool, tfdiags.Diagnostics) {
func (c *LoginCommand) interactiveGetTokenByUI(hostname svchost.Hostname, credsCtx *loginCredentialsContext, service *url.URL) (svcauth.HostCredentialsToken, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
c.Ui.Output(fmt.Sprintf("Terraform will request an API token for %s using OAuth.\n", hostname.ForDisplay()))
confirm, confirmDiags := c.interactiveContextConsent(hostname, disco.OAuthGrantType(""), credsCtx)
diags = diags.Append(confirmDiags)
if !confirm {
diags = diags.Append(errors.New("Login cancelled"))
return "", diags
}
tokensURL := url.URL{
Scheme: "https",
Host: service.Hostname(),
Path: "/app/settings/tokens",
RawQuery: "source=terraform-login",
}
launchBrowserManually := false
if c.BrowserLauncher != nil {
err := c.BrowserLauncher.OpenURL(tokensURL.String())
if err == nil {
c.Ui.Output(fmt.Sprintf("Terraform must now open a web browser to the tokens page for %s.\n", hostname.ForDisplay()))
c.Ui.Output(fmt.Sprintf("If a browser does not open this automatically, open the following URL to proceed:\n %s\n", tokensURL.String()))
} else {
// Assume we're on a platform where opening a browser isn't possible.
launchBrowserManually = true
}
} else {
launchBrowserManually = true
}
if launchBrowserManually {
c.Ui.Output(fmt.Sprintf("Open the following URL to access the tokens page for %s:\n %s\n", hostname.ForDisplay(), tokensURL.String()))
}
c.Ui.Output("\n---------------------------------------------------------------------------------\n")
c.Ui.Output("Generate a token using your browser, and copy-paste it into this prompt.\n")
// credsCtx might not be set if we're using a mock credentials source
// in a test, but it should always be set in normal use.
if credsCtx != nil {
switch credsCtx.Location {
case cliconfig.CredentialsViaHelper:
c.Ui.Output(fmt.Sprintf("Terraform will store the token in the configured %q credentials helper\nfor use by subsequent commands.\n", credsCtx.HelperType))
case cliconfig.CredentialsInPrimaryFile, cliconfig.CredentialsNotAvailable:
c.Ui.Output(fmt.Sprintf("Terraform will store the token in plain text in the following file\nfor use by subsequent commands:\n %s\n", credsCtx.LocalFilename))
}
}
token, err := c.Ui.AskSecret(fmt.Sprintf(c.Colorize().Color("Token for [bold]%s[reset]:"), hostname.ForDisplay()))
if err != nil {
diags := diags.Append(fmt.Errorf("Failed to retrieve token: %s", err))
return "", diags
}
token = strings.TrimSpace(token)
cfg := &tfe.Config{
Address: service.String(),
BasePath: service.Path,
Token: token,
Headers: make(http.Header),
}
client, err := tfe.NewClient(cfg)
if err != nil {
diags = diags.Append(fmt.Errorf("Failed to create API client: %s", err))
return "", diags
}
user, err := client.Users.ReadCurrent(context.Background())
if err == tfe.ErrUnauthorized {
diags = diags.Append(fmt.Errorf("Token is invalid: %s", err))
return "", diags
} else if err != nil {
diags = diags.Append(fmt.Errorf("Failed to retrieve user account details: %s", err))
return "", diags
}
c.Ui.Output(fmt.Sprintf(c.Colorize().Color("\nRetrieved token for user [bold]%s[reset]\n"), user.Username))
return svcauth.HostCredentialsToken(token), nil
}
func (c *LoginCommand) interactiveContextConsent(hostname svchost.Hostname, grantType disco.OAuthGrantType, credsCtx *loginCredentialsContext) (bool, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
mechanism := "OAuth"
if grantType == "" {
mechanism = "your browser"
}
c.Ui.Output(fmt.Sprintf("Terraform will request an API token for %s using %s.\n", hostname.ForDisplay(), mechanism))
if grantType.UsesAuthorizationEndpoint() {
c.Ui.Output(
