Commit Graph

Martin Atkins 780e758f1e tfdiags: Allow construction of RPC-friendly Diagnostics
Due to the use of interfaces, Diagnostics is not super-friendly to the gob
encoding we currently use for plugin RPC. To mitigate this, we provide
a helper that converts all of the wrapped objects into a predictable flat
structure that we can pre-emptively register with gob.

This means that the decoded Diagnostics still has the same meaning as
the original, though the original wrapped errors (if any) are lost and
thus our errwrap integration won't be effective any longer.
2017-10-06 11:46:07 -07:00
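
A rough sketch of the approach (type and field names here are invented, not the actual tfdiags API): flatten each diagnostic into a concrete struct, register that struct with gob up front, and the value then round-trips cleanly over RPC:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// rpcDiagnostic is a flat, gob-friendly stand-in for one diagnostic.
type rpcDiagnostic struct {
	Severity string
	Summary  string
	Detail   string
}

func init() {
	// Pre-emptively register the concrete type so gob can encode it
	// behind an interface without surprises at RPC time.
	gob.Register(rpcDiagnostic{})
}

func main() {
	diags := []rpcDiagnostic{{Severity: "error", Summary: "something failed"}}

	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(diags); err != nil {
		panic(err)
	}

	var decoded []rpcDiagnostic
	if err := gob.NewDecoder(&buf).Decode(&decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded[0].Summary) // same meaning; original wrapped errors are gone
}
```
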
Martin Atkins ab5efb805c tfdiags: SourceRange.StartString
This helper provides a concise string showing the filename, start line and
start column for a range, for easy inclusion in error messages.
2017-10-06 11:46:07 -07:00
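
A sketch of the helper's shape (type and field names approximate the tfdiags source-range model):

```go
package tfdiags

import "fmt"

type Pos struct {
	Line, Column, Byte int
}

type SourceRange struct {
	Filename   string
	Start, End Pos
}

// StartString returns a concise "filename:line,column" description of
// the start of the range, for inclusion in error messages.
func (r SourceRange) StartString() string {
	return fmt.Sprintf("%s:%d,%d", r.Filename, r.Start.Line, r.Start.Column)
}
```
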
Martin Atkins fc20f419dd config and command: use errwrap to propagate config load errors
Previously we were using fmt.Sprintf and thus forcing the stringification
of the wrapped error.

Using errwrap allows us to unpack the original error at the top of the
stack, which is useful when the wrapped error is really an hcl.Diagnostics
containing potentially-multiple errors and possibly warnings.
2017-10-06 11:46:07 -07:00
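
Roughly the pattern in play, sketched with a stand-in for hcl.Diagnostics (errwrap.Wrapf and errwrap.GetType are the real errwrap API; the rest is illustrative):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/errwrap"
)

// diagnostics stands in for hcl.Diagnostics, which implements error.
type diagnostics []string

func (d diagnostics) Error() string { return fmt.Sprintf("%d problems", len(d)) }

func loadConfig() error {
	diags := diagnostics{"unexpected token", "missing closing brace"}
	// Wrap instead of fmt.Sprintf so the original value survives.
	return errwrap.Wrapf("config load failed: {{err}}", diags)
}

func main() {
	err := loadConfig()
	// Unpack the original typed error at the top of the stack.
	if orig := errwrap.GetType(err, diagnostics(nil)); orig != nil {
		for _, problem := range orig.(diagnostics) {
			fmt.Println("-", problem)
		}
	}
}
```
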
Martin Atkins 61cd3bf02a tfdiags: new package for normalizing error and warning messages
Currently we lean heavily on the Go error type as our primary means of
describing errors, and along with that use several more specialized
implementations of it in different spots for additional capabilities such
as multiple errors in one object, source code range references, etc.

We also have a rather ad-hoc approach of returning an array of warnings
from certain functions along with one or multiple errors.

This rather-disorganized approach makes it hard for us to present
user-facing error messages consistently. As a step towards mitigating
this, package tfdiags provides a model for user-facing error and warning
messages and helper functions for creating them from various other
error and warning types used elsewhere in Terraform.

This mechanism is intended to be used to report errors and warnings where
the audience is the Terraform user, and so it may go a few layers deep
down the call stack into codepaths like config parsing, interpolation, etc.,
but is primarily a UX concern. The deepest reaches of Terraform core will
continue using "error" as normal, with higher layers preparing error
messages for presentation to the user.

To avoid needing to change the interface of every function that might
generate error diagnostics, the Diagnostics type can be "smuggled" via
an error value through other APIs and then unwrapped at the other end,
though it will lose any naked warnings (without at least one error) along
the way, and so codepaths that are expected to generate warnings
(validation, primarily) should use the concrete Diagnostics type
throughout the call chain.
2017-10-06 11:46:07 -07:00
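
The intended usage pattern, as a small sketch (method names follow the commit description; the exact API may differ in detail):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/tfdiags"
)

func validate() tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	// Append normalizes plain Go errors (among other types) into diagnostics.
	diags = diags.Append(fmt.Errorf("something is wrong"))
	return diags
}

func main() {
	diags := validate()

	// "Smuggle" the diagnostics through an error-typed API boundary;
	// warnings without at least one accompanying error are lost here.
	err := diags.Err()

	// At the other end, unwrap back into Diagnostics.
	var recovered tfdiags.Diagnostics
	recovered = recovered.Append(err)
	fmt.Println(recovered.HasErrors())
}
```
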
Martin Atkins 336b352d6f Update our go.googlesource.com rate-limit cookie
We have a generated cookie for googlesource.com so that we don't get so rate-limited when cloning Google-hosted Go libraries.

The previous credential was invalidated, so this is a newly-generated one. This credential does nothing except allow us to fetch git repositories from go.googlesource.com with a slightly-higher rate limit.
2017-10-06 11:43:28 -07:00
Martin Atkins b91bd62747 config/configschema: Sensitive flag for attributes
We don't currently have any need for this information, but we're
propagating it out of helper/schema here pre-emptively so that once we
later have a use for it we will not need to rebuild the providers to gain
access to it.

The long-term expected use-case for this is to have Terraform Core use
static analysis techniques to trace the path of sensitive data through
interpolations so that intermediate results can be flagged as sensitive
too, but we have a lot more work to do before such a thing would actually
be possible.
2017-10-04 16:35:11 -07:00
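
The attribute model with the new flag looks roughly like this (a simplified rendering of the configschema type):

```go
package configschema

import "github.com/zclconf/go-cty/cty"

// Attribute represents a configuration attribute within a block.
type Attribute struct {
	Type     cty.Type
	Required bool
	Optional bool
	Computed bool

	// Sensitive, if set, indicates that the attribute may contain
	// sensitive information and should be obscured when rendering
	// associated values.
	Sensitive bool
}
```
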
Martin Atkins 69650b0bbc helper/schema: conversion of Schema to configschema.Block
As part of moving to the next-generation HCL implementation,
Terraform Core is getting its own representation of configuration schema
that is tailored for configuration-processing use-cases. The capabilities
of this are a subset of the helper/schema model primarily concerned with
the configuration structure and value types, leaving detailed validation
and defaults for helper/schema to still solve.

These new methods allow mechanical creation of a schema in the new Core
schema model from a schema expressed in the helper/schema model. This is
not yet used as of this commit, but will be used later to implement some
new ResourceProvider methods that will allow core to obtain the schema
for provider, resource and data source configuration while remaining
source-compatible with existing provider implementations.
2017-10-04 16:35:11 -07:00
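
A sketch of the mechanical mapping for plain attributes; the helper name is invented, and the real conversion also covers nested blocks, lists, sets, and maps:

```go
package shim

import (
	"github.com/hashicorp/terraform/config/configschema"
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/zclconf/go-cty/cty"
)

// coreAttribute maps one helper/schema attribute onto the core schema
// model. Illustrative only.
func coreAttribute(s *schema.Schema) *configschema.Attribute {
	var ty cty.Type
	switch s.Type {
	case schema.TypeBool:
		ty = cty.Bool
	case schema.TypeInt, schema.TypeFloat:
		ty = cty.Number
	default:
		ty = cty.String // the real code also handles collection types
	}
	return &configschema.Attribute{
		Type:      ty,
		Optional:  s.Optional,
		Required:  s.Required,
		Computed:  s.Computed,
		Sensitive: s.Sensitive,
	}
}
```
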
Martin Atkins 0ef985cada config/configschema: Block.ImpliedType delegates to zcldec
zcldec now has its own function for computing the implied type for a spec,
so we can use that instead of our own logic.

The zcldec logic is more general since its spec model is more general than
our schema model here, but it produces the same results for the subset
of specifications that our DecoderSpec method produces.
2017-10-04 16:35:11 -07:00
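
The delegation is essentially a one-liner (sketched without imports; zcldec was the package's name at the time, later renamed hcldec):

```go
// ImpliedType returns the cty.Type that decoding a block with this
// schema would produce.
func (b *Block) ImpliedType() cty.Type {
	if b == nil {
		return cty.EmptyObject
	}
	return zcldec.ImpliedType(b.DecoderSpec())
}
```
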
Martin Atkins 2b622fe31a config/configschema: Block.DecoderSpec
This returns a decoding specification that can be used with the hcldec
package to decode a body into a cty.Value of an object type.
2017-10-04 16:35:11 -07:00
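
For a flat, attribute-only schema the spec construction looks roughly like this; the real method also generates specs for nested block types according to their nesting modes:

```go
// DecoderSpec, sketched for the attribute-only case.
func (b *Block) DecoderSpec() hcldec.Spec {
	ret := hcldec.ObjectSpec{}
	for name, attr := range b.Attributes {
		ret[name] = &hcldec.AttrSpec{
			Name:     name,
			Type:     attr.Type,
			Required: attr.Required,
		}
	}
	return ret
}
```
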
Martin Atkins 4de052cf01 govendor fetch github.com/hashicorp/hcl2/... 2017-10-04 16:35:11 -07:00
Martin Atkins 21bf786401 govendor fetch github.com/hashicorp/hcl2/hcltest 2017-10-04 16:35:11 -07:00
Martin Atkins d0d829848a config/configschema: Block.ImpliedType method
This returns a cty.Type that the caller can expect to receive when
decoding a value using the (not yet implemented) decoder specification
for a given schema.
2017-10-04 16:35:11 -07:00
Martin Atkins f117906bdb config/configschema: InternalValidate for blocks
This checks that a schema complies with the documented constraints on
which values are valid. It is primarily intended for use in tests.
2017-10-04 16:35:11 -07:00
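
A sketch of the kind of rule it enforces (the real method checks more constraints, including ones on nested block types):

```go
// InternalValidate-style attribute checks, sketched.
func (b *Block) InternalValidate() error {
	for name, attr := range b.Attributes {
		if attr.Required && attr.Optional {
			return fmt.Errorf("attribute %q cannot be both Required and Optional", name)
		}
		if !attr.Required && !attr.Optional && !attr.Computed {
			return fmt.Errorf("attribute %q must be Required, Optional, or Computed", name)
		}
	}
	return nil
}
```
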
Martin Atkins d712a04c32 config/configschema: MinItems and MaxItems for nested blocks 2017-10-04 16:35:11 -07:00
Martin Atkins dadec5210b config/configschema: stub out initial API 2017-10-04 16:35:11 -07:00
James Bardin 313f795755 Merge branch 'stable-website'
re-sync the trees
2017-10-04 14:53:51 -04:00
James Bardin 0122e11550 Merge pull request #16252 from hashicorp/jbardin/fast-transitive
faster DAG transitive reduction
2017-10-04 09:27:48 -04:00
James Bardin b20ab9f0bb Merge pull request #16243 from hashicorp/jbardin/s3-errors
retry on s3 backend internal errors
2017-10-04 09:26:26 -04:00
bclodius fba02f0bea retry on s3 backend internal errors
Internal errors from S3 are usually transient, and can be immediately retried.
Make two attempts at retrieving the state object before returning an error.
2017-10-04 09:11:30 -04:00
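
The retry shape described, as a minimal sketch (the real change inspects the AWS error code before retrying; the helper here is illustrative):

```go
package s3

// getStateWithRetry makes two attempts at fetching the state object,
// treating the first failure as potentially transient.
func getStateWithRetry(get func() ([]byte, error)) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		data, err := get()
		if err == nil {
			return data, nil
		}
		// The real backend retries only when S3 reports an internal
		// (5xx) error; this sketch retries on any error.
		lastErr = err
	}
	return nil, lastErr
}
```
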
Patrick Decat 00c3367dda website: Replace `~` by `$HOME` in documentation of TF_PLUGIN_CACHE_DIR 2017-10-03 18:34:22 -07:00
Martin Atkins b851fa71c9 config: Make HIL-based functions available to HCL2 via a shim
Terraform has a _lot_ of functions written against HIL's function API, and
we're not ready to rewrite them all yet, so instead we shim the HIL
function API to conform to the HCL2 (really: cty) function API and thus
allow most of our existing functions to work as expected when called from
HCL2-based config files.

Not all of the functions can be fully shimmed in this way due to depending
on HIL implementation details that we can't mimic through the HCL2 API.
We don't attempt to address that yet, and instead just let them fail when
called. We will eventually address this by using first-class HCL2
functions for these few cases, thus avoiding the HIL API altogether where
we need to. (The methodology for that is already illustrated here in the
provision of jsonencode and jsondecode functions that are HCL2-native.)
2017-10-03 17:47:01 -07:00
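
The shape of the shim for the simplest case, a one-argument string function (the general shim maps HIL argument and return types to cty types; this sketch hard-codes strings):

```go
package config

import (
	"github.com/hashicorp/hil/ast"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// shimStringFunc wraps a string-to-string HIL function so it satisfies
// the cty function API used by HCL2. Illustrative only.
func shimStringFunc(hilFn ast.Function) function.Function {
	return function.New(&function.Spec{
		Params: []function.Parameter{{Name: "str", Type: cty.String}},
		Type:   function.StaticReturnType(cty.String),
		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
			result, err := hilFn.Callback([]interface{}{args[0].AsString()})
			if err != nil {
				return cty.NilVal, err
			}
			return cty.StringVal(result.(string)), nil
		},
	})
}
```
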
Martin Atkins 34e9de605c config: tests for HIL-to-HCL2 value shimming 2017-10-03 17:47:01 -07:00
Martin Atkins bbf9725134 config: Validate resource "count" for HCL2-specified resources
This early validation interpolates a placeholder value to make a
best-effort check that the count attribute is valid.
Since HCL2-specified resources can't be interpolated using the main
interpolator, here we branch and use the HCL2 API to do a
largely-equivalent (though slightly less accurate) check.

In the long run we don't really need this extra check at all, since the
validation walk does a more accurate version of the same thing. However,
we're preserving this for now in the interests of minimizing the amount
of change for the main codepath during our experiment.
2017-10-03 17:47:01 -07:00
Martin Atkins 71e68f06c4 config: allow tests to force using the HCL2 loader
Currently the default for tests is to use the old HCL loader, but we need
to be able to test aspects of the new loader as we work through the
experimental phase. This new function testConfigHCL2 is the same as
testConfig except that it forces the use of HCL2 even if the opt-in
comment isn't present, thus allowing us to implement tests that ensure
that the exact same file works in both the old and new cases.

Once the HCL2 loader becomes the default this function will be removed
and callers will start calling into the normal testConfig function.
2017-10-03 17:47:01 -07:00
Martin Atkins d91327eaa0 config: allow HCL2 experiment opt-in (build-time flag to enable)
Use the new HCL2 config loader when the opt-in comment #terraform:hcl2 is
present in a .tf file.

For now this is disabled for "normal" builds and enabled only if
explicitly configured via a linker flag during build. This is because it's
not yet in a good state to be released: the HCL2 loader produces RawConfig
objects that the validator and interpolator can't yet deal with, and so
using HCL2 for anything non-trivial currently causes Terraform to crash
in real use.
2017-10-03 17:47:01 -07:00
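
A sketch of the opt-in mechanism: a package-level string set via the Go linker's -X flag, combined with the in-file marker comment (the variable name and exact wiring are illustrative):

```go
package config

import "bytes"

// Set at build time, e.g.:
//   go build -ldflags "-X github.com/hashicorp/terraform/config.enableHCL2Experiment=yes"
var enableHCL2Experiment string

// useHCL2 reports whether a .tf source file should use the HCL2 loader.
func useHCL2(src []byte) bool {
	if enableHCL2Experiment == "" {
		return false // disabled in normal builds
	}
	return bytes.Contains(src, []byte("#terraform:hcl2"))
}
```
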
Martin Atkins b0215fcd0f config: HCL2 config loader
This loader uses the HCL2 parser and decoder to process a config file,
and then transforms the result into the same shape as would be produced
by the HCL config loader.

To avoid making changes to the existing config structures (which are
depended on across much of the codebase) we first decode into a set of
HCL2-tailored structs and then process them into the public-facing structs
that a loader is expected to return. This is a compromise to keep the
config package API broadly unchanged for now. Once we're ready to remove
the old HCL loader (which implies that we're ready to support HCL2
natively elsewhere in the codebase) we will be able to simplify this
quite considerably.

Due to some mismatches of abstraction between HCL/HIL and HCL2, some
shimming is required to get the required result.
2017-10-03 17:47:01 -07:00
Martin Atkins edbbe41b44 config: allow a HCL2 body to piggy-back on a RawConfig
At this time we're not ready to refactor the various uses of RawConfig
in Terraform core, so we'll smuggle a HCL2 body within a degenerate
RawConfig object that we can then recognize and unpack once this object
is returned to us in an interpolation call.
2017-10-03 17:47:01 -07:00
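
The smuggling pattern, roughly (field names illustrative; imports omitted since the HCL2 package location was in flux at the time):

```go
// A degenerate RawConfig carries an HCL2 body instead of the usual
// HCL/HIL representation; later codepaths detect this and branch.
type RawConfig struct {
	Raw  map[string]interface{} // legacy HCL/HIL path
	Body hcl.Body               // set only for HCL2-loaded config
}

// isHCL2 is a hypothetical check made at interpolation time.
func isHCL2(r *RawConfig) bool {
	return r.Body != nil
}
```
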
Martin Atkins c202afeca9 govendor fetch github.com/hashicorp/hcl2/... 2017-10-03 17:47:01 -07:00
Martin Atkins 86343fa895 Remove test output file accidentally added in 52c97e9f
Also, fix the missing gitignore rule that caused this to be added in the
first place.
2017-10-03 17:32:26 -07:00
James Bardin c7e9e98500 update CHANGELOG.md 2017-10-03 14:31:46 -04:00
James Bardin 91442b7146 Merge pull request #15680 from brunomcustodio/etcdv3-backend
[WIP] etcd v3 backend with lock support.
2017-10-03 14:15:58 -04:00
James Bardin e34eb38672 Merge branch 'v0.10.7-branch' into stable-website 2017-10-03 13:38:29 -04:00
James Bardin 8cf0a8ca9c faster DAG transitive reduction
In the case of highly-connected graphs, the TransitiveReduction process
was far too computationally intensive. Since no operations are applied
to the nodes, and the walk order is not even user visible, we don't need
to sort them n^2 times.
2017-10-03 11:24:59 -04:00
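
For reference, transitive reduction of a DAG needs only reachability, not ordering: an edge u->v is redundant when v is reachable from u via some other neighbor. A small self-contained sketch:

```go
package dag

// reaches reports whether to is reachable from from, by depth-first search.
func reaches(adj map[string]map[string]bool, from, to string) bool {
	seen := map[string]bool{}
	var dfs func(n string) bool
	dfs = func(n string) bool {
		if n == to {
			return true
		}
		if seen[n] {
			return false
		}
		seen[n] = true
		for m := range adj[n] {
			if dfs(m) {
				return true
			}
		}
		return false
	}
	return dfs(from)
}

// transitiveReduction removes every edge u->v for which v remains
// reachable from u through another neighbor. Walk order does not
// affect the result on a DAG, so no sorting is needed.
func transitiveReduction(adj map[string]map[string]bool) {
	for _, outs := range adj {
		for v := range outs {
			for w := range outs {
				if w != v && reaches(adj, w, v) {
					delete(outs, v)
					break
				}
			}
		}
	}
}
```
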
James Bardin 7d5c320d0e Merge pull request #16080 from Banno/add-msg-on-ssh-errors
communicator/ssh: add what error details we can for the user
2017-10-03 10:21:57 -04:00
Adam Shannon c9c2823f62 communicator/ssh: add what error details we can for the user
ssh.Waitmsg's String() method provides output which can include the
process status, signal, and message
2017-10-03 09:06:02 -05:00
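
The detail being surfaced, sketched: *ssh.ExitError embeds ssh.Waitmsg, whose String method reports the exit status, signal, and message when available:

```go
package communicator

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

func runRemote(session *ssh.Session, cmd string) error {
	if err := session.Run(cmd); err != nil {
		// Include the Waitmsg detail rather than a bare "exited with
		// non-zero status".
		if exitErr, ok := err.(*ssh.ExitError); ok {
			return fmt.Errorf("command %q failed: %s", cmd, exitErr.Waitmsg.String())
		}
		return err
	}
	return nil
}
```
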
James Bardin d477d1f6d4 Merge pull request #15553 from bonifaido/custom_s3_backend
Allow non-AWS S3 backends
2017-10-02 19:39:48 -04:00
James Bardin 67be68b521 Merge pull request #16242 from hashicorp/jbardin/race
minor race fix in tests
2017-10-02 16:50:02 -04:00
James Bardin 08ac4f3a2e Merge pull request #16089 from wgliang/optimization-merge-slice
Optimization merge slice
2017-10-02 16:40:13 -04:00
James Bardin 3662aac93f Merge pull request #16204 from hashicorp/jbardin/outputs
make outputs error
2017-10-02 16:37:36 -04:00
James Bardin e0ee1e4d6e Merge pull request #16070 from octo/defaultdatadir
backend/local: Remove unused const DefaultDataDir.
2017-10-02 16:27:23 -04:00
James Bardin 3ed7b1abec fix the providerWithLocals test
This test was set up to fail once the underlying issue was fixed; the
issue is now fixed, so the test is updated accordingly.
2017-10-02 16:21:37 -04:00
James Bardin 3c2b3df5a9 add some output error tests
Add some tests for output errors and catch the errors behind the output
errors feature flag.
2017-10-02 16:20:29 -04:00
James Bardin 715036d209 put output errors behind a feature flag
We're going to start merging breaking functionality behind feature
flags, to reduce the need for long-lived feature branches.
2017-10-02 16:20:29 -04:00
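
The flag itself is just a package-level toggle, along these lines (the name and the environment-variable wiring are illustrative, not the actual mechanism):

```go
package terraform

import "os"

// flagOutputErrors gates the breaking behavior: when unset, failed
// output interpolation keeps the old silent behavior; when set, the
// error is surfaced to the user.
var flagOutputErrors = os.Getenv("TF_X_OUTPUT_ERRORS") != ""
```
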
James Bardin 35c6a4e89d add DestroyValueReferenceTransformer
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
2017-10-02 16:20:29 -04:00
James Bardin 3414e1b996 remove dead code 2017-10-02 16:20:29 -04:00
James Bardin 7fde7d9f20 make a destroy plan work the same as destroy
When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
icreated for a destroy, and transfer that to the context when reading
the plan.
2017-10-02 16:20:29 -04:00
James Bardin 78964d305c remove broken outputs from targeted graph
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.

Prune any outputs in the TargetsTransformer that have missing
dependencies, and are not depended on by any resource. This will
maintain the existing behavior of outputs failing silently in most
cases, but allow errors to be surfaced where the output value is
required.
2017-10-02 16:20:29 -04:00
James Bardin a048bcffa0 continue on output errors during Input
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing when output interpolation
errors occur during the Input walk.
2017-10-02 16:20:29 -04:00
James Bardin 1ad97f6be8 use an EvalOpFilter for module variables
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node, for consistency and to keep the
number of possible eval node types down.
2017-10-02 16:20:29 -04:00
James Bardin c4dd31e62b "destroy" outputs too
Outputs don't need to be re-evaluated during destroy, since everything
is already in the state, so we can simply remove them.
2017-10-02 16:20:29 -04:00