govendor fetch github.com/hashicorp/hcl2/...

Martin Atkins 2017-09-20 14:39:34 -07:00
parent 86343fa895
commit c202afeca9
177 changed files with 57235 additions and 2 deletions

36
vendor/github.com/agext/levenshtein/DCO generated vendored Normal file

@@ -0,0 +1,36 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

201
vendor/github.com/agext/levenshtein/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

1
vendor/github.com/agext/levenshtein/MAINTAINERS generated vendored Normal file

@@ -0,0 +1 @@
Alex Bucataru <alex@alrux.com> (@AlexBucataru)

5
vendor/github.com/agext/levenshtein/NOTICE generated vendored Normal file

@@ -0,0 +1,5 @@
Alrux Go EXTensions (AGExt) - package levenshtein
Copyright 2016 ALRUX Inc.
This product includes software developed at ALRUX Inc.
(http://www.alrux.com/).

38
vendor/github.com/agext/levenshtein/README.md generated vendored Normal file

@@ -0,0 +1,38 @@
# A Go package for calculating the Levenshtein distance between two strings
[![Release](https://img.shields.io/github/release/agext/levenshtein.svg?style=flat)](https://github.com/agext/levenshtein/releases/latest)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg?style=flat)](https://godoc.org/github.com/agext/levenshtein) 
[![Build Status](https://travis-ci.org/agext/levenshtein.svg?branch=master&style=flat)](https://travis-ci.org/agext/levenshtein)
[![Coverage Status](https://coveralls.io/repos/github/agext/levenshtein/badge.svg?style=flat)](https://coveralls.io/github/agext/levenshtein)
[![Go Report Card](https://goreportcard.com/badge/github.com/agext/levenshtein?style=flat)](https://goreportcard.com/report/github.com/agext/levenshtein)
This package implements distance and similarity metrics for strings, based on the Levenshtein measure, in [Go](http://golang.org).
## Project Status
v1.2.1 Stable: Guaranteed no breaking changes to the API in future v1.x releases. Probably safe to use in production, though provided on an "AS IS" basis.
This package is being actively maintained. If you encounter any problems or have any suggestions for improvement, please [open an issue](https://github.com/agext/levenshtein/issues). Pull requests are welcome.
## Overview
The Levenshtein `Distance` between two strings is the minimum total cost of edits that would convert the first string into the second. The allowed edit operations are insertions, deletions, and substitutions, all at character (one UTF-8 code point) level. Each operation has a default cost of 1, but each can be assigned its own cost equal to or greater than 0.
A `Distance` of 0 means the two strings are identical, and the higher the value the more different the strings. Since in practice we are interested in finding if the two strings are "close enough", it often does not make sense to continue the calculation once the result is mathematically guaranteed to exceed a desired threshold. Providing this value to the `Distance` function allows it to take a shortcut and return a lower bound instead of an exact cost when the threshold is exceeded.
The `Similarity` function calculates the distance, then converts it into a normalized metric within the range 0..1, with 1 meaning the strings are identical, and 0 that they have nothing in common. A minimum similarity threshold can be provided to speed up the calculation of the metric for strings that are far too dissimilar for the purpose at hand. All values under this threshold are rounded down to 0.
The `Match` function provides a similarity metric, with the same range and meaning as `Similarity`, but with a bonus for string pairs that share a common prefix and have a similarity above a "bonus threshold". It uses the same method as proposed by Winkler for the Jaro distance, and the reasoning behind it is that these string pairs are very likely spelling variations or errors, and they are more closely linked than the edit distance alone would suggest.
The underlying `Calculate` function is also exported, to allow the building of other derivative metrics, if needed.
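A minimal usage sketch, assuming only the exported functions described above (the example strings are illustrative; a `nil` parameter set selects the defaults):

```go
package main

import (
	"fmt"

	"github.com/agext/levenshtein"
)

func main() {
	// Plain edit distance with the default costs (1 each) and no maximum.
	fmt.Println(levenshtein.Distance("kitten", "sitting", nil)) // 3

	// Normalized similarity in the range 0..1.
	fmt.Println(levenshtein.Similarity("kitten", "sitting", nil))

	// Match adds a Winkler-style bonus when the strings share a prefix.
	fmt.Println(levenshtein.Match("program", "programme", nil))
}
```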
## Installation
```
go get github.com/agext/levenshtein
```
## License
Package levenshtein is released under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.

290
vendor/github.com/agext/levenshtein/levenshtein.go generated vendored Normal file

@@ -0,0 +1,290 @@
// Copyright 2016 ALRUX Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package levenshtein implements distance and similarity metrics for strings, based on the Levenshtein measure.
The Levenshtein `Distance` between two strings is the minimum total cost of edits that would convert the first string into the second. The allowed edit operations are insertions, deletions, and substitutions, all at character (one UTF-8 code point) level. Each operation has a default cost of 1, but each can be assigned its own cost equal to or greater than 0.
A `Distance` of 0 means the two strings are identical, and the higher the value the more different the strings. Since in practice we are interested in finding if the two strings are "close enough", it often does not make sense to continue the calculation once the result is mathematically guaranteed to exceed a desired threshold. Providing this value to the `Distance` function allows it to take a shortcut and return a lower bound instead of an exact cost when the threshold is exceeded.
The `Similarity` function calculates the distance, then converts it into a normalized metric within the range 0..1, with 1 meaning the strings are identical, and 0 that they have nothing in common. A minimum similarity threshold can be provided to speed up the calculation of the metric for strings that are far too dissimilar for the purpose at hand. All values under this threshold are rounded down to 0.
The `Match` function provides a similarity metric, with the same range and meaning as `Similarity`, but with a bonus for string pairs that share a common prefix and have a similarity above a "bonus threshold". It uses the same method as proposed by Winkler for the Jaro distance, and the reasoning behind it is that these string pairs are very likely spelling variations or errors, and they are more closely linked than the edit distance alone would suggest.
The underlying `Calculate` function is also exported, to allow the building of other derivative metrics, if needed.
*/
package levenshtein
// Calculate determines the Levenshtein distance between two strings, using
// the given costs for each edit operation. It returns the distance along with
// the lengths of the longest common prefix and suffix.
//
// If maxCost is non-zero, the calculation stops as soon as the distance is determined
// to be greater than maxCost. Therefore, any return value higher than maxCost is a
// lower bound for the actual distance.
func Calculate(str1, str2 []rune, maxCost, insCost, subCost, delCost int) (dist, prefixLen, suffixLen int) {
l1, l2 := len(str1), len(str2)
// trim common prefix, if any, as it doesn't affect the distance
for ; prefixLen < l1 && prefixLen < l2; prefixLen++ {
if str1[prefixLen] != str2[prefixLen] {
break
}
}
str1, str2 = str1[prefixLen:], str2[prefixLen:]
l1 -= prefixLen
l2 -= prefixLen
// trim common suffix, if any, as it doesn't affect the distance
for 0 < l1 && 0 < l2 {
if str1[l1-1] != str2[l2-1] {
str1, str2 = str1[:l1], str2[:l2]
break
}
l1--
l2--
suffixLen++
}
// if the first string is empty, the distance is the length of the second string times the cost of insertion
if l1 == 0 {
dist = l2 * insCost
return
}
// if the second string is empty, the distance is the length of the first string times the cost of deletion
if l2 == 0 {
dist = l1 * delCost
return
}
// variables used in inner "for" loops
var y, dy, c, l int
// if maxCost is greater than or equal to the maximum possible distance, it's equivalent to 'unlimited'
if maxCost > 0 {
if subCost < delCost+insCost {
if maxCost >= l1*subCost+(l2-l1)*insCost {
maxCost = 0
}
} else {
if maxCost >= l1*delCost+l2*insCost {
maxCost = 0
}
}
}
if maxCost > 0 {
// prefer the longer string first, to minimize time;
// a swap also transposes the meanings of insertion and deletion.
if l1 < l2 {
str1, str2, l1, l2, insCost, delCost = str2, str1, l2, l1, delCost, insCost
}
// the length differential times cost of deletion is a lower bound for the cost;
// if it is higher than the maxCost, there is no point going into the main calculation.
if dist = (l1 - l2) * delCost; dist > maxCost {
return
}
d := make([]int, l1+1)
// offset and length of d in the current row
doff, dlen := 0, 1
for y, dy = 1, delCost; y <= l1 && dy <= maxCost; dlen++ {
d[y] = dy
y++
dy = y * delCost
}
// fmt.Printf("%q -> %q: init doff=%d dlen=%d d[%d:%d]=%v\n", str1, str2, doff, dlen, doff, doff+dlen, d[doff:doff+dlen])
for x := 0; x < l2; x++ {
dy, d[doff] = d[doff], d[doff]+insCost
for d[doff] > maxCost && dlen > 0 {
if str1[doff] != str2[x] {
dy += subCost
}
doff++
dlen--
if c = d[doff] + insCost; c < dy {
dy = c
}
dy, d[doff] = d[doff], dy
}
for y, l = doff, doff+dlen-1; y < l; dy, d[y] = d[y], dy {
if str1[y] != str2[x] {
dy += subCost
}
if c = d[y] + delCost; c < dy {
dy = c
}
y++
if c = d[y] + insCost; c < dy {
dy = c
}
}
if y < l1 {
if str1[y] != str2[x] {
dy += subCost
}
if c = d[y] + delCost; c < dy {
dy = c
}
for ; dy <= maxCost && y < l1; dy, d[y] = dy+delCost, dy {
y++
dlen++
}
}
// fmt.Printf("%q -> %q: x=%d doff=%d dlen=%d d[%d:%d]=%v\n", str1, str2, x, doff, dlen, doff, doff+dlen, d[doff:doff+dlen])
if dlen == 0 {
dist = maxCost + 1
return
}
}
if doff+dlen-1 < l1 {
dist = maxCost + 1
return
}
dist = d[l1]
} else {
// ToDo: This is O(l1*l2) time and O(min(l1,l2)) space; investigate if it is
// worth to implement diagonal approach - O(l1*(1+dist)) time, up to O(l1*l2) space
// http://www.csse.monash.edu.au/~lloyd/tildeStrings/Alignment/92.IPL.html
// prefer the shorter string first, to minimize space; time is O(l1*l2) anyway;
// a swap also transposes the meanings of insertion and deletion.
if l1 > l2 {
str1, str2, l1, l2, insCost, delCost = str2, str1, l2, l1, delCost, insCost
}
d := make([]int, l1+1)
for y = 1; y <= l1; y++ {
d[y] = y * delCost
}
for x := 0; x < l2; x++ {
dy, d[0] = d[0], d[0]+insCost
for y = 0; y < l1; dy, d[y] = d[y], dy {
if str1[y] != str2[x] {
dy += subCost
}
if c = d[y] + delCost; c < dy {
dy = c
}
y++
if c = d[y] + insCost; c < dy {
dy = c
}
}
}
dist = d[l1]
}
return
}
// Distance returns the Levenshtein distance between str1 and str2, using the
// default or provided cost values. Pass nil for the third argument to use the
// default cost of 1 for all three operations, with no maximum.
func Distance(str1, str2 string, p *Params) int {
if p == nil {
p = defaultParams
}
dist, _, _ := Calculate([]rune(str1), []rune(str2), p.maxCost, p.insCost, p.subCost, p.delCost)
return dist
}
// Similarity returns a score in the range of 0..1 for how similar the two strings are.
// A score of 1 means the strings are identical, and 0 means they have nothing in common.
//
// A nil third argument uses the default cost of 1 for all three operations.
//
// If a non-zero MinScore value is provided in the parameters, scores lower than it
// will be returned as 0.
func Similarity(str1, str2 string, p *Params) float64 {
return Match(str1, str2, p.Clone().BonusThreshold(1.1)) // guaranteed no bonus
}
// Match returns a similarity score adjusted by the same method as proposed by Winkler for
// the Jaro distance - giving a bonus to string pairs that share a common prefix, only if their
// similarity score is already over a threshold.
//
// The score is in the range of 0..1, with 1 meaning the strings are identical,
// and 0 meaning they have nothing in common.
//
// A nil third argument uses the default cost of 1 for all three operations, maximum length of
// common prefix to consider for bonus of 4, scaling factor of 0.1, and bonus threshold of 0.7.
//
// If a non-zero MinScore value is provided in the parameters, scores lower than it
// will be returned as 0.
func Match(str1, str2 string, p *Params) float64 {
s1, s2 := []rune(str1), []rune(str2)
l1, l2 := len(s1), len(s2)
// two empty strings are identical; shortcut also avoids divByZero issues later on.
if l1 == 0 && l2 == 0 {
return 1
}
if p == nil {
p = defaultParams
}
// a min over 1 can never be satisfied, so the score is 0.
if p.minScore > 1 {
return 0
}
insCost, delCost, maxDist, max := p.insCost, p.delCost, 0, 0
if l1 > l2 {
l1, l2, insCost, delCost = l2, l1, delCost, insCost
}
if p.subCost < delCost+insCost {
maxDist = l1*p.subCost + (l2-l1)*insCost
} else {
maxDist = l1*delCost + l2*insCost
}
// a zero min is always satisfied, so no need to set a max cost.
if p.minScore > 0 {
// if p.minScore is lower than p.bonusThreshold, we can use a simplified formula
// for the max cost, because a sim score below min cannot receive a bonus.
if p.minScore < p.bonusThreshold {
// round down the max - a cost equal to a rounded up max would already be under min.
max = int((1 - p.minScore) * float64(maxDist))
} else {
// p.minScore <= sim + p.bonusPrefix*p.bonusScale*(1-sim)
// p.minScore <= (1-dist/maxDist) + p.bonusPrefix*p.bonusScale*(1-(1-dist/maxDist))
// p.minScore <= 1 - dist/maxDist + p.bonusPrefix*p.bonusScale*dist/maxDist
// 1 - p.minScore >= dist/maxDist - p.bonusPrefix*p.bonusScale*dist/maxDist
// (1-p.minScore)*maxDist/(1-p.bonusPrefix*p.bonusScale) >= dist
max = int((1 - p.minScore) * float64(maxDist) / (1 - float64(p.bonusPrefix)*p.bonusScale))
}
}
dist, pl, _ := Calculate(s1, s2, max, p.insCost, p.subCost, p.delCost)
if max > 0 && dist > max {
return 0
}
sim := 1 - float64(dist)/float64(maxDist)
if sim >= p.bonusThreshold && sim < 1 && p.bonusPrefix > 0 && p.bonusScale > 0 {
if pl > p.bonusPrefix {
pl = p.bonusPrefix
}
sim += float64(pl) * p.bonusScale * (1 - sim)
}
if sim < p.minScore {
return 0
}
return sim
}
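To illustrate the `maxCost` behavior documented on `Calculate` above, a hedged sketch (the inputs are illustrative; the expected outputs follow from the prefix/suffix trimming in the code):

```go
package main

import (
	"fmt"

	"github.com/agext/levenshtein"
)

func main() {
	// A maxCost of 0 means "unlimited": the exact distance is returned,
	// along with the lengths of the trimmed common prefix and suffix.
	dist, prefixLen, suffixLen := levenshtein.Calculate(
		[]rune("levenshtein"), []rune("levenstein"),
		0, 1, 1, 1, // maxCost, insCost, subCost, delCost
	)
	fmt.Println(dist, prefixLen, suffixLen) // 1 6 4

	// With a small non-zero maxCost, the calculation stops early once the
	// threshold is exceeded and the result is only a lower bound.
	bound, _, _ := levenshtein.Calculate([]rune("abcdef"), []rune("uvwxyz"), 2, 1, 1, 1)
	fmt.Println(bound > 2) // true; the exact distance (6) is never computed
}
```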

152
vendor/github.com/agext/levenshtein/params.go generated vendored Normal file

@@ -0,0 +1,152 @@
// Copyright 2016 ALRUX Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package levenshtein
// Params represents a set of parameter values for the various formulas involved
// in the calculation of the Levenshtein string metrics.
type Params struct {
insCost int
subCost int
delCost int
maxCost int
minScore float64
bonusPrefix int
bonusScale float64
bonusThreshold float64
}
var (
defaultParams = NewParams()
)
// NewParams creates a new set of parameters and initializes it with the default values.
func NewParams() *Params {
return &Params{
insCost: 1,
subCost: 1,
delCost: 1,
maxCost: 0,
minScore: 0,
bonusPrefix: 4,
bonusScale: .1,
bonusThreshold: .7,
}
}
// Clone returns a pointer to a copy of the receiver parameter set, or of a new
// default parameter set if the receiver is nil.
func (p *Params) Clone() *Params {
if p == nil {
return NewParams()
}
return &Params{
insCost: p.insCost,
subCost: p.subCost,
delCost: p.delCost,
maxCost: p.maxCost,
minScore: p.minScore,
bonusPrefix: p.bonusPrefix,
bonusScale: p.bonusScale,
bonusThreshold: p.bonusThreshold,
}
}
// InsCost overrides the default value of 1 for the cost of insertion.
// The new value must be zero or positive.
func (p *Params) InsCost(v int) *Params {
if v >= 0 {
p.insCost = v
}
return p
}
// SubCost overrides the default value of 1 for the cost of substitution.
// The new value must be zero or positive.
func (p *Params) SubCost(v int) *Params {
if v >= 0 {
p.subCost = v
}
return p
}
// DelCost overrides the default value of 1 for the cost of deletion.
// The new value must be zero or positive.
func (p *Params) DelCost(v int) *Params {
if v >= 0 {
p.delCost = v
}
return p
}
// MaxCost overrides the default value of 0 (meaning unlimited) for the maximum cost.
// The calculation of Distance() stops when the result is guaranteed to exceed
// this maximum, returning a lower-bound rather than exact value.
// The new value must be zero or positive.
func (p *Params) MaxCost(v int) *Params {
if v >= 0 {
p.maxCost = v
}
return p
}
// MinScore overrides the default value of 0 for the minimum similarity score.
// Scores below this threshold are returned as 0 by Similarity() and Match().
// The new value must be zero or positive. Note that a minimum greater than 1
// can never be satisfied, resulting in a score of 0 for any pair of strings.
func (p *Params) MinScore(v float64) *Params {
if v >= 0 {
p.minScore = v
}
return p
}
// BonusPrefix overrides the default value for the maximum length of
// common prefix to be considered for bonus by Match().
// The new value must be zero or positive.
func (p *Params) BonusPrefix(v int) *Params {
if v >= 0 {
p.bonusPrefix = v
}
return p
}
// BonusScale overrides the default value for the scaling factor used by Match()
// in calculating the bonus.
// The new value must be zero or positive. To guarantee that the similarity score
// remains in the interval 0..1, this scaling factor is not allowed to exceed
// 1 / BonusPrefix.
func (p *Params) BonusScale(v float64) *Params {
if v >= 0 {
p.bonusScale = v
}
// the bonus cannot exceed (1-sim), or the score may become greater than 1.
if float64(p.bonusPrefix)*p.bonusScale > 1 {
p.bonusScale = 1 / float64(p.bonusPrefix)
}
return p
}
// BonusThreshold overrides the default value for the minimum similarity score
// for which Match() can assign a bonus.
// The new value must be zero or positive. Note that a threshold greater than 1
// effectively makes Match() become the equivalent of Similarity().
func (p *Params) BonusThreshold(v float64) *Params {
if v >= 0 {
p.bonusThreshold = v
}
return p
}
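A hedged sketch of the fluent configuration style this file defines (the cost and bonus values chosen here are illustrative, not recommendations):

```go
package main

import (
	"fmt"

	"github.com/agext/levenshtein"
)

func main() {
	// Each setter validates its input and returns the receiver,
	// so parameters can be configured in a single chain.
	p := levenshtein.NewParams().
		SubCost(2).      // substitutions now cost twice as much
		MinScore(0.8).   // Similarity/Match report scores below 0.8 as 0
		BonusPrefix(6).  // consider up to 6 leading characters for the bonus
		BonusScale(0.05) // stays within the 1/BonusPrefix cap

	fmt.Println(levenshtein.Match("terraform", "terrafrom", p))
}
```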

95
vendor/github.com/apparentlymart/go-textseg/LICENSE generated vendored Normal file

@@ -0,0 +1,95 @@
Copyright (c) 2017 Martin Atkins
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---------
Unicode table generation programs are under a separate copyright and license:
Copyright (c) 2014 Couchbase, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied. See the License for the specific language governing permissions
and limitations under the License.
---------
Grapheme break data is provided as part of the Unicode character database,
copyright 2016 Unicode, Inc., which is provided with the following license:
Unicode Data Files include all data files under the directories
http://www.unicode.org/Public/, http://www.unicode.org/reports/,
http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, and
http://www.unicode.org/utility/trac/browser/.
Unicode Data Files do not include PDF online code charts under the
directory http://www.unicode.org/Public/.
Software includes any source code published in the Unicode Standard
or under the directories
http://www.unicode.org/Public/, http://www.unicode.org/reports/,
http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, and
http://www.unicode.org/utility/trac/browser/.
NOTICE TO USER: Carefully read the following legal agreement.
BY DOWNLOADING, INSTALLING, COPYING OR OTHERWISE USING UNICODE INC.'S
DATA FILES ("DATA FILES"), AND/OR SOFTWARE ("SOFTWARE"),
YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE
TERMS AND CONDITIONS OF THIS AGREEMENT.
IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE
THE DATA FILES OR SOFTWARE.
COPYRIGHT AND PERMISSION NOTICE
Copyright © 1991-2017 Unicode, Inc. All rights reserved.
Distributed under the Terms of Use in http://www.unicode.org/copyright.html.
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Unicode data files and any associated documentation
(the "Data Files") or Unicode software and any associated documentation
(the "Software") to deal in the Data Files or Software
without restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, and/or sell copies of
the Data Files or Software, and to permit persons to whom the Data Files
or Software are furnished to do so, provided that either
(a) this copyright and permission notice appear with all copies
of the Data Files or Software, or
(b) this copyright and permission notice appear in associated
Documentation.
THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT OF THIRD PARTY RIGHTS.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS
NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL
DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THE DATA FILES OR SOFTWARE.
Except as contained in this notice, the name of a copyright holder
shall not be used in advertising or otherwise to promote the sale,
use or other dealings in these Data Files or Software without prior
written authorization of the copyright holder.


@@ -0,0 +1,30 @@
package textseg
import (
"bufio"
"bytes"
)
// AllTokens is a utility that uses a bufio.SplitFunc to produce a slice of
// all of the recognized tokens in the given buffer.
func AllTokens(buf []byte, splitFunc bufio.SplitFunc) ([][]byte, error) {
scanner := bufio.NewScanner(bytes.NewReader(buf))
scanner.Split(splitFunc)
var ret [][]byte
for scanner.Scan() {
ret = append(ret, scanner.Bytes())
}
return ret, scanner.Err()
}
// TokenCount is a utility that uses a bufio.SplitFunc to count the number of
// recognized tokens in the given buffer.
func TokenCount(buf []byte, splitFunc bufio.SplitFunc) (int, error) {
scanner := bufio.NewScanner(bytes.NewReader(buf))
scanner.Split(splitFunc)
var ret int
for scanner.Scan() {
ret++
}
return ret, scanner.Err()
}
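A hedged usage sketch for these helpers, assuming the conventional import path `github.com/apparentlymart/go-textseg/textseg` and the `ScanGraphemeClusters` split function defined elsewhere in this package (the input string is illustrative):

```go
package main

import (
	"fmt"

	"github.com/apparentlymart/go-textseg/textseg"
)

func main() {
	buf := []byte("héllo")

	// Count grapheme clusters rather than bytes or runes.
	n, err := textseg.TokenCount(buf, textseg.ScanGraphemeClusters)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 5, even if the accented letter spans two code points

	// Or collect the cluster byte slices themselves.
	clusters, _ := textseg.AllTokens(buf, textseg.ScanGraphemeClusters)
	for _, c := range clusters {
		fmt.Printf("%q\n", c)
	}
}
```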


@@ -0,0 +1,7 @@
package textseg
//go:generate go run make_tables.go -output tables.go
//go:generate go run make_test_tables.go -output tables_test.go
//go:generate ruby unicode2ragel.rb --url=http://www.unicode.org/Public/9.0.0/ucd/auxiliary/GraphemeBreakProperty.txt -m GraphemeCluster -p "Prepend,CR,LF,Control,Extend,Regional_Indicator,SpacingMark,L,V,T,LV,LVT,E_Base,E_Modifier,ZWJ,Glue_After_Zwj,E_Base_GAZ" -o grapheme_clusters_table.rl
//go:generate ragel -Z grapheme_clusters.rl
//go:generate gofmt -w grapheme_clusters.go

File diff suppressed because it is too large


@@ -0,0 +1,132 @@
package textseg
import (
"errors"
"unicode/utf8"
)
// Generated from grapheme_clusters.rl. DO NOT EDIT
%%{
# (except you are actually in grapheme_clusters.rl here, so edit away!)
machine graphclust;
write data;
}%%
var Error = errors.New("invalid UTF8 text")
// ScanGraphemeClusters is a split function for bufio.Scanner that splits
// on grapheme cluster boundaries.
func ScanGraphemeClusters(data []byte, atEOF bool) (int, []byte, error) {
if len(data) == 0 {
return 0, nil, nil
}
// Ragel state
cs := 0 // Current State
p := 0 // "Pointer" into data
pe := len(data) // End-of-data "pointer"
ts := 0
te := 0
act := 0
eof := pe
// Make Go compiler happy
_ = ts
_ = te
_ = act
_ = eof
startPos := 0
endPos := 0
%%{
include GraphemeCluster "grapheme_clusters_table.rl";
action start {
startPos = p
}
action end {
endPos = p
}
action emit {
return endPos+1, data[startPos:endPos+1], nil
}
ZWJGlue = ZWJ (Glue_After_Zwj | E_Base_GAZ Extend* E_Modifier?)?;
AnyExtender = Extend | ZWJGlue | SpacingMark;
Extension = AnyExtender*;
ReplacementChar = (0xEF 0xBF 0xBD);
CRLFSeq = CR LF;
ControlSeq = Control | ReplacementChar;
HangulSeq = (
L+ (((LV? V+ | LVT) T*)?|LV?) |
LV V* T* |
V+ T* |
LVT T* |
T+
) Extension;
EmojiSeq = (E_Base | E_Base_GAZ) Extend* E_Modifier? Extension;
ZWJSeq = ZWJGlue Extension;
EmojiFlagSeq = Regional_Indicator Regional_Indicator? Extension;
UTF8Cont = 0x80 .. 0xBF;
AnyUTF8 = (
0x00..0x7F |
0xC0..0xDF . UTF8Cont |
0xE0..0xEF . UTF8Cont . UTF8Cont |
0xF0..0xF7 . UTF8Cont . UTF8Cont . UTF8Cont
);
# OtherSeq is any character that isn't at the start of one of the extended sequences above, followed by extension
OtherSeq = (AnyUTF8 - (CR|LF|Control|ReplacementChar|L|LV|V|LVT|T|E_Base|E_Base_GAZ|ZWJ|Regional_Indicator|Prepend)) Extension;
# PrependSeq is prepend followed by any of the other patterns above, except control characters which explicitly break
PrependSeq = Prepend+ (HangulSeq|EmojiSeq|ZWJSeq|EmojiFlagSeq|OtherSeq)?;
CRLFTok = CRLFSeq >start @end;
ControlTok = ControlSeq >start @end;
HangulTok = HangulSeq >start @end;
EmojiTok = EmojiSeq >start @end;
ZWJTok = ZWJSeq >start @end;
EmojiFlagTok = EmojiFlagSeq >start @end;
OtherTok = OtherSeq >start @end;
PrependTok = PrependSeq >start @end;
main := |*
CRLFTok => emit;
ControlTok => emit;
HangulTok => emit;
EmojiTok => emit;
ZWJTok => emit;
EmojiFlagTok => emit;
PrependTok => emit;
OtherTok => emit;
# any single valid UTF-8 character would also be valid per spec,
# but we'll handle that separately after the loop so we can deal
# with requesting more bytes if we're not at EOF.
*|;
write init;
write exec;
}%%
// If we fall out here then we were unable to complete a sequence.
// If we weren't able to complete a sequence then either we've
// reached the end of a partial buffer (so there's more data to come)
// or we have an isolated symbol that would normally be part of a
// grapheme cluster but has appeared in isolation here.
if !atEOF {
// Request more
return 0, nil, nil
}
// Just take the first UTF-8 sequence and return that.
_, seqLen := utf8.DecodeRune(data)
return seqLen, data[:seqLen], nil
}
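Since `ScanGraphemeClusters` is an ordinary `bufio.SplitFunc`, it can also drive a `bufio.Scanner` directly; a hedged sketch (same assumed import path as above, illustrative input):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"strings"

	"github.com/apparentlymart/go-textseg/textseg"
)

func main() {
	scanner := bufio.NewScanner(strings.NewReader("🇦🇺 g'day"))
	scanner.Split(textseg.ScanGraphemeClusters)
	for scanner.Scan() {
		// The flag's two regional-indicator code points arrive together
		// as a single grapheme cluster token.
		fmt.Printf("%q\n", scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```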

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,335 @@
#!/usr/bin/env ruby
#
# This script has been updated to accept more command-line arguments:
#
# -u, --url URL to process
# -m, --machine Machine name
# -p, --properties Properties to add to the machine
# -o, --output Write output to file
#
# Updated by: Marty Schoch <marty.schoch@gmail.com>
#
# This script uses the unicode spec to generate a Ragel state machine
# that recognizes unicode alphanumeric characters. It generates 5
# character classes: uupper, ulower, ualpha, udigit, and ualnum.
# Currently supported encodings are UTF-8 [default] and UCS-4.
#
# Usage: unicode2ragel.rb [options]
# -e, --encoding [ucs4 | utf8] Data encoding
# -h, --help Show this message
#
# This script was originally written as part of the Ferret search
# engine library.
#
# Author: Rakan El-Khalil <rakan@well.com>
require 'optparse'
require 'open-uri'
ENCODINGS = [ :utf8, :ucs4 ]
ALPHTYPES = { :utf8 => "byte", :ucs4 => "rune" }
DEFAULT_CHART_URL = "http://www.unicode.org/Public/5.1.0/ucd/DerivedCoreProperties.txt"
DEFAULT_MACHINE_NAME= "WChar"
###
# Display vars & default option
TOTAL_WIDTH = 80
RANGE_WIDTH = 23
@encoding = :utf8
@chart_url = DEFAULT_CHART_URL
machine_name = DEFAULT_MACHINE_NAME
properties = []
@output = $stdout
###
# Option parsing
cli_opts = OptionParser.new do |opts|
opts.on("-e", "--encoding [ucs4 | utf8]", "Data encoding") do |o|
@encoding = o.downcase.to_sym
end
opts.on("-h", "--help", "Show this message") do
puts opts
exit
end
opts.on("-u", "--url URL", "URL to process") do |o|
@chart_url = o
end
opts.on("-m", "--machine MACHINE_NAME", "Machine name") do |o|
machine_name = o
end
opts.on("-p", "--properties x,y,z", Array, "Properties to add to machine") do |o|
properties = o
end
opts.on("-o", "--output FILE", "output file") do |o|
@output = File.new(o, "w+")
end
end
cli_opts.parse(ARGV)
unless ENCODINGS.member? @encoding
puts "Invalid encoding: #{@encoding}"
puts cli_opts
exit
end
##
# Downloads the document at url and yields every alpha line's hex
# range and description.
def each_alpha( url, property )
open( url ) do |file|
file.each_line do |line|
next if line =~ /^#/;
next if line !~ /; #{property} #/;
range, description = line.split(/;/)
range.strip!
description.gsub!(/.*#/, '').strip!
if range =~ /\.\./
start, stop = range.split '..'
else start = stop = range
end
yield start.hex .. stop.hex, description
end
end
end
###
# Formats to hex at minimum width
def to_hex( n )
r = "%0X" % n
r = "0#{r}" unless (r.length % 2).zero?
r
end
###
# UCS4 is just a straight hex conversion of the unicode codepoint.
def to_ucs4( range )
rangestr = "0x" + to_hex(range.begin)
rangestr << "..0x" + to_hex(range.end) if range.begin != range.end
[ rangestr ]
end
##
# 0x00 - 0x7f -> 0zzzzzzz[7]
# 0x80 - 0x7ff -> 110yyyyy[5] 10zzzzzz[6]
# 0x800 - 0xffff -> 1110xxxx[4] 10yyyyyy[6] 10zzzzzz[6]
# 0x010000 - 0x10ffff -> 11110www[3] 10xxxxxx[6] 10yyyyyy[6] 10zzzzzz[6]
UTF8_BOUNDARIES = [0x7f, 0x7ff, 0xffff, 0x10ffff]
def to_utf8_enc( n )
r = 0
if n <= 0x7f
r = n
elsif n <= 0x7ff
y = 0xc0 | (n >> 6)
z = 0x80 | (n & 0x3f)
r = y << 8 | z
elsif n <= 0xffff
x = 0xe0 | (n >> 12)
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = x << 16 | y << 8 | z
elsif n <= 0x10ffff
w = 0xf0 | (n >> 18)
x = 0x80 | (n >> 12) & 0x3f
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = w << 24 | x << 16 | y << 8 | z
end
to_hex(r)
end
def from_utf8_enc( n )
n = n.hex
r = 0
if n <= 0x7f
r = n
elsif n <= 0xdfff
y = (n >> 8) & 0x1f
z = n & 0x3f
r = y << 6 | z
elsif n <= 0xefffff
x = (n >> 16) & 0x0f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = x << 10 | y << 6 | z
elsif n <= 0xf7ffffff
w = (n >> 24) & 0x07
x = (n >> 16) & 0x3f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = w << 18 | x << 12 | y << 6 | z
end
r
end
###
# Given a range, splits it up into ranges that can be continuously
# encoded into utf8. Eg: 0x00 .. 0xff => [0x00..0x7f, 0x80..0xff]
# This is not strictly needed since the current [5.1] unicode standard
# doesn't have ranges that straddle utf8 boundaries. This is included
# for completeness as there is no telling if that will ever change.
def utf8_ranges( range )
ranges = []
UTF8_BOUNDARIES.each do |max|
if range.begin <= max
if range.end <= max
ranges << range
return ranges
end
ranges << (range.begin .. max)
range = (max + 1) .. range.end
end
end
ranges
end
def build_range( start, stop )
size = start.size/2
left = size - 1
return [""] if size < 1
a = start[0..1]
b = stop[0..1]
###
# Shared prefix
if a == b
return build_range(start[2..-1], stop[2..-1]).map do |elt|
"0x#{a} " + elt
end
end
###
# Unshared prefix, end of run
return ["0x#{a}..0x#{b} "] if left.zero?
###
# Unshared prefix, not end of run
# Range can be 0x123456..0x56789A
# Which is equivalent to:
# 0x123456 .. 0x12FFFF
# 0x130000 .. 0x55FFFF
# 0x560000 .. 0x56789A
ret = []
ret << build_range(start, a + "FF" * left)
###
# Only generate middle range if need be.
if a.hex+1 != b.hex
max = to_hex(b.hex - 1)
max = "FF" if b == "FF"
ret << "0x#{to_hex(a.hex+1)}..0x#{max} " + "0x00..0xFF " * left
end
###
# Don't generate last range if it is covered by first range
ret << build_range(b + "00" * left, stop) unless b == "FF"
ret.flatten!
end
def to_utf8( range )
utf8_ranges( range ).map do |r|
begin_enc = to_utf8_enc(r.begin)
end_enc = to_utf8_enc(r.end)
build_range begin_enc, end_enc
end.flatten!
end
##
# Perform a 3-way comparison of the number of codepoints advertised by
# the unicode spec for the given range, the originally parsed range,
# and the resulting utf8 encoded range.
def count_codepoints( code )
code.split(' ').inject(1) do |acc, elt|
if elt =~ /0x(.+)\.\.0x(.+)/
if @encoding == :utf8
acc * (from_utf8_enc($2) - from_utf8_enc($1) + 1)
else
acc * ($2.hex - $1.hex + 1)
end
else
acc
end
end
end
def is_valid?( range, desc, codes )
spec_count = 1
spec_count = $1.to_i if desc =~ /\[(\d+)\]/
range_count = range.end - range.begin + 1
sum = codes.inject(0) { |acc, elt| acc + count_codepoints(elt) }
sum == spec_count and sum == range_count
end
##
# Generate the state machine to stdout
def generate_machine( name, property )
pipe = " "
@output.puts " #{name} = "
each_alpha( @chart_url, property ) do |range, desc|
codes = (@encoding == :ucs4) ? to_ucs4(range) : to_utf8(range)
#raise "Invalid encoding of range #{range}: #{codes.inspect}" unless
# is_valid? range, desc, codes
range_width = codes.map { |a| a.size }.max
range_width = RANGE_WIDTH if range_width < RANGE_WIDTH
desc_width = TOTAL_WIDTH - RANGE_WIDTH - 11
desc_width -= (range_width - RANGE_WIDTH) if range_width > RANGE_WIDTH
if desc.size > desc_width
desc = desc[0..desc_width - 4] + "..."
end
codes.each_with_index do |r, idx|
desc = "" unless idx.zero?
code = "%-#{range_width}s" % r
@output.puts " #{pipe} #{code} ##{desc}"
pipe = "|"
end
end
@output.puts " ;"
@output.puts ""
end
@output.puts <<EOF
# The following Ragel file was autogenerated with #{$0}
# from: #{@chart_url}
#
# It defines #{properties}.
#
# To use this, make sure that your alphtype is set to #{ALPHTYPES[@encoding]},
# and that your input is in #{@encoding}.
%%{
machine #{machine_name};
EOF
properties.each { |x| generate_machine( x, x ) }
@output.puts <<EOF
}%%
EOF


@@ -0,0 +1,19 @@
package textseg
import "unicode/utf8"
// ScanUTF8Sequences is a split function for bufio.Scanner that splits
// on UTF-8 sequence boundaries.
//
// This is included largely for completeness, since this behavior is already
// built in to Go when ranging over a string.
func ScanUTF8Sequences(data []byte, atEOF bool) (int, []byte, error) {
if len(data) == 0 {
return 0, nil, nil
}
r, seqLen := utf8.DecodeRune(data)
if r == utf8.RuneError && !atEOF {
return 0, nil, nil
}
return seqLen, data[:seqLen], nil
}
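A hedged sketch of the equivalence noted in the comment above: each token is one UTF-8 sequence, so the token count matches ranging over the string rune by rune (same assumed import path; `AllTokens` is defined earlier in this package):

```go
package main

import (
	"fmt"

	"github.com/apparentlymart/go-textseg/textseg"
)

func main() {
	input := []byte("héllo")

	// One token per UTF-8 sequence, i.e. per rune.
	seqs, err := textseg.AllTokens(input, textseg.ScanUTF8Sequences)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(seqs) == len([]rune(string(input)))) // true
}
```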

353
vendor/github.com/hashicorp/hcl2/LICENSE generated vendored Normal file

@@ -0,0 +1,353 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor’s Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party’s
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients’ rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients’
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

304
vendor/github.com/hashicorp/hcl2/gohcl/decode.go generated vendored Normal file

@ -0,0 +1,304 @@
package gohcl
import (
"fmt"
"reflect"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
// DecodeBody extracts the configuration within the given body into the given
// value. This value must be a non-nil pointer to either a struct or
// a map, where in the former case the configuration will be decoded using
// struct tags and in the latter case only attributes are allowed and their
// values are decoded into the map.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
rv := reflect.ValueOf(val)
if rv.Kind() != reflect.Ptr {
panic(fmt.Sprintf("target value must be a pointer, not %s", rv.Type().String()))
}
return decodeBodyToValue(body, ctx, rv.Elem())
}
func decodeBodyToValue(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
et := val.Type()
switch et.Kind() {
case reflect.Struct:
return decodeBodyToStruct(body, ctx, val)
case reflect.Map:
return decodeBodyToMap(body, ctx, val)
default:
panic(fmt.Sprintf("target value must be pointer to struct or map, not %s", et.String()))
}
}
func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
schema, partial := ImpliedBodySchema(val.Interface())
var content *hcl.BodyContent
var leftovers hcl.Body
var diags hcl.Diagnostics
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
if content == nil {
return diags
}
tags := getFieldTags(val.Type())
if tags.Remain != nil {
fieldIdx := *tags.Remain
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(leftovers))
case attrsType.AssignableTo(field.Type):
attrs, attrsDiags := leftovers.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
fieldV.Set(reflect.ValueOf(attrs))
default:
diags = append(diags, decodeBodyToValue(leftovers, ctx, fieldV)...)
}
}
for name, fieldIdx := range tags.Attributes {
attr := content.Attributes[name]
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
if attr == nil {
if !exprType.AssignableTo(field.Type) {
continue
}
// As a special case, if the target is of type hcl.Expression then
// we'll assign an actual expression that evaluates to a cty null,
// so the caller can deal with it within the cty realm rather
// than within the Go realm.
synthExpr := hcl.StaticExpr(cty.NullVal(cty.DynamicPseudoType), body.MissingItemRange())
fieldV.Set(reflect.ValueOf(synthExpr))
continue
}
switch {
case attrType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr))
case exprType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr.Expr))
default:
diags = append(diags, DecodeExpression(
attr.Expr, ctx, fieldV.Addr().Interface(),
)...)
}
}
blocksByType := content.Blocks.ByType()
for typeName, fieldIdx := range tags.Blocks {
blocks := blocksByType[typeName]
field := val.Type().Field(fieldIdx)
ty := field.Type
isSlice := false
isPtr := false
if ty.Kind() == reflect.Slice {
isSlice = true
ty = ty.Elem()
}
if ty.Kind() == reflect.Ptr {
isPtr = true
ty = ty.Elem()
}
if len(blocks) > 1 && !isSlice {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Duplicate %s block", typeName),
Detail: fmt.Sprintf(
"Only one %s block is allowed. Another was defined at %s.",
typeName, blocks[0].DefRange.String(),
),
Subject: &blocks[1].DefRange,
})
continue
}
if len(blocks) == 0 {
if isSlice || isPtr {
val.Field(fieldIdx).Set(reflect.Zero(field.Type))
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s block", typeName),
Detail: fmt.Sprintf("A %s block is required.", typeName),
Subject: body.MissingItemRange().Ptr(),
})
}
continue
}
switch {
case isSlice:
elemType := ty
if isPtr {
elemType = reflect.PtrTo(ty)
}
sli := reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks))
for i, block := range blocks {
if isPtr {
v := reflect.New(ty)
diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...)
sli.Index(i).Set(v)
} else {
diags = append(diags, decodeBlockToValue(block, ctx, sli.Index(i))...)
}
}
val.Field(fieldIdx).Set(sli)
default:
block := blocks[0]
if isPtr {
v := reflect.New(ty)
diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...)
val.Field(fieldIdx).Set(v)
} else {
diags = append(diags, decodeBlockToValue(block, ctx, val.Field(fieldIdx))...)
}
}
}
return diags
}
func decodeBodyToMap(body hcl.Body, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
attrs, diags := body.JustAttributes()
if attrs == nil {
return diags
}
mv := reflect.MakeMap(v.Type())
for k, attr := range attrs {
switch {
case attrType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr))
case exprType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr.Expr))
default:
ev := reflect.New(v.Type().Elem())
diags = append(diags, DecodeExpression(attr.Expr, ctx, ev.Interface())...)
mv.SetMapIndex(reflect.ValueOf(k), ev.Elem())
}
}
v.Set(mv)
return diags
}
func decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
var diags hcl.Diagnostics
ty := v.Type()
switch {
case blockType.AssignableTo(ty):
v.Elem().Set(reflect.ValueOf(block))
case bodyType.AssignableTo(ty):
v.Elem().Set(reflect.ValueOf(block.Body))
case attrsType.AssignableTo(ty):
attrs, attrsDiags := block.Body.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
v.Elem().Set(reflect.ValueOf(attrs))
default:
diags = append(diags, decodeBodyToValue(block.Body, ctx, v)...)
if len(block.Labels) > 0 {
blockTags := getFieldTags(ty)
for li, lv := range block.Labels {
lfieldIdx := blockTags.Labels[li].FieldIndex
v.Field(lfieldIdx).Set(reflect.ValueOf(lv))
}
}
}
return diags
}
// DecodeExpression extracts the value of the given expression into the given
// value. This value must be something that gocty is able to decode into,
// since the final decoding is delegated to that package.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
srcVal, diags := expr.Value(ctx)
convTy, err := gocty.ImpliedType(val)
if err != nil {
panic(fmt.Sprintf("unsuitable DecodeExpression target: %s", err))
}
srcVal, err = convert.Convert(srcVal, convTy)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
return diags
}
err = gocty.FromCtyValue(srcVal, val)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
}
return diags
}
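A minimal usage sketch for DecodeBody, assuming the companion hclparse package for parsing; the ServiceConfig type, the source text, and the config.hcl filename are illustrative only:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl2/gohcl"
	"github.com/hashicorp/hcl2/hclparse"
)

// ServiceConfig is a hypothetical target type; the field tags follow the
// scheme documented in the gohcl package comment.
type ServiceConfig struct {
	Protocol string `hcl:"protocol,attr"`
	Port     int    `hcl:"port,attr"`
}

func main() {
	src := []byte("protocol = \"tcp\"\nport = 8080\n")

	parser := hclparse.NewParser()
	f, diags := parser.ParseHCL(src, "config.hcl")
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	var cfg ServiceConfig
	// A nil EvalContext restricts the body to constant expressions.
	if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
		log.Fatal(diags)
	}
	fmt.Println(cfg.Protocol, cfg.Port) // tcp 8080
}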

49
vendor/github.com/hashicorp/hcl2/gohcl/doc.go generated vendored Normal file

@ -0,0 +1,49 @@
// Package gohcl allows decoding HCL configurations into Go data structures.
//
// It provides a convenient and concise way of describing the schema for
// configuration and then accessing the resulting data via native Go
// types.
//
// A struct field tag scheme is used, similar to other decoding and
// unmarshalling libraries. The tags are formatted as in the following example:
//
// ThingType string `hcl:"thing_type,attr"`
//
// Within each tag there are two comma-separated tokens. The first is the
// name of the corresponding construct in configuration, while the second
// is a keyword giving the kind of construct expected. The following
// kind keywords are supported:
//
// attr (the default) indicates that the value is to be populated from an attribute
// block indicates that the value is to be populated from a block
// label indicates that the value is to be populated from a block label
// remain indicates that the value is to be populated from the remaining body after populating other fields
//
// "attr" fields may either be of type *hcl.Expression, in which case the raw
// expression is assigned, or of any type accepted by gocty, in which case
// gocty will be used to assign the value to a native Go type.
//
// "block" fields may be of type *hcl.Block or hcl.Body, in which case the
// corresponding raw value is assigned, or may be a struct that recursively
// uses the same tags. Block fields may also be slices of any of these types,
// in which case multiple blocks of the corresponding type are decoded into
// the slice.
//
// "label" fields are considered only in a struct used as the type of a field
// marked as "block", and are used sequentially to capture the labels of
// the blocks being decoded. In this case, the name token is used only as
// an identifier for the label in diagnostic messages.
//
// "remain" can be placed on a single field that may be either of type
// hcl.Body or hcl.Attributes, in which case any remaining body content is
// placed into this field for delayed processing. If no "remain" field is
// present then any attributes or blocks not matched by another valid tag
// will cause an error diagnostic.
//
// Broadly-speaking this package deals with two types of error. The first is
// errors in the configuration itself, which are returned as diagnostics
// written with the configuration author as the target audience. The second
// is bugs in the calling program, such as invalid struct tags, which are
// surfaced via panics since there can be no useful runtime handling of such
// errors and they should certainly not be returned to the user as diagnostics.
package gohcl
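To make the tag scheme concrete, a hedged sketch using all four kinds in one pair of structs; the Config and Service names and the body shape are invented for illustration:

package config

import "github.com/hashicorp/hcl2/hcl"

// Config and Service are hypothetical types for a body shaped like:
//
//	io_mode = "async"
//
//	service "http" "web_proxy" {
//	  listen_addr = "127.0.0.1:8080"
//	}
type Config struct {
	IOMode   string    `hcl:"io_mode,attr"`  // populated from an attribute
	Services []Service `hcl:"service,block"` // zero or more "service" blocks
	Remain   hcl.Body  `hcl:",remain"`       // everything not matched above
}

type Service struct {
	Type       string `hcl:"type,label"` // first block label
	Name       string `hcl:"name,label"` // second block label
	ListenAddr string `hcl:"listen_addr,attr"`
}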

167
vendor/github.com/hashicorp/hcl2/gohcl/schema.go generated vendored Normal file

@ -0,0 +1,167 @@
package gohcl
import (
"fmt"
"reflect"
"sort"
"strings"
"github.com/hashicorp/hcl2/hcl"
)
// ImpliedBodySchema produces a hcl.BodySchema derived from the type of the
// given value, which must be a struct value or a pointer to one. If an
// inappropriate value is passed, this function will panic.
//
// The second return argument indicates whether the given struct includes
// a "remain" field, and thus the returned schema is non-exhaustive.
//
// This uses the tags on the fields of the struct to discover how each
// field's value should be expressed within configuration. If an invalid
// mapping is attempted, this function will panic.
func ImpliedBodySchema(val interface{}) (schema *hcl.BodySchema, partial bool) {
ty := reflect.TypeOf(val)
if ty.Kind() == reflect.Ptr {
ty = ty.Elem()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("given value must be struct, not %T", val))
}
var attrSchemas []hcl.AttributeSchema
var blockSchemas []hcl.BlockHeaderSchema
tags := getFieldTags(ty)
attrNames := make([]string, 0, len(tags.Attributes))
for n := range tags.Attributes {
attrNames = append(attrNames, n)
}
sort.Strings(attrNames)
for _, n := range attrNames {
idx := tags.Attributes[n]
field := ty.Field(idx)
var required bool
switch {
case field.Type.AssignableTo(exprType):
// If we're decoding to hcl.Expression then absence can be
// indicated via a null value, so we don't specify that
// the field is required during decoding.
required = false
case field.Type.Kind() != reflect.Ptr:
required = true
default:
required = false
}
attrSchemas = append(attrSchemas, hcl.AttributeSchema{
Name: n,
Required: required,
})
}
blockNames := make([]string, 0, len(tags.Blocks))
for n := range tags.Blocks {
blockNames = append(blockNames, n)
}
sort.Strings(blockNames)
for _, n := range blockNames {
idx := tags.Blocks[n]
field := ty.Field(idx)
fty := field.Type
if fty.Kind() == reflect.Slice {
fty = fty.Elem()
}
if fty.Kind() == reflect.Ptr {
fty = fty.Elem()
}
if fty.Kind() != reflect.Struct {
panic(fmt.Sprintf(
"hcl 'block' tag kind cannot be applied to %s field %s: struct required", field.Type.String(), field.Name,
))
}
ftags := getFieldTags(fty)
var labelNames []string
if len(ftags.Labels) > 0 {
labelNames = make([]string, len(ftags.Labels))
for i, l := range ftags.Labels {
labelNames[i] = l.Name
}
}
blockSchemas = append(blockSchemas, hcl.BlockHeaderSchema{
Type: n,
LabelNames: labelNames,
})
}
partial = tags.Remain != nil
schema = &hcl.BodySchema{
Attributes: attrSchemas,
Blocks: blockSchemas,
}
return schema, partial
}
type fieldTags struct {
Attributes map[string]int
Blocks map[string]int
Labels []labelField
Remain *int
}
type labelField struct {
FieldIndex int
Name string
}
func getFieldTags(ty reflect.Type) *fieldTags {
ret := &fieldTags{
Attributes: map[string]int{},
Blocks: map[string]int{},
}
ct := ty.NumField()
for i := 0; i < ct; i++ {
field := ty.Field(i)
tag := field.Tag.Get("hcl")
if tag == "" {
continue
}
comma := strings.Index(tag, ",")
var name, kind string
if comma != -1 {
name = tag[:comma]
kind = tag[comma+1:]
} else {
name = tag
kind = "attr"
}
switch kind {
case "attr":
ret.Attributes[name] = i
case "block":
ret.Blocks[name] = i
case "label":
ret.Labels = append(ret.Labels, labelField{
FieldIndex: i,
Name: name,
})
case "remain":
if ret.Remain != nil {
panic("only one 'remain' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Remain = &idx
default:
panic(fmt.Sprintf("invalid hcl field tag kind %q on %s %q", kind, field.Type.String(), field.Name))
}
}
return ret
}
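A short sketch of inspecting the schema implied by a struct; the Config and Rule types are hypothetical:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/gohcl"
)

// Rule is a hypothetical block type carrying a single label.
type Rule struct {
	ID string `hcl:"id,label"`
}

// Config is a hypothetical body with two attributes and repeatable blocks.
type Config struct {
	Name string   `hcl:"name"` // "attr" is the default kind
	Tags []string `hcl:"tags,attr"`
	Rule []Rule   `hcl:"rule,block"`
}

func main() {
	schema, partial := gohcl.ImpliedBodySchema(&Config{})
	fmt.Println("partial:", partial) // false: there is no "remain" field
	for _, a := range schema.Attributes {
		fmt.Println("attr:", a.Name, "required:", a.Required)
	}
	for _, b := range schema.Blocks {
		fmt.Println("block:", b.Type, "labels:", b.LabelNames)
	}
}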

16
vendor/github.com/hashicorp/hcl2/gohcl/types.go generated vendored Normal file

@ -0,0 +1,16 @@
package gohcl
import (
"reflect"
"github.com/hashicorp/hcl2/hcl"
)
var victimExpr hcl.Expression
var victimBody hcl.Body
var exprType = reflect.TypeOf(&victimExpr).Elem()
var bodyType = reflect.TypeOf(&victimBody).Elem()
var blockType = reflect.TypeOf((*hcl.Block)(nil))
var attrType = reflect.TypeOf((*hcl.Attribute)(nil))
var attrsType = reflect.TypeOf(hcl.Attributes(nil))

103
vendor/github.com/hashicorp/hcl2/hcl/diagnostic.go generated vendored Normal file

@ -0,0 +1,103 @@
package hcl
import (
"fmt"
)
// DiagnosticSeverity represents the severity of a diagnostic.
type DiagnosticSeverity int
const (
// DiagInvalid is the invalid zero value of DiagnosticSeverity
DiagInvalid DiagnosticSeverity = iota
// DiagError indicates that the problem reported by a diagnostic prevents
// further progress in parsing and/or evaluating the subject.
DiagError
// DiagWarning indicates that the problem reported by a diagnostic warrants
// user attention but does not prevent further progress. It is most
// commonly used for showing deprecation notices.
DiagWarning
)
// Diagnostic represents information to be presented to a user about an
// error or anomaly in parsing or evaluating configuration.
type Diagnostic struct {
Severity DiagnosticSeverity
// Summary and detail contain the English-language description of the
// problem. Summary is a terse description of the general problem and
// detail is a more elaborate, often-multi-sentence description of
// the problem and what might be done to solve it.
Summary string
Detail string
Subject *Range
Context *Range
}
// Diagnostics is a list of Diagnostic instances.
type Diagnostics []*Diagnostic
// error implementation, so that diagnostics can be returned via APIs
// that normally deal in vanilla Go errors.
//
// This presents only minimal context about the error, for compatibility
// with usual expectations about how errors will present as strings.
func (d *Diagnostic) Error() string {
return fmt.Sprintf("%s: %s; %s", d.Subject, d.Summary, d.Detail)
}
// error implementation, so that sets of diagnostics can be returned via
// APIs that normally deal in vanilla Go errors.
func (d Diagnostics) Error() string {
count := len(d)
switch {
case count == 0:
return "no diagnostics"
case count == 1:
return d[0].Error()
default:
return fmt.Sprintf("%s, and %d other diagnostic(s)", d[0].Error(), count-1)
}
}
// Append appends a new error to a Diagnostics and returns the whole Diagnostics.
//
// This is provided as a convenience for returning from a function that
// collects and then returns a set of diagnostics:
//
// return nil, diags.Append(&hcl.Diagnostic{ ... })
//
// Note that this modifies the array underlying the diagnostics slice, so
// must be used carefully within a single codepath. It is incorrect (and rude)
// to extend a diagnostics created by a different subsystem.
func (d Diagnostics) Append(diag *Diagnostic) Diagnostics {
return append(d, diag)
}
// Extend concatenates the given Diagnostics with the receiver and returns
// the whole new Diagnostics.
//
// This is similar to Append but accepts multiple diagnostics to add. It has
// all the same caveats and constraints.
func (d Diagnostics) Extend(diags Diagnostics) Diagnostics {
return append(d, diags...)
}
// HasErrors returns true if the receiver contains any diagnostics of
// severity DiagError.
func (d Diagnostics) HasErrors() bool {
for _, diag := range d {
if diag.Severity == DiagError {
return true
}
}
return false
}
// A DiagnosticWriter emits diagnostics somehow.
type DiagnosticWriter interface {
WriteDiagnostic(*Diagnostic) error
WriteDiagnostics(Diagnostics) error
}
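A sketch of the accumulate-and-check pattern these methods support; the diagnostic messages are invented:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
)

func main() {
	var diags hcl.Diagnostics

	// Append returns the extended slice, so it composes with early returns.
	diags = diags.Append(&hcl.Diagnostic{
		Severity: hcl.DiagWarning,
		Summary:  "Deprecated setting",
		Detail:   "This setting no longer has any effect.",
	})

	// Extend merges diagnostics collected by another call.
	diags = diags.Extend(hcl.Diagnostics{
		{
			Severity: hcl.DiagError,
			Summary:  "Missing required attribute",
			Detail:   "The \"name\" attribute is required.",
		},
	})

	fmt.Println(diags.HasErrors()) // true: one DiagError is present
	fmt.Println(diags.Error())     // first diagnostic, plus a count of the rest
}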

163
vendor/github.com/hashicorp/hcl2/hcl/diagnostic_text.go generated vendored Normal file

@ -0,0 +1,163 @@
package hcl
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
wordwrap "github.com/mitchellh/go-wordwrap"
)
type diagnosticTextWriter struct {
files map[string]*File
wr io.Writer
width uint
color bool
}
// NewDiagnosticTextWriter creates a DiagnosticWriter that writes diagnostics
// to the given writer as formatted text.
//
// It is designed to produce text appropriate to print in a monospaced font
// in a terminal of a particular width, or optionally with no width limit.
//
// The given width may be zero to disable word-wrapping of the detail text
// and truncation of source code snippets.
//
// If color is set to true, the output will include VT100 escape sequences to
// color-code the severity indicators. It is suggested to turn this off if
// the target writer is not a terminal.
func NewDiagnosticTextWriter(wr io.Writer, files map[string]*File, width uint, color bool) DiagnosticWriter {
return &diagnosticTextWriter{
files: files,
wr: wr,
width: width,
color: color,
}
}
func (w *diagnosticTextWriter) WriteDiagnostic(diag *Diagnostic) error {
if diag == nil {
return errors.New("nil diagnostic")
}
var colorCode, resetCode string
if w.color {
switch diag.Severity {
case DiagError:
colorCode = "\x1b[31m"
case DiagWarning:
colorCode = "\x1b[33m"
}
resetCode = "\x1b[0m"
}
var severityStr string
switch diag.Severity {
case DiagError:
severityStr = "Error"
case DiagWarning:
severityStr = "Warning"
default:
// should never happen
severityStr = "???????"
}
fmt.Fprintf(w.wr, "%s%s%s: %s\n\n", colorCode, severityStr, resetCode, diag.Summary)
if diag.Subject != nil {
file := w.files[diag.Subject.Filename]
if file == nil || file.Bytes == nil {
fmt.Fprintf(w.wr, " on %s line %d:\n (source code not available)\n\n", diag.Subject.Filename, diag.Subject.Start.Line)
} else {
src := file.Bytes
r := bytes.NewReader(src)
sc := bufio.NewScanner(r)
sc.Split(bufio.ScanLines)
var startLine, endLine int
if diag.Context != nil {
startLine = diag.Context.Start.Line
endLine = diag.Context.End.Line
} else {
startLine = diag.Subject.Start.Line
endLine = diag.Subject.End.Line
}
var contextLine string
if diag.Subject != nil {
contextLine = contextString(file, diag.Subject.Start.Byte)
if contextLine != "" {
contextLine = ", in " + contextLine
}
}
li := 1
var ls string
for sc.Scan() {
ls = sc.Text()
if li == startLine {
break
}
li++
}
fmt.Fprintf(w.wr, " on %s line %d%s:\n", diag.Subject.Filename, diag.Subject.Start.Line, contextLine)
// TODO: Generate markers for the specific characters that are in the Context and Subject ranges.
// For now, we just print out the lines.
fmt.Fprintf(w.wr, "%4d: %s\n", li, ls)
if endLine > li {
for sc.Scan() {
ls = sc.Text()
li++
fmt.Fprintf(w.wr, "%4d: %s\n", li, ls)
if li == endLine {
break
}
}
}
w.wr.Write([]byte{'\n'})
}
}
if diag.Detail != "" {
detail := diag.Detail
if w.width != 0 {
detail = wordwrap.WrapString(detail, w.width)
}
fmt.Fprintf(w.wr, "%s\n\n", detail)
}
return nil
}
func (w *diagnosticTextWriter) WriteDiagnostics(diags Diagnostics) error {
for _, diag := range diags {
err := w.WriteDiagnostic(diag)
if err != nil {
return err
}
}
return nil
}
func contextString(file *File, offset int) string {
type contextStringer interface {
ContextString(offset int) string
}
if cser, ok := file.Nav.(contextStringer); ok {
return cser.ContextString(offset)
}
return ""
}
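A sketch of wiring the writer to a parser's file map so source snippets can be quoted, assuming the hclparse package's Files method; the broken.hcl input is contrived to produce an error:

package main

import (
	"os"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hclparse"
)

func main() {
	parser := hclparse.NewParser()
	_, diags := parser.ParseHCL([]byte("resource {"), "broken.hcl")

	// parser.Files() supplies the sources so snippets can be shown;
	// width 78 wraps detail text, and color is off for non-terminals.
	wr := hcl.NewDiagnosticTextWriter(os.Stderr, parser.Files(), 78, false)
	if err := wr.WriteDiagnostics(diags); err != nil {
		panic(err)
	}
}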

24
vendor/github.com/hashicorp/hcl2/hcl/didyoumean.go generated vendored Normal file

@ -0,0 +1,24 @@
package hcl
import (
"github.com/agext/levenshtein"
)
// nameSuggestion tries to find a name from the given slice of suggested names
// that is close to the given name and returns it if found. If no suggestion
// is close enough, returns the empty string.
//
// The suggestions are tried in order, so earlier suggestions take precedence
// if the given string is similar to two or more suggestions.
//
// This function is intended to be used with a relatively-small number of
// suggestions. It's not optimized for hundreds or thousands of them.
func nameSuggestion(given string, suggestions []string) string {
for _, suggestion := range suggestions {
dist := levenshtein.Distance(given, suggestion, nil)
if dist < 3 { // threshold determined experimentally
return suggestion
}
}
return ""
}
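nameSuggestion itself is unexported, but its threshold is easy to demonstrate with the same underlying library call:

package main

import (
	"fmt"

	"github.com/agext/levenshtein"
)

func main() {
	// A nil Params selects the library defaults, as in nameSuggestion above.
	fmt.Println(levenshtein.Distance("fro", "for", nil))  // 2: close enough to suggest "for"
	fmt.Println(levenshtein.Distance("wile", "for", nil)) // 4: beyond the < 3 threshold
}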

1
vendor/github.com/hashicorp/hcl2/hcl/doc.go generated vendored Normal file

@ -0,0 +1 @@
package hcl

25
vendor/github.com/hashicorp/hcl2/hcl/eval_context.go generated vendored Normal file

@ -0,0 +1,25 @@
package hcl
import (
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
)
// An EvalContext provides the variables and functions that should be used
// to evaluate an expression.
type EvalContext struct {
Variables map[string]cty.Value
Functions map[string]function.Function
parent *EvalContext
}
// NewChild returns a new EvalContext that is a child of the receiver.
func (ctx *EvalContext) NewChild() *EvalContext {
return &EvalContext{parent: ctx}
}
// Parent returns the parent of the receiver, or nil if the receiver has
// no parent.
func (ctx *EvalContext) Parent() *EvalContext {
return ctx.parent
}
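A sketch of building a context and deriving a child scope; the variable names are invented:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	root := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"region": cty.StringVal("us-east-1"),
		},
	}

	// A child starts empty; evaluation code can walk Parent() to resolve
	// names that are not defined in the innermost scope.
	child := root.NewChild()
	child.Variables = map[string]cty.Value{
		"name": cty.StringVal("web"),
	}

	fmt.Println(child.Parent().Variables["region"].AsString()) // us-east-1
}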

24
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/didyoumean.go generated vendored Normal file

@ -0,0 +1,24 @@
package hclsyntax
import (
"github.com/agext/levenshtein"
)
// nameSuggestion tries to find a name from the given slice of suggested names
// that is close to the given name and returns it if found. If no suggestion
// is close enough, returns the empty string.
//
// The suggestions are tried in order, so earlier suggestions take precedence
// if the given string is similar to two or more suggestions.
//
// This function is intended to be used with a relatively-small number of
// suggestions. It's not optimized for hundreds or thousands of them.
func nameSuggestion(given string, suggestions []string) string {
for _, suggestion := range suggestions {
dist := levenshtein.Distance(given, suggestion, nil)
if dist < 3 { // threshold determined experimentally
return suggestion
}
}
return ""
}

7
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/doc.go generated vendored Normal file

@ -0,0 +1,7 @@
// Package hclsyntax contains the parser, AST, etc for zcl's native language,
// as opposed to the JSON variant.
//
// In normal use applications should rarely depend on this package directly,
// instead preferring the higher-level interface of the main hcl package and
// its companion hclparse.
package hclsyntax

File diff suppressed because it is too large

258
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_ops.go generated vendored Normal file

@ -0,0 +1,258 @@
package hclsyntax
import (
"fmt"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/function/stdlib"
)
type Operation struct {
Impl function.Function
Type cty.Type
}
var (
OpLogicalOr = &Operation{
Impl: stdlib.OrFunc,
Type: cty.Bool,
}
OpLogicalAnd = &Operation{
Impl: stdlib.AndFunc,
Type: cty.Bool,
}
OpLogicalNot = &Operation{
Impl: stdlib.NotFunc,
Type: cty.Bool,
}
OpEqual = &Operation{
Impl: stdlib.EqualFunc,
Type: cty.Bool,
}
OpNotEqual = &Operation{
Impl: stdlib.NotEqualFunc,
Type: cty.Bool,
}
OpGreaterThan = &Operation{
Impl: stdlib.GreaterThanFunc,
Type: cty.Bool,
}
OpGreaterThanOrEqual = &Operation{
Impl: stdlib.GreaterThanOrEqualToFunc,
Type: cty.Bool,
}
OpLessThan = &Operation{
Impl: stdlib.LessThanFunc,
Type: cty.Bool,
}
OpLessThanOrEqual = &Operation{
Impl: stdlib.LessThanOrEqualToFunc,
Type: cty.Bool,
}
OpAdd = &Operation{
Impl: stdlib.AddFunc,
Type: cty.Number,
}
OpSubtract = &Operation{
Impl: stdlib.SubtractFunc,
Type: cty.Number,
}
OpMultiply = &Operation{
Impl: stdlib.MultiplyFunc,
Type: cty.Number,
}
OpDivide = &Operation{
Impl: stdlib.DivideFunc,
Type: cty.Number,
}
OpModulo = &Operation{
Impl: stdlib.ModuloFunc,
Type: cty.Number,
}
OpNegate = &Operation{
Impl: stdlib.NegateFunc,
Type: cty.Number,
}
)
var binaryOps []map[TokenType]*Operation
func init() {
// This operation table maps from the operator's token type
// to the AST operation type. All expressions produced from
// binary operators are BinaryOp nodes.
//
// Binary operator groups are listed in order of precedence, with
// the *lowest* precedence first. Operators within the same group
// have left-to-right associativity.
binaryOps = []map[TokenType]*Operation{
{
TokenOr: OpLogicalOr,
},
{
TokenAnd: OpLogicalAnd,
},
{
TokenEqualOp: OpEqual,
TokenNotEqual: OpNotEqual,
},
{
TokenGreaterThan: OpGreaterThan,
TokenGreaterThanEq: OpGreaterThanOrEqual,
TokenLessThan: OpLessThan,
TokenLessThanEq: OpLessThanOrEqual,
},
{
TokenPlus: OpAdd,
TokenMinus: OpSubtract,
},
{
TokenStar: OpMultiply,
TokenSlash: OpDivide,
TokenPercent: OpModulo,
},
}
}
type BinaryOpExpr struct {
LHS Expression
Op *Operation
RHS Expression
SrcRange hcl.Range
}
func (e *BinaryOpExpr) walkChildNodes(w internalWalkFunc) {
e.LHS = w(e.LHS).(Expression)
e.RHS = w(e.RHS).(Expression)
}
func (e *BinaryOpExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
impl := e.Op.Impl // assumed to be a function taking exactly two arguments
params := impl.Params()
lhsParam := params[0]
rhsParam := params[1]
var diags hcl.Diagnostics
givenLHSVal, lhsDiags := e.LHS.Value(ctx)
givenRHSVal, rhsDiags := e.RHS.Value(ctx)
diags = append(diags, lhsDiags...)
diags = append(diags, rhsDiags...)
lhsVal, err := convert.Convert(givenLHSVal, lhsParam.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid operand",
Detail: fmt.Sprintf("Unsuitable value for left operand: %s.", err),
Subject: e.LHS.Range().Ptr(),
Context: &e.SrcRange,
})
}
rhsVal, err := convert.Convert(givenRHSVal, rhsParam.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid operand",
Detail: fmt.Sprintf("Unsuitable value for right operand: %s.", err),
Subject: e.RHS.Range().Ptr(),
Context: &e.SrcRange,
})
}
if diags.HasErrors() {
// Don't actually try the call if we have errors already, since
// this will probably just produce a confusing duplicative diagnostic.
return cty.UnknownVal(e.Op.Type), diags
}
args := []cty.Value{lhsVal, rhsVal}
result, err := impl.Call(args)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
// FIXME: This diagnostic is useless.
Severity: hcl.DiagError,
Summary: "Operation failed",
Detail: fmt.Sprintf("Error during operation: %s.", err),
Subject: &e.SrcRange,
})
return cty.UnknownVal(e.Op.Type), diags
}
return result, diags
}
func (e *BinaryOpExpr) Range() hcl.Range {
return e.SrcRange
}
func (e *BinaryOpExpr) StartRange() hcl.Range {
return e.LHS.StartRange()
}
type UnaryOpExpr struct {
Op *Operation
Val Expression
SrcRange hcl.Range
SymbolRange hcl.Range
}
func (e *UnaryOpExpr) walkChildNodes(w internalWalkFunc) {
e.Val = w(e.Val).(Expression)
}
func (e *UnaryOpExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
impl := e.Op.Impl // assumed to be a function taking exactly one argument
params := impl.Params()
param := params[0]
givenVal, diags := e.Val.Value(ctx)
val, err := convert.Convert(givenVal, param.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid operand",
Detail: fmt.Sprintf("Unsuitable value for unary operand: %s.", err),
Subject: e.Val.Range().Ptr(),
Context: &e.SrcRange,
})
}
if diags.HasErrors() {
// Don't actually try the call if we have errors already, since
// this will probably just produce a confusing duplicative diagnostic.
return cty.UnknownVal(e.Op.Type), diags
}
args := []cty.Value{val}
result, err := impl.Call(args)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
// FIXME: This diagnostic is useless.
Severity: hcl.DiagError,
Summary: "Operation failed",
Detail: fmt.Sprintf("Error during operation: %s.", err),
Subject: &e.SrcRange,
})
return cty.UnknownVal(e.Op.Type), diags
}
return result, diags
}
func (e *UnaryOpExpr) Range() hcl.Range {
return e.SrcRange
}
func (e *UnaryOpExpr) StartRange() hcl.Range {
return e.SymbolRange
}
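Each Operation wraps a cty stdlib function, so the table can be exercised directly; a sketch:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// OpAdd.Impl is stdlib.AddFunc; BinaryOpExpr.Value makes the same call
	// after converting both operands to the parameter types.
	sum, err := hclsyntax.OpAdd.Impl.Call([]cty.Value{
		cty.NumberIntVal(2),
		cty.NumberIntVal(40),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(sum.AsBigFloat()) // 42
}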

192
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_template.go generated vendored Normal file

@ -0,0 +1,192 @@
package hclsyntax
import (
"bytes"
"fmt"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
type TemplateExpr struct {
Parts []Expression
SrcRange hcl.Range
}
func (e *TemplateExpr) walkChildNodes(w internalWalkFunc) {
for i, part := range e.Parts {
e.Parts[i] = w(part).(Expression)
}
}
func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
buf := &bytes.Buffer{}
var diags hcl.Diagnostics
isKnown := true
for _, part := range e.Parts {
partVal, partDiags := part.Value(ctx)
diags = append(diags, partDiags...)
if partVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template interpolation value",
Detail: fmt.Sprintf(
"The expression result is null. Cannot include a null value in a string template.",
),
Subject: part.Range().Ptr(),
Context: &e.SrcRange,
})
continue
}
if !partVal.IsKnown() {
// If any part is unknown then the result as a whole must be
// unknown too. We'll keep on processing the rest of the parts
// anyway, because we want to still emit any diagnostics resulting
// from evaluating those.
isKnown = false
continue
}
strVal, err := convert.Convert(partVal, cty.String)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template interpolation value",
Detail: fmt.Sprintf(
"Cannot include the given value in a string template: %s.",
err.Error(),
),
Subject: part.Range().Ptr(),
Context: &e.SrcRange,
})
continue
}
buf.WriteString(strVal.AsString())
}
if !isKnown {
return cty.UnknownVal(cty.String), diags
}
return cty.StringVal(buf.String()), diags
}
func (e *TemplateExpr) Range() hcl.Range {
return e.SrcRange
}
func (e *TemplateExpr) StartRange() hcl.Range {
return e.Parts[0].StartRange()
}
// TemplateJoinExpr is used to convert tuples of strings produced by template
// constructs (i.e. for loops) into flat strings, by converting the values
// to strings and joining them. This AST node is not used directly; it's
// produced as part of the AST of a "for" loop in a template.
type TemplateJoinExpr struct {
Tuple Expression
}
func (e *TemplateJoinExpr) walkChildNodes(w internalWalkFunc) {
e.Tuple = w(e.Tuple).(Expression)
}
func (e *TemplateJoinExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
tuple, diags := e.Tuple.Value(ctx)
if tuple.IsNull() {
// This indicates a bug in the code that constructed the AST.
panic("TemplateJoinExpr got null tuple")
}
if tuple.Type() == cty.DynamicPseudoType {
return cty.UnknownVal(cty.String), diags
}
if !tuple.Type().IsTupleType() {
// This indicates a bug in the code that constructed the AST.
panic("TemplateJoinExpr got non-tuple tuple")
}
if !tuple.IsKnown() {
return cty.UnknownVal(cty.String), diags
}
buf := &bytes.Buffer{}
it := tuple.ElementIterator()
for it.Next() {
_, val := it.Element()
if val.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template interpolation value",
Detail: fmt.Sprintf(
"An iteration result is null. Cannot include a null value in a string template.",
),
Subject: e.Range().Ptr(),
})
continue
}
if val.Type() == cty.DynamicPseudoType {
return cty.UnknownVal(cty.String), diags
}
strVal, err := convert.Convert(val, cty.String)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template interpolation value",
Detail: fmt.Sprintf(
"Cannot include one of the interpolation results into the string template: %s.",
err.Error(),
),
Subject: e.Range().Ptr(),
})
continue
}
if !val.IsKnown() {
return cty.UnknownVal(cty.String), diags
}
buf.WriteString(strVal.AsString())
}
return cty.StringVal(buf.String()), diags
}
func (e *TemplateJoinExpr) Range() hcl.Range {
return e.Tuple.Range()
}
func (e *TemplateJoinExpr) StartRange() hcl.Range {
return e.Tuple.StartRange()
}
// TemplateWrapExpr is used instead of a TemplateExpr when a template
// consists _only_ of a single interpolation sequence. In that case, the
// template's result is the single interpolation's result, verbatim with
// no type conversions.
type TemplateWrapExpr struct {
Wrapped Expression
SrcRange hcl.Range
}
func (e *TemplateWrapExpr) walkChildNodes(w internalWalkFunc) {
e.Wrapped = w(e.Wrapped).(Expression)
}
func (e *TemplateWrapExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return e.Wrapped.Value(ctx)
}
func (e *TemplateWrapExpr) Range() hcl.Range {
return e.SrcRange
}
func (e *TemplateWrapExpr) StartRange() hcl.Range {
return e.SrcRange
}
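A sketch of evaluating a template via the package's ParseTemplate entry point (defined elsewhere in this package, outside this excerpt); the template text and variable name are invented:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseTemplate(
		[]byte("Hello, ${name}!"),
		"greeting.tmpl",
		hcl.Pos{Line: 1, Column: 1, Byte: 0},
	)
	if diags.HasErrors() {
		panic(diags)
	}

	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{"name": cty.StringVal("world")},
	}
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		panic(diags)
	}
	fmt.Println(val.AsString()) // Hello, world!
}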

72
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/expression_vars.go generated vendored Normal file

@ -0,0 +1,72 @@
package hclsyntax
// Generated by expression_vars_gen.go. DO NOT EDIT.
// Run 'go generate' on this package to update the set of functions here.
import (
"github.com/hashicorp/hcl2/hcl"
)
func (e *AnonSymbolExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *BinaryOpExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *ConditionalExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *ForExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *FunctionCallExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *IndexExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *LiteralValueExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *ObjectConsExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *RelativeTraversalExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *ScopeTraversalExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *SplatExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *TemplateExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *TemplateJoinExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *TemplateWrapExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *TupleConsExpr) Variables() []hcl.Traversal {
return Variables(e)
}
func (e *UnaryOpExpr) Variables() []hcl.Traversal {
return Variables(e)
}
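These generated methods let a caller discover which variables an expression references before evaluating it; a sketch, again using ParseTemplate:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	expr, _ := hclsyntax.ParseTemplate(
		[]byte("${greeting}, ${name}!"),
		"t.tmpl",
		hcl.Pos{Line: 1, Column: 1, Byte: 0},
	)

	// Each traversal names one variable reference; RootName is the
	// top-level variable an EvalContext must provide.
	for _, traversal := range expr.Variables() {
		fmt.Println(traversal.RootName()) // greeting, then name
	}
}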

20
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/file.go generated vendored Normal file

@ -0,0 +1,20 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
// File is the top-level object resulting from parsing a configuration file.
type File struct {
Body *Body
Bytes []byte
}
func (f *File) AsHCLFile() *hcl.File {
return &hcl.File{
Body: f.Body,
Bytes: f.Bytes,
// TODO: The Nav object, once we have an implementation of it
}
}

7
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/generate.go generated vendored Normal file

@ -0,0 +1,7 @@
package hclsyntax
//go:generate go run expression_vars_gen.go
//go:generate ruby unicode2ragel.rb --url=http://www.unicode.org/Public/9.0.0/ucd/DerivedCoreProperties.txt -m UnicodeDerived -p ID_Start,ID_Continue -o unicode_derived.rl
//go:generate ragel -Z scan_tokens.rl
//go:generate gofmt -w scan_tokens.go
//go:generate stringer -type TokenType -output token_type_string.go

21
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/keywords.go generated vendored Normal file

@ -0,0 +1,21 @@
package hclsyntax
import (
"bytes"
)
type Keyword []byte
var forKeyword = Keyword([]byte{'f', 'o', 'r'})
var inKeyword = Keyword([]byte{'i', 'n'})
var ifKeyword = Keyword([]byte{'i', 'f'})
var elseKeyword = Keyword([]byte{'e', 'l', 's', 'e'})
var endifKeyword = Keyword([]byte{'e', 'n', 'd', 'i', 'f'})
var endforKeyword = Keyword([]byte{'e', 'n', 'd', 'f', 'o', 'r'})
func (kw Keyword) TokenMatches(token Token) bool {
if token.Type != TokenIdent {
return false
}
return bytes.Equal([]byte(kw), token.Bytes)
}
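Keyword and Token are exported, so the matching rule can be reproduced outside the parser; the token below is hand-built for illustration:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

func main() {
	kw := hclsyntax.Keyword([]byte("for"))
	tok := hclsyntax.Token{
		Type:  hclsyntax.TokenIdent, // only identifier tokens can match
		Bytes: []byte("for"),
	}
	fmt.Println(kw.TokenMatches(tok)) // true
}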

41
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/navigation.go generated vendored Normal file

@ -0,0 +1,41 @@
package hclsyntax
import (
"bytes"
"fmt"
)
type navigation struct {
root *Body
}
// Implementation of zcled.ContextString
func (n navigation) ContextString(offset int) string {
// We will walk our top-level blocks until we find one that contains
// the given offset, and then construct a representation of the header
// of the block.
var block *Block
for _, candidate := range n.root.Blocks {
if candidate.Range().ContainsOffset(offset) {
block = candidate
break
}
}
if block == nil {
return ""
}
if len(block.Labels) == 0 {
// Easy case!
return block.Type
}
buf := &bytes.Buffer{}
buf.WriteString(block.Type)
for _, label := range block.Labels {
fmt.Fprintf(buf, " %q", label)
}
return buf.String()
}

22
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/node.go generated vendored Normal file

@ -0,0 +1,22 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
// Node is the abstract type that every AST node implements.
//
// This is a closed interface, so it cannot be implemented from outside of
// this package.
type Node interface {
// This is the mechanism by which the public-facing walk functions
// are implemented. Implementations should call the given function
// for each child node and then replace that node with its return value.
// The return value might just be the same node, for non-transforming
// walks.
walkChildNodes(w internalWalkFunc)
Range() hcl.Range
}
type internalWalkFunc func(Node) Node

1780
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser.go generated vendored Normal file

File diff suppressed because it is too large

728
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/parser_template.go generated vendored Normal file

@ -0,0 +1,728 @@
package hclsyntax
import (
"fmt"
"strings"
"unicode"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
func (p *parser) ParseTemplate() (Expression, hcl.Diagnostics) {
return p.parseTemplate(TokenEOF)
}
func (p *parser) parseTemplate(end TokenType) (Expression, hcl.Diagnostics) {
exprs, passthru, rng, diags := p.parseTemplateInner(end)
if passthru {
if len(exprs) != 1 {
panic("passthru set with len(exprs) != 1")
}
return &TemplateWrapExpr{
Wrapped: exprs[0],
SrcRange: rng,
}, diags
}
return &TemplateExpr{
Parts: exprs,
SrcRange: rng,
}, diags
}
func (p *parser) parseTemplateInner(end TokenType) ([]Expression, bool, hcl.Range, hcl.Diagnostics) {
parts, diags := p.parseTemplateParts(end)
tp := templateParser{
Tokens: parts.Tokens,
SrcRange: parts.SrcRange,
}
exprs, exprsDiags := tp.parseRoot()
diags = append(diags, exprsDiags...)
passthru := false
if len(parts.Tokens) == 2 { // one real token and one synthetic "end" token
if _, isInterp := parts.Tokens[0].(*templateInterpToken); isInterp {
passthru = true
}
}
return exprs, passthru, parts.SrcRange, diags
}
type templateParser struct {
Tokens []templateToken
SrcRange hcl.Range
pos int
}
func (p *templateParser) parseRoot() ([]Expression, hcl.Diagnostics) {
var exprs []Expression
var diags hcl.Diagnostics
for {
next := p.Peek()
if _, isEnd := next.(*templateEndToken); isEnd {
break
}
expr, exprDiags := p.parseExpr()
diags = append(diags, exprDiags...)
exprs = append(exprs, expr)
}
return exprs, diags
}
func (p *templateParser) parseExpr() (Expression, hcl.Diagnostics) {
next := p.Peek()
switch tok := next.(type) {
case *templateLiteralToken:
p.Read() // eat literal
return &LiteralValueExpr{
Val: cty.StringVal(tok.Val),
SrcRange: tok.SrcRange,
}, nil
case *templateInterpToken:
p.Read() // eat interp
return tok.Expr, nil
case *templateIfToken:
return p.parseIf()
case *templateForToken:
return p.parseFor()
case *templateEndToken:
p.Read() // eat erroneous token
return errPlaceholderExpr(tok.SrcRange), hcl.Diagnostics{
{
// This is a particularly unhelpful diagnostic, so callers
// should attempt to pre-empt it and produce a more helpful
// diagnostic that is context-aware.
Severity: hcl.DiagError,
Summary: "Unexpected end of template",
Detail: "The control directives within this template are unbalanced.",
Subject: &tok.SrcRange,
},
}
case *templateEndCtrlToken:
p.Read() // eat erroneous token
return errPlaceholderExpr(tok.SrcRange), hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Unexpected %s directive", tok.Name()),
Detail: "The control directives within this template are unbalanced.",
Subject: &tok.SrcRange,
},
}
default:
// should never happen, because above should be exhaustive
panic(fmt.Sprintf("unhandled template token type %T", next))
}
}
func (p *templateParser) parseIf() (Expression, hcl.Diagnostics) {
open := p.Read()
openIf, isIf := open.(*templateIfToken)
if !isIf {
// should never happen if caller is behaving
panic("parseIf called with peeker not pointing at if token")
}
var ifExprs, elseExprs []Expression
var diags hcl.Diagnostics
var endifRange hcl.Range
currentExprs := &ifExprs
Token:
for {
next := p.Peek()
if end, isEnd := next.(*templateEndToken); isEnd {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unexpected end of template",
Detail: fmt.Sprintf(
"The if directive at %s is missing its corresponding endif directive.",
openIf.SrcRange,
),
Subject: &end.SrcRange,
})
return errPlaceholderExpr(end.SrcRange), diags
}
if end, isCtrlEnd := next.(*templateEndCtrlToken); isCtrlEnd {
p.Read() // eat end directive
switch end.Type {
case templateElse:
if currentExprs == &ifExprs {
currentExprs = &elseExprs
continue Token
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unexpected else directive",
Detail: fmt.Sprintf(
"Already in the else clause for the if started at %s.",
openIf.SrcRange,
),
Subject: &end.SrcRange,
})
case templateEndIf:
endifRange = end.SrcRange
break Token
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Unexpected %s directive", end.Name()),
Detail: fmt.Sprintf(
"Expecting an endif directive for the if started at %s.",
openIf.SrcRange,
),
Subject: &end.SrcRange,
})
}
return errPlaceholderExpr(end.SrcRange), diags
}
expr, exprDiags := p.parseExpr()
diags = append(diags, exprDiags...)
*currentExprs = append(*currentExprs, expr)
}
if len(ifExprs) == 0 {
ifExprs = append(ifExprs, &LiteralValueExpr{
Val: cty.StringVal(""),
SrcRange: hcl.Range{
Filename: openIf.SrcRange.Filename,
Start: openIf.SrcRange.End,
End: openIf.SrcRange.End,
},
})
}
if len(elseExprs) == 0 {
elseExprs = append(elseExprs, &LiteralValueExpr{
Val: cty.StringVal(""),
SrcRange: hcl.Range{
Filename: endifRange.Filename,
Start: endifRange.Start,
End: endifRange.Start,
},
})
}
trueExpr := &TemplateExpr{
Parts: ifExprs,
SrcRange: hcl.RangeBetween(ifExprs[0].Range(), ifExprs[len(ifExprs)-1].Range()),
}
falseExpr := &TemplateExpr{
Parts: elseExprs,
SrcRange: hcl.RangeBetween(elseExprs[0].Range(), elseExprs[len(elseExprs)-1].Range()),
}
return &ConditionalExpr{
Condition: openIf.CondExpr,
TrueResult: trueExpr,
FalseResult: falseExpr,
SrcRange: hcl.RangeBetween(openIf.SrcRange, endifRange),
}, diags
}
func (p *templateParser) parseFor() (Expression, hcl.Diagnostics) {
open := p.Read()
openFor, isFor := open.(*templateForToken)
if !isFor {
// should never happen if caller is behaving
panic("parseFor called with peeker not pointing at for token")
}
var contentExprs []Expression
var diags hcl.Diagnostics
var endforRange hcl.Range
Token:
for {
next := p.Peek()
if end, isEnd := next.(*templateEndToken); isEnd {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unexpected end of template",
Detail: fmt.Sprintf(
"The for directive at %s is missing its corresponding endfor directive.",
openFor.SrcRange,
),
Subject: &end.SrcRange,
})
return errPlaceholderExpr(end.SrcRange), diags
}
if end, isCtrlEnd := next.(*templateEndCtrlToken); isCtrlEnd {
p.Read() // eat end directive
switch end.Type {
case templateElse:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unexpected else directive",
Detail: "An else clause is not expected for a for directive.",
Subject: &end.SrcRange,
})
case templateEndFor:
endforRange = end.SrcRange
break Token
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Unexpected %s directive", end.Name()),
Detail: fmt.Sprintf(
"Expecting an endfor directive corresponding to the for directive at %s.",
openFor.SrcRange,
),
Subject: &end.SrcRange,
})
}
return errPlaceholderExpr(end.SrcRange), diags
}
expr, exprDiags := p.parseExpr()
diags = append(diags, exprDiags...)
contentExprs = append(contentExprs, expr)
}
if len(contentExprs) == 0 {
contentExprs = append(contentExprs, &LiteralValueExpr{
Val: cty.StringVal(""),
SrcRange: hcl.Range{
Filename: openFor.SrcRange.Filename,
Start: openFor.SrcRange.End,
End: openFor.SrcRange.End,
},
})
}
contentExpr := &TemplateExpr{
Parts: contentExprs,
SrcRange: hcl.RangeBetween(contentExprs[0].Range(), contentExprs[len(contentExprs)-1].Range()),
}
forExpr := &ForExpr{
KeyVar: openFor.KeyVar,
ValVar: openFor.ValVar,
CollExpr: openFor.CollExpr,
ValExpr: contentExpr,
SrcRange: hcl.RangeBetween(openFor.SrcRange, endforRange),
OpenRange: openFor.SrcRange,
CloseRange: endforRange,
}
return &TemplateJoinExpr{
Tuple: forExpr,
}, diags
}
func (p *templateParser) Peek() templateToken {
return p.Tokens[p.pos]
}
func (p *templateParser) Read() templateToken {
ret := p.Peek()
if _, end := ret.(*templateEndToken); !end {
p.pos++
}
return ret
}
// parseTemplateParts produces a flat sequence of "template tokens", which are
// either literal values (with any "trimming" already applied), interpolation
// sequences, or control flow markers.
//
// A further pass is required on the result to turn it into an AST.
func (p *parser) parseTemplateParts(end TokenType) (*templateParts, hcl.Diagnostics) {
var parts []templateToken
var diags hcl.Diagnostics
startRange := p.NextRange()
ltrimNext := false
nextCanTrimPrev := false
var endRange hcl.Range
Token:
for {
next := p.Read()
if next.Type == end {
// all done!
endRange = next.Range
break
}
ltrim := ltrimNext
ltrimNext = false
canTrimPrev := nextCanTrimPrev
nextCanTrimPrev = false
switch next.Type {
case TokenStringLit, TokenQuotedLit:
str, strDiags := p.decodeStringLit(next)
diags = append(diags, strDiags...)
if ltrim {
str = strings.TrimLeftFunc(str, unicode.IsSpace)
}
parts = append(parts, &templateLiteralToken{
Val: str,
SrcRange: next.Range,
})
nextCanTrimPrev = true
case TokenTemplateInterp:
// if the opener is ${~ then we want to eat any trailing whitespace
// in the preceding literal token, assuming it is indeed a literal
// token.
if canTrimPrev && len(next.Bytes) == 3 && next.Bytes[2] == '~' && len(parts) > 0 {
prevExpr := parts[len(parts)-1]
if lexpr, ok := prevExpr.(*templateLiteralToken); ok {
lexpr.Val = strings.TrimRightFunc(lexpr.Val, unicode.IsSpace)
}
}
p.PushIncludeNewlines(false)
expr, exprDiags := p.ParseExpression()
diags = append(diags, exprDiags...)
close := p.Peek()
if close.Type != TokenTemplateSeqEnd {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extra characters after interpolation expression",
Detail: "Expected a closing brace to end the interpolation expression, but found extra characters.",
Subject: &close.Range,
Context: hcl.RangeBetween(startRange, close.Range).Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
} else {
p.Read() // eat closing brace
// If the closer is ~} then we want to eat any leading
// whitespace on the next token, if it turns out to be a
// literal token.
if len(close.Bytes) == 2 && close.Bytes[0] == '~' {
ltrimNext = true
}
}
p.PopIncludeNewlines()
parts = append(parts, &templateInterpToken{
Expr: expr,
SrcRange: hcl.RangeBetween(next.Range, close.Range),
})
case TokenTemplateControl:
// if the opener is !{~ then we want to eat any trailing whitespace
// in the preceding literal token, assuming it is indeed a literal
// token.
if canTrimPrev && len(next.Bytes) == 3 && next.Bytes[2] == '~' && len(parts) > 0 {
prevExpr := parts[len(parts)-1]
if lexpr, ok := prevExpr.(*templateLiteralToken); ok {
lexpr.Val = strings.TrimRightFunc(lexpr.Val, unicode.IsSpace)
}
}
p.PushIncludeNewlines(false)
kw := p.Peek()
if kw.Type != TokenIdent {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template directive",
Detail: "A template directive keyword (\"if\", \"for\", etc) is expected at the beginning of a !{ sequence.",
Subject: &kw.Range,
Context: hcl.RangeBetween(next.Range, kw.Range).Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
p.PopIncludeNewlines()
continue Token
}
p.Read() // eat keyword token
switch {
case ifKeyword.TokenMatches(kw):
condExpr, exprDiags := p.ParseExpression()
diags = append(diags, exprDiags...)
parts = append(parts, &templateIfToken{
CondExpr: condExpr,
SrcRange: hcl.RangeBetween(next.Range, p.NextRange()),
})
case elseKeyword.TokenMatches(kw):
parts = append(parts, &templateEndCtrlToken{
Type: templateElse,
SrcRange: hcl.RangeBetween(next.Range, p.NextRange()),
})
case endifKeyword.TokenMatches(kw):
parts = append(parts, &templateEndCtrlToken{
Type: templateEndIf,
SrcRange: hcl.RangeBetween(next.Range, p.NextRange()),
})
case forKeyword.TokenMatches(kw):
var keyName, valName string
if p.Peek().Type != TokenIdent {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid 'for' directive",
Detail: "For directive requires variable name after 'for'.",
Subject: p.Peek().Range.Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
p.PopIncludeNewlines()
continue Token
}
valName = string(p.Read().Bytes)
if p.Peek().Type == TokenComma {
// What we just read was actually the key, then.
keyName = valName
p.Read() // eat comma
if p.Peek().Type != TokenIdent {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid 'for' directive",
Detail: "For directive requires value variable name after comma.",
Subject: p.Peek().Range.Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
p.PopIncludeNewlines()
continue Token
}
valName = string(p.Read().Bytes)
}
if !inKeyword.TokenMatches(p.Peek()) {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid 'for' directive",
Detail: "For directive requires 'in' keyword after names.",
Subject: p.Peek().Range.Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
p.PopIncludeNewlines()
continue Token
}
p.Read() // eat 'in' keyword
collExpr, collDiags := p.ParseExpression()
diags = append(diags, collDiags...)
parts = append(parts, &templateForToken{
KeyVar: keyName,
ValVar: valName,
CollExpr: collExpr,
SrcRange: hcl.RangeBetween(next.Range, p.NextRange()),
})
case endforKeyword.TokenMatches(kw):
parts = append(parts, &templateEndCtrlToken{
Type: templateEndFor,
SrcRange: hcl.RangeBetween(next.Range, p.NextRange()),
})
default:
if !p.recovery {
suggestions := []string{"if", "for", "else", "endif", "endfor"}
given := string(kw.Bytes)
suggestion := nameSuggestion(given, suggestions)
if suggestion != "" {
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid template control keyword",
Detail: fmt.Sprintf("%q is not a valid template control keyword.%s", given, suggestion),
Subject: &kw.Range,
Context: hcl.RangeBetween(next.Range, kw.Range).Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
p.PopIncludeNewlines()
continue Token
}
close := p.Peek()
if close.Type != TokenTemplateSeqEnd {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Extra characters in %s marker", kw.Bytes),
Detail: "Expected a closing brace to end the sequence, but found extra characters.",
Subject: &close.Range,
Context: hcl.RangeBetween(startRange, close.Range).Ptr(),
})
}
p.recover(TokenTemplateSeqEnd)
} else {
p.Read() // eat closing brace
// If the closer is ~} then we want to eat any leading
// whitespace on the next token, if it turns out to be a
// literal token.
if len(close.Bytes) == 2 && close.Bytes[0] == '~' {
ltrimNext = true
}
}
p.PopIncludeNewlines()
default:
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unterminated template string",
Detail: "No closing marker was found for the string.",
Subject: &next.Range,
Context: hcl.RangeBetween(startRange, next.Range).Ptr(),
})
}
final := p.recover(end)
endRange = final.Range
break Token
}
}
if len(parts) == 0 {
// If a sequence has no content, we'll treat it as if it had an
// empty string in it because that's what the user probably means
// if they write "" in configuration.
parts = append(parts, &templateLiteralToken{
Val: "",
SrcRange: hcl.Range{
// Range is the zero-character span immediately after the
// opening quote.
Filename: startRange.Filename,
Start: startRange.End,
End: startRange.End,
},
})
}
// Always end with an end token, so the parser can produce diagnostics
// about unclosed items with proper position information.
parts = append(parts, &templateEndToken{
SrcRange: endRange,
})
ret := &templateParts{
Tokens: parts,
SrcRange: hcl.RangeBetween(startRange, endRange),
}
return ret, diags
}
type templateParts struct {
Tokens []templateToken
SrcRange hcl.Range
}
// templateToken is a higher-level token that represents a single atom within
// the template language. Our template parsing first raises the raw token
// stream to a sequence of templateToken, and then transforms the result into
// an expression tree.
type templateToken interface {
templateToken() templateToken
}
type templateLiteralToken struct {
Val string
SrcRange hcl.Range
isTemplateToken
}
type templateInterpToken struct {
Expr Expression
SrcRange hcl.Range
isTemplateToken
}
type templateIfToken struct {
CondExpr Expression
SrcRange hcl.Range
isTemplateToken
}
type templateForToken struct {
KeyVar string // empty if ignoring key
ValVar string
CollExpr Expression
SrcRange hcl.Range
isTemplateToken
}
type templateEndCtrlType int
const (
templateEndIf templateEndCtrlType = iota
templateElse
templateEndFor
)
type templateEndCtrlToken struct {
Type templateEndCtrlType
SrcRange hcl.Range
isTemplateToken
}
func (t *templateEndCtrlToken) Name() string {
switch t.Type {
case templateEndIf:
return "endif"
case templateElse:
return "else"
case templateEndFor:
return "endfor"
default:
// should never happen
panic("invalid templateEndCtrlType")
}
}
type templateEndToken struct {
SrcRange hcl.Range
isTemplateToken
}
type isTemplateToken [0]int
func (t isTemplateToken) templateToken() templateToken {
return t
}
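// The following compile-time assertions are an illustrative sketch (not
// required by the parser): embedding isTemplateToken is what gives each
// concrete token type the unexported templateToken method above.
var _ templateToken = (*templateLiteralToken)(nil)
var _ templateToken = (*templateInterpToken)(nil)
var _ templateToken = (*templateForToken)(nil)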


@ -0,0 +1,159 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
// ParseTraversalAbs parses an absolute traversal that is assumed to consume
// all of the remaining tokens in the peeker. The usual parser recovery
// behavior is not supported here because traversals are not expected to
// be parsed as part of a larger program.
func (p *parser) ParseTraversalAbs() (hcl.Traversal, hcl.Diagnostics) {
var ret hcl.Traversal
var diags hcl.Diagnostics
// Absolute traversal must always begin with a variable name
varTok := p.Read()
if varTok.Type != TokenIdent {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Variable name required",
Detail: "Must begin with a variable name.",
Subject: &varTok.Range,
})
return ret, diags
}
varName := string(varTok.Bytes)
ret = append(ret, hcl.TraverseRoot{
Name: varName,
SrcRange: varTok.Range,
})
for {
next := p.Peek()
if next.Type == TokenEOF {
return ret, diags
}
switch next.Type {
case TokenDot:
// Attribute access
dot := p.Read() // eat dot
nameTok := p.Read()
if nameTok.Type != TokenIdent {
if nameTok.Type == TokenStar {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Attribute name required",
Detail: "Splat expressions (.*) may not be used here.",
Subject: &nameTok.Range,
Context: hcl.RangeBetween(varTok.Range, nameTok.Range).Ptr(),
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Attribute name required",
Detail: "Dot must be followed by attribute name.",
Subject: &nameTok.Range,
Context: hcl.RangeBetween(varTok.Range, nameTok.Range).Ptr(),
})
}
return ret, diags
}
attrName := string(nameTok.Bytes)
ret = append(ret, hcl.TraverseAttr{
Name: attrName,
SrcRange: hcl.RangeBetween(dot.Range, nameTok.Range),
})
case TokenOBrack:
// Index
open := p.Read() // eat open bracket
next := p.Peek()
switch next.Type {
case TokenNumberLit:
tok := p.Read() // eat number
numVal, numDiags := p.numberLitValue(tok)
diags = append(diags, numDiags...)
close := p.Read()
if close.Type != TokenCBrack {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed index brackets",
Detail: "Index key must be followed by a closing bracket.",
Subject: &close.Range,
Context: hcl.RangeBetween(open.Range, close.Range).Ptr(),
})
}
ret = append(ret, hcl.TraverseIndex{
Key: numVal,
SrcRange: hcl.RangeBetween(open.Range, close.Range),
})
if diags.HasErrors() {
return ret, diags
}
case TokenOQuote:
str, _, strDiags := p.parseQuotedStringLiteral()
diags = append(diags, strDiags...)
close := p.Read()
if close.Type != TokenCBrack {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed index brackets",
Detail: "Index key must be followed by a closing bracket.",
Subject: &close.Range,
Context: hcl.RangeBetween(open.Range, close.Range).Ptr(),
})
}
ret = append(ret, hcl.TraverseIndex{
Key: cty.StringVal(str),
SrcRange: hcl.RangeBetween(open.Range, close.Range),
})
if diags.HasErrors() {
return ret, diags
}
default:
if next.Type == TokenStar {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Attribute name required",
Detail: "Splat expressions ([*]) may not be used here.",
Subject: &next.Range,
Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(),
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Index value required",
Detail: "Index brackets must contain either a literal number or a literal string.",
Subject: &next.Range,
Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(),
})
}
return ret, diags
}
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "Expected an attribute access or an index operator.",
Subject: &next.Range,
Context: hcl.RangeBetween(varTok.Range, next.Range).Ptr(),
})
return ret, diags
}
}
}


@ -0,0 +1,108 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
type peeker struct {
Tokens Tokens
NextIndex int
IncludeComments bool
IncludeNewlinesStack []bool
}
func newPeeker(tokens Tokens, includeComments bool) *peeker {
return &peeker{
Tokens: tokens,
IncludeComments: includeComments,
IncludeNewlinesStack: []bool{true},
}
}
func (p *peeker) Peek() Token {
ret, _ := p.nextToken()
return ret
}
func (p *peeker) Read() Token {
ret, nextIdx := p.nextToken()
p.NextIndex = nextIdx
return ret
}
func (p *peeker) NextRange() hcl.Range {
return p.Peek().Range
}
func (p *peeker) PrevRange() hcl.Range {
if p.NextIndex == 0 {
return p.NextRange()
}
return p.Tokens[p.NextIndex-1].Range
}
func (p *peeker) nextToken() (Token, int) {
for i := p.NextIndex; i < len(p.Tokens); i++ {
tok := p.Tokens[i]
switch tok.Type {
case TokenComment:
if !p.IncludeComments {
// Single-line comment tokens, starting with # or //, absorb
// the trailing newline that terminates them as part of their
// bytes. When we're filtering out comments, we must as a
// special case transform these to newline tokens in order
// to properly parse newline-terminated block items.
if p.includingNewlines() {
if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
fakeNewline := Token{
Type: TokenNewline,
Bytes: tok.Bytes[len(tok.Bytes)-1 : len(tok.Bytes)],
// We use the whole token range as the newline
// range, even though that's a little... weird,
// because otherwise we'd need to go count
// characters again in order to figure out the
// column of the newline, and that complexity
// isn't justified when ranges of newlines are
// so rarely printed anyway.
Range: tok.Range,
}
return fakeNewline, i + 1
}
}
continue
}
case TokenNewline:
if !p.includingNewlines() {
continue
}
}
return tok, i + 1
}
// if we fall out here then we'll return the EOF token, and leave
// our index pointed off the end of the array so we'll keep
// returning EOF in future too.
return p.Tokens[len(p.Tokens)-1], len(p.Tokens)
}
func (p *peeker) includingNewlines() bool {
return p.IncludeNewlinesStack[len(p.IncludeNewlinesStack)-1]
}
func (p *peeker) PushIncludeNewlines(include bool) {
p.IncludeNewlinesStack = append(p.IncludeNewlinesStack, include)
}
func (p *peeker) PopIncludeNewlines() bool {
stack := p.IncludeNewlinesStack
remain, ret := stack[:len(stack)-1], stack[len(stack)-1]
p.IncludeNewlinesStack = remain
return ret
}
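// Usage sketch (illustrative, not part of this file): callers bracket
// regions where newlines are insignificant, such as the interior of a
// template interpolation, using the include-newlines stack:
//
//	p.PushIncludeNewlines(false)
//	// ... Peek/Read with newline tokens filtered out ...
//	p.PopIncludeNewlines()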


@ -0,0 +1,130 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
// ParseConfig parses the given buffer as a whole HCL config file, returning
// a *hcl.File representing its contents. If HasErrors called on the returned
// diagnostics returns true, the returned body is likely to be incomplete
// and should therefore be used with care.
//
// The body in the returned file has dynamic type *hclsyntax.Body, so callers
// may freely type-assert this to get access to the full hclsyntax API in
// situations where detailed access is required. However, most common use-cases
// should be served using the hcl.Body interface to ensure compatibility with
// other configuration syntaxes, such as JSON.
func ParseConfig(src []byte, filename string, start hcl.Pos) (*hcl.File, hcl.Diagnostics) {
tokens, diags := LexConfig(src, filename, start)
peeker := newPeeker(tokens, false)
parser := &parser{peeker: peeker}
body, parseDiags := parser.ParseBody(TokenEOF)
diags = append(diags, parseDiags...)
return &hcl.File{
Body: body,
Bytes: src,
Nav: navigation{
root: body,
},
}, diags
}
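// Example usage (a minimal sketch; the filename and source bytes are
// hypothetical):
//
//	src := []byte("io_mode = \"async\"\n")
//	f, diags := ParseConfig(src, "example.hcl", hcl.Pos{Line: 1, Column: 1, Byte: 0})
//	if diags.HasErrors() {
//		// handle or report diagnostics
//	}
//	_ = f.Body // satisfies hcl.Body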
// ParseExpression parses the given buffer as a standalone HCL expression,
// returning it as an instance of Expression.
func ParseExpression(src []byte, filename string, start hcl.Pos) (Expression, hcl.Diagnostics) {
tokens, diags := LexExpression(src, filename, start)
peeker := newPeeker(tokens, false)
parser := &parser{peeker: peeker}
// Bare expressions are always parsed in "ignore newlines" mode, as if
// they were wrapped in parentheses.
parser.PushIncludeNewlines(false)
expr, parseDiags := parser.ParseExpression()
diags = append(diags, parseDiags...)
next := parser.Peek()
if next.Type != TokenEOF && !parser.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extra characters after expression",
Detail: "An expression was successfully parsed, but extra characters were found after it.",
Subject: &next.Range,
})
}
return expr, diags
}
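// Example usage (a sketch; assumes the go-cty package for values and a
// caller-supplied variable named "a"):
//
//	expr, diags := ParseExpression([]byte("a + 1"), "expr.hcl", hcl.Pos{Line: 1, Column: 1, Byte: 0})
//	if !diags.HasErrors() {
//		ctx := &hcl.EvalContext{
//			Variables: map[string]cty.Value{"a": cty.NumberIntVal(2)},
//		}
//		v, _ := expr.Value(ctx) // cty.NumberIntVal(3)
//		_ = v
//	}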
// ParseTemplate parses the given buffer as a standalone HCL template,
// returning it as an instance of Expression.
func ParseTemplate(src []byte, filename string, start hcl.Pos) (Expression, hcl.Diagnostics) {
tokens, diags := LexTemplate(src, filename, start)
peeker := newPeeker(tokens, false)
parser := &parser{peeker: peeker}
expr, parseDiags := parser.ParseTemplate()
diags = append(diags, parseDiags...)
return expr, diags
}
// ParseTraversalAbs parses the given buffer as a standalone absolute traversal.
//
// Parsing as a traversal is more limited than parsing as an expression since
// it allows only attribute and indexing operations on variables. Traversals
// are useful as a syntax for referring to objects without necessarily
// evaluating them.
func ParseTraversalAbs(src []byte, filename string, start hcl.Pos) (hcl.Traversal, hcl.Diagnostics) {
tokens, diags := LexExpression(src, filename, start)
peeker := newPeeker(tokens, false)
parser := &parser{peeker: peeker}
// Bare traversals are always parsed in "ignore newlines" mode, as if
// they were wrapped in parentheses.
parser.PushIncludeNewlines(false)
expr, parseDiags := parser.ParseTraversalAbs()
diags = append(diags, parseDiags...)
return expr, diags
}
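// Example usage (a sketch): parsing "foo.bar[0]" yields a traversal with
// root name "foo", an attribute step, and an index step:
//
//	tr, diags := ParseTraversalAbs([]byte("foo.bar[0]"), "t.hcl", hcl.Pos{Line: 1, Column: 1, Byte: 0})
//	if !diags.HasErrors() {
//		_ = tr.RootName() // "foo"
//	}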
// LexConfig performs lexical analysis on the given buffer, treating it as a
// whole HCL config file, and returns the resulting tokens.
//
// Only minimal validation is done during lexical analysis, so the returned
// diagnostics may include errors about lexical issues such as bad character
// encodings or unrecognized characters, but full parsing is required to
// detect _all_ syntax errors.
func LexConfig(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) {
tokens := scanTokens(src, filename, start, scanNormal)
diags := checkInvalidTokens(tokens)
return tokens, diags
}
// LexExpression performs lexical analysis on the given buffer, treating it as
// a standalone HCL expression, and returns the resulting tokens.
//
// Only minimal validation is done during lexical analysis, so the returned
// diagnostics may include errors about lexical issues such as bad character
// encodings or unrecognized characters, but full parsing is required to
// detect _all_ syntax errors.
func LexExpression(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) {
// This is actually just the same thing as LexConfig, since configs
// and expressions lex in the same way.
tokens := scanTokens(src, filename, start, scanNormal)
diags := checkInvalidTokens(tokens)
return tokens, diags
}
// LexTemplate performs lexical analysis on the given buffer, treating it as a
// standalone HCL template, and returns the resulting tokens.
//
// Only minimal validation is done during lexical analysis, so the returned
// diagnostics may include errors about lexical issues such as bad character
// encodings or unrecognized characters, but full parsing is required to
// detect _all_ syntax errors.
func LexTemplate(src []byte, filename string, start hcl.Pos) (Tokens, hcl.Diagnostics) {
tokens := scanTokens(src, filename, start, scanTemplate)
diags := checkInvalidTokens(tokens)
return tokens, diags
}

File diff suppressed because it is too large


@ -0,0 +1,342 @@
package hclsyntax
import (
"bytes"
"github.com/hashicorp/hcl2/hcl"
)
// This file is generated from scan_tokens.rl. DO NOT EDIT.
%%{
# (except you are actually in scan_tokens.rl here, so edit away!)
machine zcltok;
write data;
}%%
func scanTokens(data []byte, filename string, start hcl.Pos, mode scanMode) []Token {
f := &tokenAccum{
Filename: filename,
Bytes: data,
Pos: start,
}
%%{
include UnicodeDerived "unicode_derived.rl";
UTF8Cont = 0x80 .. 0xBF;
AnyUTF8 = (
0x00..0x7F |
0xC0..0xDF . UTF8Cont |
0xE0..0xEF . UTF8Cont . UTF8Cont |
0xF0..0xF7 . UTF8Cont . UTF8Cont . UTF8Cont
);
BrokenUTF8 = any - AnyUTF8;
NumberLitContinue = (digit|'.'|('e'|'E') ('+'|'-')? digit);
NumberLit = digit ("" | (NumberLitContinue - '.') | (NumberLitContinue* (NumberLitContinue - '.')));
Ident = ID_Start (ID_Continue | '-')*;
# Symbols that just represent themselves are handled as a single rule.
SelfToken = "[" | "]" | "(" | ")" | "." | "," | "*" | "/" | "+" | "-" | "=" | "<" | ">" | "!" | "?" | ":" | "\n" | "&" | "|" | "~" | "^" | ";" | "`";
EqualOp = "==";
NotEqual = "!=";
GreaterThanEqual = ">=";
LessThanEqual = "<=";
LogicalAnd = "&&";
LogicalOr = "||";
Ellipsis = "...";
FatArrow = "=>";
Newline = '\r' ? '\n';
EndOfLine = Newline;
BeginStringTmpl = '"';
BeginHeredocTmpl = '<<' ('-')? Ident Newline;
Comment = (
("#" (any - EndOfLine)* EndOfLine) |
("//" (any - EndOfLine)* EndOfLine) |
("/*" any* "*/")
);
# Tabs are not valid, but we accept them in the scanner and mark them
# as tokens so that we can produce diagnostics advising the user to
# use spaces instead.
Tabs = 0x09+;
# Note: zclwrite assumes that only ASCII spaces appear between tokens,
# and uses this assumption to recreate the spaces between tokens by
# looking at byte offset differences.
Spaces = ' '+;
action beginStringTemplate {
token(TokenOQuote);
fcall stringTemplate;
}
action endStringTemplate {
token(TokenCQuote);
fret;
}
action beginHeredocTemplate {
token(TokenOHeredoc);
// the token is currently the whole heredoc introducer, like
// <<EOT or <<-EOT, followed by a newline. We want to extract
// just the "EOT" portion that we'll use as the closing marker.
marker := data[ts+2:te-1]
if marker[0] == '-' {
marker = marker[1:]
}
if marker[len(marker)-1] == '\r' {
marker = marker[:len(marker)-1]
}
heredocs = append(heredocs, heredocInProgress{
Marker: marker,
StartOfLine: true,
})
fcall heredocTemplate;
}
action heredocLiteralEOL {
// This action is called specifically when a heredoc literal
// ends with a newline character.
// This might actually be our end marker.
topdoc := &heredocs[len(heredocs)-1]
if topdoc.StartOfLine {
maybeMarker := bytes.TrimSpace(data[ts:te])
if bytes.Equal(maybeMarker, topdoc.Marker) {
token(TokenCHeredoc);
heredocs = heredocs[:len(heredocs)-1]
fret;
}
}
topdoc.StartOfLine = true;
token(TokenStringLit);
}
action heredocLiteralMidline {
// This action is called when a heredoc literal _doesn't_ end
// with a newline character, e.g. because we're about to enter
// an interpolation sequence.
heredocs[len(heredocs)-1].StartOfLine = false;
token(TokenStringLit);
}
action bareTemplateLiteral {
token(TokenStringLit);
}
action beginTemplateInterp {
token(TokenTemplateInterp);
braces++;
retBraces = append(retBraces, braces);
if len(heredocs) > 0 {
heredocs[len(heredocs)-1].StartOfLine = false;
}
fcall main;
}
action beginTemplateControl {
token(TokenTemplateControl);
braces++;
retBraces = append(retBraces, braces);
if len(heredocs) > 0 {
heredocs[len(heredocs)-1].StartOfLine = false;
}
fcall main;
}
action openBrace {
token(TokenOBrace);
braces++;
}
action closeBrace {
if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces {
token(TokenTemplateSeqEnd);
braces--;
retBraces = retBraces[0:len(retBraces)-1]
fret;
} else {
token(TokenCBrace);
braces--;
}
}
action closeTemplateSeqEatWhitespace {
// Only consume from the retBraces stack and return if we are at
// a suitable brace nesting level, otherwise things will get
// confused. (Not entering this branch indicates a syntax error,
// which we will catch in the parser.)
if len(retBraces) > 0 && retBraces[len(retBraces)-1] == braces {
token(TokenTemplateSeqEnd);
braces--;
retBraces = retBraces[0:len(retBraces)-1]
fret;
} else {
// We intentionally generate a TokenTemplateSeqEnd here,
// even though the user apparently wanted a brace, because
// we want to allow the parser to catch the incorrect use
// of a ~} to balance a generic opening brace, rather than
// a template sequence.
token(TokenTemplateSeqEnd);
braces--;
}
}
TemplateInterp = "${" ("~")?;
TemplateControl = "%{" ("~")?;
EndStringTmpl = '"';
StringLiteralChars = (AnyUTF8 - ("\r"|"\n"));
TemplateStringLiteral = (
('$' ^'{') |
('%' ^'{') |
('\\' StringLiteralChars) |
(StringLiteralChars - ("$" | '%' | '"'))
)+;
HeredocStringLiteral = (
('$' ^'{') |
('%' ^'{') |
(StringLiteralChars - ("$" | '%'))
)*;
BareStringLiteral = (
('$' ^'{') |
('%' ^'{') |
(StringLiteralChars - ("$" | '%'))
)* Newline?;
stringTemplate := |*
TemplateInterp => beginTemplateInterp;
TemplateControl => beginTemplateControl;
EndStringTmpl => endStringTemplate;
TemplateStringLiteral => { token(TokenQuotedLit); };
AnyUTF8 => { token(TokenInvalid); };
BrokenUTF8 => { token(TokenBadUTF8); };
*|;
heredocTemplate := |*
TemplateInterp => beginTemplateInterp;
TemplateControl => beginTemplateControl;
HeredocStringLiteral EndOfLine => heredocLiteralEOL;
HeredocStringLiteral => heredocLiteralMidline;
BrokenUTF8 => { token(TokenBadUTF8); };
*|;
bareTemplate := |*
TemplateInterp => beginTemplateInterp;
TemplateControl => beginTemplateControl;
BareStringLiteral => bareTemplateLiteral;
BrokenUTF8 => { token(TokenBadUTF8); };
*|;
main := |*
Spaces => {};
NumberLit => { token(TokenNumberLit) };
Ident => { token(TokenIdent) };
Comment => { token(TokenComment) };
Newline => { token(TokenNewline) };
EqualOp => { token(TokenEqualOp); };
NotEqual => { token(TokenNotEqual); };
GreaterThanEqual => { token(TokenGreaterThanEq); };
LessThanEqual => { token(TokenLessThanEq); };
LogicalAnd => { token(TokenAnd); };
LogicalOr => { token(TokenOr); };
Ellipsis => { token(TokenEllipsis); };
FatArrow => { token(TokenFatArrow); };
SelfToken => { selfToken() };
"{" => openBrace;
"}" => closeBrace;
"~}" => closeTemplateSeqEatWhitespace;
BeginStringTmpl => beginStringTemplate;
BeginHeredocTmpl => beginHeredocTemplate;
Tabs => { token(TokenTabs) };
BrokenUTF8 => { token(TokenBadUTF8) };
AnyUTF8 => { token(TokenInvalid) };
*|;
}%%
// Ragel state
p := 0 // "Pointer" into data
pe := len(data) // End-of-data "pointer"
ts := 0
te := 0
act := 0
eof := pe
var stack []int
var top int
var cs int // current state
switch mode {
case scanNormal:
cs = zcltok_en_main
case scanTemplate:
cs = zcltok_en_bareTemplate
default:
panic("invalid scanMode")
}
braces := 0
var retBraces []int // stack of brace levels that cause us to use fret
var heredocs []heredocInProgress // stack of heredocs we're currently processing
%%{
prepush {
stack = append(stack, 0);
}
postpop {
stack = stack[:len(stack)-1];
}
}%%
// Make Go compiler happy
_ = ts
_ = te
_ = act
_ = eof
token := func (ty TokenType) {
f.emitToken(ty, ts, te)
}
selfToken := func () {
b := data[ts:te]
if len(b) != 1 {
// should never happen
panic("selfToken only works for single-character tokens")
}
f.emitToken(TokenType(b[0]), ts, te)
}
%%{
write init nocs;
write exec;
}%%
// If we fall out here without being in a final state then we've
// encountered something that the scanner can't match, which we'll
// deal with as an invalid.
if cs < zcltok_first_final {
f.emitToken(TokenInvalid, p, len(data))
}
// We always emit a synthetic EOF token at the end, since it gives the
// parser position information for an "unexpected EOF" diagnostic.
f.emitToken(TokenEOF, len(data), len(data))
return f.Tokens
}

879
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/spec.md generated vendored Normal file

@ -0,0 +1,879 @@
# HCL Native Syntax Specification
This is the specification of the syntax and semantics of the native syntax
for HCL. HCL is a system for defining configuration languages for applications.
The HCL information model is designed to support multiple concrete syntaxes
for configuration, but this native syntax is considered the primary format
and is optimized for human authoring and maintenance, as opposed to machine
generation of configuration.
The language consists of three integrated sub-languages:
* The _structural_ language defines the overall hierarchical configuration
structure, and is a serialization of HCL bodies, blocks and attributes.
* The _expression_ language is used to express attribute values, either as
literals or as derivations of other values.
* The _template_ language is used to compose values together into strings,
as one of several types of expression in the expression language.
In normal use these three sub-languages are used together within configuration
files to describe an overall configuration, with the structural language
being used at the top level. The expression and template languages can also
be used in isolation, to implement features such as REPLs, debuggers, and
integration into more limited HCL syntaxes such as the JSON profile.
## Syntax Notation
Within this specification a semi-formal notation is used to illustrate the
details of syntax. This notation is intended for human consumption rather
than machine consumption, with the following conventions:
* A naked name starting with an uppercase letter is a global production,
common to all of the syntax specifications in this document.
* A naked name starting with a lowercase letter is a local production,
meaningful only within the specification where it is defined.
* Double and single quotes (`"` and `'`) are used to mark literal character
sequences, which may be either punctuation markers or keywords.
* The default operator for combining items, which has no punctuation,
is concatenation.
* The symbol `|` indicates that any one of its left and right operands may
be present.
* The `*` symbol indicates zero or more repetitions of the item to its left.
* The `?` symbol indicates zero or one of the item to its left.
* Parentheses (`(` and `)`) are used to group items together to apply
the `|`, `*` and `?` operators to them collectively.
The grammar notation does not fully describe the language. The prose may
augment or conflict with the illustrated grammar. In case of conflict, prose
has priority.
## Source Code Representation
Source code is Unicode text expressed in the UTF-8 encoding. The language
itself does not perform Unicode normalization, so syntax features such as
identifiers are sequences of Unicode code points and so e.g. a precombined
accented character is distinct from a letter associated with a combining
accent. (String literals have some special handling with regard to Unicode
normalization which will be covered later in the relevant section.)
UTF-8 encoded Unicode byte order marks are not permitted. Invalid or
non-normalized UTF-8 encoding is always a parse error.
## Lexical Elements
### Comments and Whitespace
Comments and Whitespace are recognized as lexical elements but are ignored
except as described below.
Whitespace is defined as a sequence of zero or more space characters
(U+0020). Newline sequences (either U+000A or U+000D followed by U+000A)
are _not_ considered whitespace but are ignored as such in certain contexts.
Horizontal tab characters (U+0009) are not considered to be whitespace and
are not valid within HCL native syntax.
Comments serve as program documentation and come in two forms:
* _Line comments_ start with either the `//` or `#` sequences and end with
the next newline sequence. A line comment is considered equivalent to a
newline sequence.
* _Inline comments_ start with the `/*` sequence and end with the `*/`
sequence, and may have any characters within except the ending sequence.
An inline comment is considered equivalent to a whitespace sequence.
Comments and whitespace cannot begin within other comments, or within
template literals except inside an interpolation sequence or template directive.
### Identifiers
Identifiers name entities such as blocks, attributes and expression variables.
Identifiers are interpreted as per [UAX #31][UAX31] Section 2. Specifically,
their syntax is defined in terms of the `ID_Start` and `ID_Continue`
character properties as follows:
```ebnf
Identifier = ID_Start (ID_Continue | '-')*;
```
The Unicode specification provides the normative requirements for identifier
parsing. Non-normatively, the spirit of this specification is that `ID_Start`
consists of Unicode letters and certain unambiguous punctuation tokens, while
`ID_Continue` augments that set with Unicode digits, combining marks, etc.
The dash character `-` is additionally allowed in identifiers, even though
that is not part of the Unicode `ID_Continue` definition. This is to allow
attribute names and block type names to contain dashes, although underscores
as word separators are considered the idiomatic usage.
[UAX31]: http://unicode.org/reports/tr31/ "Unicode Identifier and Pattern Syntax"
### Keywords
There are no globally-reserved words, but in some contexts certain identifiers
are reserved to function as keywords. These are discussed further in the
relevant documentation sections that follow. In such situations, the
identifier's role as a keyword supersedes any other valid interpretation that
may be possible. Outside of these specific situations, the keywords have no
special meaning and are interpreted as regular identifiers.
### Operators and Delimiters
The following character sequences represent operators, delimiters, and other
special tokens:
```
+ && == < : { [ ( ${
- || != > ? } ] ) %{
* ! <= = .
/ >= => ,
% ...
```
### Numeric Literals
A numeric literal is a decimal representation of a
real number. It has an integer part, a fractional part,
and an exponent part.
```ebnf
NumericLit = decimal+ ("." decimal+)? (expmark decimal+)?;
decimal = '0' .. '9';
expmark = ('e' | 'E') ("+" | "-")?;
```
## Structural Elements
The structural language consists of syntax representing the following
constructs:
* _Attributes_, which assign a value to a specified name.
* _Blocks_, which create a child body annotated by a type and optional labels.
* _Body Content_, which consists of a collection of attributes and blocks.
These constructs correspond to the similarly-named concepts in the
language-agnostic HCL information model.
```ebnf
ConfigFile = Body;
Body = (Attribute | Block)*;
Attribute = Identifier "=" Expression Newline;
Block = Identifier (StringLit)* "{" Newline Body "}" Newline;
```
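For example, the following configuration (an illustrative sketch; the names
are arbitrary) contains one attribute definition and one block with a single
label:

```
io_mode = "async"

service "http" {
  listen_addr = "127.0.0.1:8080"
}
```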
### Configuration Files
A _configuration file_ is a sequence of characters whose top-level is
interpreted as a Body.
### Bodies
A _body_ is a collection of associated attributes and blocks. The meaning of
this association is defined by the calling application.
### Attribute Definitions
An _attribute definition_ assigns a value to a particular attribute name within
a body. Each distinct attribute name may be defined no more than once within a
single body.
The attribute value is given as an expression, which is retained literally
for later evaluation by the calling application.
### Blocks
A _block_ creates a child body that is annotated with a block _type_ and
zero or more optional block _labels_. Blocks create a structural hierarchy
which can be interpreted by the calling application.
## Expressions
The expression sub-language is used within attribute definitions to specify
values.
```ebnf
Expression = (
ExprTerm |
Operation |
Conditional
);
```
### Types
The value types used within the expression language are those defined by the
syntax-agnostic HCL information model. An expression may return any valid
type, but only a subset of the available types have first-class syntax.
A calling application may make other types available via _variables_ and
_functions_.
### Expression Terms
Expression _terms_ are the operands for unary and binary expressions, as well
as acting as expressions in their own right.
```ebnf
ExprTerm = (
LiteralValue |
CollectionValue |
TemplateExpr |
VariableExpr |
FunctionCall |
ForExpr |
ExprTerm Index |
ExprTerm GetAttr |
ExprTerm Splat |
"(" Expression ")"
);
```
The productions for these different term types are given in their corresponding
sections.
Between the `(` and `)` characters denoting a sub-expression, newline
characters are ignored as whitespace.
### Literal Values
A _literal value_ immediately represents a particular value of a primitive
type.
```ebnf
LiteralValue = (
NumericLit |
"true" |
"false" |
"null"
);
```
* Numeric literals represent values of type _number_.
* The `true` and `false` keywords represent values of type _bool_.
* The `null` keyword represents a null value of the dynamic pseudo-type.
String literals are not directly available in the expression sub-language, but
are available via the template sub-language, which can in turn be incorporated
via _template expressions_.
### Collection Values
A _collection value_ combines zero or more other expressions to produce a
collection value.
```ebnf
CollectionValue = tuple | object;
tuple = "[" (
(Expression ("," Expression)* ","?)?
) "]";
object = "{" (
(objectelem ("," objectelem)* ","?)?
) "}";
objectelem = (Identifier | Expression) "=" Expression;
```
Only tuple and object values can be directly constructed via native syntax.
Tuple and object values can in turn be converted to list, set and map values
with other operations, which behave as defined by the syntax-agnostic HCL
information model.
When specifying an object element, an identifier is interpreted as a literal
attribute name as opposed to a variable reference. To populate an item key
from a variable, use parentheses to disambiguate:
* `{foo = "baz"}` is interpreted as an attribute literally named `foo`.
* `{(foo) = "baz"}` is interpreted as an attribute whose name is taken
from the variable named `foo`.
Between the open and closing delimiters of these sequences, newline sequences
are ignored as whitespace.
There is a syntax ambiguity between _for expressions_ and collection values
whose first element is a reference to a variable named `for`. The
_for expression_ interpretation has priority, so to produce a tuple whose
first element is the value of a variable named `for`, or an object with a
key named `for`, use parentheses to disambiguate:
* `[for, foo, baz]` is a syntax error.
* `[(for), foo, baz]` is a tuple whose first element is the value of variable
`for`.
* `{for: 1, baz: 2}` is a syntax error.
* `{(for): 1, baz: 2}` is an object with an attribute literally named `for`.
* `{baz: 2, for: 1}` is equivalent to the previous example, and resolves the
ambiguity by reordering.
### Template Expressions
A _template expression_ embeds a program written in the template sub-language
as an expression. Template expressions come in two forms:
* A _quoted_ template expression is delimited by quote characters (`"`) and
defines a template as a single-line expression with escape characters.
* A _heredoc_ template expression is introduced by a `<<` sequence and
defines a template via a multi-line sequence terminated by a user-chosen
delimiter.
In both cases the template interpolation and directive syntax is available for
use within the delimiters, and any text outside of these special sequences is
interpreted as a literal string.
In _quoted_ template expressions any literal string sequences within the
template behave in a special way: literal newline sequences are not permitted
and instead _escape sequences_ can be included, starting with the
backslash `\`:
```
\n Unicode newline control character
\r Unicode carriage return control character
\t Unicode tab control character
\" Literal quote mark, used to prevent interpretation as end of string
\\ Literal backslash, used to prevent interpretation as escape sequence
\uNNNN Unicode character from Basic Multilingual Plane (NNNN is four hexadecimal digits)
\UNNNNNNNN Unicode character from supplementary planes (NNNNNNNN is eight hexadecimal digits)
```
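For example (an illustrative sketch), the following quoted template uses
escape sequences to include a literal quote mark and a trailing newline:

```
greeting = "Hello, \"world\"!\n"
```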
The _heredoc_ template expression type is introduced by either `<<` or `<<-`,
followed by an identifier. The template expression ends when the given
identifier subsequently appears again on a line of its own.
If a heredoc template is introduced with the `<<-` symbol, any literal string
at the start of each line is analyzed to find the minimum number of leading
spaces, and then that number of prefix spaces is removed from all line-leading
literal strings. The final closing marker may also have an arbitrary number
of spaces preceding it on its line.
```ebnf
TemplateExpr = quotedTemplate | heredocTemplate;
quotedTemplate = (as defined in prose above);
heredocTemplate = (
("<<" | "<<-") Identifier Newline
(content as defined in prose above)
Identifier Newline
);
```
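For example (an illustrative sketch), the following `<<-` heredoc removes the
two leading spaces common to both lines, producing `"hello\n  world\n"`:

```
message = <<-EOT
  hello
    world
  EOT
```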
A quoted template expression containing only a single literal string serves
as a syntax for defining literal string _expressions_. In certain contexts
the template syntax is restricted in this manner:
```ebnf
StringLit = '"' (quoted literals as defined in prose above) '"';
```
The `StringLit` production permits the escape sequences discussed for quoted
template expressions as above, but does _not_ permit template interpolation
or directive sequences.
### Variables and Variable Expressions
A _variable_ is a value that has been assigned a symbolic name. Variables are
made available for use in expressions by the calling application, by populating
the _global scope_ used for expression evaluation.
Variables can also be created by expressions themselves, which always creates
a _child scope_ that incorporates the variables from its parent scope but
(re-)defines zero or more names with new values.
The value of a variable is accessed using a _variable expression_, which is
a standalone `Identifier` whose name corresponds to a defined variable:
```ebnf
VariableExpr = Identifier;
```
Variables in a particular scope are immutable, but child scopes may _hide_
a variable from an ancestor scope by defining a new variable of the same name.
When looking up variables, the most locally-defined variable of the given name
is used, and ancestor-scoped variables of the same name cannot be accessed.
No direct syntax is provided for declaring or assigning variables, but other
expression constructs implicitly create child scopes and define variables as
part of their evaluation.
### Functions and Function Calls
A _function_ is an operation that has been assigned a symbolic name. Functions
are made available for use in expressions by the calling application, by
populating the _function table_ used for expression evaluation.
The namespace of functions is distinct from the namespace of variables. A
function and a variable may share the same name with no implication that they
are in any way related.
A function can be executed via a _function call_ expression:
```ebnf
FunctionCall = Identifier "(" Arguments ")";
Arguments = (
() |
(Expression ("," Expression)* ("," | "...")?)
);
```
The definition of functions and the semantics of calling them are defined by
the language-agnostic HCL information model. The given arguments are mapped
onto the function's _parameters_ and the result of a function call expression
is the return value of the named function when given those arguments.
If the final argument expression is followed by the ellipsis symbol (`...`),
the final argument expression must evaluate to either a list or tuple value.
The elements of the value are each mapped to a single parameter of the
named function, beginning at the first parameter remaining after all other
argument expressions have been mapped.
Within the parentheses that delimit the function arguments, newline sequences
are ignored as whitespace.
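For example (an illustrative sketch, assuming the calling application
provides a `min` function and a list variable named `numbers`):

```
smallest = min(1, 2, 3)
also_smallest = min(numbers...)
```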
### For Expressions
A _for expression_ is a construct for constructing a collection by projecting
the items from another collection.
```ebnf
ForExpr = forTupleExpr | forObjectExpr;
forTupleExpr = "[" forIntro Expression forCond? "]";
forObjectExpr = "{" forIntro Expression "=>" Expression "..."? forCond? "}";
forIntro = "for" Identifier ("," Identifier)? "in" Expression ":";
forCond = "if" Expression;
```
The punctuation used to delimit a for expression decides whether it will produce
a tuple value (`[` and `]`) or an object value (`{` and `}`).
The "introduction" is equivalent in both cases: the keyword `for` followed by
either one or two identifiers separated by a comma which define the temporary
variable names used for iteration, followed by the keyword `in` and then
an expression that must evaluate to a value that can be iterated. The
introduction is then terminated by the colon (`:`) symbol.
If only one identifier is provided, it is the name of a variable that will
be temporarily assigned the value of each element during iteration. If both
are provided, the first is the key and the second is the value.
Tuple, object, list, map, and set types are iterable. The type of collection
used defines how the key and value variables are populated:
* For tuple and list types, the _key_ is the zero-based index into the
sequence for each element, and the _value_ is the element value. The
elements are visited in index order.
* For object and map types, the _key_ is the string attribute name or element
key, and the _value_ is the attribute or element value. The elements are
visited in the order defined by a lexicographic sort of the attribute names
or keys.
* For set types, the _key_ and _value_ are both the element value. The elements
are visited in an undefined but consistent order.
The expression after the colon and (in the case of object `for`) the expression
after the `=>` are both evaluated once for each element of the source
collection, in a local scope that defines the key and value variable names
specified.
The results of evaluating these expressions for each input element are used
to populate an element in the new collection. In the case of tuple `for`, the
single expression becomes an element, appending values to the tuple in visit
order. In the case of object `for`, the pair of expressions is used as an
attribute name and value respectively, creating an element in the resulting
object.
In the case of object `for`, it is an error if two input elements produce
the same result from the attribute name expression, since duplicate
attributes are not possible. If the ellipsis symbol (`...`) appears
immediately after the value expression, this activates the grouping mode in
which each value in the resulting object is a _tuple_ of all of the values
that were produced against each distinct key.
* `[for v in ["a", "b"]: v]` returns `["a", "b"]`.
* `[for i, v in ["a", "b"]: i]` returns `[0, 1]`.
* `{for i, v in ["a", "b"]: v => i}` returns `{a = 0, b = 1}`.
* `{for i, v in ["a", "a", "b"]: k => v}` produces an error, because attribute
`a` is defined twice.
* `{for i, v in ["a", "a", "b"]: v => i...}` returns `{a = [0, 1], b = [2]}`.
If the `if` keyword is used after the element expression(s), it applies an
additional predicate that can be used to conditionally filter elements from
the source collection from consideration. The expression following `if` is
evaluated once for each source element, in the same scope used for the
element expression(s). It must evaluate to a boolean value; if `true`, the
element will be evaluated as normal, while if `false` the element will be
skipped.
* `[for i, v in ["a", "b", "c"]: v if i < 2]` returns `["a", "b"]`.
If the collection value, element expression(s) or condition expression return
unknown values that are otherwise type-valid, the result is a value of the
dynamic pseudo-type.
### Index Operator
The _index_ operator returns the value of a single element of a collection
value. It is a postfix operator and can be applied to any value that has
a tuple, object, map, or list type.
```ebnf
Index = "[" Expression "]";
```
The expression delimited by the brackets is the _key_ by which an element
will be looked up.
If the index operator is applied to a value of tuple or list type, the
key expression must be a non-negative integer number representing the
zero-based element index to access. If applied to a value of object or map
type, the key expression must be a string representing the attribute name
or element key. If the given key value is not of the appropriate type, a
conversion is attempted using the conversion rules from the HCL
syntax-agnostic information model.
An error is produced if the given key expression does not correspond to
an element in the collection, either because it is of an unconvertible type,
because it is outside the range of elements for a tuple or list, or because
the given attribute or key does not exist.
If either the collection or the key are an unknown value of an
otherwise-suitable type, the return value is an unknown value whose type
matches what type would be returned given known values, or a value of the
dynamic pseudo-type if type information alone cannot determine a suitable
return type.
Within the brackets that delimit the index key, newline sequences are ignored
as whitespace.
### Attribute Access Operator
The _attribute access_ operator returns the value of a single attribute in
an object value. It is a postfix operator and can be applied to any value
that has an object type.
```ebnf
GetAttr = "." Identifier;
```
The given identifier is interpreted as the name of the attribute to access.
An error is produced if the object to which the operator is applied does not
have an attribute with the given name.
If the object is an unknown value of a type that has the attribute named, the
result is an unknown value of the attribute's type.
### Splat Operators
The _splat operators_ allow convenient access to attributes or elements of
elements in a tuple, list, or set value.
There are two kinds of "splat" operator:
* The _attribute-only_ splat operator supports only attribute lookups into
the elements from a list, but supports an arbitrary number of them.
* The _full_ splat operator additionally supports indexing into the elements
from a list, and allows any combination of attribute access and index
operations.
```ebnf
Splat = attrSplat | fullSplat;
attrSplat = "." "*" GetAttr*;
fullSplat = "[" "*" "]" (GetAttr | Index)*;
```
The splat operators can be thought of as shorthands for common operations that
could otherwise be performed using _for expressions_:
* `tuple.*.foo.bar[0]` is approximately equivalent to
`[for v in tuple: v.foo.bar][0]`.
* `tuple[*].foo.bar[0]` is approximately equivalent to
`[for v in tuple: v.foo.bar[0]]`.
Note the difference in how the trailing index operator is interpreted in
each case. This different interpretation is the key difference between the
_attribute-only_ and _full_ splat operators.
Splat operators have one additional behavior compared to the equivalent
_for expressions_ shown above: if a splat operator is applied to a value that
is _not_ of tuple, list, or set type, the value is coerced automatically into
a single-value list of the value type:
* `any_object.*.id` is equivalent to `[any_object.id]`, assuming that `any_object`
is a single object.
* `any_number.*` is equivalent to `[any_number]`, assuming that `any_number`
is a single number.
If the left operand of a splat operator is an unknown value of any type, the
result is a value of the dynamic pseudo-type.
### Operations
Operations apply a particular operator to either one or two expression terms.
```ebnf
Operation = unaryOp | binaryOp;
unaryOp = ("-" | "!") ExprTerm;
binaryOp = ExprTerm binaryOperator ExprTerm;
binaryOperator = compareOperator | arithmeticOperator | logicOperator;
compareOperator = "==" | "!=" | "<" | ">" | "<=" | ">=";
arithmeticOperator = "+" | "-" | "*" | "/" | "%";
logicOperator = "&&" | "||" | "!";
```
The unary operators have the highest precedence.
The binary operators are grouped into the following precedence levels:
```
Level Operators
6 * / %
5 + -
4 > >= < <=
3 == !=
2 &&
1 ||
```
Higher values of "level" bind tighter. Operators within the same precedence
level have left-to-right associativity. For example, `x / y * z` is equivalent
to `(x / y) * z`.
### Comparison Operators
Comparison operators always produce boolean values, as a result of testing
the relationship between two values.
The two equality operators apply to values of any type:
```
a == b equal
a != b not equal
```
Two values are equal if they are of identical types and their values are
equal as defined in the HCL syntax-agnostic information model. The equality
operators are commutative and opposite, such that `(a == b) == !(a != b)`
and `(a == b) == (b == a)` for all values `a` and `b`.
The four numeric comparison operators apply only to numbers:
```
a < b less than
a <= b less than or equal to
a > b greater than
a >= b greater than or equal to
```
If either operand of a comparison operator is a correctly-typed unknown value
or a value of the dynamic pseudo-type, the result is an unknown boolean.
### Arithmetic Operators
Arithmetic operators apply only to number values and always produce number
values as results.
```
a + b sum (addition)
a - b difference (subtraction)
a * b product (multiplication)
a / b quotient (division)
a % b remainder (modulo)
-a negation
```
Arithmetic operations are considered to be performed in an arbitrary-precision
number space.
If either operand of an arithmetic operator is an unknown number or a value
of the dynamic pseudo-type, the result is an unknown number.
### Logic Operators
Logic operators apply only to boolean values and always produce boolean values
as results.
```
a && b logical AND
a || b logical OR
!a logical NOT
```
If either operand of a logic operator is an unknown bool value or a value
of the dynamic pseudo-type, the result is an unknown bool value.
### Conditional Operator
The conditional operator allows selecting from one of two expressions based on
the outcome of a boolean expression.
```ebnf
Conditional = Expression "?" Expression ":" Expression;
```
The first expression is the _predicate_, which is evaluated and must produce
a boolean result. If the predicate value is `true`, the result of the second
expression is the result of the conditional. If the predicate value is
`false`, the result of the third expression is the result of the conditional.
The second and third expressions must be of the same type or must be able to
unify into a common type using the type unification rules defined in the
HCL syntax-agnostic information model. This unified type is the result type
of the conditional, with both expressions converted as necessary to the
unified type.
If the predicate is an unknown boolean value or a value of the dynamic
pseudo-type then the result is an unknown value of the unified type of the
other two expressions.
If either the second or third expressions produce errors when evaluated,
these errors are passed through only if the erroneous expression is selected.
This allows for expressions such as
`length(some_list) > 0 ? some_list[0] : default` (given some suitable `length`
function) without producing an error when the predicate is `false`.
## Templates
The template sub-language is used within template expressions to concisely
combine strings and other values to produce other strings. It can also be
used in isolation as a standalone template language.
```ebnf
Template = (
TemplateLiteral |
TemplateInterpolation |
TemplateDirective
)*;
TemplateDirective = TemplateIf | TemplateFor;
```
A template behaves like an expression that always returns a string value.
The different elements of the template are evaluated and combined into a
single string to return. If any of the elements produce an unknown string
or a value of the dynamic pseudo-type, the result is an unknown string.
An important use-case for standalone templates is to enable the use of
expressions in alternative HCL syntaxes where a native expression grammar is
not available. For example, the HCL JSON profile treats the values of JSON
strings as standalone templates when attributes are evaluated in expression
mode.
### Template Literals
A template literal is a literal sequence of characters to include in the
resulting string. When the template sub-language is used standalone, a
template literal can contain any unicode character, with the exception
of the sequences that introduce interpolations and directives, and for the
sequences that escape those introductions.
The interpolation and directive introductions are escaped by doubling their
leading characters. The `${` sequence is escaped as `$${` and the `%{`
sequence is escaped as `%%{`.
When the template sub-language is embedded in the expression language via
_template expressions_, additional constraints and transforms are applied to
template literals, as described in the definition of template expressions.
The value of a template literal can be modified by _strip markers_ in any
interpolations or directives that are adjacent to it. A strip marker is
a tilde (`~`) placed immediately after the opening `{` or before the closing
`}` of a template sequence:
* `hello ${~ "world" }` produces `"helloworld"`.
* `%{ if true ~} hello %{~ endif }` produces `"hello"`.
When a strip marker is present, any spaces adjacent to it in the corresponding
string literal (if any) are removed before producing the final value. Space
characters are interpreted as per Unicode's definition.
Stripping is done at syntax level rather than value level. Values returned
by interpolations or directives are not subject to stripping:
* `${"hello" ~}${" world"}` produces `"hello world"`, and not `"helloworld"`,
because the space is not in a template literal directly adjacent to the
strip marker.
### Template Interpolations
An _interpolation sequence_ evaluates an expression (written in the
expression sub-language), converts the result to a string value, and
replaces itself with the resulting string.
```ebnf
TemplateInterpolation = ("${" | "${~") Expression ("}" | "~}");
```
If the expression result cannot be converted to a string, an error is
produced.
### Template If Directive
The template `if` directive is the template equivalent of the
_conditional expression_, allowing selection of one of two sub-templates based
on the value of a predicate expression.
```ebnf
TemplateIf = (
("%{" | "%{~") "if" Expression ("}" | "~}")
Template
(
("%{" | "%{~") "else" ("}" | "~}")
Template
)?
("%{" | "%{~") "endif" ("}" | "~}")
);
```
The evaluation of the `if` directive is equivalent to the conditional
expression, with the following exceptions:
* The two sub-templates always produce strings, and thus the result value is
also always a string.
* The `else` clause may be omitted, in which case the conditional's third
expression result is implied to be the empty string.
### Template For Directive
The template `for` directive is the template equivalent of the _for expression_,
producing zero or more copies of its sub-template based on the elements of
a collection.
```ebnf
TemplateFor = (
("%{" | "%{~") "for" Identifier ("," Identifier) "in" Expression ("}" | "~}")
Template
("%{" | "%{~") "endfor" ("}" | "~}")
);
```
The evaluation of the `for` directive is equivalent to the _for expression_
when producing a tuple, with the following exceptions:
* The sub-template always produces a string.
* There is no equivalent of the "if" clause on the for expression.
* The elements of the resulting tuple are all converted to strings and
concatenated to produce a flat string result.
### Template Interpolation Unwrapping
As a special case, a template that consists only of a single interpolation,
with no surrounding literals, directives or other interpolations, is
"unwrapped". In this case, the result of the interpolation expression is
returned verbatim, without conversion to string.
This special case exists primarily to enable the native template language
to be used inside strings in alternative HCL syntaxes that lack a first-class
template or expression syntax. Unwrapping allows arbitrary expressions to be
used to populate attributes when strings in such languages are interpreted
as templates.
* `${true}` produces the boolean value `true`
* `${"${true}"}` produces the boolean value `true`, because both the inner
and outer interpolations are subject to unwrapping.
* `hello ${true}` produces the string `"hello true"`
* `${""}${true}` produces the string `"true"` because there are two
interpolation sequences, even though one produces an empty result.
* `%{ for v in [true] }${v}%{ endfor }` produces the string `true` because
the presence of the `for` directive circumvents the unwrapping even though
the final result is a single value.
In some contexts this unwrapping behavior may be circumvented by the calling
application, by converting the final template result to string. This is
necessary, for example, if a standalone template is being used to produce
the direct contents of a file, since the result in that case must always be a
string.


@ -0,0 +1,379 @@
package hclsyntax
import (
"fmt"
"strings"
"github.com/hashicorp/hcl2/hcl"
)
// AsHCLBlock returns the block data expressed as a *hcl.Block.
func (b *Block) AsHCLBlock() *hcl.Block {
lastHeaderRange := b.TypeRange
if len(b.LabelRanges) > 0 {
lastHeaderRange = b.LabelRanges[len(b.LabelRanges)-1]
}
return &hcl.Block{
Type: b.Type,
Labels: b.Labels,
Body: b.Body,
DefRange: hcl.RangeBetween(b.TypeRange, lastHeaderRange),
TypeRange: b.TypeRange,
LabelRanges: b.LabelRanges,
}
}
// Body is the implementation of hcl.Body for the zcl native syntax.
type Body struct {
Attributes Attributes
Blocks Blocks
// These are used with PartialContent to produce a "remaining items"
// body to return. They are nil on all bodies fresh out of the parser.
hiddenAttrs map[string]struct{}
hiddenBlocks map[string]struct{}
SrcRange hcl.Range
EndRange hcl.Range // Final token of the body, for reporting missing items
}
// Assert that *Body implements hcl.Body
var assertBodyImplBody hcl.Body = &Body{}
func (b *Body) walkChildNodes(w internalWalkFunc) {
b.Attributes = w(b.Attributes).(Attributes)
b.Blocks = w(b.Blocks).(Blocks)
}
func (b *Body) Range() hcl.Range {
return b.SrcRange
}
func (b *Body) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
content, remainHCL, diags := b.PartialContent(schema)
// Now we'll see if anything actually remains, to produce errors about
// extraneous items.
remain := remainHCL.(*Body)
for name, attr := range b.Attributes {
if _, hidden := remain.hiddenAttrs[name]; !hidden {
var suggestions []string
for _, attrS := range schema.Attributes {
if _, defined := content.Attributes[attrS.Name]; defined {
continue
}
suggestions = append(suggestions, attrS.Name)
}
suggestion := nameSuggestion(name, suggestions)
if suggestion != "" {
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
} else {
// Is there a block of the same name?
for _, blockS := range schema.Blocks {
if blockS.Type == name {
suggestion = fmt.Sprintf(" Did you mean to define a block of type %q?", name)
break
}
}
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported attribute",
Detail: fmt.Sprintf("An attribute named %q is not expected here.%s", name, suggestion),
Subject: &attr.NameRange,
})
}
}
for _, block := range b.Blocks {
blockTy := block.Type
if _, hidden := remain.hiddenBlocks[blockTy]; !hidden {
var suggestions []string
for _, blockS := range schema.Blocks {
suggestions = append(suggestions, blockS.Type)
}
suggestion := nameSuggestion(blockTy, suggestions)
if suggestion != "" {
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
} else {
// Is there an attribute of the same name?
for _, attrS := range schema.Attributes {
if attrS.Name == blockTy {
suggestion = fmt.Sprintf(" Did you mean to define attribute %q?", blockTy)
break
}
}
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported block type",
Detail: fmt.Sprintf("Blocks of type %q are not expected here.%s", blockTy, suggestion),
Subject: &block.TypeRange,
})
}
}
return content, diags
}
func (b *Body) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
attrs := make(hcl.Attributes)
var blocks hcl.Blocks
var diags hcl.Diagnostics
hiddenAttrs := make(map[string]struct{})
hiddenBlocks := make(map[string]struct{})
if b.hiddenAttrs != nil {
for k, v := range b.hiddenAttrs {
hiddenAttrs[k] = v
}
}
if b.hiddenBlocks != nil {
for k, v := range b.hiddenBlocks {
hiddenBlocks[k] = v
}
}
for _, attrS := range schema.Attributes {
name := attrS.Name
attr, exists := b.Attributes[name]
_, hidden := hiddenAttrs[name]
if hidden || !exists {
if attrS.Required {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required attribute",
Detail: fmt.Sprintf("The attribute %q is required, but no definition was found.", attrS.Name),
Subject: b.MissingItemRange().Ptr(),
})
}
continue
}
hiddenAttrs[name] = struct{}{}
attrs[name] = attr.AsHCLAttribute()
}
blocksWanted := make(map[string]hcl.BlockHeaderSchema)
for _, blockS := range schema.Blocks {
blocksWanted[blockS.Type] = blockS
}
for _, block := range b.Blocks {
if _, hidden := hiddenBlocks[block.Type]; hidden {
continue
}
blockS, wanted := blocksWanted[block.Type]
if !wanted {
continue
}
if len(block.Labels) > len(blockS.LabelNames) {
name := block.Type
if len(blockS.LabelNames) == 0 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Extraneous label for %s", name),
Detail: fmt.Sprintf(
"No labels are expected for %s blocks.", name,
),
Subject: block.LabelRanges[0].Ptr(),
Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(),
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Extraneous label for %s", name),
Detail: fmt.Sprintf(
"Only %d labels (%s) are expected for %s blocks.",
len(blockS.LabelNames), strings.Join(blockS.LabelNames, ", "), name,
),
Subject: block.LabelRanges[len(blockS.LabelNames)].Ptr(),
Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(),
})
}
continue
}
if len(block.Labels) < len(blockS.LabelNames) {
name := block.Type
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s for %s", blockS.LabelNames[len(block.Labels)], name),
Detail: fmt.Sprintf(
"All %s blocks must have %d labels (%s).",
name, len(blockS.LabelNames), strings.Join(blockS.LabelNames, ", "),
),
Subject: &block.OpenBraceRange,
Context: hcl.RangeBetween(block.TypeRange, block.OpenBraceRange).Ptr(),
})
continue
}
blocks = append(blocks, block.AsHCLBlock())
}
// We hide blocks only after we've processed all of them, since otherwise
// we can't process more than one of the same type.
for _, blockS := range schema.Blocks {
hiddenBlocks[blockS.Type] = struct{}{}
}
remain := &Body{
Attributes: b.Attributes,
Blocks: b.Blocks,
hiddenAttrs: hiddenAttrs,
hiddenBlocks: hiddenBlocks,
SrcRange: b.SrcRange,
EndRange: b.EndRange,
}
return &hcl.BodyContent{
Attributes: attrs,
Blocks: blocks,
MissingItemRange: b.MissingItemRange(),
}, remain, diags
}
func (b *Body) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(hcl.Attributes)
var diags hcl.Diagnostics
if len(b.Blocks) > 0 {
example := b.Blocks[0]
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Unexpected %s block", example.Type),
Detail: "Blocks are not allowed here.",
Subject: &example.TypeRange,
})
// we will continue processing anyway, and return the attributes
// we are able to find so that certain analyses can still be done
// in the face of errors.
}
if b.Attributes == nil {
return attrs, diags
}
for name, attr := range b.Attributes {
if _, hidden := b.hiddenAttrs[name]; hidden {
continue
}
attrs[name] = attr.AsHCLAttribute()
}
return attrs, diags
}
func (b *Body) MissingItemRange() hcl.Range {
return b.EndRange
}
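// Illustrative usage sketch (not part of the original source): a caller
// typically drives Content or PartialContent with a schema. The attribute
// and block names here are invented.
//
//	schema := &hcl.BodySchema{
//		Attributes: []hcl.AttributeSchema{
//			{Name: "description", Required: false},
//		},
//		Blocks: []hcl.BlockHeaderSchema{
//			{Type: "settings", LabelNames: []string{"name"}},
//		},
//	}
//	content, remain, diags := body.PartialContent(schema)
//	// content holds the matched attributes and blocks; remain is a Body
//	// that hides them, so a later pass can detect anything left over.
//	_, _, _ = content, remain, diags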
// Attributes is the collection of attribute definitions within a body.
type Attributes map[string]*Attribute
func (a Attributes) walkChildNodes(w internalWalkFunc) {
for k, attr := range a {
a[k] = w(attr).(*Attribute)
}
}
// Range returns the range of some arbitrary point within the set of
// attributes, or an invalid range if there are no attributes.
//
// This is provided only to complete the Node interface, but has no practical
// use.
func (a Attributes) Range() hcl.Range {
// An Attributes map doesn't really have a useful range to report, since
// it's just a grouping construct. So we'll arbitrarily take the
// range of one of the attributes, or produce an invalid range if we have
// none. In practice, there's little reason to ask for the range of
// an Attributes.
for _, attr := range a {
return attr.Range()
}
return hcl.Range{
Filename: "<unknown>",
}
}
// Attribute represents a single attribute definition within a body.
type Attribute struct {
Name string
Expr Expression
SrcRange hcl.Range
NameRange hcl.Range
EqualsRange hcl.Range
}
func (a *Attribute) walkChildNodes(w internalWalkFunc) {
a.Expr = w(a.Expr).(Expression)
}
func (a *Attribute) Range() hcl.Range {
return a.SrcRange
}
// AsHCLAttribute returns the block data expressed as a *hcl.Attribute.
func (a *Attribute) AsHCLAttribute() *hcl.Attribute {
return &hcl.Attribute{
Name: a.Name,
Expr: a.Expr,
Range: a.SrcRange,
NameRange: a.NameRange,
}
}
// Blocks is the list of nested blocks within a body.
type Blocks []*Block
func (bs Blocks) walkChildNodes(w internalWalkFunc) {
for i, block := range bs {
bs[i] = w(block).(*Block)
}
}
// Range returns the range of some arbitrary point within the list of
// blocks, or an invalid range if there are no blocks.
//
// This is provided only to complete the Node interface, but has no practical
// use.
func (bs Blocks) Range() hcl.Range {
if len(bs) > 0 {
return bs[0].Range()
}
return hcl.Range{
Filename: "<unknown>",
}
}
// Block represents a nested block structure
type Block struct {
Type string
Labels []string
Body *Body
TypeRange hcl.Range
LabelRanges []hcl.Range
OpenBraceRange hcl.Range
CloseBraceRange hcl.Range
}
func (b *Block) walkChildNodes(w internalWalkFunc) {
b.Body = w(b.Body).(*Body)
}
func (b *Block) Range() hcl.Range {
return hcl.RangeBetween(b.TypeRange, b.CloseBraceRange)
}
271
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/token.go generated vendored Normal file
@ -0,0 +1,271 @@
package hclsyntax
import (
"fmt"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/hashicorp/hcl2/hcl"
)
// Token represents a sequence of bytes from some HCL code that has been
// tagged with a type and its range within the source file.
type Token struct {
Type TokenType
Bytes []byte
Range hcl.Range
}
// Tokens is a slice of Token.
type Tokens []Token
// TokenType is an enumeration used for the Type field on Token.
type TokenType rune
const (
// Single-character tokens are represented by their own character, for
// convenience in producing these within the scanner. However, the values
// are otherwise arbitrary and just intended to be mnemonic for humans
// who might see them in debug output.
TokenOBrace TokenType = '{'
TokenCBrace TokenType = '}'
TokenOBrack TokenType = '['
TokenCBrack TokenType = ']'
TokenOParen TokenType = '('
TokenCParen TokenType = ')'
TokenOQuote TokenType = '«'
TokenCQuote TokenType = '»'
TokenOHeredoc TokenType = 'H'
TokenCHeredoc TokenType = 'h'
TokenStar TokenType = '*'
TokenSlash TokenType = '/'
TokenPlus TokenType = '+'
TokenMinus TokenType = '-'
TokenPercent TokenType = '%'
TokenEqual TokenType = '='
TokenEqualOp TokenType = '≔'
TokenNotEqual TokenType = '≠'
TokenLessThan TokenType = '<'
TokenLessThanEq TokenType = '≤'
TokenGreaterThan TokenType = '>'
TokenGreaterThanEq TokenType = '≥'
TokenAnd TokenType = '∧'
TokenOr TokenType = '∨'
TokenBang TokenType = '!'
TokenDot TokenType = '.'
TokenComma TokenType = ','
TokenEllipsis TokenType = '…'
TokenFatArrow TokenType = '⇒'
TokenQuestion TokenType = '?'
TokenColon TokenType = ':'
TokenTemplateInterp TokenType = '∫'
TokenTemplateControl TokenType = 'λ'
TokenTemplateSeqEnd TokenType = '∎'
TokenQuotedLit TokenType = 'Q' // might contain backslash escapes
TokenStringLit TokenType = 'S' // cannot contain backslash escapes
TokenNumberLit TokenType = 'N'
TokenIdent TokenType = 'I'
TokenComment TokenType = 'C'
TokenNewline TokenType = '\n'
TokenEOF TokenType = '␄'
// The rest are not used in the language but recognized by the scanner so
// we can generate good diagnostics in the parser when users try to write
// things that might work in other languages they are familiar with, or
// simply make incorrect assumptions about the HCL language.
TokenBitwiseAnd TokenType = '&'
TokenBitwiseOr TokenType = '|'
TokenBitwiseNot TokenType = '~'
TokenBitwiseXor TokenType = '^'
TokenStarStar TokenType = '➚'
TokenBacktick TokenType = '`'
TokenSemicolon TokenType = ';'
TokenTabs TokenType = '␉'
TokenInvalid TokenType = '�'
TokenBadUTF8 TokenType = '💩'
// TokenNil is a placeholder for when a token is required but none is
// available, e.g. when reporting errors. The scanner will never produce
// this as part of a token stream.
TokenNil TokenType = '\x00'
)
func (t TokenType) GoString() string {
return fmt.Sprintf("hclsyntax.%s", t.String())
}
type scanMode int
const (
scanNormal scanMode = iota
scanTemplate
)
type tokenAccum struct {
Filename string
Bytes []byte
Pos hcl.Pos
Tokens []Token
}
func (f *tokenAccum) emitToken(ty TokenType, startOfs, endOfs int) {
// Walk through our buffer to figure out how much we need to adjust
// the start pos to get our end pos.
start := f.Pos
start.Column += startOfs - f.Pos.Byte // Safe because only ASCII spaces can be in the offset
start.Byte = startOfs
end := start
end.Byte = endOfs
b := f.Bytes[startOfs:endOfs]
for len(b) > 0 {
advance, seq, _ := textseg.ScanGraphemeClusters(b, true)
if len(seq) == 1 && seq[0] == '\n' {
end.Line++
end.Column = 1
} else {
end.Column++
}
b = b[advance:]
}
f.Pos = end
f.Tokens = append(f.Tokens, Token{
Type: ty,
Bytes: f.Bytes[startOfs:endOfs],
Range: hcl.Range{
Filename: f.Filename,
Start: start,
End: end,
},
})
}
type heredocInProgress struct {
Marker []byte
StartOfLine bool
}
// checkInvalidTokens does a simple pass across the given tokens and generates
// diagnostics for tokens that should _never_ appear in HCL source. This
// is intended to avoid the need for the parser to have special support
// for them all over.
//
// Returns a diagnostics with no errors if everything seems acceptable.
// Otherwise, returns zero or more error diagnostics, though tries to limit
// repetition of the same information.
func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
var diags hcl.Diagnostics
toldBitwise := 0
toldExponent := 0
toldBacktick := 0
toldSemicolon := 0
toldTabs := 0
toldBadUTF8 := 0
toldInvalid := 0
for _, tok := range tokens {
switch tok.Type {
case TokenBitwiseAnd, TokenBitwiseOr, TokenBitwiseXor, TokenBitwiseNot:
if toldBitwise < 4 {
var suggestion string
switch tok.Type {
case TokenBitwiseAnd:
suggestion = " Did you mean boolean AND (\"&&\")?"
case TokenBitwiseOr:
suggestion = " Did you mean boolean OR (\"&&\")?"
case TokenBitwiseNot:
suggestion = " Did you mean boolean NOT (\"!\")?"
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported operator",
Detail: fmt.Sprintf("Bitwise operators are not supported.%s", suggestion),
Subject: &tok.Range,
})
toldBitwise++
}
case TokenStarStar:
if toldExponent < 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported operator",
Detail: "\"**\" is not a supported operator. Exponentiation is not supported as an operator.",
Subject: &tok.Range,
})
toldExponent++
}
case TokenBacktick:
// Only report for alternating (even) backticks, so we won't report both
// the start and end of the same backtick-quoted string.
if toldBacktick < 4 && (toldBacktick%2) == 0 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "The \"`\" character is not valid. To create a multi-line string, use the \"heredoc\" syntax, like \"<<EOT\".",
Subject: &tok.Range,
})
}
toldBacktick++
case TokenSemicolon:
if toldSemicolon < 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "The \";\" character is not valid. Use newlines to separate attributes and blocks, and commas to separate items in collection values.",
Subject: &tok.Range,
})
toldSemicolon++
}
case TokenTabs:
if toldTabs < 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "Tab characters may not be used. The recommended indentation style is two spaces per indent.",
Subject: &tok.Range,
})
toldTabs++
}
case TokenBadUTF8:
if toldBadUTF8 < 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character encoding",
Detail: "All input files must be UTF-8 encoded. Ensure that UTF-8 encoding is selected in your editor.",
Subject: &tok.Range,
})
toldBadUTF8++
}
case TokenInvalid:
if toldInvalid < 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "This character is not used within the language.",
Subject: &tok.Range,
})
toldInvalid++
}
}
}
return diags
}
@ -0,0 +1,69 @@
// Code generated by "stringer -type TokenType -output token_type_string.go"; DO NOT EDIT.
package hclsyntax
import "fmt"
const _TokenType_name = "TokenNilTokenNewlineTokenBangTokenPercentTokenBitwiseAndTokenOParenTokenCParenTokenStarTokenPlusTokenCommaTokenMinusTokenDotTokenSlashTokenColonTokenSemicolonTokenLessThanTokenEqualTokenGreaterThanTokenQuestionTokenCommentTokenOHeredocTokenIdentTokenNumberLitTokenQuotedLitTokenStringLitTokenOBrackTokenCBrackTokenBitwiseXorTokenBacktickTokenCHeredocTokenOBraceTokenBitwiseOrTokenCBraceTokenBitwiseNotTokenOQuoteTokenCQuoteTokenTemplateControlTokenEllipsisTokenFatArrowTokenTemplateSeqEndTokenAndTokenOrTokenTemplateInterpTokenEqualOpTokenNotEqualTokenLessThanEqTokenGreaterThanEqTokenEOFTokenTabsTokenStarStarTokenInvalidTokenBadUTF8"
var _TokenType_map = map[TokenType]string{
0: _TokenType_name[0:8],
10: _TokenType_name[8:20],
33: _TokenType_name[20:29],
37: _TokenType_name[29:41],
38: _TokenType_name[41:56],
40: _TokenType_name[56:67],
41: _TokenType_name[67:78],
42: _TokenType_name[78:87],
43: _TokenType_name[87:96],
44: _TokenType_name[96:106],
45: _TokenType_name[106:116],
46: _TokenType_name[116:124],
47: _TokenType_name[124:134],
58: _TokenType_name[134:144],
59: _TokenType_name[144:158],
60: _TokenType_name[158:171],
61: _TokenType_name[171:181],
62: _TokenType_name[181:197],
63: _TokenType_name[197:210],
67: _TokenType_name[210:222],
72: _TokenType_name[222:235],
73: _TokenType_name[235:245],
78: _TokenType_name[245:259],
81: _TokenType_name[259:273],
83: _TokenType_name[273:287],
91: _TokenType_name[287:298],
93: _TokenType_name[298:309],
94: _TokenType_name[309:324],
96: _TokenType_name[324:337],
104: _TokenType_name[337:350],
123: _TokenType_name[350:361],
124: _TokenType_name[361:375],
125: _TokenType_name[375:386],
126: _TokenType_name[386:401],
171: _TokenType_name[401:412],
187: _TokenType_name[412:423],
955: _TokenType_name[423:443],
8230: _TokenType_name[443:456],
8658: _TokenType_name[456:469],
8718: _TokenType_name[469:488],
8743: _TokenType_name[488:496],
8744: _TokenType_name[496:503],
8747: _TokenType_name[503:522],
8788: _TokenType_name[522:534],
8800: _TokenType_name[534:547],
8804: _TokenType_name[547:562],
8805: _TokenType_name[562:580],
9220: _TokenType_name[580:588],
9225: _TokenType_name[588:597],
10138: _TokenType_name[597:610],
65533: _TokenType_name[610:622],
128169: _TokenType_name[622:634],
}
func (i TokenType) String() string {
if str, ok := _TokenType_map[i]; ok {
return str
}
return fmt.Sprintf("TokenType(%d)", i)
}
@ -0,0 +1,335 @@
#!/usr/bin/env ruby
#
# This script has been updated to accept more command-line arguments:
#
# -u, --url URL to process
# -m, --machine Machine name
# -p, --properties Properties to add to the machine
# -o, --output Write output to file
#
# Updated by: Marty Schoch <marty.schoch@gmail.com>
#
# This script uses the unicode spec to generate a Ragel state machine
# that recognizes unicode alphanumeric characters. It generates 5
# character classes: uupper, ulower, ualpha, udigit, and ualnum.
# Currently supported encodings are UTF-8 [default] and UCS-4.
#
# Usage: unicode2ragel.rb [options]
# -e, --encoding [ucs4 | utf8] Data encoding
# -h, --help Show this message
#
# This script was originally written as part of the Ferret search
# engine library.
#
# Author: Rakan El-Khalil <rakan@well.com>
require 'optparse'
require 'open-uri'
ENCODINGS = [ :utf8, :ucs4 ]
ALPHTYPES = { :utf8 => "byte", :ucs4 => "rune" }
DEFAULT_CHART_URL = "http://www.unicode.org/Public/5.1.0/ucd/DerivedCoreProperties.txt"
DEFAULT_MACHINE_NAME= "WChar"
###
# Display vars & default option
TOTAL_WIDTH = 80
RANGE_WIDTH = 23
@encoding = :utf8
@chart_url = DEFAULT_CHART_URL
machine_name = DEFAULT_MACHINE_NAME
properties = []
@output = $stdout
###
# Option parsing
cli_opts = OptionParser.new do |opts|
opts.on("-e", "--encoding [ucs4 | utf8]", "Data encoding") do |o|
@encoding = o.downcase.to_sym
end
opts.on("-h", "--help", "Show this message") do
puts opts
exit
end
opts.on("-u", "--url URL", "URL to process") do |o|
@chart_url = o
end
opts.on("-m", "--machine MACHINE_NAME", "Machine name") do |o|
machine_name = o
end
opts.on("-p", "--properties x,y,z", Array, "Properties to add to machine") do |o|
properties = o
end
opts.on("-o", "--output FILE", "output file") do |o|
@output = File.new(o, "w+")
end
end
cli_opts.parse(ARGV)
unless ENCODINGS.member? @encoding
puts "Invalid encoding: #{@encoding}"
puts cli_opts
exit
end
##
# Downloads the document at url and yields every alpha line's hex
# range and description.
def each_alpha( url, property )
open( url ) do |file|
file.each_line do |line|
next if line =~ /^#/;
next if line !~ /; #{property} #/;
range, description = line.split(/;/)
range.strip!
description.gsub!(/.*#/, '').strip!
if range =~ /\.\./
start, stop = range.split '..'
else start = stop = range
end
yield start.hex .. stop.hex, description
end
end
end
###
# Formats to hex at minimum width
def to_hex( n )
r = "%0X" % n
r = "0#{r}" unless (r.length % 2).zero?
r
end
###
# UCS4 is just a straight hex conversion of the unicode codepoint.
def to_ucs4( range )
rangestr = "0x" + to_hex(range.begin)
rangestr << "..0x" + to_hex(range.end) if range.begin != range.end
[ rangestr ]
end
##
# 0x00 - 0x7f -> 0zzzzzzz[7]
# 0x80 - 0x7ff -> 110yyyyy[5] 10zzzzzz[6]
# 0x800 - 0xffff -> 1110xxxx[4] 10yyyyyy[6] 10zzzzzz[6]
# 0x010000 - 0x10ffff -> 11110www[3] 10xxxxxx[6] 10yyyyyy[6] 10zzzzzz[6]
UTF8_BOUNDARIES = [0x7f, 0x7ff, 0xffff, 0x10ffff]
def to_utf8_enc( n )
r = 0
if n <= 0x7f
r = n
elsif n <= 0x7ff
y = 0xc0 | (n >> 6)
z = 0x80 | (n & 0x3f)
r = y << 8 | z
elsif n <= 0xffff
x = 0xe0 | (n >> 12)
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = x << 16 | y << 8 | z
elsif n <= 0x10ffff
w = 0xf0 | (n >> 18)
x = 0x80 | (n >> 12) & 0x3f
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = w << 24 | x << 16 | y << 8 | z
end
to_hex(r)
end
def from_utf8_enc( n )
n = n.hex
r = 0
if n <= 0x7f
r = n
elsif n <= 0xdfff
y = (n >> 8) & 0x1f
z = n & 0x3f
r = y << 6 | z
elsif n <= 0xefffff
x = (n >> 16) & 0x0f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = x << 10 | y << 6 | z
elsif n <= 0xf7ffffff
w = (n >> 24) & 0x07
x = (n >> 16) & 0x3f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = w << 18 | x << 12 | y << 6 | z
end
r
end
###
# Given a range, splits it up into ranges that can be continuously
# encoded into utf8. Eg: 0x00 .. 0xff => [0x00..0x7f, 0x80..0xff]
# This is not strictly needed since the current [5.1] unicode standard
# doesn't have ranges that straddle utf8 boundaries. This is included
# for completeness as there is no telling if that will ever change.
def utf8_ranges( range )
ranges = []
UTF8_BOUNDARIES.each do |max|
if range.begin <= max
if range.end <= max
ranges << range
return ranges
end
ranges << (range.begin .. max)
range = (max + 1) .. range.end
end
end
ranges
end
def build_range( start, stop )
size = start.size/2
left = size - 1
return [""] if size < 1
a = start[0..1]
b = stop[0..1]
###
# Shared prefix
if a == b
return build_range(start[2..-1], stop[2..-1]).map do |elt|
"0x#{a} " + elt
end
end
###
# Unshared prefix, end of run
return ["0x#{a}..0x#{b} "] if left.zero?
###
# Unshared prefix, not end of run
# Range can be 0x123456..0x56789A
# Which is equivalent to:
# 0x123456 .. 0x12FFFF
# 0x130000 .. 0x55FFFF
# 0x560000 .. 0x56789A
ret = []
ret << build_range(start, a + "FF" * left)
###
# Only generate middle range if need be.
if a.hex+1 != b.hex
max = to_hex(b.hex - 1)
max = "FF" if b == "FF"
ret << "0x#{to_hex(a.hex+1)}..0x#{max} " + "0x00..0xFF " * left
end
###
# Don't generate last range if it is covered by first range
ret << build_range(b + "00" * left, stop) unless b == "FF"
ret.flatten!
end
def to_utf8( range )
utf8_ranges( range ).map do |r|
begin_enc = to_utf8_enc(r.begin)
end_enc = to_utf8_enc(r.end)
build_range begin_enc, end_enc
end.flatten!
end
##
# Perform a 3-way comparison of the number of codepoints advertised by
# the unicode spec for the given range, the originally parsed range,
# and the resulting utf8 encoded range.
def count_codepoints( code )
code.split(' ').inject(1) do |acc, elt|
if elt =~ /0x(.+)\.\.0x(.+)/
if @encoding == :utf8
acc * (from_utf8_enc($2) - from_utf8_enc($1) + 1)
else
acc * ($2.hex - $1.hex + 1)
end
else
acc
end
end
end
def is_valid?( range, desc, codes )
spec_count = 1
spec_count = $1.to_i if desc =~ /\[(\d+)\]/
range_count = range.end - range.begin + 1
sum = codes.inject(0) { |acc, elt| acc + count_codepoints(elt) }
sum == spec_count and sum == range_count
end
##
# Generate the state machine and write it to the configured output
def generate_machine( name, property )
pipe = " "
@output.puts " #{name} = "
each_alpha( @chart_url, property ) do |range, desc|
codes = (@encoding == :ucs4) ? to_ucs4(range) : to_utf8(range)
#raise "Invalid encoding of range #{range}: #{codes.inspect}" unless
# is_valid? range, desc, codes
range_width = codes.map { |a| a.size }.max
range_width = RANGE_WIDTH if range_width < RANGE_WIDTH
desc_width = TOTAL_WIDTH - RANGE_WIDTH - 11
desc_width -= (range_width - RANGE_WIDTH) if range_width > RANGE_WIDTH
if desc.size > desc_width
desc = desc[0..desc_width - 4] + "..."
end
codes.each_with_index do |r, idx|
desc = "" unless idx.zero?
code = "%-#{range_width}s" % r
@output.puts " #{pipe} #{code} ##{desc}"
pipe = "|"
end
end
@output.puts " ;"
@output.puts ""
end
@output.puts <<EOF
# The following Ragel file was autogenerated with #{$0}
# from: #{@chart_url}
#
# It defines #{properties}.
#
# To use this, make sure that your alphtype is set to #{ALPHTYPES[@encoding]},
# and that your input is in #{@encoding}.
%%{
machine #{machine_name};
EOF
properties.each { |x| generate_machine( x, x ) }
@output.puts <<EOF
}%%
EOF
File diff suppressed because it is too large
@ -0,0 +1,86 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
// Variables returns all of the variables referenced within a given expression.
//
// This is the implementation of the "Variables" method on every native
// expression.
func Variables(expr Expression) []hcl.Traversal {
var vars []hcl.Traversal
walker := &variablesWalker{
Callback: func(t hcl.Traversal) {
vars = append(vars, t)
},
}
Walk(expr, walker)
return vars
}
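// Illustrative usage sketch (not part of the original source): the
// expression text and names below are invented, and fmt is assumed to be
// imported.
//
//	expr, _ := ParseExpression([]byte("a + b[0] + foo.bar"), "x.hcl", hcl.Pos{Line: 1, Column: 1})
//	for _, traversal := range Variables(expr) {
//		fmt.Println(traversal.RootName()) // prints: a, b, foo
//	}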
// variablesWalker is a Walker implementation that calls its callback for any
// root scope traversal found while walking.
type variablesWalker struct {
Callback func(hcl.Traversal)
localScopes []map[string]struct{}
}
func (w *variablesWalker) Enter(n Node) hcl.Diagnostics {
switch tn := n.(type) {
case *ScopeTraversalExpr:
t := tn.Traversal
// Check if the given root name appears in any of the active
// local scopes. We don't want to return local variables here, since
// the goal of walking variables is to tell the calling application
// which names it needs to populate in the _root_ scope.
name := t.RootName()
for _, names := range w.localScopes {
if _, localized := names[name]; localized {
return nil
}
}
w.Callback(t)
case ChildScope:
w.localScopes = append(w.localScopes, tn.LocalNames)
}
return nil
}
func (w *variablesWalker) Exit(n Node) hcl.Diagnostics {
switch n.(type) {
case ChildScope:
// pop the latest local scope, assuming that the walker will
// behave symmetrically as promised.
w.localScopes = w.localScopes[:len(w.localScopes)-1]
}
return nil
}
// ChildScope is a synthetic AST node that is visited during a walk to
// indicate that its descendants will be evaluated in a child scope, which
// may mask certain variables from the parent scope as locals.
//
// ChildScope nodes don't really exist in the AST, but are rather synthesized
// on the fly during walk. Therefore it doesn't do any good to transform them;
// instead, transform either parent node that created a scope or the expression
// that the child scope struct wraps.
type ChildScope struct {
LocalNames map[string]struct{}
Expr *Expression // pointer because it can be replaced on walk
}
func (e ChildScope) walkChildNodes(w internalWalkFunc) {
*(e.Expr) = w(*(e.Expr)).(Expression)
}
// Range returns the range of the expression that the ChildScope is
// encapsulating. It isn't really very useful to call Range on a ChildScope.
func (e ChildScope) Range() hcl.Range {
return (*e.Expr).Range()
}
77
vendor/github.com/hashicorp/hcl2/hcl/hclsyntax/walk.go generated vendored Normal file
@ -0,0 +1,77 @@
package hclsyntax
import (
"github.com/hashicorp/hcl2/hcl"
)
// VisitFunc is the callback signature for VisitAll.
type VisitFunc func(node Node) hcl.Diagnostics
// VisitAll is a basic way to traverse the AST beginning with a particular
// node. The given function will be called once for each AST node in
// depth-first order, but no context is provided about the shape of the tree.
//
// The VisitFunc may return diagnostics, in which case they will be accumulated
// and returned as a single set.
func VisitAll(node Node, f VisitFunc) hcl.Diagnostics {
diags := f(node)
node.walkChildNodes(func(node Node) Node {
diags = append(diags, VisitAll(node, f)...)
return node
})
return diags
}
// Walker is an interface used with Walk.
type Walker interface {
Enter(node Node) hcl.Diagnostics
Exit(node Node) hcl.Diagnostics
}
// Walk is a more complex way to traverse the AST starting with a particular
// node, which provides information about the tree structure via separate
// Enter and Exit functions.
func Walk(node Node, w Walker) hcl.Diagnostics {
diags := w.Enter(node)
node.walkChildNodes(func(node Node) Node {
diags = append(diags, Walk(node, w)...)
return node
})
return diags
}
// Transformer is an interface used with Transform
type Transformer interface {
// Transform accepts a node and returns a replacement node along with
// a flag for whether to also visit child nodes. If the flag is false,
// none of the child nodes will be visited and the TransformExit method
// will not be called for the node.
//
// It is acceptable and appropriate for Transform to return the same node
// it was given, for situations where no transform is needed.
Transform(node Node) (Node, bool, hcl.Diagnostics)
// TransformExit signals the end of transformations of child nodes of the
// given node. If Transform returned a new node, the given node is the
// node that was returned, rather than the node that was originally
// encountered.
TransformExit(node Node) hcl.Diagnostics
}
// Transform allows for in-place transformations of an AST starting with a
// particular node. The provided Transformer implementation drives the
// transformation process. The return value is the node that replaced the
// given top-level node.
func Transform(node Node, t Transformer) (Node, hcl.Diagnostics) {
newNode, descend, diags := t.Transform(node)
if !descend {
return newNode, diags
}
node.walkChildNodes(func(node Node) Node {
newNode, newDiags := Transform(node, t)
diags = append(diags, newDiags...)
return newNode
})
diags = append(diags, t.TransformExit(newNode)...)
return newNode, diags
}
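// Illustrative usage sketch (not part of the original source): a minimal
// Transformer that visits every node without changing anything. The type
// name is invented.
//
//	type nopTransformer struct{}
//
//	func (nopTransformer) Transform(n Node) (Node, bool, hcl.Diagnostics) {
//		return n, true, nil // keep the node as-is and descend into children
//	}
//
//	func (nopTransformer) TransformExit(n Node) hcl.Diagnostics {
//		return nil
//	}
//
//	newRoot, diags := Transform(root, nopTransformer{})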
121
vendor/github.com/hashicorp/hcl2/hcl/json/ast.go generated vendored Normal file
@ -0,0 +1,121 @@
package json
import (
"math/big"
"github.com/hashicorp/hcl2/hcl"
)
type node interface {
Range() hcl.Range
StartRange() hcl.Range
}
type objectVal struct {
Attrs map[string]*objectAttr
SrcRange hcl.Range // range of the entire object, brace-to-brace
OpenRange hcl.Range // range of the opening brace
CloseRange hcl.Range // range of the closing brace
}
func (n *objectVal) Range() hcl.Range {
return n.SrcRange
}
func (n *objectVal) StartRange() hcl.Range {
return n.OpenRange
}
type objectAttr struct {
Name string
Value node
NameRange hcl.Range // range of the name string
}
func (n *objectAttr) Range() hcl.Range {
return n.NameRange
}
func (n *objectAttr) StartRange() hcl.Range {
return n.NameRange
}
type arrayVal struct {
Values []node
SrcRange hcl.Range // range of the entire array, bracket-to-bracket
OpenRange hcl.Range // range of the opening bracket
}
func (n *arrayVal) Range() hcl.Range {
return n.SrcRange
}
func (n *arrayVal) StartRange() hcl.Range {
return n.OpenRange
}
type booleanVal struct {
Value bool
SrcRange hcl.Range
}
func (n *booleanVal) Range() hcl.Range {
return n.SrcRange
}
func (n *booleanVal) StartRange() hcl.Range {
return n.SrcRange
}
type numberVal struct {
Value *big.Float
SrcRange hcl.Range
}
func (n *numberVal) Range() hcl.Range {
return n.SrcRange
}
func (n *numberVal) StartRange() hcl.Range {
return n.SrcRange
}
type stringVal struct {
Value string
SrcRange hcl.Range
}
func (n *stringVal) Range() hcl.Range {
return n.SrcRange
}
func (n *stringVal) StartRange() hcl.Range {
return n.SrcRange
}
type nullVal struct {
SrcRange hcl.Range
}
func (n *nullVal) Range() hcl.Range {
return n.SrcRange
}
func (n *nullVal) StartRange() hcl.Range {
return n.SrcRange
}
// invalidVal is used as a placeholder where a value is needed for a valid
// parse tree but the input was invalid enough to prevent one from being
// created.
type invalidVal struct {
SrcRange hcl.Range
}
func (n invalidVal) Range() hcl.Range {
return n.SrcRange
}
func (n invalidVal) StartRange() hcl.Range {
return n.SrcRange
}
@ -0,0 +1,33 @@
package json
import (
"github.com/agext/levenshtein"
)
var keywords = []string{"false", "true", "null"}
// keywordSuggestion tries to find a valid JSON keyword that is close to the
// given string and returns it if found. If no keyword is close enough, returns
// the empty string.
func keywordSuggestion(given string) string {
return nameSuggestion(given, keywords)
}
// nameSuggestion tries to find a name from the given slice of suggested names
// that is close to the given name and returns it if found. If no suggestion
// is close enough, returns the empty string.
//
// The suggestions are tried in order, so earlier suggestions take precedence
// if the given string is similar to two or more suggestions.
//
// This function is intended to be used with a relatively-small number of
// suggestions. It's not optimized for hundreds or thousands of them.
func nameSuggestion(given string, suggestions []string) string {
for _, suggestion := range suggestions {
dist := levenshtein.Distance(given, suggestion, nil)
if dist < 3 { // threshold determined experimentally
return suggestion
}
}
return ""
}
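// Illustrative usage sketch (not part of the original source):
//
//	keywordSuggestion("flase") // returns "false" (distance 2, below the threshold)
//	keywordSuggestion("zzzz")  // returns "" (no keyword is close enough)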
8
vendor/github.com/hashicorp/hcl2/hcl/json/doc.go generated vendored Normal file
@ -0,0 +1,8 @@
// Package json is the JSON parser for HCL. It parses JSON files and returns
// implementations of the core HCL structural interfaces in terms of the
// JSON data inside.
//
// This is not a generic JSON parser. Instead, it deals with the mapping from
// the JSON information model to the HCL information model, using a number
// of hard-coded structural conventions.
package json
@ -0,0 +1,41 @@
package json
import (
"strings"
)
type navigation struct {
root *objectVal
}
// Implementation of zcled.ContextString
func (n navigation) ContextString(offset int) string {
steps := navigationStepsRev(n.root, offset)
if steps == nil {
return ""
}
// We built our slice backwards, so we'll reverse it in-place now.
half := len(steps) / 2 // integer division
for i := 0; i < half; i++ {
steps[i], steps[len(steps)-1-i] = steps[len(steps)-1-i], steps[i]
}
return strings.Join(steps, ".")
}
func navigationStepsRev(obj *objectVal, offset int) []string {
// Do any of our properties have an object that contains the target
// offset?
for k, attr := range obj.Attrs {
ov, ok := attr.Value.(*objectVal)
if !ok {
continue
}
if ov.SrcRange.ContainsOffset(offset) {
return append(navigationStepsRev(ov, offset), k)
}
}
return nil
}
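// Illustrative sketch (not part of the original source): given the document
// {"a": {"b": {"c": 1}}} and an offset inside the innermost object,
// navigationStepsRev returns ["b", "a"], which ContextString reverses and
// joins into "a.b".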
504
vendor/github.com/hashicorp/hcl2/hcl/json/parser.go generated vendored Normal file
@ -0,0 +1,504 @@
package json
import (
"encoding/json"
"fmt"
"math/big"
"github.com/hashicorp/hcl2/hcl"
)
func parseFileContent(buf []byte, filename string) (node, hcl.Diagnostics) {
tokens := scan(buf, pos{
Filename: filename,
Pos: hcl.Pos{
Byte: 0,
Line: 1,
Column: 1,
},
})
p := newPeeker(tokens)
node, diags := parseValue(p)
if len(diags) == 0 && p.Peek().Type != tokenEOF {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous data after value",
Detail: "Extra characters appear after the JSON value.",
Subject: p.Peek().Range.Ptr(),
})
}
return node, diags
}
func parseValue(p *peeker) (node, hcl.Diagnostics) {
tok := p.Peek()
wrapInvalid := func(n node, diags hcl.Diagnostics) (node, hcl.Diagnostics) {
if n != nil {
return n, diags
}
return invalidVal{tok.Range}, diags
}
switch tok.Type {
case tokenBraceO:
return wrapInvalid(parseObject(p))
case tokenBrackO:
return wrapInvalid(parseArray(p))
case tokenNumber:
return wrapInvalid(parseNumber(p))
case tokenString:
return wrapInvalid(parseString(p))
case tokenKeyword:
return wrapInvalid(parseKeyword(p))
case tokenBraceC:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing attribute value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
case tokenBrackC:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing array element value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
case tokenEOF:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing value",
Detail: "The JSON data ends prematurely.",
Subject: &tok.Range,
},
})
default:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid start of value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
}
}
func tokenCanStartValue(tok token) bool {
switch tok.Type {
case tokenBraceO, tokenBrackO, tokenNumber, tokenString, tokenKeyword:
return true
default:
return false
}
}
func parseObject(p *peeker) (node, hcl.Diagnostics) {
var diags hcl.Diagnostics
open := p.Read()
attrs := map[string]*objectAttr{}
// recover is used to shift the peeker to what seems to be the end of
// our object, so that when we encounter an error we leave the peeker
// at a reasonable point in the token stream to continue parsing.
recover := func(tok token) {
open := 1
for {
switch tok.Type {
case tokenBraceO:
open++
case tokenBraceC:
open--
if open <= 1 {
return
}
case tokenEOF:
// Ran out of source before we were able to recover,
// so we'll bail here and let the caller deal with it.
return
}
tok = p.Read()
}
}
Token:
for {
if p.Peek().Type == tokenBraceC {
break Token
}
keyNode, keyDiags := parseValue(p)
diags = diags.Extend(keyDiags)
if keyNode == nil {
return nil, diags
}
keyStrNode, ok := keyNode.(*stringVal)
if !ok {
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid object attribute name",
Detail: "A JSON object attribute name must be a string",
Subject: keyNode.StartRange().Ptr(),
})
}
key := keyStrNode.Value
colon := p.Read()
if colon.Type != tokenColon {
recover(colon)
if colon.Type == tokenBraceC || colon.Type == tokenComma {
// Catch common mistake of using braces instead of brackets
// for an array.
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing object value",
Detail: "A JSON object attribute must have a value, introduced by a colon.",
Subject: &colon.Range,
})
}
if colon.Type == tokenEquals {
// Possible confusion with native zcl syntax.
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute value colon",
Detail: "JSON uses a colon as its name/value delimiter, not an equals sign.",
Subject: &colon.Range,
})
}
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute value colon",
Detail: "A colon must appear between an object attribute's name and its value.",
Subject: &colon.Range,
})
}
valNode, valDiags := parseValue(p)
diags = diags.Extend(valDiags)
if valNode == nil {
return nil, diags
}
if existing := attrs[key]; existing != nil {
// Generate a diagnostic for the duplicate key, but continue parsing
// anyway since this is a semantic error we can recover from.
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate JSON object property",
Detail: fmt.Sprintf(
"An property named %q was previously introduced at %s",
key, existing.NameRange.String(),
),
Subject: &keyStrNode.SrcRange,
})
}
attrs[key] = &objectAttr{
Name: key,
Value: valNode,
NameRange: keyStrNode.SrcRange,
}
switch p.Peek().Type {
case tokenComma:
comma := p.Read()
if p.Peek().Type == tokenBraceC {
// Special error message for this common mistake
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Trailing comma in object",
Detail: "JSON does not permit a trailing comma after the final attribute in an object.",
Subject: &comma.Range,
})
}
continue Token
case tokenEOF:
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed object",
Detail: "No closing brace was found for this JSON object.",
Subject: &open.Range,
})
case tokenBrackC:
// Consume the bracket anyway, so that we don't return with the peeker
// at a strange place.
p.Read()
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Mismatched braces",
Detail: "A JSON object must be closed with a brace, not a bracket.",
Subject: p.Peek().Range.Ptr(),
})
case tokenBraceC:
break Token
default:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute seperator comma",
Detail: "A comma must appear between each attribute declaration in an object.",
Subject: p.Peek().Range.Ptr(),
})
}
}
close := p.Read()
return &objectVal{
Attrs: attrs,
SrcRange: hcl.RangeBetween(open.Range, close.Range),
OpenRange: open.Range,
CloseRange: close.Range,
}, diags
}
func parseArray(p *peeker) (node, hcl.Diagnostics) {
var diags hcl.Diagnostics
open := p.Read()
vals := []node{}
// recover is used to shift the peeker to what seems to be the end of
// our array, so that when we encounter an error we leave the peeker
// at a reasonable point in the token stream to continue parsing.
recover := func(tok token) {
open := 1
for {
switch tok.Type {
case tokenBrackO:
open++
case tokenBrackC:
open--
if open <= 1 {
return
}
case tokenEOF:
// Ran out of source before we were able to recover,
// so we'll bail here and let the caller deal with it.
return
}
tok = p.Read()
}
}
Token:
for {
if p.Peek().Type == tokenBrackC {
break Token
}
valNode, valDiags := parseValue(p)
diags = diags.Extend(valDiags)
if valNode == nil {
return nil, diags
}
vals = append(vals, valNode)
switch p.Peek().Type {
case tokenComma:
comma := p.Read()
if p.Peek().Type == tokenBrackC {
// Special error message for this common mistake
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Trailing comma in array",
Detail: "JSON does not permit a trailing comma after the final attribute in an array.",
Subject: &comma.Range,
})
}
continue Token
case tokenColon:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid array value",
Detail: "A colon is not used to introduce values in a JSON array.",
Subject: p.Peek().Range.Ptr(),
})
case tokenEOF:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed object",
Detail: "No closing bracket was found for this JSON array.",
Subject: &open.Range,
})
case tokenBraceC:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Mismatched brackets",
Detail: "A JSON array must be closed with a bracket, not a brace.",
Subject: p.Peek().Range.Ptr(),
})
case tokenBrackC:
break Token
default:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute seperator comma",
Detail: "A comma must appear between each value in an array.",
Subject: p.Peek().Range.Ptr(),
})
}
}
close := p.Read()
return &arrayVal{
Values: vals,
SrcRange: hcl.RangeBetween(open.Range, close.Range),
OpenRange: open.Range,
}, diags
}
func parseNumber(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
// Use encoding/json to validate the number syntax.
// TODO: Do this more directly to produce better diagnostics.
var num json.Number
err := json.Unmarshal(tok.Bytes, &num)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON number",
Detail: fmt.Sprintf("There is a syntax error in the given JSON number."),
Subject: &tok.Range,
},
}
}
f, _, err := (&big.Float{}).Parse(string(num), 10)
if err != nil {
// Should never happen if above passed, since JSON numbers are a subset
// of what big.Float can parse...
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON number",
Detail: fmt.Sprintf("There is a syntax error in the given JSON number."),
Subject: &tok.Range,
},
}
}
return &numberVal{
Value: f,
SrcRange: tok.Range,
}, nil
}
func parseString(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
var str string
err := json.Unmarshal(tok.Bytes, &str)
if err != nil {
var errRange hcl.Range
if serr, ok := err.(*json.SyntaxError); ok {
errOfs := serr.Offset
errPos := tok.Range.Start
errPos.Byte += int(errOfs)
// TODO: Use the byte offset to properly count unicode
// characters for the column, and mark the whole of the
// character that was wrong as part of our range.
errPos.Column += int(errOfs)
errEndPos := errPos
errEndPos.Byte++
errEndPos.Column++
errRange = hcl.Range{
Filename: tok.Range.Filename,
Start: errPos,
End: errEndPos,
}
} else {
errRange = tok.Range
}
var contextRange *hcl.Range
if errRange != tok.Range {
contextRange = &tok.Range
}
// FIXME: Eventually we should parse strings directly here so
// we can produce a more useful error message in the face of things
// such as invalid escapes, etc.
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON string",
Detail: fmt.Sprintf("There is a syntax error in the given JSON string."),
Subject: &errRange,
Context: contextRange,
},
}
}
return &stringVal{
Value: str,
SrcRange: tok.Range,
}, nil
}
func parseKeyword(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
s := string(tok.Bytes)
switch s {
case "true":
return &booleanVal{
Value: true,
SrcRange: tok.Range,
}, nil
case "false":
return &booleanVal{
Value: false,
SrcRange: tok.Range,
}, nil
case "null":
return &nullVal{
SrcRange: tok.Range,
}, nil
case "undefined", "NaN", "Infinity":
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON keyword",
Detail: fmt.Sprintf("The JavaScript identifier %q cannot be used in JSON.", s),
Subject: &tok.Range,
},
}
default:
var dym string
if suggest := keywordSuggestion(s); suggest != "" {
dym = fmt.Sprintf(" Did you mean %q?", suggest)
}
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON keyword",
Detail: fmt.Sprintf("%q is not a valid JSON keyword.%s", s, dym),
Subject: &tok.Range,
},
}
}
}
25
vendor/github.com/hashicorp/hcl2/hcl/json/peeker.go generated vendored Normal file
@ -0,0 +1,25 @@
package json
type peeker struct {
tokens []token
pos int
}
func newPeeker(tokens []token) *peeker {
return &peeker{
tokens: tokens,
pos: 0,
}
}
func (p *peeker) Peek() token {
return p.tokens[p.pos]
}
func (p *peeker) Read() token {
ret := p.tokens[p.pos]
if ret.Type != tokenEOF {
p.pos++
}
return ret
}
115
vendor/github.com/hashicorp/hcl2/hcl/json/public.go generated vendored Normal file
@ -0,0 +1,115 @@
package json
import (
"fmt"
"io/ioutil"
"os"
"github.com/hashicorp/hcl2/hcl"
)
// Parse attempts to parse the given buffer as JSON and, if successful, returns
// a hcl.File for the zcl configuration represented by it.
//
// This is not a generic JSON parser. Instead, it deals only with the profile
// of JSON used to express zcl configuration.
//
// The returned file is valid only if the returned diagnostics returns false
// from its HasErrors method. If HasErrors returns true, the file represents
// the subset of data that was able to be parsed, which may be none.
func Parse(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
rootNode, diags := parseFileContent(src, filename)
if _, ok := rootNode.(*objectVal); !ok {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Root value must be object",
Detail: "The root value in a JSON-based configuration must be a JSON object.",
Subject: rootNode.StartRange().Ptr(),
})
// Put in a placeholder objectVal just so the caller always gets
// a valid file, even if it appears empty. This is useful for callers
// that are doing static analysis of possibly-erroneous source code,
// which will try to process the returned file even if we return
// diagnostics of severity error. This way, they'll get a file that
// has an empty body rather than a body that panics when probed.
fakePos := hcl.Pos{
Byte: 0,
Line: 1,
Column: 1,
}
fakeRange := hcl.Range{
Filename: filename,
Start: fakePos,
End: fakePos,
}
rootNode = &objectVal{
Attrs: map[string]*objectAttr{},
SrcRange: fakeRange,
OpenRange: fakeRange,
}
}
file := &hcl.File{
Body: &body{
obj: rootNode.(*objectVal),
},
Bytes: src,
Nav: navigation{rootNode.(*objectVal)},
}
return file, diags
}
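// Illustrative usage sketch (not part of the original source); the attribute
// name in the schema is invented.
//
//	f, diags := Parse([]byte(`{"name": "example"}`), "config.json")
//	if !diags.HasErrors() {
//		schema := &hcl.BodySchema{
//			Attributes: []hcl.AttributeSchema{{Name: "name", Required: true}},
//		}
//		content, moreDiags := f.Body.Content(schema)
//		_, _ = content, moreDiags
//	}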
// ParseFile is a convenience wrapper around Parse that first attempts to load
// data from the given filename, passing the result to Parse if successful.
//
// If the file cannot be read, an error diagnostic with nil context is returned.
func ParseFile(filename string) (*hcl.File, hcl.Diagnostics) {
f, err := os.Open(filename)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to open file",
Detail: fmt.Sprintf("The file %q could not be opened.", filename),
},
}
}
defer f.Close()
src, err := ioutil.ReadAll(f)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to read file",
Detail: fmt.Sprintf("The file %q was opened, but an error occured while reading it.", filename),
},
}
}
return Parse(src, filename)
}
// ParseWithHIL is like Parse except the returned file will use the HIL
// template syntax for expressions in strings, rather than the native zcl
// template syntax.
//
// This is intended for providing backward compatibility for applications that
// used to use HCL/HIL and thus had a JSON-based format with HIL
// interpolations.
func ParseWithHIL(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
file, diags := Parse(src, filename)
if file != nil && file.Body != nil {
file.Body.(*body).useHIL = true
}
return file, diags
}
// ParseFileWithHIL is like ParseWithHIL but it reads data from a file before
// parsing it.
func ParseFileWithHIL(filename string) (*hcl.File, hcl.Diagnostics) {
file, diags := ParseFile(filename)
if file != nil && file.Body != nil {
file.Body.(*body).useHIL = true
}
return file, diags
}
293
vendor/github.com/hashicorp/hcl2/hcl/json/scanner.go generated vendored Normal file
@ -0,0 +1,293 @@
package json
import (
"fmt"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/hashicorp/hcl2/hcl"
)
//go:generate stringer -type tokenType scanner.go
type tokenType rune
const (
tokenBraceO tokenType = '{'
tokenBraceC tokenType = '}'
tokenBrackO tokenType = '['
tokenBrackC tokenType = ']'
tokenComma tokenType = ','
tokenColon tokenType = ':'
tokenKeyword tokenType = 'K'
tokenString tokenType = 'S'
tokenNumber tokenType = 'N'
tokenEOF tokenType = '␄'
tokenInvalid tokenType = 0
tokenEquals tokenType = '=' // used only for reminding the user of JSON syntax
)
type token struct {
Type tokenType
Bytes []byte
Range hcl.Range
}
// scan returns the primary tokens for the given JSON buffer in sequence.
//
// The responsibility of this pass is to just mark the slices of the buffer
// as being of various types. It is lax in how it interprets the multi-byte
// token types keyword, string and number, preferring to capture erroneous
// extra bytes that we presume the user intended to be part of the token
// so that we can generate more helpful diagnostics in the parser.
func scan(buf []byte, start pos) []token {
var tokens []token
p := start
for {
if len(buf) == 0 {
tokens = append(tokens, token{
Type: tokenEOF,
Bytes: nil,
Range: posRange(p, p),
})
return tokens
}
buf, p = skipWhitespace(buf, p)
if len(buf) == 0 {
tokens = append(tokens, token{
Type: tokenEOF,
Bytes: nil,
Range: posRange(p, p),
})
return tokens
}
start = p
first := buf[0]
switch {
case first == '{' || first == '}' || first == '[' || first == ']' || first == ',' || first == ':' || first == '=':
p.Pos.Column++
p.Pos.Byte++
tokens = append(tokens, token{
Type: tokenType(first),
Bytes: buf[0:1],
Range: posRange(start, p),
})
buf = buf[1:]
case first == '"':
var tokBuf []byte
tokBuf, buf, p = scanString(buf, p)
tokens = append(tokens, token{
Type: tokenString,
Bytes: tokBuf,
Range: posRange(start, p),
})
case byteCanStartNumber(first):
var tokBuf []byte
tokBuf, buf, p = scanNumber(buf, p)
tokens = append(tokens, token{
Type: tokenNumber,
Bytes: tokBuf,
Range: posRange(start, p),
})
case byteCanStartKeyword(first):
var tokBuf []byte
tokBuf, buf, p = scanKeyword(buf, p)
tokens = append(tokens, token{
Type: tokenKeyword,
Bytes: tokBuf,
Range: posRange(start, p),
})
default:
tokens = append(tokens, token{
Type: tokenInvalid,
Bytes: buf[:1],
Range: start.Range(1, 1),
})
// If we've encountered an invalid then we might as well stop
// scanning since the parser won't proceed beyond this point.
return tokens
}
}
}
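// Illustrative sketch (not part of the original source): scanning the input
// {"a": 1} produces this token sequence:
//
//	tokenBraceO  "{"
//	tokenString  "\"a\""
//	tokenColon   ":"
//	tokenNumber  "1"
//	tokenBraceC  "}"
//	tokenEOF     ""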
func byteCanStartNumber(b byte) bool {
switch b {
// We are slightly more tolerant than JSON requires here since we
// expect the parser will make a stricter interpretation of the
// number bytes, but we specifically don't allow 'e' or 'E' here
// since we want the scanner to treat that as the start of an
// invalid keyword instead, to produce more intelligible error messages.
case '-', '+', '.', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
return true
default:
return false
}
}
func scanNumber(buf []byte, start pos) ([]byte, []byte, pos) {
// The scanner doesn't check that the sequence of digit-ish bytes is
// in a valid order. The parser must do this when decoding a number
// token.
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
switch buf[i] {
case '-', '+', '.', 'e', 'E', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
p.Pos.Byte++
p.Pos.Column++
default:
break Byte
}
}
return buf[:i], buf[i:], p
}
func byteCanStartKeyword(b byte) bool {
switch {
// We allow any sequence of alphabetical characters here, even though
// JSON is more constrained, so that we can collect what we presume
// the user intended to be a single keyword and then check its validity
// in the parser, where we can generate better diagnostics.
// So e.g. we want to be able to say:
// unrecognized keyword "True". Did you mean "true"?
case (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z'):
return true
default:
return false
}
}
func scanKeyword(buf []byte, start pos) ([]byte, []byte, pos) {
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
b := buf[i]
switch {
case (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') || b == '_':
p.Pos.Byte++
p.Pos.Column++
default:
break Byte
}
}
return buf[:i], buf[i:], p
}
func scanString(buf []byte, start pos) ([]byte, []byte, pos) {
// The scanner doesn't validate correct use of escapes, etc. It pays
// attention to escapes only for the purpose of identifying the closing
// quote character. It's the parser's responsibility to do proper
// validation.
//
// The scanner also doesn't specifically detect unterminated string
// literals, though they can be identified in the parser by checking if
// the final byte in a string token is the double-quote character.
// Skip the opening quote symbol
i := 1
p := start
p.Pos.Byte++
p.Pos.Column++
escaping := false
Byte:
for i < len(buf) {
b := buf[i]
switch {
case b == '\\':
escaping = !escaping
p.Pos.Byte++
p.Pos.Column++
i++
case b == '"':
p.Pos.Byte++
p.Pos.Column++
i++
if !escaping {
break Byte
}
escaping = false
case b < 32:
break Byte
default:
// Advance by one grapheme cluster, so that we consider each
// grapheme to be a "column".
// Ignoring error because this scanner cannot produce errors.
advance, _, _ := textseg.ScanGraphemeClusters(buf[i:], true)
p.Pos.Byte += advance
p.Pos.Column++
i += advance
escaping = false
}
}
return buf[:i], buf[i:], p
}
func skipWhitespace(buf []byte, start pos) ([]byte, pos) {
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
switch buf[i] {
case ' ':
p.Pos.Byte++
p.Pos.Column++
case '\n':
p.Pos.Byte++
p.Pos.Column = 1
p.Pos.Line++
case '\r':
// For the purpose of line/column counting we consider a
// carriage return to take up no space, assuming that it will
// be paired up with a newline (on Windows, for example) that
// will account for both of them.
p.Pos.Byte++
case '\t':
// We arbitrarily count a tab as if it were two spaces, because
// we need to choose _some_ number here. This means any system
// that renders code on-screen with markers must itself treat
// tabs as a pair of spaces for rendering purposes, or instead
// use the byte offset to derive its own column position.
p.Pos.Byte++
p.Pos.Column += 2
default:
break Byte
}
}
return buf[i:], p
}
type pos struct {
Filename string
Pos hcl.Pos
}
func (p *pos) Range(byteLen, charLen int) hcl.Range {
start := p.Pos
end := p.Pos
end.Byte += byteLen
end.Column += charLen
return hcl.Range{
Filename: p.Filename,
Start: start,
End: end,
}
}
func posRange(start, end pos) hcl.Range {
return hcl.Range{
Filename: start.Filename,
Start: start.Pos,
End: end.Pos,
}
}
func (t token) GoString() string {
return fmt.Sprintf("json.token{json.%s, []byte(%q), %#v}", t.Type, t.Bytes, t.Range)
}

265
vendor/github.com/hashicorp/hcl2/hcl/json/spec.md generated vendored Normal file
View File

@ -0,0 +1,265 @@
# HCL JSON Syntax Specification
This is the specification for the JSON serialization for hcl. HCL is a system
for defining configuration languages for applications. The HCL information
model is designed to support multiple concrete syntaxes for configuration,
and this JSON-based format complements [the native syntax](../zclsyntax/spec.md)
by being easy to machine-generate, whereas the native syntax is oriented
towards human authoring and maintenance.
This syntax is defined in terms of JSON as defined in
[RFC7159](https://tools.ietf.org/html/rfc7159). As such it inherits the JSON
grammar as-is, and merely defines a specific methodology for interpreting
JSON constructs into HCL structural elements and expressions.
This mapping is defined such that valid JSON-serialized HCL input can be
produced using standard JSON implementations in various programming languages.
_Parsing_ such JSON, however, imposes some additional constraints that
off-the-shelf JSON parsers do not normally enforce, though adaptations are
defined to allow processing with a standard JSON parser, with certain
caveats described in later sections.
## Structural Elements
The HCL language-agnostic information model defines a _body_ as an abstract
container for attribute definitions and child blocks. A body is represented
in JSON as a JSON _object_.
As defined in the language-agnostic model, body processing is done in terms
of a schema which provides context for interpreting the body's content. For
JSON bodies, the schema is crucial to allow differentiation of attribute
definitions and block definitions, both of which are represented via object
properties.
The special property name `"//"`, when used in an object representing a HCL
body, is parsed and ignored. A property with this name can be used to
include human-readable comments. (This special property name is _not_
processed in this way for any _other_ HCL constructs that are represented as
JSON objects.)
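For example, a body containing one comment alongside an attribute definition (the attribute name here is purely illustrative):
```json
{
  "//": "Managed by deployment tooling. Do not edit by hand.",
  "io_mode": "async"
}
```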
### Attributes
Where the given schema describes an attribute with a given name, the object
property with the matching name — if present — serves as the attribute's
definition.
When a body is being processed in the _dynamic attributes_ mode, each object
property serves as an attribute definition for the attribute whose name
matches the property name.
The value of an attribute definition property is interpreted as an _expression_,
as described in a later section.
Given a schema that calls for an attribute named "foo", a JSON object like
the following provides a definition for that attribute:
```json
{
"foo": "bar baz"
}
```
### Blocks
Where the given schema describes a block with a given type name, the object
property with the matching name — if present — serves as a definition of
zero or more blocks of that type.
Processing of child blocks is in terms of nested JSON objects and arrays.
If the schema defines one or more _labels_ for the block type, a nested
object is required for each labelling level, with the object keys serving as
the label values at that level.
After any labelling levels, the next nested value is either a JSON object
representing a single block body, or a JSON array of JSON objects that each
represent a single block body. Use of an array accommodates the definition
of multiple blocks that have identical type and labels.
Given a schema that calls for a block type named "foo" with no labels, the
following JSON objects are all valid definitions of zero or more blocks of this
type:
```json
{
"foo": {
"child_attr": "baz"
}
}
```
```json
{
"foo": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
```
```json
{
"foo": []
}
```
The first of these defines a single child block of type "foo". The second
defines _two_ such blocks. The final example shows a degenerate definition
of zero blocks, though generators should prefer to omit the property entirely
in this scenario.
Given a schema that calls for a block type named "foo" with _two_ labels, the
extra label levels must be represented as objects as in the following examples:
```json
{
"foo": {
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
},
"boz": {
"baz": {
"child_attr": "baz"
}
}
}
}
```
```json
{
"foo": {
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
},
"boz": {
"baz": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
}
}
```
Where multiple definitions are included for the same type and labels, the
JSON array is always the value of the property representing the final label,
and contains objects representing block bodies. It is not valid to use an array
at any other point in the block definition structure.
## Expressions
JSON lacks a native expression syntax, so the HCL JSON syntax instead defines
a mapping for each of the JSON value types, including a special mapping for
strings that allows optional use of arbitrary expressions.
### Objects
When interpreted as an expression, a JSON object represents a value of a HCL
object type.
Each property of the JSON object represents an attribute of the HCL object type.
The object type is constructed by enumerating the JSON object properties,
creating for each an attribute whose name exactly matches the property name,
and whose type is the result of recursively applying the expression mapping
rules.
An instance of the constructed object type is then created, whose values
are interpreted by again recursively applying the mapping rules defined in
this section.
It is an error to define the same property name multiple times within a single
JSON object interpreted as an expression.
### Arrays
When interpreted as an expression, a JSON array represents a value of a HCL
tuple type.
Each element of the JSON array represents an element of the HCL tuple type.
The tuple type is constructed by enumerating the JSON array elements, creating
for each an element whose type is the result of recursively applying the
expression mapping rules. Correspondence is preserved between the array element
indices and the tuple element indices.
An instance of the constructed tuple type is then created, whose values are
interpreted by again recursively applying the mapping rules defined in this
section.
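For example, with both property names invented for illustration, the value of `ports` below maps to a tuple type of two numbers, while the value of `limits` maps to an object type having a number attribute `cpu` and a string attribute `memory`:
```json
{
  "ports": [8080, 8081],
  "limits": {
    "cpu": 2,
    "memory": "1GB"
  }
}
```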
### Numbers
When interpreted as an expression, a JSON number represents a HCL number value.
HCL numbers are arbitrary-precision decimal values, so an ideal implementation
of this specification will translate exactly the value given to a number of
corresponding precision.
In practice, off-the-shelf JSON parsers often do not support customizing the
processing of numbers, and instead force processing as 32-bit or 64-bit
floating point values with a potential loss of precision. It is permissible
for a HCL JSON parser to pass on such limitations _if and only if_ the
available precision and other constraints are defined in its documentation.
Calling applications each have differing precision requirements, so calling
applications are free to select an implementation with more limited precision
capabilities should high precision not be required for that application.
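As a non-normative sketch of such a customization in Go (the parser vendored in this commit scans number tokens itself rather than using `encoding/json`), the standard library decoder can retain a number's literal digits and hand them to an arbitrary-precision type:
```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"scale": 0.1234567890123456789012345}`))
	// UseNumber keeps each number as a json.Number (its literal digits)
	// instead of eagerly converting it to a float64.
	dec.UseNumber()

	var doc map[string]interface{}
	if err := dec.Decode(&doc); err != nil {
		panic(err)
	}

	// Re-parse the digits with a 512-bit mantissa.
	num := doc["scale"].(json.Number)
	f, _, err := big.ParseFloat(num.String(), 10, 512, big.ToNearestEven)
	if err != nil {
		panic(err)
	}
	fmt.Println(f.Text('g', 25))
}
```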
### Boolean Values
The JSON boolean values `true` and `false`, when interpreted as expressions,
represent the corresponding HCL boolean values.
### The Null Value
The JSON value `null`, when interpreted as an expression, represents a
HCL null value of the dynamic pseudo-type.
### Strings
When interpreted as an expression, a JSON string may be interpreted in one of
two ways depending on the evaluation mode.
If evaluating in literal-only mode (as defined by the syntax-agnostic
information model) the literal string is interpreted directly as a HCL string
value, using the exact sequence of unicode characters represented.
Template interpolations and directives MUST NOT be processed in this mode,
allowing any characters that appear as introduction sequences to pass through
literally:
```json
"Hello world! Template sequences like ${ are not intepreted here."
```
When evaluating in full expression mode (again, as defined by the
syntax-agnostic information model) the literal string is instead interpreted as a
_standalone template_ in the HCL Native Syntax. The expression evaluation
result is then the direct result of evaluating that template with the current
variable scope and function table.
```json
"Hello, ${name}! Template sequences are interpreted in full expression mode."
```
In particular the _Template Interpolation Unwrapping_ requirement from the
HCL native syntax specification must be implemented, allowing the use of
single-interpolation templates to represent expressions that would not
otherwise be representable in JSON, such as the following example where
the result must be a number, rather than a string representation of a number:
```json
"${ a + b }"
```

355
vendor/github.com/hashicorp/hcl2/hcl/json/structure.go generated vendored Normal file
View File

@ -0,0 +1,355 @@
package json
import (
"fmt"
"github.com/hashicorp/hcl2/hcl/hclsyntax"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
// body is the implementation of "Body" used for files processed with the JSON
// parser.
type body struct {
obj *objectVal
// If non-nil, the keys of this map cause the corresponding attributes to
// be treated as non-existing. This is used when Body.PartialContent is
// called, to produce the "remaining content" Body.
hiddenAttrs map[string]struct{}
// If set, string values are turned into expressions using HIL's template
// language, rather than the native zcl language. This is intended to
// allow applications moving from HCL to zcl to continue to parse the
// JSON variant of their config that HCL handled previously.
useHIL bool
}
// expression is the implementation of "Expression" used for files processed
// with the JSON parser.
type expression struct {
src node
}
func (b *body) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
content, newBody, diags := b.PartialContent(schema)
hiddenAttrs := newBody.(*body).hiddenAttrs
var nameSuggestions []string
for _, attrS := range schema.Attributes {
if _, ok := hiddenAttrs[attrS.Name]; !ok {
// Only suggest an attribute name if we didn't use it already.
nameSuggestions = append(nameSuggestions, attrS.Name)
}
}
for _, blockS := range schema.Blocks {
// Blocks can appear multiple times, so we'll suggest their type
// names regardless of whether they've already been used.
nameSuggestions = append(nameSuggestions, blockS.Type)
}
for k, attr := range b.obj.Attrs {
if k == "//" {
// Ignore "//" keys in objects representing bodies, to allow
// their use as comments.
continue
}
if _, ok := hiddenAttrs[k]; !ok {
var fixItHint string
suggestion := nameSuggestion(k, nameSuggestions)
if suggestion != "" {
fixItHint = fmt.Sprintf(" Did you mean %q?", suggestion)
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous JSON object property",
Detail: fmt.Sprintf("No attribute or block type is named %q.%s", k, fixItHint),
Subject: &attr.NameRange,
Context: attr.Range().Ptr(),
})
}
}
return content, diags
}
func (b *body) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
obj := b.obj
jsonAttrs := obj.Attrs
usedNames := map[string]struct{}{}
if b.hiddenAttrs != nil {
for k := range b.hiddenAttrs {
usedNames[k] = struct{}{}
}
}
var diags hcl.Diagnostics
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: nil,
MissingItemRange: b.MissingItemRange(),
}
for _, attrS := range schema.Attributes {
jsonAttr, exists := jsonAttrs[attrS.Name]
_, used := usedNames[attrS.Name]
if used || !exists {
if attrS.Required {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required attribute",
Detail: fmt.Sprintf("The attribute %q is required, so a JSON object property must be present with this name.", attrS.Name),
Subject: &obj.OpenRange,
})
}
continue
}
content.Attributes[attrS.Name] = &hcl.Attribute{
Name: attrS.Name,
Expr: &expression{src: jsonAttr.Value},
Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()),
NameRange: jsonAttr.NameRange,
}
usedNames[attrS.Name] = struct{}{}
}
for _, blockS := range schema.Blocks {
jsonAttr, exists := jsonAttrs[blockS.Type]
_, used := usedNames[blockS.Type]
if used || !exists {
usedNames[blockS.Type] = struct{}{}
continue
}
v := jsonAttr.Value
diags = append(diags, b.unpackBlock(v, blockS.Type, &jsonAttr.NameRange, blockS.LabelNames, nil, nil, &content.Blocks)...)
usedNames[blockS.Type] = struct{}{}
}
unusedBody := &body{
obj: b.obj,
hiddenAttrs: usedNames,
useHIL: b.useHIL,
}
return content, unusedBody, diags
}
// JustAttributes for JSON bodies interprets all properties of the wrapped
// JSON object as attributes and returns them.
func (b *body) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs := make(map[string]*hcl.Attribute)
for name, jsonAttr := range b.obj.Attrs {
if name == "//" {
// Ignore "//" keys in objects representing bodies, to allow
// their use as comments.
continue
}
if _, hidden := b.hiddenAttrs[name]; hidden {
continue
}
attrs[name] = &hcl.Attribute{
Name: name,
Expr: &expression{src: jsonAttr.Value},
Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()),
NameRange: jsonAttr.NameRange,
}
}
// No diagnostics possible here, since the parser already took care of
// finding duplicates and every JSON value can be a valid attribute value.
return attrs, nil
}
func (b *body) MissingItemRange() hcl.Range {
return b.obj.CloseRange
}
func (b *body) unpackBlock(v node, typeName string, typeRange *hcl.Range, labelsLeft []string, labelsUsed []string, labelRanges []hcl.Range, blocks *hcl.Blocks) (diags hcl.Diagnostics) {
if len(labelsLeft) > 0 {
labelName := labelsLeft[0]
ov, ok := v.(*objectVal)
if !ok {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("A JSON object is required, whose keys represent the %s block's %s.", typeName, labelName),
Subject: v.StartRange().Ptr(),
})
return
}
if len(ov.Attrs) == 0 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing block label",
Detail: fmt.Sprintf("At least one object property is required, whose name represents the %s block's %s.", typeName, labelName),
Subject: v.StartRange().Ptr(),
})
return
}
labelsUsed := append(labelsUsed, "")
labelRanges := append(labelRanges, hcl.Range{})
for pk, p := range ov.Attrs {
labelsUsed[len(labelsUsed)-1] = pk
labelRanges[len(labelRanges)-1] = p.NameRange
diags = append(diags, b.unpackBlock(p.Value, typeName, typeRange, labelsLeft[1:], labelsUsed, labelRanges, blocks)...)
}
return
}
// By the time we get here, we've peeled off all the labels and we're ready
// to deal with the block's actual content.
// need to copy the label slices because their underlying arrays will
// continue to be mutated after we return.
labels := make([]string, len(labelsUsed))
copy(labels, labelsUsed)
labelR := make([]hcl.Range, len(labelRanges))
copy(labelR, labelRanges)
switch tv := v.(type) {
case *objectVal:
// Single instance of the block
*blocks = append(*blocks, &hcl.Block{
Type: typeName,
Labels: labels,
Body: &body{
obj: tv,
useHIL: b.useHIL,
},
DefRange: tv.OpenRange,
TypeRange: *typeRange,
LabelRanges: labelR,
})
case *arrayVal:
// Multiple instances of the block
for _, av := range tv.Values {
ov, ok := av.(*objectVal)
if !ok {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("A JSON object is required, representing the contents of a %q block.", typeName),
Subject: v.StartRange().Ptr(),
})
continue
}
*blocks = append(*blocks, &hcl.Block{
Type: typeName,
Labels: labels,
Body: &body{
obj: ov,
useHIL: b.useHIL,
},
DefRange: tv.OpenRange,
TypeRange: *typeRange,
LabelRanges: labelR,
})
}
default:
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("Either a JSON object or a JSON array is required, representing the contents of one or more %q blocks.", typeName),
Subject: v.StartRange().Ptr(),
})
}
return
}
func (e *expression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
switch v := e.src.(type) {
case *stringVal:
if ctx != nil {
// Parse string contents as a zcl native language expression.
// We only do this if we have a context, so passing a nil context
// is how the caller specifies that interpolations are not allowed
// and that the string should just be returned verbatim.
templateSrc := v.Value
expr, diags := hclsyntax.ParseTemplate(
[]byte(templateSrc),
v.SrcRange.Filename,
// This won't produce _exactly_ the right result, since
// the zclsyntax parser can't "see" any escapes we removed
// while parsing JSON, but it's better than nothing.
hcl.Pos{
Line: v.SrcRange.Start.Line,
// skip over the opening quote mark
Byte: v.SrcRange.Start.Byte + 1,
Column: v.SrcRange.Start.Column + 1,
},
)
if diags.HasErrors() {
return cty.DynamicVal, diags
}
val, evalDiags := expr.Value(ctx)
diags = append(diags, evalDiags...)
return val, diags
}
// Without an evaluation context, the template parsing above is
// skipped and the string is returned verbatim as a literal value.
return cty.StringVal(v.Value), nil
case *numberVal:
return cty.NumberVal(v.Value), nil
case *booleanVal:
return cty.BoolVal(v.Value), nil
case *arrayVal:
vals := []cty.Value{}
for _, jsonVal := range v.Values {
val, _ := (&expression{src: jsonVal}).Value(ctx)
vals = append(vals, val)
}
return cty.TupleVal(vals), nil
case *objectVal:
attrs := map[string]cty.Value{}
for name, jsonAttr := range v.Attrs {
val, _ := (&expression{src: jsonAttr.Value}).Value(ctx)
attrs[name] = val
}
return cty.ObjectVal(attrs), nil
default:
// Default to DynamicVal so that ASTs containing invalid nodes can
// still be partially-evaluated.
return cty.DynamicVal, nil
}
}
func (e *expression) Variables() []hcl.Traversal {
var vars []hcl.Traversal
switch v := e.src.(type) {
case *stringVal:
// FIXME: Parse this string as a zcl native syntax template and
// collect any variables referenced in there too.
case *arrayVal:
for _, jsonVal := range v.Values {
vars = append(vars, (&expression{src: jsonVal}).Variables()...)
}
case *objectVal:
for _, jsonAttr := range v.Attrs {
vars = append(vars, (&expression{src: jsonAttr.Value}).Variables()...)
}
}
return vars
}
func (e *expression) Range() hcl.Range {
return e.src.Range()
}
func (e *expression) StartRange() hcl.Range {
return e.src.StartRange()
}

29
vendor/github.com/hashicorp/hcl2/hcl/json/tokentype_string.go generated vendored Normal file
View File

@ -0,0 +1,29 @@
// Code generated by "stringer -type tokenType scanner.go"; DO NOT EDIT.
package json
import "fmt"
const _tokenType_name = "tokenInvalidtokenCommatokenColontokenEqualstokenKeywordtokenNumbertokenStringtokenBrackOtokenBrackCtokenBraceOtokenBraceCtokenEOF"
var _tokenType_map = map[tokenType]string{
0: _tokenType_name[0:12],
44: _tokenType_name[12:22],
58: _tokenType_name[22:32],
61: _tokenType_name[32:43],
75: _tokenType_name[43:55],
78: _tokenType_name[55:66],
83: _tokenType_name[66:77],
91: _tokenType_name[77:88],
93: _tokenType_name[88:99],
123: _tokenType_name[99:110],
125: _tokenType_name[110:121],
9220: _tokenType_name[121:129],
}
func (i tokenType) String() string {
if str, ok := _tokenType_map[i]; ok {
return str
}
return fmt.Sprintf("tokenType(%d)", i)
}

226
vendor/github.com/hashicorp/hcl2/hcl/merged.go generated vendored Normal file
View File

@ -0,0 +1,226 @@
package hcl
import (
"fmt"
)
// MergeFiles combines the given files to produce a single body that contains
// configuration from all of the given files.
//
// The ordering of the given files decides the order in which contained
// elements will be returned. If any top-level attributes are defined with
// the same name across multiple files, a diagnostic will be produced from
// the Content and PartialContent methods describing this error in a
// user-friendly way.
func MergeFiles(files []*File) Body {
var bodies []Body
for _, file := range files {
bodies = append(bodies, file.Body)
}
return MergeBodies(bodies)
}
// MergeBodies is like MergeFiles except it deals directly with bodies, rather
// than with entire files.
func MergeBodies(bodies []Body) Body {
if len(bodies) == 0 {
// Swap out for our singleton empty body, to reduce the number of
// empty slices we have hanging around.
return emptyBody
}
// If any of the given bodies are already merged bodies, we'll unpack
// to flatten to a single mergedBodies, since that's conceptually simpler.
// This also, as a side-effect, eliminates any empty bodies, since
// empties are merged bodies with no inner bodies.
var newLen int
var flatten bool
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
newLen += len(children)
flatten = true
} else {
newLen++
}
}
if !flatten { // not just newLen == len(bodies), because we might have mergedBodies with single bodies inside
return mergedBodies(bodies)
}
if newLen == 0 {
// Don't allocate a new empty when we already have one
return emptyBody
}
new := make([]Body, 0, newLen)
for _, body := range bodies {
if children, merged := body.(mergedBodies); merged {
new = append(new, children...)
} else {
new = append(new, body)
}
}
return mergedBodies(new)
}
var emptyBody = mergedBodies([]Body{})
// EmptyBody returns a body with no content. This body can be used as a
// placeholder when a body is required but no body content is available.
func EmptyBody() Body {
return emptyBody
}
type mergedBodies []Body
// Content returns the content produced by applying the given schema to all
// of the merged bodies and merging the result.
//
// Although required attributes _are_ supported, they should be used sparingly
// with merged bodies since in this case there is no contextual information
// with which to return good diagnostics. Applications working with merged
// bodies may wish to mark all attributes as optional and then check for
// required attributes afterwards, to produce better diagnostics.
func (mb mergedBodies) Content(schema *BodySchema) (*BodyContent, Diagnostics) {
// the returned body will always be empty in this case, because mergedContent
// will only ever call Content on the child bodies.
content, _, diags := mb.mergedContent(schema, false)
return content, diags
}
func (mb mergedBodies) PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics) {
return mb.mergedContent(schema, true)
}
func (mb mergedBodies) JustAttributes() (Attributes, Diagnostics) {
attrs := make(map[string]*Attribute)
var diags Diagnostics
for _, body := range mb {
thisAttrs, thisDiags := body.JustAttributes()
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisAttrs != nil {
for name, attr := range thisAttrs {
if existing := attrs[name]; existing != nil {
diags = diags.Append(&Diagnostic{
Severity: DiagError,
Summary: "Duplicate attribute",
Detail: fmt.Sprintf(
"Attribute %q was already assigned at %s",
name, existing.NameRange.String(),
),
Subject: &attr.NameRange,
})
continue
}
attrs[name] = attr
}
}
}
return attrs, diags
}
func (mb mergedBodies) MissingItemRange() Range {
if len(mb) == 0 {
// Nothing useful to return here, so we'll return some garbage.
return Range{
Filename: "<empty>",
}
}
// arbitrarily use the first body's missing item range
return mb[0].MissingItemRange()
}
func (mb mergedBodies) mergedContent(schema *BodySchema, partial bool) (*BodyContent, Body, Diagnostics) {
// We need to produce a new schema with none of the attributes marked as
// required, since _any one_ of our bodies can contribute an attribute value.
// We'll separately check that all required attributes are present at
// the end.
mergedSchema := &BodySchema{
Blocks: schema.Blocks,
}
for _, attrS := range schema.Attributes {
mergedAttrS := attrS
mergedAttrS.Required = false
mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
}
var mergedLeftovers []Body
content := &BodyContent{
Attributes: map[string]*Attribute{},
}
var diags Diagnostics
for _, body := range mb {
var thisContent *BodyContent
var thisLeftovers Body
var thisDiags Diagnostics
if partial {
thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
} else {
thisContent, thisDiags = body.Content(mergedSchema)
}
if thisLeftovers != nil {
mergedLeftovers = append(mergedLeftovers, thisLeftovers)
}
if len(thisDiags) != 0 {
diags = append(diags, thisDiags...)
}
if thisContent.Attributes != nil {
for name, attr := range thisContent.Attributes {
if existing := content.Attributes[name]; existing != nil {
diags = diags.Append(&Diagnostic{
Severity: DiagError,
Summary: "Duplicate attribute",
Detail: fmt.Sprintf(
"Attribute %q was already assigned at %s",
name, existing.NameRange.String(),
),
Subject: &attr.NameRange,
})
continue
}
content.Attributes[name] = attr
}
}
if len(thisContent.Blocks) != 0 {
content.Blocks = append(content.Blocks, thisContent.Blocks...)
}
}
// Finally, we check for required attributes.
for _, attrS := range schema.Attributes {
if !attrS.Required {
continue
}
if content.Attributes[attrS.Name] == nil {
// We don't have any context here to produce a good diagnostic,
// which is why we warn in the Content docstring to minimize the
// use of required attributes on merged bodies.
diags = diags.Append(&Diagnostic{
Severity: DiagError,
Summary: "Missing required attribute",
Detail: fmt.Sprintf(
"The attribute %q is required, but was not assigned.",
attrS.Name,
),
})
}
}
leftoverBody := MergeBodies(mergedLeftovers)
return content, leftoverBody, diags
}

147
vendor/github.com/hashicorp/hcl2/hcl/ops.go generated vendored Normal file
View File

@ -0,0 +1,147 @@
package hcl
import (
"fmt"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
// Index is a helper function that performs the same operation as the index
// operator in the zcl expression language. That is, the result is the
// same as it would be for collection[key] in a configuration expression.
//
// This is exported so that applications can perform indexing in a manner
// consistent with how the language does it, including handling of null and
// unknown values, etc.
//
// Diagnostics are produced if the given combination of values is not valid.
// Therefore a pointer to a source range must be provided to use in diagnostics,
// though nil can be provided if the calling application is going to
// ignore the subject of the returned diagnostics anyway.
func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics) {
if collection.IsNull() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Attempt to index null value",
Detail: "This value is null, so it does not have any indices.",
Subject: srcRange,
},
}
}
if key.IsNull() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: "Can't use a null value as an indexing key.",
Subject: srcRange,
},
}
}
ty := collection.Type()
kty := key.Type()
if kty == cty.DynamicPseudoType || ty == cty.DynamicPseudoType {
return cty.DynamicVal, nil
}
switch {
case ty.IsListType() || ty.IsTupleType() || ty.IsMapType():
var wantType cty.Type
switch {
case ty.IsListType() || ty.IsTupleType():
wantType = cty.Number
case ty.IsMapType():
wantType = cty.String
default:
// should never happen
panic("don't know what key type we want")
}
key, keyErr := convert.Convert(key, wantType)
if keyErr != nil {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: fmt.Sprintf(
"The given key does not identify an element in this collection value: %s.",
keyErr.Error(),
),
Subject: srcRange,
},
}
}
has := collection.HasIndex(key)
if !has.IsKnown() {
if ty.IsTupleType() {
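// A tuple can have a different type for each element, so with an
// unknown index we can't name a single element type and must
// return cty.DynamicVal instead.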
return cty.DynamicVal, nil
} else {
return cty.UnknownVal(ty.ElementType()), nil
}
}
if has.False() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: "The given key does not identify an element in this collection value.",
Subject: srcRange,
},
}
}
return collection.Index(key), nil
case ty.IsObjectType():
key, keyErr := convert.Convert(key, cty.String)
if keyErr != nil {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: fmt.Sprintf(
"The given key does not identify an element in this collection value: %s.",
keyErr.Error(),
),
Subject: srcRange,
},
}
}
if !collection.IsKnown() {
return cty.DynamicVal, nil
}
if !key.IsKnown() {
return cty.DynamicVal, nil
}
attrName := key.AsString()
if !ty.HasAttribute(attrName) {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: "The given key does not identify an element in this collection value.",
Subject: srcRange,
},
}
}
return collection.GetAttr(attrName), nil
default:
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: "This value does not have any indices.",
Subject: srcRange,
},
}
}
}

96
vendor/github.com/hashicorp/hcl2/hcl/pos.go generated vendored Normal file
View File

@ -0,0 +1,96 @@
package hcl
import "fmt"
// Pos represents a single position in a source file, by addressing the
// start byte of a unicode character encoded in UTF-8.
//
// Pos is generally used only in the context of a Range, which then defines
// which source file the position is within.
type Pos struct {
// Line is the source code line where this position points. Lines are
// counted starting at 1 and incremented for each newline character
// encountered.
Line int
// Column is the source code column where this position points, in
// unicode characters, with counting starting at 1.
//
// Column counts characters as they appear visually, so for example a
// latin letter with a combining diacritic mark counts as one character.
// This is intended for rendering visual markers against source code in
// contexts where these diacritics would be rendered in a single character
// cell. Technically speaking, Column is counting grapheme clusters as
// used in unicode normalization.
Column int
// Byte is the byte offset into the file where the indicated character
// begins. This is a zero-based offset to the first byte of the first
// UTF-8 codepoint sequence in the character, and thus gives a position
// that can be resolved _without_ awareness of Unicode characters.
Byte int
}
// Range represents a span of characters between two positions in a source
// file.
//
// This struct is usually used by value in types that represent AST nodes,
// but by pointer in types that refer to the positions of other objects,
// such as in diagnostics.
type Range struct {
// Filename is the name of the file into which this range's positions
// point.
Filename string
// Start and End represent the bounds of this range. Start is inclusive
// and End is exclusive.
Start, End Pos
}
// RangeBetween returns a new range that spans from the beginning of the
// start range to the end of the end range.
//
// The result is meaningless if the two ranges do not belong to the same
// source file or if the end range appears before the start range.
func RangeBetween(start, end Range) Range {
return Range{
Filename: start.Filename,
Start: start.Start,
End: end.End,
}
}
// ContainsOffset returns true if and only if the given byte offset is within
// the receiving Range.
func (r Range) ContainsOffset(offset int) bool {
return offset >= r.Start.Byte && offset < r.End.Byte
}
// Ptr returns a pointer to a copy of the receiver. This is a convenience
// when using ranges in places where pointers are required, such as in
// Diagnostic, but the range in question is returned from a method. Go would
// otherwise not allow one to take the address of a function call.
func (r Range) Ptr() *Range {
return &r
}
// String returns a compact string representation of the receiver.
// Callers should generally prefer to present a range more visually,
// e.g. via markers directly on the relevant portion of source code.
func (r Range) String() string {
if r.Start.Line == r.End.Line {
return fmt.Sprintf(
"%s:%d,%d-%d",
r.Filename,
r.Start.Line, r.Start.Column,
r.End.Column,
)
} else {
return fmt.Sprintf(
"%s:%d,%d-%d,%d",
r.Filename,
r.Start.Line, r.Start.Column,
r.End.Line, r.End.Column,
)
}
}

21
vendor/github.com/hashicorp/hcl2/hcl/schema.go generated vendored Normal file
View File

@ -0,0 +1,21 @@
package hcl
// BlockHeaderSchema represents the shape of a block header, and is
// used for matching blocks within bodies.
type BlockHeaderSchema struct {
Type string
LabelNames []string
}
// AttributeSchema represents the requirements for an attribute, and is used
// for matching attributes within bodies.
type AttributeSchema struct {
Name string
Required bool
}
// BodySchema represents the desired shallow structure of a body.
type BodySchema struct {
Attributes []AttributeSchema
Blocks []BlockHeaderSchema
}

646
vendor/github.com/hashicorp/hcl2/hcl/spec.md generated vendored Normal file
View File

@ -0,0 +1,646 @@
# HCL Syntax-Agnostic Information Model
This is the specification for the general information model (abstract types and
semantics) for hcl. HCL is a system for defining configuration languages for
applications. The HCL information model is designed to support multiple
concrete syntaxes for configuration, each with a mapping to the model defined
in this specification.
The two primary syntaxes intended for use in conjunction with this model are
[the HCL native syntax](./zclsyntax/spec.md) and [the JSON syntax](./json/spec.md).
In principle other syntaxes are possible as long as either their language model
is sufficiently rich to express the concepts described in this specification
or the language targets a well-defined subset of the specification.
## Structural Elements
The primary structural element is the _body_, which is a container representing
a set of zero or more _attributes_ and a set of zero or more _blocks_.
A _configuration file_ is the top-level object, and will usually be produced
by reading a file from disk and parsing it as a particular syntax. A
configuration file has its own _body_, representing the top-level attributes
and blocks.
An _attribute_ is a name and value pair associated with a body. Attribute names
are unique within a given body. Attribute values are provided as _expressions_,
which are discussed in detail in a later section.
A _block_ is a nested structure that has a _type name_, zero or more string
_labels_ (e.g. identifiers), and a nested body.
Together the structural elements create a hierarchical data structure, with
attributes intended to represent the direct properties of a particular object
in the calling application, and blocks intended to represent child objects
of a particular object.
## Body Content
To support the expression of the HCL concepts in languages whose information
model is a subset of HCL's, such as JSON, a _body_ is an opaque container
whose content can only be accessed by providing information on the expected
structure of the content.
The specification for each syntax must describe how its physical constructs
are mapped on to body content given a schema. For syntaxes that have
first-class syntax distinguishing attributes and bodies this can be relatively
straightforward, while more detailed mapping rules may be required in syntaxes
where the representation of attributes vs. blocks is ambiguous.
### Schema-driven Processing
Schema-driven processing is the primary way to access body content.
A _body schema_ is a description of what is expected within a particular body,
which can then be used to extract the _body content_, which then provides
access to the specific attributes and blocks requested.
A _body schema_ consists of a list of _attribute schemata_ and
_block header schemata_:
* An _attribute schema_ provides the name of an attribute and whether its
presence is required.
* A _block header schema_ provides a block type name and the semantic names
assigned to each of the labels of that block type, if any.
Within a schema, it is an error to request the same attribute name twice or
to request a block type whose name is also an attribute name. While this can
in principle be supported in some syntaxes, in other syntaxes the attribute
and block namespaces are combined and so an attribute cannot coexist with
a block whose type name is identical to the attribute name.
The result of applying a body schema to a body is _body content_, which
consists of an _attribute map_ and a _block sequence_:
* The _attribute map_ is a map data structure whose keys are attribute names
and whose values are _expressions_ that represent the corresponding attribute
values.
* The _block sequence_ is an ordered sequence of blocks, with each specifying
a block _type name_, the sequence of _labels_ specified for the block,
and the body object (not body _content_) representing the block's own body.
After obtaining _body content_, the calling application may continue processing
by evaluating attribute expressions and/or recursively applying further
schema-driven processing to the child block bodies.
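As a non-normative illustration, the Go implementation vendored in this commit expresses this model with `hcl.BodySchema` and `Body.Content` (see `schema.go` and `merged.go` elsewhere in this change). A minimal sketch, with the attribute and block names invented for the example and `body` assumed to come from one of the syntax parsers:
```go
package example

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
)

func decodeTopLevel(body hcl.Body) {
	schema := &hcl.BodySchema{
		Attributes: []hcl.AttributeSchema{
			{Name: "io_mode", Required: false},
		},
		Blocks: []hcl.BlockHeaderSchema{
			{Type: "service", LabelNames: []string{"type", "name"}},
		},
	}

	// Content applies the schema exhaustively, so any attribute or
	// block it does not describe produces an error diagnostic.
	content, diags := body.Content(schema)
	for _, diag := range diags {
		fmt.Println(diag.Summary)
	}

	if attr, defined := content.Attributes["io_mode"]; defined {
		fmt.Println("io_mode defined at", attr.NameRange)
	}
	for _, block := range content.Blocks {
		fmt.Printf("service block with labels %q\n", block.Labels)
	}
}
```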
**Note:** The _body schema_ is intentionally minimal, to reduce the set of
mapping rules that must be defined for each syntax. Higher-level utility
libraries may be provided to assist in the construction of a schema and
perform additional processing, such as automatically evaluating attribute
expressions and assigning their result values into a data structure, or
recursively applying a schema to child blocks. Such utilities are not part of
this core specification and will vary depending on the capabilities and idiom
of the implementation language.
### _Dynamic Attributes_ Processing
The _schema-driven_ processing model is useful when the expected structure
of a body is known a priori by the calling application. Some blocks are
instead more free-form, such as a user-provided set of arbitrary key/value
pairs.
The alternative _dynamic attributes_ processing mode allows for this more
ad-hoc approach. Processing in this mode behaves as if a schema had been
constructed without any _block header schemata_ and with an attribute
schema for each distinct key provided within the physical representation
of the body.
The means by which _distinct keys_ are identified is dependent on the
physical syntax; this processing mode assumes that the syntax has a way
to enumerate keys provided by the author and identify expressions that
correspond with those keys, but does not define the means by which this is
done.
The result of _dynamic attributes_ processing is an _attribute map_ as
defined in the previous section. No _block sequence_ is produced in this
processing mode.
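In the vendored Go API this mode corresponds to `Body.JustAttributes`, as implemented in `structure.go` and `merged.go` elsewhere in this change. A minimal sketch:
```go
package example

import (
	"fmt"

	"github.com/hashicorp/hcl2/hcl"
)

// dumpAttrs treats every key in the body as an attribute definition,
// with no schema required.
func dumpAttrs(body hcl.Body) {
	attrs, diags := body.JustAttributes()
	for _, diag := range diags {
		fmt.Println(diag.Summary)
	}
	for name, attr := range attrs {
		fmt.Printf("%s defined at %s\n", name, attr.NameRange)
	}
}
```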
### Partial Processing of Body Content
Under _schema-driven processing_, by default the given schema is assumed
to be exhaustive, such that any attribute or block not matched by schema
elements is considered an error. This allows feedback about unsupported
attributes and blocks (such as typos) to be provided.
An alternative is _partial processing_, where any additional elements within
the body are not considered an error.
Under partial processing, the result is both body content as described
above _and_ a new body that represents any body elements that remain after
the schema has been processed.
Specifically:
* Any attribute whose name is specified in the schema is returned in body
content and elided from the new body.
* Any block whose type is specified in the schema is returned in body content
and elided from the new body.
* Any attribute or block _not_ meeting the above conditions is placed into
the new body, unmodified.
The new body can then be recursively processed using any of the body
processing models. This facility allows different subsets of body content
to be processed by different parts of the calling application.
Processing a body in two steps — first partial processing of a source body,
then exhaustive processing of the returned body — is equivalent to single-step
processing with a schema that is the union of the schemata used
across the two steps.
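A sketch of the two-step pattern using the vendored Go API, with both schemata assumed to be supplied by different parts of the calling application:
```go
package example

import "github.com/hashicorp/hcl2/hcl"

func decodeInTwoSteps(body hcl.Body, first, second *hcl.BodySchema) hcl.Diagnostics {
	// Step 1: partial processing takes only what the first schema
	// describes; everything else comes back in the "remain" body.
	content1, remain, diags := body.PartialContent(first)

	// Step 2: exhaustive processing of the remainder, so anything
	// matched by neither schema is reported as an error.
	content2, moreDiags := remain.Content(second)
	diags = append(diags, moreDiags...)

	// Each part of the application would use its own content here.
	_, _ = content1, content2
	return diags
}
```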
## Expressions
Attribute values are represented by _expressions_. Depending on the concrete
syntax in use, an expression may just be a literal value or it may describe
a computation in terms of literal values, variables, and functions.
Each syntax defines its own representation of expressions. For syntaxes based
in languages that do not have any non-literal expression syntax, it is
recommended to embed the template language from
[the native syntax](./zclsyntax/spec.md) e.g. as a post-processing step on
string literals.
### Expression Evaluation
In order to obtain a concrete value, each expression must be _evaluated_.
Evaluation is performed in terms of an evaluation context, which
consists of the following:
* An _evaluation mode_, which is defined below.
* A _variable scope_, which provides a set of named variables for use in
expressions.
* A _function table_, which provides a set of named functions for use in
expressions.
The _evaluation mode_ allows for two different interpretations of an
expression:
* In _literal-only mode_, variables and functions are not available and it
is assumed that the calling application's intent is to treat the attribute
value as a literal.
* In _full expression mode_, variables and functions are defined and it is
assumed that the calling application wishes to provide a full expression
language for definition of the attribute value.
The actual behavior of these two modes depends on the syntax in use. For
languages with first-class expression syntax, these two modes may be considered
equivalent, with _literal-only mode_ simply not defining any variables or
functions. For languages that embed arbitrary expressions via string templates,
_literal-only mode_ may disable such processing, allowing literal strings to
pass through without interpretation as templates.
Since literal-only mode does not support variables and functions, it is an
error for the calling application to enable this mode and yet provide a
variable scope and/or function table.
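In the vendored Go implementation these modes surface as the `*hcl.EvalContext` argument to `Expression.Value`: a nil context requests literal-only interpretation (as seen in the JSON `structure.go` above), while a non-nil context supplies the variable scope and function table. A sketch, with the variable name invented and the `EvalContext` field layout assumed from the hcl2 API:
```go
package example

import (
	"github.com/hashicorp/hcl2/hcl"
	"github.com/zclconf/go-cty/cty"
)

func evalBothModes(expr hcl.Expression) {
	// Literal-only mode: no variables or functions are available, and
	// template sequences pass through without interpretation.
	litVal, _ := expr.Value(nil)

	// Full expression mode: the context provides a variable scope.
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"name": cty.StringVal("world"),
		},
	}
	fullVal, _ := expr.Value(ctx)

	_, _ = litVal, fullVal
}
```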
## Values and Value Types
The result of expression evaluation is a _value_. Each value has a _type_,
which is dynamically determined during evaluation. The _variable scope_ in
the evaluation context is a map from variable name to value, using the same
definition of value.
The type system for HCL values is intended to be at a level of abstraction
suitable for configuration of various applications. A well-defined,
implementation-language-agnostic type system is defined to allow for
consistent processing of configuration across many implementation languages.
Concrete implementations may provide additional functionality to lower
HCL values and types to corresponding native language types, which may then
impose additional constraints on the values outside of the scope of this
specification.
Two values are _equal_ if and only if they have identical types and their
values are equal according to the rules of their shared type.
### Primitive Types
The primitive types are _string_, _bool_, and _number_.
A _string_ is a sequence of unicode characters. Two strings are equal if
NFC normalization ([UAX#15](http://unicode.org/reports/tr15/))
of each string produces two identical sequences of characters.
NFC normalization ensures that, for example, a precomposed combination of a
latin letter and a diacritic compares equal with the letter followed by
a combining diacritic.
The _bool_ type has only two non-null values: _true_ and _false_. Two bool
values are equal if and only if they are either both true or both false.
A _number_ is an arbitrary-precision floating point value. An implementation
_must_ make the full-precision values available to the calling application
for interpretation into any suitable number representation. An implementation
may in practice implement numbers with limited precision so long as the
following constraints are met:
* Integers are represented with at least 256 bits.
* Non-integer numbers are represented as floating point values with a
mantissa of at least 256 bits and a signed binary exponent of at least
16 bits.
* An error is produced if an integer value given in source cannot be
represented precisely.
* An error is produced if a non-integer value cannot be represented due to
overflow.
* A non-integer number is rounded to the nearest possible value when a
value is of too high a precision to be represented.
The _number_ type also requires representation of both positive and negative
infinity. A "not a number" (NaN) value is _not_ provided nor used.
Two number values are equal if they are numerically equal to the precision
associated with the number. Positive infinity and negative infinity are
equal to themselves but not to each other. Positive infinity is greater than
any other number value, and negative infinity is less than any other number
value.
Some syntaxes may be unable to represent numeric literals of arbitrary
precision. This must be defined in the syntax specification as part of its
description of mapping numeric literals to HCL values.
### Structural Types
_Structural types_ are types that are constructed by combining other types.
Each distinct combination of other types is itself a distinct type. There
are two structural type _kinds_:
* _Object types_ are constructed of a set of named attributes, each of which
has a type. Attribute names are always strings. (_Object_ attributes are a
distinct idea from _body_ attributes, though calling applications
may choose to blur the distinction by use of common naming schemes.)
* _Tuple types_ are constructed of a sequence of elements, each of which
has a type.
Values of structural types are compared for equality in terms of their
attributes or elements. A structural type value is equal to another if and
only if all of the corresponding attributes or elements are equal.
Two structural types are identical if they are of the same kind and
have attributes or elements with identical types.
### Collection Types
_Collection types_ are types that combine together an arbitrary number of
values of some other single type. There are three collection type _kinds_:
* _List types_ represent ordered sequences of values of their element type.
* _Map types_ represent values of their element type accessed via string keys.
* _Set types_ represent unordered sets of distinct values of their element type.
For each of these kinds and each distinct element type there is a distinct
collection type. For example, "list of string" is a distinct type from
"set of string", and "list of number" is a distinct type from "list of string".
Values of collection types are compared for equality in terms of their
elements. A collection type value is equal to another if and only if both
have the same number of elements and their corresponding elements are equal.
Two collection types are identical if they are of the same kind and have
the same element type.
### Null values
Each type has a null value. The null value of a type represents the absence
of a value, but with type information retained to allow for type checking.
Null values are used primarily to represent the conditional absence of a
body attribute. In a syntax with a conditional operator, one of the result
values of that conditional may be null to indicate that the attribute should be
considered not present in that case.
Calling applications _should_ consider an attribute with a null value as
equivalent to the value not being present at all.
A null value of a particular type is equal to itself.
### Unknown Values and the Dynamic Pseudo-type
An _unknown value_ is a placeholder for a value that is not yet known.
Operations on unknown values themselves return unknown values that have a
type appropriate to the operation. For example, adding together two unknown
numbers yields an unknown number, while comparing two unknown values of any
type for equality yields an unknown bool.
Each type has a distinct unknown value. For example, an unknown _number_ is
a distinct value from an unknown _string_.
_The dynamic pseudo-type_ is a placeholder for a type that is not yet known.
The only values of this type are its null value and its unknown value. It is
referred to as a _pseudo-type_ because it should not be considered a type in
its own right, but rather as a placeholder for a type yet to be established.
The unknown value of the dynamic pseudo-type is referred to as _the dynamic
value_.
Operations on values of the dynamic pseudo-type behave as if it is a value
of the expected type, optimistically assuming that once the value and type
are known they will be valid for the operation. For example, adding together
a number and the dynamic value produces an unknown number.
Unknown values and the dynamic pseudo-type can be used as a mechanism for
partial type checking and semantic checking: by evaluating an expression with
all variables set to an unknown value, the expression can be evaluated to
produce an unknown value of a given type, or produce an error if any operation
is provably invalid with only type information.
Unknown values and the dynamic pseudo-type must never be returned from
operations unless at least one operand is unknown or dynamic. Calling
applications are guaranteed that unless the global scope includes unknown
values, or the function table includes functions that return unknown values,
no expression will evaluate to an unknown value. The calling application is
thus in total control over the use and meaning of unknown values.
The dynamic pseudo-type is identical only to itself.
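A small demonstration of this propagation using the go-cty library vendored alongside HCL in this commit:
```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	a := cty.UnknownVal(cty.Number) // a number not yet known
	b := cty.NumberIntVal(2)

	sum := a.Add(b)   // an unknown number
	eq := a.Equals(b) // an unknown bool

	fmt.Println(sum.IsKnown(), eq.IsKnown()) // false false
}
```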
### Capsule Types
A _capsule type_ is a custom type defined by the calling application. A value
of a capsule type is considered opaque to HCL, but may be accepted
by functions provided by the calling application.
A particular capsule type is identical only to itself. The equality of two
values of the same capsule type is defined by the calling application. No
other operations are supported for values of capsule types.
Support for capsule types in a HCL implementation is optional. Capsule types
are intended to allow calling applications to pass through values that are
not part of the standard type system. For example, an application that
deals with raw binary data may define a capsule type representing a byte
array, and provide functions that produce or operate on byte arrays.
### Type Specifications
In certain situations it is necessary to define expectations about the expected
type of a value. Whereas two _types_ have a commutative _identity_ relationship,
a type has a non-commutative _matches_ relationship with a _type specification_.
A type specification is, in practice, just a different interpretation of a
type such that:
* Any type _matches_ any type that it is identical to.
* Any type _matches_ the dynamic pseudo-type.
For example, given a type specification "list of dynamic pseudo-type", the
concrete types "list of string" and "list of map" match, but the
type "set of string" does not.
## Functions and Function Calls
The evaluation context used to evaluate an expression includes a function
table, which represents an application-defined set of named functions
available for use in expressions.
Each syntax defines whether function calls are supported and how they are
physically represented in source code, but the semantics of function calls are
defined here to ensure consistent results across syntaxes and to allow
applications to provide functions that are interoperable with all syntaxes.
A _function_ is defined from the following elements:
* Zero or more _positional parameters_, each with a name used for documentation,
a type specification for expected argument values, and a flag for whether
each of null values, unknown values, and values of the dynamic pseudo-type
are accepted.
* Zero or one _variadic parameter_, with the same structure as the _positional_
parameters, which if present collects any additional arguments provided at
the function call site.
* A _result type definition_, which specifies the value type returned for each
valid sequence of argument values.
* A _result value definition_, which specifies the value returned for each
valid sequence of argument values.
A _function call_, regardless of source syntax, consists of a sequence of
argument values. The argument values are each mapped to a corresponding
parameter as follows:
* For each of the function's positional parameters in sequence, take the next
argument. If there are no more arguments, the call is erroneous.
* If the function has a variadic parameter, take all remaining arguments that
were not yet assigned to a positional parameter and collect them into
a sequence of variadic arguments that each correspond to the variadic
parameter.
* If the function has _no_ variadic parameter, it is an error if any arguments
remain after taking one argument for each positional parameter.
After mapping each argument to a parameter, semantic checking proceeds
for each argument:
* If the argument value corresponding to a parameter does not match the
parameter's type specification, the call is erroneous.
* If the argument value corresponding to a parameter is null and the parameter
is not specified as accepting nulls, the call is erroneous.
* If the argument value corresponding to a parameter is the dynamic value
and the parameter is not specified as accepting values of the dynamic
pseudo-type, the call is valid but its _result type_ is forced to be the
dynamic pseudo type.
* If neither of the above conditions holds for any argument, the call is
valid and the function's value type definition is used to determine the
call's _result type_. A function _may_ vary its result type depending on
the argument _values_ as well as the argument _types_; for example, a
function that decodes a JSON value will return a different result type
depending on the data structure described by the given JSON source code.
If semantic checking succeeds without error, the call is _executed_:
* For each argument, if its value is unknown and its corresponding parameter
is not specified as accepting unknowns, the _result value_ is forced to be an
unknown value of the result type.
* If the previous condition does not apply, the function's result value
definition is used to determine the call's _result value_.
The result of a function call expression is either an error, if one of the
erroneous conditions above applies, or the _result value_.
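The go-cty library vendored in this commit offers one concrete realization of this model in its `function` package. The sketch below (function name and behavior invented for illustration) declares one positional parameter, a variadic parameter, and a static result type; argument mapping and unknown-value handling then follow the rules above automatically:
```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// joinFunc concatenates its variadic string arguments with a separator.
var joinFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "sep", Type: cty.String},
	},
	VarParam: &function.Parameter{
		Name: "parts",
		Type: cty.String,
	},
	Type: function.StaticReturnType(cty.String),
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		sep := args[0].AsString()
		result := ""
		for i, part := range args[1:] {
			if i > 0 {
				result += sep
			}
			result += part.AsString()
		}
		return cty.StringVal(result), nil
	},
})

func main() {
	v, err := joinFunc.Call([]cty.Value{
		cty.StringVal(", "),
		cty.StringVal("a"),
		cty.StringVal("b"),
	})
	fmt.Println(v.AsString(), err) // "a, b" <nil>
}
```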
## Type Conversions and Unification
Values given in configuration may not always match the expectations of the
operations applied to them or to the calling application. In such situations,
automatic type conversion is attempted as a convenience to the user.
Along with conversions to a _specified_ type, it is sometimes necessary to
ensure that a selection of values are all of the _same_ type, without any
constraint on which type that is. This is the process of _type unification_,
which attempts to find the most general type that all of the given types can
be converted to.
Both type conversions and unification are defined in the syntax-agnostic
model to ensure consistency of behavior between syntaxes.
Type conversions are broadly characterized into two categories: _safe_ and
_unsafe_. A conversion is "safe" if any distinct value of the source type
has a corresponding distinct value in the target type. A conversion is
"unsafe" if either the target type values are _not_ distinct (information
may be lost in conversion) or if some values of the source type do not have
any corresponding value in the target type. An unsafe conversion may result
in an error.
A given type can always be converted to itself, which is a no-op.
### Conversion of Null Values
All null values are safely convertible to a null value of any other type,
regardless of other type-specific rules specified in the sections below.
### Conversion to and from the Dynamic Pseudo-type
Conversion _from_ the dynamic pseudo-type _to_ any other type always succeeds,
producing an unknown value of the target type.
Conversion of any value _to_ the dynamic pseudo-type is a no-op. The result
is the input value, verbatim. This is the only situation where the conversion
result value is not of the given target type.
### Primitive Type Conversions
Bidirectional conversions are available between the string and number types,
and between the string and boolean types.
The bool value true corresponds to the string containing the characters "true",
while the bool value false corresponds to the string containing the characters
"false". Conversion from bool to string is safe, while the converse is
unsafe. The strings "1" and "0" are alternative string representations
of true and false respectively. It is an error to convert a string other than
the four in this paragraph to type bool.
A number value is converted to string by translating its integer portion
into a sequence of decimal digits (`0` through `9`), and then if it has a
non-zero fractional part, a period `.` followed by a sequence of decimal
digits representing its fractional part. No exponent portion is included.
The number is converted at its full precision. Conversion from number to
string is safe.
A string is converted to a number value by reversing the above mapping.
No exponent portion is allowed. Conversion from string to number is unsafe.
It is an error to convert a string that does not comply with the expected
syntax to type number.
No direct conversion is available between the bool and number types.
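As a non-normative illustration, the go-cty `convert` package (used elsewhere
in this codebase) implements these primitive rules:

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func primitiveConversions() {
	// Safe: number to string always succeeds.
	s, _ := convert.Convert(cty.NumberIntVal(5), cty.String)
	fmt.Println(s.AsString()) // "5"

	// Unsafe: string to number fails for non-numeric strings.
	if _, err := convert.Convert(cty.StringVal("not a number"), cty.Number); err != nil {
		fmt.Println("conversion error:", err)
	}

	// Only the four strings named above may convert to bool.
	b, _ := convert.Convert(cty.StringVal("true"), cty.Bool)
	fmt.Println(b.True()) // true
}
```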
### Collection and Structural Type Conversions
Conversion from set types to list types is _safe_, as long as their
element types are safely convertible. If the element types are _unsafely_
convertible, then the collection conversion is also unsafe. Each set element
becomes a corresponding list element, in an undefined order. Although no
particular ordering is required, implementations _should_ produce list
elements in a consistent order for a given input set, as a convenience
to calling applications.
Conversion from list types to set types is permitted as long as their element
types are convertible, but it is _unsafe_: each distinct list item becomes a
distinct set item, and if two list items are equal one of the two is lost in
the conversion.
Conversion from tuple types to list types is permitted if all of the
tuple element types are convertible to the target list element type.
The safety of the conversion depends on the safety of each of the element
conversions. Each element in turn is converted to the list element type,
producing a list of identical length.
Conversion from tuple types to set types is permitted, behaving as if the
tuple type was first converted to a list of the same element type and then
that list converted to the target set type.
Conversion from object types to map types is permitted if all of the object
attribute types are convertible to the target map element type. The safety
of the conversion depends on the safety of each of the attribute conversions.
Each attribute in turn is converted to the map element type, and map element
keys are set to the name of each corresponding object attribute.
Conversion from list and set types to tuple types is permitted, following
the opposite steps of the converse conversions. Such conversions are _unsafe_.
It is an error to convert a list or set to a tuple type whose number of
elements does not match the list or set length.
Conversion from map types to object types is permitted if each map key
corresponds to an attribute in the target object type. It is an error to
convert from a map value whose set of keys does not exactly match the target
type's attributes. The conversion takes the opposite steps of the converse
conversion.
Conversion from one object type to another is permitted as long as the
common attribute names have convertible types. Any attribute present in the
target type but not in the source type is populated with a null value of
the appropriate type.
Conversion from one tuple type to another is permitted as long as the
tuples have the same length and the elements have convertible types.
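A similar non-normative sketch for the structural rules, again using the
go-cty `convert` package:

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func structuralConversions() {
	// Tuple to list: each element must convert to the list element type.
	tup := cty.TupleVal([]cty.Value{cty.NumberIntVal(1), cty.NumberIntVal(2)})
	list, _ := convert.Convert(tup, cty.List(cty.Number))

	// Object to map: attribute names become the map keys.
	obj := cty.ObjectVal(map[string]cty.Value{
		"a": cty.StringVal("x"),
		"b": cty.StringVal("y"),
	})
	m, _ := convert.Convert(obj, cty.Map(cty.String))

	fmt.Println(list.LengthInt(), m.LengthInt()) // 2 2
}
```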
### Type Unification
Type unification is an operation that takes a list of types and attempts
to find a single type to which they can all be converted. Since some
type pairs have bidirectional conversions, preference is given to _safe_
conversions. In technical terms, all possible types are arranged into
a lattice, from which a most general supertype is selected where possible.
The type resulting from type unification may be one of the input types, or
it may be an entirely new type produced by combination of two or more
input types.
The following rules do not guarantee a valid result. In addition to these
rules, unification fails if any of the given types are not convertible
(per the above rules) to the selected result type.
The following unification rules apply transitively. That is, if a rule is
defined from A to B, and one from B to C, then A can unify to C.
Number and bool types both unify with string by preferring string.
Two collection types of the same kind unify according to the unification
of their element types.
List and set types unify by preferring the list type.
Map and object types unify by preferring the object type.
List, set and tuple types unify by preferring the tuple type.
The dynamic pseudo-type unifies with any other type by selecting that other
type. The dynamic pseudo-type is the result type only if _all_ input types
are the dynamic pseudo-type.
Two object types unify by constructing a new type whose attributes are
the union of those of the two input types. Any common attributes themselves
have their types unified.
Two tuple types of the same length unify by constructing a new type of the
same length whose elements are the unification of the corresponding elements
in the two input types.
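go-cty exposes this operation as `convert.Unify`; the following sketch is
illustrative and assumes that function's signature, which returns the unified
type along with a conversion for each input:

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func unifyExample() {
	// Number and bool both unify with string by preferring string.
	ty, convs := convert.Unify([]cty.Type{cty.Number, cty.String, cty.Bool})
	fmt.Println(ty.FriendlyName()) // "string"

	// convs[i] converts the i-th input type to the unified type, and is
	// nil where the input type is already the unified type.
	_ = convs
}
```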
## Implementation Considerations
Implementations of this specification are free to adopt any strategy that
produces behavior consistent with the specification. This non-normative
section describes some possible implementation strategies that are consistent
with the goals of this specification.
### Language-agnosticism
The language-agnosticism of this specification assumes that certain behaviors
are implemented separately for each syntax:
* Matching of a body schema with the physical elements of a body in the
source language, to determine correspondence between physical constructs
and schema elements.
* Implementing the _dynamic attributes_ body processing mode by either
interpreting all physical constructs as attributes or producing an error
if non-attribute constructs are present.
* Providing an evaluation function for all possible expressions that produces
a value given an evaluation context.
The suggested implementation strategy is to use an implementation language's
closest concept to an _abstract type_, _virtual type_ or _interface type_
to represent both Body and Expression. Each language-specific implementation
can then provide an implementation of each of these types wrapping AST nodes
or other physical constructs from the language parser.
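As a toy illustration of that strategy, a syntax implementation might wrap
its parser's AST node type behind the Expression interface. All names below
are hypothetical, and only the `Value` method is shown:

```go
import (
	"github.com/hashicorp/hcl2/hcl"
	"github.com/zclconf/go-cty/cty"
)

// exprNode stands in for a parser-specific AST node type.
type exprNode struct {
	// ... syntax-specific fields ...
}

// astExpression adapts exprNode to the syntax-agnostic hcl.Expression
// interface. (Variables, Range and StartRange are omitted from this sketch.)
type astExpression struct {
	node *exprNode
}

func (e astExpression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
	// Syntax-specific evaluation of e.node would happen here.
	return cty.DynamicVal, nil
}
```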
40
vendor/github.com/hashicorp/hcl2/hcl/static_expr.go generated vendored Normal file
View File
@ -0,0 +1,40 @@
package hcl
import (
"github.com/zclconf/go-cty/cty"
)
type staticExpr struct {
val cty.Value
rng Range
}
// StaticExpr returns an Expression that always evaluates to the given value.
//
// This is useful to substitute default values for expressions that are
// not explicitly given in configuration and thus would otherwise have no
// Expression to return.
//
// Since expressions are expected to have a source range, the caller must
// provide one. Ideally this should be a real source range, but it can
// be a synthetic one (with an empty-string filename) if no suitable range
// is available.
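//
// A hypothetical usage sketch, substituting a default value where no
// expression was given in configuration ("defRange" is assumed to be a
// suitable synthetic range):
//
//	expr := StaticExpr(cty.StringVal("default"), defRange)
//	val, diags := expr.Value(nil)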
func StaticExpr(val cty.Value, rng Range) Expression {
return staticExpr{val, rng}
}
func (e staticExpr) Value(ctx *EvalContext) (cty.Value, Diagnostics) {
return e.val, nil
}
func (e staticExpr) Variables() []Traversal {
return nil
}
func (e staticExpr) Range() Range {
return e.rng
}
func (e staticExpr) StartRange() Range {
return e.rng
}
151
vendor/github.com/hashicorp/hcl2/hcl/structure.go generated vendored Normal file
View File
@ -0,0 +1,151 @@
package hcl
import (
"github.com/zclconf/go-cty/cty"
)
// File is the top-level node that results from parsing an HCL file.
type File struct {
Body Body
Bytes []byte
// Nav is used to integrate with the "hcled" editor integration package,
// and with diagnostic information formatters. It is not for direct use
// by a calling application.
Nav interface{}
}
// Block represents a nested block within a Body.
type Block struct {
Type string
Labels []string
Body Body
DefRange Range // Range that can be considered the "definition" for seeking in an editor
TypeRange Range // Range for the block type declaration specifically.
LabelRanges []Range // Ranges for the label values specifically.
}
// Blocks is a sequence of Block.
type Blocks []*Block
// Attributes is a set of attributes keyed by their names.
type Attributes map[string]*Attribute
// Body is a container for attributes and blocks. It serves as the primary
// unit of hierarchical structure within configuration.
//
// The content of a body cannot be meaningfully interpreted without a schema,
// so Body represents the raw body content and has methods that allow the
// content to be extracted in terms of a given schema.
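//
// A usage sketch (illustrative only; the schema contents shown here are a
// hypothetical application choice):
//
//	schema := &BodySchema{
//		Attributes: []AttributeSchema{
//			{Name: "region", Required: true},
//		},
//	}
//	content, diags := file.Body.Content(schema)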
type Body interface {
// Content verifies that the entire body content conforms to the given
// schema and then returns it, and/or returns diagnostics. The returned
// body content is valid if non-nil, regardless of whether Diagnostics
// are provided, but diagnostics should still be eventually shown to
// the user.
Content(schema *BodySchema) (*BodyContent, Diagnostics)
// PartialContent is like Content except that it permits the configuration
// to contain additional blocks or attributes not specified in the
// schema. If any are present, the returned Body is non-nil and contains
// the remaining items from the body that were not selected by the schema.
PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics)
// JustAttributes attempts to interpret all of the contents of the body
// as attributes, allowing for the contents to be accessed without a priori
// knowledge of the structure.
//
// The behavior of this method depends on the body's source language.
// Some languages, like JSON, can't distinguish between attributes and
// blocks without schema hints, but for languages that _can_, error
// diagnostics will be generated if any blocks are present in the body.
//
// Diagnostics may be produced for other reasons too, such as duplicate
// declarations of the same attribute.
JustAttributes() (Attributes, Diagnostics)
// MissingItemRange returns a range that represents where a missing item
// might hypothetically be inserted. This is used when producing
// diagnostics about missing required attributes or blocks. Not all bodies
// will have an obvious single insertion point, so the result here may
// be rather arbitrary.
MissingItemRange() Range
}
// BodyContent is the result of applying a BodySchema to a Body.
type BodyContent struct {
Attributes Attributes
Blocks Blocks
MissingItemRange Range
}
// Attribute represents an attribute from within a body.
type Attribute struct {
Name string
Expr Expression
Range Range
NameRange Range
}
// Expression is a literal value or an expression provided in the
// configuration, which can be evaluated within a scope to produce a value.
type Expression interface {
// Value returns the value resulting from evaluating the expression
// in the given evaluation context.
//
// The context may be nil, in which case the expression may contain
// only constants and diagnostics will be produced for any non-constant
// sub-expressions. (The exact definition of this depends on the source
// language.)
//
// The context may instead be set but have either its Variables or
// Functions maps set to nil, in which case only use of these features
// will return diagnostics.
//
// Different diagnostics are provided depending on whether the given
// context maps are nil or empty. In the former case, the message
// tells the user that variables/functions are not permitted at all,
// while in the latter case usage will produce a "not found" error for
// the specific symbol in question.
Value(ctx *EvalContext) (cty.Value, Diagnostics)
// Variables returns a list of variables referenced in the receiving
// expression. These are expressed as absolute Traversals, so may include
// additional information about how the variable is used, such as
// attribute lookups, which the calling application can potentially use
// to only selectively populate the scope.
Variables() []Traversal
Range() Range
StartRange() Range
}
// OfType filters the receiving block sequence by block type name,
// returning a new block sequence including only the blocks of the
// requested type.
func (els Blocks) OfType(typeName string) Blocks {
ret := make(Blocks, 0)
for _, el := range els {
if el.Type == typeName {
ret = append(ret, el)
}
}
return ret
}
// ByType transforms the receiving block sequence into a map from type
// name to block sequences of only that type.
func (els Blocks) ByType() map[string]Blocks {
ret := make(map[string]Blocks)
for _, el := range els {
ty := el.Type
if ret[ty] == nil {
ret[ty] = make(Blocks, 0, 1)
}
ret[ty] = append(ret[ty], el)
}
return ret
}
324
vendor/github.com/hashicorp/hcl2/hcl/traversal.go generated vendored Normal file
View File
@ -0,0 +1,324 @@
package hcl
import (
"fmt"
"github.com/zclconf/go-cty/cty"
)
// A Traversal is a description of traversing through a value through a
// series of operations such as attribute lookup, index lookup, etc.
//
// It is used to look up values in scopes, for example.
//
// The traversal operations are implementations of interface Traverser.
// This is a closed set of implementations, so the interface cannot be
// implemented from outside this package.
//
// A traversal can be absolute (its first value is a symbol name) or relative
// (starts from an existing value).
type Traversal []Traverser
// TraversalJoin appends a relative traversal to an absolute traversal to
// produce a new absolute traversal.
func TraversalJoin(abs Traversal, rel Traversal) Traversal {
if abs.IsRelative() {
panic("first argument to TraversalJoin must be absolute")
}
if !rel.IsRelative() {
panic("second argument to TraversalJoin must be relative")
}
ret := make(Traversal, len(abs)+len(rel))
copy(ret, abs)
copy(ret[len(abs):], rel)
return ret
}
// TraverseRel applies the receiving traversal to the given value, returning
// the resulting value. This is supported only for relative traversals,
// and will panic if applied to an absolute traversal.
func (t Traversal) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
if !t.IsRelative() {
panic("can't use TraverseRel on an absolute traversal")
}
current := val
var diags Diagnostics
for _, tr := range t {
var newDiags Diagnostics
current, newDiags = tr.TraversalStep(current)
diags = append(diags, newDiags...)
if newDiags.HasErrors() {
return cty.DynamicVal, diags
}
}
return current, diags
}
// TraverseAbs applies the receiving traversal to the given eval context,
// returning the resulting value. This is supported only for absolute
// traversals, and will panic if applied to a relative traversal.
func (t Traversal) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
if t.IsRelative() {
panic("can't use TraverseAbs on a relative traversal")
}
split := t.SimpleSplit()
root := split.Abs[0].(TraverseRoot)
name := root.Name
thisCtx := ctx
hasNonNil := false
for thisCtx != nil {
if thisCtx.Variables == nil {
thisCtx = thisCtx.parent
continue
}
hasNonNil = true
val, exists := thisCtx.Variables[name]
if exists {
return split.Rel.TraverseRel(val)
}
thisCtx = thisCtx.parent
}
if !hasNonNil {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Variables not allowed",
Detail: "Variables may not be used here.",
Subject: &root.SrcRange,
},
}
}
suggestions := make([]string, 0, len(ctx.Variables))
thisCtx = ctx
for thisCtx != nil {
for k := range thisCtx.Variables {
suggestions = append(suggestions, k)
}
thisCtx = thisCtx.parent
}
suggestion := nameSuggestion(name, suggestions)
if suggestion != "" {
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
}
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Unknown variable",
Detail: fmt.Sprintf("There is no variable named %q.%s", name, suggestion),
Subject: &root.SrcRange,
},
}
}
// IsRelative returns true if the receiver is a relative traversal, or false
// otherwise.
func (t Traversal) IsRelative() bool {
if len(t) == 0 {
return true
}
if _, firstIsRoot := t[0].(TraverseRoot); firstIsRoot {
return false
}
return true
}
// SimpleSplit returns a TraversalSplit where the name lookup is the absolute
// part and the remainder is the relative part. Supported only for
// absolute traversals, and will panic if applied to a relative traversal.
//
// This can be used by applications that have a relatively-simple variable
// namespace where only the top-level is directly populated in the scope, with
// everything else handled by relative lookups from those initial values.
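//
// For example (a sketch; "scope" is a hypothetical map[string]cty.Value):
//
//	split := trav.SimpleSplit()
//	rootVal := scope[split.RootName()]
//	result, diags := split.Rel.TraverseRel(rootVal)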
func (t Traversal) SimpleSplit() TraversalSplit {
if t.IsRelative() {
panic("can't use SimpleSplit on a relative traversal")
}
return TraversalSplit{
Abs: t[0:1],
Rel: t[1:],
}
}
// RootName returns the root name for an absolute traversal. Will panic if
// called on a relative traversal.
func (t Traversal) RootName() string {
if t.IsRelative() {
panic("can't use RootName on a relative traversal")
}
return t[0].(TraverseRoot).Name
}
// TraversalSplit represents a pair of traversals, the first of which is
// an absolute traversal and the second of which is relative to the first.
//
// This is used by calling applications that only populate prefixes of the
// traversals in the scope, with Abs representing the part coming from the
// scope and Rel representing the remaining steps once that part is
// retrieved.
type TraversalSplit struct {
Abs Traversal
Rel Traversal
}
// TraverseAbs traverses from a scope to the value resulting from the
// absolute traversal.
func (t TraversalSplit) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
return t.Abs.TraverseAbs(ctx)
}
// TraverseRel traverses from a given value, assumed to be the result of
// TraverseAbs on some scope, to a final result for the entire split traversal.
func (t TraversalSplit) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
return t.Rel.TraverseRel(val)
}
// Traverse is a convenience function to apply TraverseAbs followed by
// TraverseRel.
func (t TraversalSplit) Traverse(ctx *EvalContext) (cty.Value, Diagnostics) {
v1, diags := t.TraverseAbs(ctx)
if diags.HasErrors() {
return cty.DynamicVal, diags
}
v2, newDiags := t.TraverseRel(v1)
diags = append(diags, newDiags...)
return v2, diags
}
// Join concatenates together the Abs and Rel parts to produce a single
// absolute traversal.
func (t TraversalSplit) Join() Traversal {
return TraversalJoin(t.Abs, t.Rel)
}
// RootName returns the root name for the absolute part of the split.
func (t TraversalSplit) RootName() string {
return t.Abs.RootName()
}
// A Traverser is a step within a Traversal.
type Traverser interface {
TraversalStep(cty.Value) (cty.Value, Diagnostics)
isTraverserSigil() isTraverser
}
// Embed this in a struct to declare it as a Traverser
type isTraverser struct {
}
func (tr isTraverser) isTraverserSigil() isTraverser {
return isTraverser{}
}
// TraverseRoot looks up a root name in a scope. It is used as the first step
// of an absolute Traversal, and cannot itself be traversed directly.
type TraverseRoot struct {
isTraverser
Name string
SrcRange Range
}
// TraversalStep on a TraverseRoot immediately panics, because absolute
// traversals cannot be directly traversed.
func (tn TraverseRoot) TraversalStep(cty.Value) (cty.Value, Diagnostics) {
panic("Cannot traverse an absolute traversal")
}
// TraverseAttr looks up an attribute in its initial value.
type TraverseAttr struct {
isTraverser
Name string
SrcRange Range
}
func (tn TraverseAttr) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
if val.IsNull() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Attempt to get attribute from null value",
Detail: "This value is null, so it does not have any attributes.",
Subject: &tn.SrcRange,
},
}
}
ty := val.Type()
switch {
case ty.IsObjectType():
if !ty.HasAttribute(tn.Name) {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Unsupported attribute",
Detail: fmt.Sprintf("This object does not have an attribute named %q.", tn.Name),
Subject: &tn.SrcRange,
},
}
}
if !val.IsKnown() {
return cty.UnknownVal(ty.AttributeType(tn.Name)), nil
}
return val.GetAttr(tn.Name), nil
case ty.IsMapType():
if !val.IsKnown() {
return cty.UnknownVal(ty.ElementType()), nil
}
idx := cty.StringVal(tn.Name)
if val.HasIndex(idx).False() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Missing map element",
Detail: fmt.Sprintf("This map does not have an element with the key %q.", tn.Name),
Subject: &tn.SrcRange,
},
}
}
return val.Index(idx), nil
case ty == cty.DynamicPseudoType:
return cty.DynamicVal, nil
default:
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Unsupported attribute",
Detail: "This value does not have any attributes.",
Subject: &tn.SrcRange,
},
}
}
}
// TraverseIndex applies the index operation to its initial value.
type TraverseIndex struct {
isTraverser
Key cty.Value
SrcRange Range
}
func (tn TraverseIndex) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
return Index(val, tn.Key, &tn.SrcRange)
}
// TraverseSplat applies the splat operation to its initial value.
type TraverseSplat struct {
isTraverser
Each Traversal
SrcRange Range
}
func (tn TraverseSplat) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
panic("TraverseSplat not yet implemented")
}
32
vendor/github.com/hashicorp/hcl2/hcldec/decode.go generated vendored Normal file
View File
@ -0,0 +1,32 @@
package hcldec
import (
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
func decode(body hcl.Body, block *hcl.Block, ctx *hcl.EvalContext, spec Spec, partial bool) (cty.Value, hcl.Body, hcl.Diagnostics) {
schema := ImpliedSchema(spec)
var content *hcl.BodyContent
var diags hcl.Diagnostics
var leftovers hcl.Body
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
val, valDiags := spec.decode(content, block, ctx)
diags = append(diags, valDiags...)
return val, leftovers, diags
}
func sourceRange(body hcl.Body, block *hcl.Block, spec Spec) hcl.Range {
schema := ImpliedSchema(spec)
content, _, _ := body.PartialContent(schema)
return spec.sourceRange(content, block)
}
12
vendor/github.com/hashicorp/hcl2/hcldec/doc.go generated vendored Normal file
View File
@ -0,0 +1,12 @@
// Package hcldec provides a higher-level API for unpacking the content of
// HCL bodies, implemented in terms of the low-level "Content" API exposed
// by the bodies themselves.
//
// It allows decoding an entire nested configuration in a single operation
// by providing a description of the intended structure.
//
// For some applications it may be more convenient to use the "gohcl"
// package, which has a similar purpose but decodes directly into native
// Go data types. hcldec instead targets the cty type system, and thus allows
// a cty-driven application to remain within that type system.
package hcldec
22
vendor/github.com/hashicorp/hcl2/hcldec/gob.go generated vendored Normal file
View File
@ -0,0 +1,22 @@
package hcldec
import (
"encoding/gob"
)
func init() {
// Every Spec implementation should be registered with gob, so that
// specs can be sent over gob channels, such as using
// github.com/hashicorp/go-plugin with plugins that need to describe
// what shape of configuration they are expecting.
gob.Register(ObjectSpec(nil))
gob.Register(TupleSpec(nil))
gob.Register((*AttrSpec)(nil))
gob.Register((*LiteralSpec)(nil))
gob.Register((*ExprSpec)(nil))
gob.Register((*BlockSpec)(nil))
gob.Register((*BlockListSpec)(nil))
gob.Register((*BlockSetSpec)(nil))
gob.Register((*BlockMapSpec)(nil))
gob.Register((*BlockLabelSpec)(nil))
}
47
vendor/github.com/hashicorp/hcl2/hcldec/public.go generated vendored Normal file
View File
@ -0,0 +1,47 @@
package hcldec
import (
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
// Decode interprets the given body using the given specification and returns
// the resulting value. If the given body is not valid per the spec, error
// diagnostics are returned and the returned value is likely to be incomplete.
//
// The ctx argument may be nil, in which case any references to variables or
// functions will produce error diagnostics.
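//
// A minimal usage sketch (the attribute name and type are hypothetical):
//
//	spec := &AttrSpec{Name: "port", Type: cty.Number, Required: true}
//	val, diags := Decode(body, spec, nil)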
func Decode(body hcl.Body, spec Spec, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
val, _, diags := decode(body, nil, ctx, spec, false)
return val, diags
}
// PartialDecode is like Decode except that it permits "leftover" items in
// the top-level body, which are returned as a new body to allow for
// further processing.
//
// Any descendent block bodies are _not_ decoded partially and thus must
// be fully described by the given specification.
func PartialDecode(body hcl.Body, spec Spec, ctx *hcl.EvalContext) (cty.Value, hcl.Body, hcl.Diagnostics) {
return decode(body, nil, ctx, spec, true)
}
// SourceRange interprets the given body using the given specification and
// then returns the source range of the value that would be used to
// fulfill the spec.
//
// This can be used if application-level validation detects value errors, to
// obtain a reasonable SourceRange to use for generated diagnostics. It works
// best when applied to specific body items (e.g. using AttrSpec, BlockSpec, ...)
// as opposed to entire bodies using ObjectSpec, TupleSpec. The result will
// be less useful the broader the specification, so e.g. a spec that returns
// the entirety of all of the blocks of a given type is likely to be
// _particularly_ arbitrary and useless.
//
// If the given body is not valid per the given spec, the result is best-effort
// and may not actually be something ideal. It's expected that an application
// will already have used Decode or PartialDecode earlier and thus had an
// opportunity to detect and report spec violations.
func SourceRange(body hcl.Body, spec Spec) hcl.Range {
return sourceRange(body, nil, spec)
}
34
vendor/github.com/hashicorp/hcl2/hcldec/schema.go generated vendored Normal file
View File
@ -0,0 +1,34 @@
package hcldec
import (
"github.com/hashicorp/hcl2/hcl"
)
// ImpliedSchema returns the *hcl.BodySchema implied by the given specification.
// This is the schema that the Decode function will use internally to
// access the content of a given body.
func ImpliedSchema(spec Spec) *hcl.BodySchema {
var attrs []hcl.AttributeSchema
var blocks []hcl.BlockHeaderSchema
// visitSameBodyChildren walks through the spec structure, calling
// the given callback for each descendent spec encountered. We are
// interested in the specs that reference attributes and blocks.
visit := func(s Spec) {
if as, ok := s.(attrSpec); ok {
attrs = append(attrs, as.attrSchemata()...)
}
if bs, ok := s.(blockSpec); ok {
blocks = append(blocks, bs.blockHeaderSchemata()...)
}
}
visit(spec)
spec.visitSameBodyChildren(visit)
return &hcl.BodySchema{
Attributes: attrs,
Blocks: blocks,
}
}
568
vendor/github.com/hashicorp/hcl2/hcldec/spec.go generated vendored Normal file
View File
@ -0,0 +1,568 @@
package hcldec
import (
"fmt"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
// A Spec is a description of how to decode a hcl.Body to a cty.Value.
//
// The various other types in this package whose names end in "Spec" are
// the spec implementations. The most common top-level spec is ObjectSpec,
// which decodes body content into a cty.Value of an object type.
type Spec interface {
// Perform the decode operation on the given body, in the context of
// the given block (which might be null), using the given eval context.
//
// "block" is provided only by the nested calls performed by the spec
// types that work on block bodies.
decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
// Call the given callback once for each of the nested specs that would
// get decoded with the same body and block as the receiver. This should
// not descend into the nested specs used when decoding blocks.
visitSameBodyChildren(cb visitFunc)
// Determine the source range of the value that would be returned for the
// spec in the given content, in the context of the given block
// (which might be null). If the corresponding item is missing, return
// a place where it might be inserted.
sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range
}
type visitFunc func(spec Spec)
// An ObjectSpec is a Spec that produces a cty.Value of an object type whose
// attributes correspond to the keys of the spec map.
type ObjectSpec map[string]Spec
// attrSpec is implemented by specs that require attributes from the body.
type attrSpec interface {
attrSchemata() []hcl.AttributeSchema
}
// blockSpec is implemented by specs that require blocks from the body.
type blockSpec interface {
blockHeaderSchemata() []hcl.BlockHeaderSchema
}
// specNeedingVariables is implemented by specs that can use variables
// from the EvalContext, to declare which variables they need.
type specNeedingVariables interface {
variablesNeeded(content *hcl.BodyContent) []hcl.Traversal
}
func (s ObjectSpec) visitSameBodyChildren(cb visitFunc) {
for _, c := range s {
cb(c)
}
}
func (s ObjectSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
vals := make(map[string]cty.Value, len(s))
var diags hcl.Diagnostics
for k, spec := range s {
var kd hcl.Diagnostics
vals[k], kd = spec.decode(content, block, ctx)
diags = append(diags, kd...)
}
return cty.ObjectVal(vals), diags
}
func (s ObjectSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
if block != nil {
return block.DefRange
}
// This is not great, but the best we can do. In practice, it's rather
// strange to ask for the source range of an entire top-level body, since
// that's already readily available to the caller.
return content.MissingItemRange
}
// A TupleSpec is a Spec that produces a cty.Value of a tuple type whose
// elements correspond to the elements of the spec slice.
type TupleSpec []Spec
func (s TupleSpec) visitSameBodyChildren(cb visitFunc) {
for _, c := range s {
cb(c)
}
}
func (s TupleSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
vals := make([]cty.Value, len(s))
var diags hcl.Diagnostics
for i, spec := range s {
var ed hcl.Diagnostics
vals[i], ed = spec.decode(content, block, ctx)
diags = append(diags, ed...)
}
return cty.TupleVal(vals), diags
}
func (s TupleSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
if block != nil {
return block.DefRange
}
// This is not great, but the best we can do. In practice, it's rather
// strange to ask for the source range of an entire top-level body, since
// that's already readily available to the caller.
return content.MissingItemRange
}
// An AttrSpec is a Spec that evaluates a particular attribute expression in
// the body and returns its resulting value converted to the requested type,
// or produces a diagnostic if the type is incorrect.
type AttrSpec struct {
Name string
Type cty.Type
Required bool
}
func (s *AttrSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node
}
// specNeedingVariables implementation
func (s *AttrSpec) variablesNeeded(content *hcl.BodyContent) []hcl.Traversal {
attr, exists := content.Attributes[s.Name]
if !exists {
return nil
}
return attr.Expr.Variables()
}
// attrSpec implementation
func (s *AttrSpec) attrSchemata() []hcl.AttributeSchema {
return []hcl.AttributeSchema{
{
Name: s.Name,
Required: s.Required,
},
}
}
func (s *AttrSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
attr, exists := content.Attributes[s.Name]
if !exists {
return content.MissingItemRange
}
return attr.Expr.Range()
}
func (s *AttrSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
attr, exists := content.Attributes[s.Name]
if !exists {
// We don't need to check required and emit a diagnostic here, because
// that would already have happened when building "content".
return cty.NullVal(s.Type), nil
}
val, diags := attr.Expr.Value(ctx)
convVal, err := convert.Convert(val, s.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect attribute value type",
Detail: fmt.Sprintf(
"Inappropriate value for attribute %q: %s.",
s.Name, err.Error(),
),
Subject: attr.Expr.StartRange().Ptr(),
Context: hcl.RangeBetween(attr.NameRange, attr.Expr.StartRange()).Ptr(),
})
// We'll return an unknown value of the _correct_ type so that the
// incomplete result can still be used for some analysis use-cases.
val = cty.UnknownVal(s.Type)
} else {
val = convVal
}
return val, diags
}
// A LiteralSpec is a Spec that produces the given literal value, ignoring
// the given body.
type LiteralSpec struct {
Value cty.Value
}
func (s *LiteralSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node
}
func (s *LiteralSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return s.Value, nil
}
func (s *LiteralSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
// No sensible range to return for a literal, so the caller had better
// ensure it doesn't cause any diagnostics.
return hcl.Range{
Filename: "<unknown>",
}
}
// An ExprSpec is a Spec that evaluates the given expression, ignoring the
// given body.
type ExprSpec struct {
Expr hcl.Expression
}
func (s *ExprSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node
}
// specNeedingVariables implementation
func (s *ExprSpec) variablesNeeded(content *hcl.BodyContent) []hcl.Traversal {
return s.Expr.Variables()
}
func (s *ExprSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return s.Expr.Value(ctx)
}
func (s *ExprSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
return s.Expr.Range()
}
// A BlockSpec is a Spec that produces a cty.Value by decoding the contents
// of a single nested block of a given type, using a nested spec.
//
// If the Required flag is not set, the nested block may be omitted, in which
// case a null value is produced. If it _is_ set, an error diagnostic is
// produced if there are no nested blocks of the given type.
type BlockSpec struct {
TypeName string
Nested Spec
Required bool
}
func (s *BlockSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node ("Nested" does not use the same body)
}
// blockSpec implementation
func (s *BlockSpec) blockHeaderSchemata() []hcl.BlockHeaderSchema {
return []hcl.BlockHeaderSchema{
{
Type: s.TypeName,
},
}
}
// specNeedingVariables implementation
func (s *BlockSpec) variablesNeeded(content *hcl.BodyContent) []hcl.Traversal {
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
childBlock = candidate
break
}
if childBlock == nil {
return nil
}
return Variables(childBlock.Body, s.Nested)
}
func (s *BlockSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
var diags hcl.Diagnostics
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
if childBlock != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Duplicate %s block", s.TypeName),
Detail: fmt.Sprintf(
"Only one block of type %q is allowed. Previous definition was at %s.",
s.TypeName, childBlock.DefRange.String(),
),
Subject: &candidate.DefRange,
})
break
}
childBlock = candidate
}
if childBlock == nil {
if s.Required {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s block", s.TypeName),
Detail: fmt.Sprintf(
"A block of type %q is required here.", s.TypeName,
),
Subject: &content.MissingItemRange,
})
}
return cty.NullVal(cty.DynamicPseudoType), diags
}
if s.Nested == nil {
panic("BlockSpec with no Nested Spec")
}
val, _, childDiags := decode(childBlock.Body, childBlock, ctx, s.Nested, false)
diags = append(diags, childDiags...)
return val, diags
}
func (s *BlockSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
childBlock = candidate
break
}
if childBlock == nil {
return content.MissingItemRange
}
return sourceRange(childBlock.Body, childBlock, s.Nested)
}
// A BlockListSpec is a Spec that produces a cty list of the results of
// decoding all of the nested blocks of a given type, using a nested spec.
type BlockListSpec struct {
TypeName string
Nested Spec
MinItems int
MaxItems int
}
func (s *BlockListSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node ("Nested" does not use the same body)
}
// blockSpec implementation
func (s *BlockListSpec) blockHeaderSchemata() []hcl.BlockHeaderSchema {
return []hcl.BlockHeaderSchema{
{
Type: s.TypeName,
// FIXME: Need to peek into s.Nested to see if it has any
// BlockLabelSpec instances, which will then define how many
// labels we need.
},
}
}
// specNeedingVariables implementation
func (s *BlockListSpec) variablesNeeded(content *hcl.BodyContent) []hcl.Traversal {
var ret []hcl.Traversal
for _, childBlock := range content.Blocks {
if childBlock.Type != s.TypeName {
continue
}
ret = append(ret, Variables(childBlock.Body, s.Nested)...)
}
return ret
}
func (s *BlockListSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
var diags hcl.Diagnostics
if s.Nested == nil {
panic("BlockSpec with no Nested Spec")
}
var elems []cty.Value
var sourceRanges []hcl.Range
for _, childBlock := range content.Blocks {
if childBlock.Type != s.TypeName {
continue
}
val, _, childDiags := decode(childBlock.Body, childBlock, ctx, s.Nested, false)
diags = append(diags, childDiags...)
elems = append(elems, val)
sourceRanges = append(sourceRanges, sourceRange(childBlock.Body, childBlock, s.Nested))
}
if len(elems) < s.MinItems {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("insufficient %s blocks", s.TypeName),
Detail: fmt.Sprintf("at least %d blocks are required", s.MinItems),
Subject: &content.MissingItemRange,
})
} else if s.MaxItems > 0 && len(elems) > s.MaxItems {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("too many %s blocks", s.TypeName),
Detail: fmt.Sprintf("no more than %d blocks are allowed", s.MaxItems),
Subject: &sourceRanges[s.MaxItems],
})
}
var ret cty.Value
if len(elems) == 0 {
// FIXME: We don't currently have enough info to construct a type for
// an empty list, so we'll just stub it out.
ret = cty.ListValEmpty(cty.DynamicPseudoType)
} else {
ret = cty.ListVal(elems)
}
return ret, diags
}
func (s *BlockListSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
// We return the source range of the _first_ block of the given type,
// since they are not guaranteed to form a contiguous range.
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
childBlock = candidate
break
}
if childBlock == nil {
return content.MissingItemRange
}
return sourceRange(childBlock.Body, childBlock, s.Nested)
}
// A BlockSetSpec is a Spec that produces a cty set of the results of
// decoding all of the nested blocks of a given type, using a nested spec.
type BlockSetSpec struct {
TypeName string
Nested Spec
MinItems int
MaxItems int
}
func (s *BlockSetSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node ("Nested" does not use the same body)
}
func (s *BlockSetSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
panic("BlockSetSpec.decode not yet implemented")
}
func (s *BlockSetSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
// We return the source range of the _first_ block of the given type,
// since they are not guaranteed to form a contiguous range.
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
childBlock = candidate
break
}
if childBlock == nil {
return content.MissingItemRange
}
return sourceRange(childBlock.Body, childBlock, s.Nested)
}
// A BlockMapSpec is a Spec that produces a cty map of the results of
// decoding all of the nested blocks of a given type, using a nested spec.
//
// One level of map structure is created for each of the given label names.
// There must be at least one given label name.
type BlockMapSpec struct {
TypeName string
LabelNames []string
Nested Spec
}
func (s *BlockMapSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node ("Nested" does not use the same body)
}
func (s *BlockMapSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
panic("BlockMapSpec.decode not yet implemented")
}
func (s *BlockMapSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
// We return the source range of the _first_ block of the given type,
// since they are not guaranteed to form a contiguous range.
var childBlock *hcl.Block
for _, candidate := range content.Blocks {
if candidate.Type != s.TypeName {
continue
}
childBlock = candidate
break
}
if childBlock == nil {
return content.MissingItemRange
}
return sourceRange(childBlock.Body, childBlock, s.Nested)
}
// A BlockLabelSpec is a Spec that returns a cty.String representing the
// label of the block its given body belongs to, if indeed its given body
// belongs to a block. It is a programming error to use this in a non-block
// context, so this spec will panic in that case.
//
// This spec only works in the nested spec within a BlockSpec, BlockListSpec,
// BlockSetSpec or BlockMapSpec.
//
// The full set of label specs used against a particular block must have a
// consecutive set of indices starting at zero. The maximum index found
// defines how many labels the corresponding blocks must have in HCL source.
type BlockLabelSpec struct {
Index int
Name string
}
func (s *BlockLabelSpec) visitSameBodyChildren(cb visitFunc) {
// leaf node
}
func (s *BlockLabelSpec) decode(content *hcl.BodyContent, block *hcl.Block, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
panic("BlockLabelSpec.decode not yet implemented")
}
func (s *BlockLabelSpec) sourceRange(content *hcl.BodyContent, block *hcl.Block) hcl.Range {
if block == nil {
panic("BlockListSpec used in non-block context")
}
return block.LabelRanges[s.Index]
}
34
vendor/github.com/hashicorp/hcl2/hcldec/variables.go generated vendored Normal file
View File
@ -0,0 +1,34 @@
package hcldec
import (
"github.com/hashicorp/hcl2/hcl"
)
// Variables processes the given body with the given spec and returns a
// list of the variable traversals that would be required to decode
// the same pairing of body and spec.
//
// This can be used to conditionally populate the variables in the EvalContext
// passed to Decode, for applications where a static scope is insufficient.
//
// If the given body is not compliant with the given schema, the result may
// be incomplete, but that's assumed to be okay because the eventual call
// to Decode will produce error diagnostics anyway.
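//
// A sketch of selective scope population (illustrative only):
//
//	for _, traversal := range Variables(body, spec) {
//		name := traversal.RootName()
//		// Look up "name" in application data and add it to the
//		// EvalContext passed to Decode.
//	}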
func Variables(body hcl.Body, spec Spec) []hcl.Traversal {
schema := ImpliedSchema(spec)
content, _, _ := body.PartialContent(schema)
var vars []hcl.Traversal
if vs, ok := spec.(specNeedingVariables); ok {
vars = append(vars, vs.variablesNeeded(content)...)
}
spec.visitSameBodyChildren(func(s Spec) {
if vs, ok := s.(specNeedingVariables); ok {
vars = append(vars, vs.variablesNeeded(content)...)
}
})
return vars
}
4
vendor/github.com/hashicorp/hcl2/hcled/doc.go generated vendored Normal file
View File
@ -0,0 +1,4 @@
// Package hcled provides functionality intended to help an application
// that embeds HCL to deliver relevant information to a text editor or IDE
// for navigating around and analyzing configuration files.
package hcled
20
vendor/github.com/hashicorp/hcl2/hcled/navigation.go generated vendored Normal file
View File
@ -0,0 +1,20 @@
package hcled
import (
"github.com/hashicorp/hcl2/hcl"
)
type contextStringer interface {
ContextString(offset int) string
}
// ContextString returns a string describing the context of the given byte
// offset, if available. An empty string is returned if no such information
// is available, or otherwise the returned string is in a form that depends
// on the language used to write the referenced file.
func ContextString(file *hcl.File, offset int) string {
if cser, ok := file.Nav.(contextStringer); ok {
return cser.ContextString(offset)
}
return ""
}
123
vendor/github.com/hashicorp/hcl2/hclparse/parser.go generated vendored Normal file
View File
@ -0,0 +1,123 @@
package hclparse
import (
"fmt"
"io/ioutil"
"github.com/hashicorp/hcl2/hcl"
"github.com/hashicorp/hcl2/hcl/hclsyntax"
"github.com/hashicorp/hcl2/hcl/json"
)
// NOTE: This is the public interface for parsing. The actual parsers are
// in other packages alongside this one, with this package just wrapping them
// to provide a unified interface for the caller across all supported formats.
// Parser is the main interface for parsing configuration files. As well as
// parsing files, a parser also retains a registry of all of the files it
// has parsed so that multiple attempts to parse the same file will return
// the same object and so the collected files can be used when printing
// diagnostics.
//
// Any diagnostics for parsing a file are only returned once on the first
// call to parse that file. Callers are expected to collect up diagnostics
// and present them together, so returning diagnostics for the same file
// multiple times would create a confusing result.
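//
// A typical usage sketch (the filename is illustrative):
//
//	p := NewParser()
//	f, diags := p.ParseHCLFile("example.hcl")
//	if diags.HasErrors() {
//		// Render diagnostics, using p.Files() for source context.
//	}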
type Parser struct {
files map[string]*hcl.File
}
// NewParser creates a new parser, ready to parse configuration files.
func NewParser() *Parser {
return &Parser{
files: map[string]*hcl.File{},
}
}
// ParseHCL parses the given buffer (which is assumed to have been loaded from
// the given filename) as a native-syntax configuration file and returns the
// hcl.File object representing it.
func (p *Parser) ParseHCL(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := hclsyntax.ParseConfig(src, filename, hcl.Pos{Byte: 0, Line: 1, Column: 1})
p.files[filename] = file
return file, diags
}
// ParseHCLFile reads the given filename and parses it as a native-syntax HCL
// configuration file. An error diagnostic is returned if the given file
// cannot be read.
func (p *Parser) ParseHCLFile(filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
src, err := ioutil.ReadFile(filename)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to read file",
Detail: fmt.Sprintf("The configuration file %q could not be read.", filename),
},
}
}
return p.ParseHCL(src, filename)
}
// ParseJSON parses the given JSON buffer (which is assumed to have been loaded
// from the given filename) and returns the hcl.File object representing it.
func (p *Parser) ParseJSON(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := json.Parse(src, filename)
p.files[filename] = file
return file, diags
}
// ParseJSONFile reads the given filename and parses it as JSON, similarly to
// ParseJSON. An error diagnostic is returned if the given file cannot be read.
func (p *Parser) ParseJSONFile(filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := json.ParseFile(filename)
p.files[filename] = file
return file, diags
}
// AddFile allows a caller to record in a parser a file that was parsed some
// other way, thus allowing it to be included in the registry of sources.
func (p *Parser) AddFile(filename string, file *hcl.File) {
p.files[filename] = file
}
// Sources returns a map from filenames to the raw source code that was
// read from them. This is intended to be used, for example, to print
// diagnostics with contextual information.
//
// The arrays underlying the returned slices should not be modified.
func (p *Parser) Sources() map[string][]byte {
ret := make(map[string][]byte)
for fn, f := range p.files {
ret[fn] = f.Bytes
}
return ret
}
// Files returns a map from filenames to the File objects produced from them.
// This is intended to be used, for example, to print diagnostics with
// contextual information.
//
// The returned map and all of the objects it refers to directly or indirectly
// must not be modified.
func (p *Parser) Files() map[string]*hcl.File {
return p.files
}
191
vendor/github.com/hashicorp/hcl2/hclwrite/ast.go generated vendored Normal file
View File
@ -0,0 +1,191 @@
package hclwrite
import (
"bytes"
"io"
"github.com/hashicorp/hcl2/hcl"
"github.com/zclconf/go-cty/cty"
)
type Node interface {
walkChildNodes(w internalWalkFunc)
Tokens() *TokenSeq
}
type internalWalkFunc func(Node)
type File struct {
Name string
SrcBytes []byte
Body *Body
AllTokens *TokenSeq
}
// WriteTo writes the tokens underlying the receiving file to the given writer.
func (f *File) WriteTo(wr io.Writer) (int, error) {
return f.AllTokens.WriteTo(wr)
}
// Bytes returns a buffer containing the source code resulting from the
// tokens underlying the receiving file. If any updates have been made via
// the AST API, these will be reflected in the result.
func (f *File) Bytes() []byte {
buf := &bytes.Buffer{}
f.WriteTo(buf)
return buf.Bytes()
}
// Format makes in-place modifications to the tokens underlying the receiving
// file in order to change the whitespace to be in canonical form.
func (f *File) Format() {
format(f.Body.AllTokens.Tokens())
}
type Body struct {
// Items may contain Attribute, Block and Unstructured instances.
// Items and AllTokens should be updated only by methods of this type,
// since they must be kept synchronized for correct operation.
Items []Node
AllTokens *TokenSeq
// IndentLevel is the number of spaces that should appear at the start
// of lines added within this body.
IndentLevel int
}
func (n *Body) walkChildNodes(w internalWalkFunc) {
for _, item := range n.Items {
w(item)
}
}
func (n *Body) Tokens() *TokenSeq {
return n.AllTokens
}
func (n *Body) AppendItem(node Node) {
n.Items = append(n.Items, node)
n.AppendUnstructuredTokens(node.Tokens())
}
func (n *Body) AppendUnstructuredTokens(seq *TokenSeq) {
if n.AllTokens == nil {
new := make(TokenSeq, 0, 1)
n.AllTokens = &new
}
*(n.AllTokens) = append(*(n.AllTokens), seq)
}
// FindAttribute returns the first attribute item from the body that has the
// given name, or returns nil if there is currently no matching attribute.
//
// A valid AST has only one definition of each attribute, but that constraint
// is not enforced in the hclwrite AST, so a tree that has been mutated by
// other calls may contain additional matching attributes that cannot be seen
// by this method.
func (n *Body) FindAttribute(name string) *Attribute {
nameBytes := []byte(name)
for _, item := range n.Items {
if attr, ok := item.(*Attribute); ok {
if attr.NameTokens.IsIdent(nameBytes) {
return attr
}
}
}
return nil
}
// SetAttributeValue either replaces the expression of an existing attribute
// of the given name or adds a new attribute definition to the end of the block.
//
// The value is given as a cty.Value, and must therefore be a literal. To set
// a variable reference or other traversal, use SetAttributeTraversal.
//
// The return value is the attribute that was either modified in-place or
// created.
func (n *Body) SetAttributeValue(name string, val cty.Value) *Attribute {
panic("Body.SetAttributeValue not yet implemented")
}
// SetAttributeTraversal either replaces the expression of an existing attribute
// of the given name or adds a new attribute definition to the end of the block.
//
// The new expression is given as a hcl.Traversal, which must be an absolute
// traversal. To set a literal value, use SetAttributeValue.
//
// The return value is the attribute that was either modified in-place or
// created.
func (n *Body) SetAttributeTraversal(name string, traversal hcl.Traversal) *Attribute {
panic("Body.SetAttributeTraversal not yet implemented")
}
type Attribute struct {
AllTokens *TokenSeq
LeadCommentTokens *TokenSeq
NameTokens *TokenSeq
EqualsTokens *TokenSeq
Expr *Expression
LineCommentTokens *TokenSeq
EOLTokens *TokenSeq
}
func (a *Attribute) walkChildNodes(w internalWalkFunc) {
w(a.Expr)
}
func (n *Attribute) Tokens() *TokenSeq {
return n.AllTokens
}
type Block struct {
AllTokens *TokenSeq
LeadCommentTokens *TokenSeq
TypeTokens *TokenSeq
LabelTokens []*TokenSeq
LabelTokensFlat *TokenSeq
OBraceTokens *TokenSeq
Body *Body
CBraceTokens *TokenSeq
EOLTokens *TokenSeq
}
func (n *Block) walkChildNodes(w internalWalkFunc) {
w(n.Body)
}
func (n *Block) Tokens() *TokenSeq {
return n.AllTokens
}
type Expression struct {
AllTokens *TokenSeq
VarRefs []*VarRef
}
func (n *Expression) walkChildNodes(w internalWalkFunc) {
for _, name := range n.VarRefs {
w(name)
}
}
func (n *Expression) Tokens() *TokenSeq {
return n.AllTokens
}
type VarRef struct {
// Tokens alternate between TokenIdent and TokenDot, with the first
// and last elements always being TokenIdent.
AllTokens *TokenSeq
}
func (n *VarRef) walkChildNodes(w internalWalkFunc) {
// no child nodes of a variable name
}
func (n *VarRef) Tokens() *TokenSeq {
return n.AllTokens
}
7
vendor/github.com/hashicorp/hcl2/hclwrite/doc.go generated vendored Normal file
View File
@ -0,0 +1,7 @@
// Package hclwrite deals with the problem of generating HCL configuration
// and of making specific surgical changes to existing HCL configurations.
//
// It operates at a different level of abstraction than the main HCL parser
// and AST, since details such as the placement of comments and newlines
// are preserved when unchanged.
package hclwrite
410
vendor/github.com/hashicorp/hcl2/hclwrite/format.go generated vendored Normal file
View File
@ -0,0 +1,410 @@
package hclwrite
import (
"github.com/hashicorp/hcl2/hcl/hclsyntax"
)
// placeholder token used when we don't have a token but we don't want
// to pass a real "nil" and complicate things with nil pointer checks
var nilToken = &Token{
Type: hclsyntax.TokenNil,
Bytes: []byte{},
SpacesBefore: 0,
}
// format rewrites tokens within the given sequence, in-place, to adjust the
// whitespace around their content to achieve canonical formatting.
func format(tokens Tokens) {
// Formatting is a multi-pass process. More details on the passes below,
// but this is the overview:
// - adjust the leading space on each line to create appropriate
// indentation
// - adjust spaces between tokens in a single cell using a set of rules
// - adjust the leading space in the "assign" and "comment" cells on each
// line to vertically align with neighboring lines.
// All of these steps operate in-place on the given tokens, so a caller
// may collect a flat sequence of all of the tokens underlying an AST
// and pass it here and we will then indirectly modify the AST itself.
// Formatting must change only whitespace. Specifically, that means
// changing the SpacesBefore attribute on a token while leaving the
// other token attributes unchanged.
lines := linesForFormat(tokens)
formatIndent(lines)
formatSpaces(lines)
formatCells(lines)
}
func formatIndent(lines []formatLine) {
// Our methodology for indents is to take the input one line at a time
// and count the bracketing delimiters on each line. If a line has a net
// increase in open brackets, we increase the indent level by one and
// remember how many new openers we had. If the line has a net _decrease_,
// we'll compare it to the most recent number of openers and decrease the
// dedent level by one each time we pass an indent level remembered
// earlier.
// The "indent stack" used here allows for us to recognize degenerate
// input where brackets are not symmetrical within lines and avoid
// pushing things too far left or right, creating confusion.
// We'll start our indent stack at a reasonable capacity to minimize the
// chance of us needing to grow it; 10 here means 10 levels of indent,
// which should be more than enough for reasonable HCL uses.
indents := make([]int, 0, 10)
for i := range lines {
// TODO: need to track when we're inside a multi-line template and
// suspend indentation processing.
line := &lines[i]
if len(line.lead) == 0 {
continue
}
if line.lead[0].Type == hclsyntax.TokenNewline {
// Never place spaces before a newline
line.lead[0].SpacesBefore = 0
continue
}
netBrackets := 0
for _, token := range line.lead {
netBrackets += tokenBracketChange(token)
}
for _, token := range line.assign {
netBrackets += tokenBracketChange(token)
}
switch {
case netBrackets > 0:
line.lead[0].SpacesBefore = 2 * len(indents)
indents = append(indents, netBrackets)
case netBrackets < 0:
closed := -netBrackets
for closed > 0 && len(indents) > 0 {
switch {
case closed > indents[len(indents)-1]:
closed -= indents[len(indents)-1]
indents = indents[:len(indents)-1]
case closed < indents[len(indents)-1]:
indents[len(indents)-1] -= closed
closed = 0
default:
indents = indents[:len(indents)-1]
closed = 0
}
}
line.lead[0].SpacesBefore = 2 * len(indents)
default:
line.lead[0].SpacesBefore = 2 * len(indents)
}
}
}
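// indentExample is a minimal sketch added for illustration, not part of the
// upstream file: it shows the net-bracket counting that formatIndent relies
// on. The input source here is hypothetical; lexConfig is this package's
// own scanner wrapper (defined in parser.go).
func indentExample() int {
	toks := lexConfig([]byte("a = [{\n"))
	net := 0
	for _, t := range toks {
		net += tokenBracketChange(t)
	}
	// Two net openers, so subsequent lines would indent one level (two
	// spaces) deeper.
	return net // 2
}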
func formatSpaces(lines []formatLine) {
for _, line := range lines {
for i, token := range line.lead {
var before, after *Token
if i > 0 {
before = line.lead[i-1]
} else {
before = nilToken
}
if i < (len(line.lead) - 1) {
after = line.lead[i+1]
} else {
after = nilToken
}
if spaceAfterToken(token, before, after) {
after.SpacesBefore = 1
} else {
after.SpacesBefore = 0
}
}
for i, token := range line.assign {
if i == 0 {
// first token in "assign" always has one space before to
// separate the equals sign from what it's assigning.
token.SpacesBefore = 1
}
var before, after *Token
if i > 0 {
before = line.assign[i-1]
} else {
before = nilToken
}
if i < (len(line.assign) - 1) {
after = line.assign[i+1]
} else {
after = nilToken
}
if spaceAfterToken(token, before, after) {
after.SpacesBefore = 1
} else {
after.SpacesBefore = 0
}
}
}
}
func formatCells(lines []formatLine) {
chainStart := -1
maxColumns := 0
// We'll deal with the "assign" cell first, since moving that will
// also impact the "comment" cell.
closeAssignChain := func(i int) {
for _, chainLine := range lines[chainStart:i] {
columns := chainLine.lead.Columns()
spaces := (maxColumns - columns) + 1
chainLine.assign[0].SpacesBefore = spaces
}
chainStart = -1
maxColumns = 0
}
for i, line := range lines {
if line.assign == nil {
if chainStart != -1 {
closeAssignChain(i)
}
} else {
if chainStart == -1 {
chainStart = i
}
columns := line.lead.Columns()
if columns > maxColumns {
maxColumns = columns
}
}
}
if chainStart != -1 {
closeAssignChain(len(lines))
}
// Now we'll deal with the comments
closeCommentChain := func(i int) {
for _, chainLine := range lines[chainStart:i] {
columns := chainLine.lead.Columns() + chainLine.assign.Columns()
spaces := (maxColumns - columns) + 1
chainLine.comment[0].SpacesBefore = spaces
}
chainStart = -1
maxColumns = 0
}
for i, line := range lines {
if line.comment == nil {
if chainStart != -1 {
closeCommentChain(i)
}
} else {
if chainStart == -1 {
chainStart = i
}
columns := line.lead.Columns() + line.assign.Columns()
if columns > maxColumns {
maxColumns = columns
}
}
}
if chainStart != -1 {
closeCommentChain(len(lines))
}
}
// spaceAfterToken decides whether a particular subject token should have a
// space after it when surrounded by the given before and after tokens.
// "before" can be TokenNil, if the subject token is at the start of a sequence.
func spaceAfterToken(subject, before, after *Token) bool {
switch {
case after.Type == hclsyntax.TokenNewline || after.Type == hclsyntax.TokenNil:
// Never add spaces before a newline or at the end of the token sequence
return false
case subject.Type == hclsyntax.TokenIdent && after.Type == hclsyntax.TokenOParen:
// Don't split a function name from open paren in a call
return false
case subject.Type == hclsyntax.TokenDot || after.Type == hclsyntax.TokenDot:
// Don't use spaces around attribute access dots
return false
case after.Type == hclsyntax.TokenComma:
// No space right before a comma in an argument list
return false
case subject.Type == hclsyntax.TokenQuotedLit || subject.Type == hclsyntax.TokenStringLit || subject.Type == hclsyntax.TokenOQuote || subject.Type == hclsyntax.TokenOHeredoc || after.Type == hclsyntax.TokenQuotedLit || after.Type == hclsyntax.TokenStringLit || after.Type == hclsyntax.TokenCQuote || after.Type == hclsyntax.TokenCHeredoc:
// No extra spaces within templates
return false
case subject.Type == hclsyntax.TokenMinus:
// Since a minus can either be subtraction or negation, and the latter
// should _not_ have a space after it, we need to use some heuristics
// to decide which case this is.
// We guess that we have a negation if the token before doesn't look
// like it could be the end of an expression.
switch before.Type {
case hclsyntax.TokenNil:
// Minus at the start of input must be a negation
return false
case hclsyntax.TokenOParen, hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenEqual, hclsyntax.TokenColon, hclsyntax.TokenComma, hclsyntax.TokenQuestion:
// Minus immediately after an opening bracket or separator must be a negation.
return false
case hclsyntax.TokenPlus, hclsyntax.TokenStar, hclsyntax.TokenSlash, hclsyntax.TokenPercent, hclsyntax.TokenMinus:
// Minus immediately after another arithmetic operator must be negation.
return false
case hclsyntax.TokenEqualOp, hclsyntax.TokenNotEqual, hclsyntax.TokenGreaterThan, hclsyntax.TokenGreaterThanEq, hclsyntax.TokenLessThan, hclsyntax.TokenLessThanEq:
// Minus immediately after another comparison operator must be negation.
return false
case hclsyntax.TokenAnd, hclsyntax.TokenOr, hclsyntax.TokenBang:
// Minus immediately after a logical operator doesn't make sense, but it was probably intended as negation.
return false
default:
return true
}
case tokenBracketChange(subject) > 0:
// No spaces after open brackets
return false
case tokenBracketChange(after) < 0:
// No spaces before close brackets
return false
default:
// Most tokens are space-separated
return true
}
}
func linesForFormat(tokens Tokens) []formatLine {
if len(tokens) == 0 {
// should never happen, since we should always have EOF, but let's
// not crash anyway.
return make([]formatLine, 0)
}
// first we'll count our lines, so we can allocate the array for them in
// a single block. (We want to minimize memory pressure in this codepath,
// so it can be run somewhat frequently by editor integrations.)
lineCount := 1 // if there are zero newlines then there is one line
for _, tok := range tokens {
if tokenIsNewline(tok) {
lineCount++
}
}
// To start, we'll just put everything in the "lead" cell on each line,
// and then do another pass over the lines afterwards to adjust.
lines := make([]formatLine, lineCount)
li := 0
lineStart := 0
for i, tok := range tokens {
if tok.Type == hclsyntax.TokenEOF {
// The EOF token doesn't belong to any line, and terminates the
// token sequence.
lines[li].lead = tokens[lineStart:i]
break
}
if tokenIsNewline(tok) {
lines[li].lead = tokens[lineStart : i+1]
lineStart = i + 1
li++
}
}
// Now we'll pick off any trailing comments and attribute assignments
// to shuffle off into the "comment" and "assign" cells.
for i := range lines {
line := &lines[i]
if len(line.lead) == 0 {
// if the line is empty then there's nothing for us to do
// (this should happen only for the final line, because all other
// lines would have a newline token of some kind)
continue
}
if len(line.lead) > 1 && line.lead[len(line.lead)-1].Type == hclsyntax.TokenComment {
line.comment = line.lead[len(line.lead)-1:]
line.lead = line.lead[:len(line.lead)-1]
}
for i, tok := range line.lead {
if i > 0 && tok.Type == hclsyntax.TokenEqual {
// We only move the tokens into "assign" if the RHS seems to
// be a whole expression, which we determine by counting
// brackets. If there's a net positive number of brackets
// then that suggests we're introducing a multi-line expression.
netBrackets := 0
for _, token := range line.lead[i:] {
netBrackets += tokenBracketChange(token)
}
if netBrackets == 0 {
line.assign = line.lead[i:]
line.lead = line.lead[:i]
}
break
}
}
}
return lines
}
func tokenIsNewline(tok *Token) bool {
if tok.Type == hclsyntax.TokenNewline {
return true
} else if tok.Type == hclsyntax.TokenComment {
// Single line tokens (# and //) consume their terminating newline,
// so we need to treat them as newline tokens as well.
if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
return true
}
}
return false
}
func tokenBracketChange(tok *Token) int {
switch tok.Type {
case hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenOParen, hclsyntax.TokenTemplateControl, hclsyntax.TokenTemplateInterp:
return 1
case hclsyntax.TokenCBrace, hclsyntax.TokenCBrack, hclsyntax.TokenCParen, hclsyntax.TokenTemplateSeqEnd:
return -1
default:
return 0
}
}
// formatLine represents a single line of source code for formatting purposes,
// splitting its tokens into up to three "cells":
//
// lead: always present, representing everything up to one of the others
// assign: if line contains an attribute assignment, represents the tokens
// starting at (and including) the equals symbol
// comment: if line contains any non-comment tokens and ends with a
// single-line comment token, represents the comment.
//
// When formatting, the leading spaces of the first token in each of these
// cells are adjusted to align their occurrences vertically across
// consecutive rows.
type formatLine struct {
lead Tokens
assign Tokens
comment Tokens
}
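// exampleLine is a minimal sketch added for illustration, not part of the
// upstream file: it shows how a hypothetical source line `foo = 1 # x`
// maps onto the three cells described above.
func exampleLine() formatLine {
	tok := func(ty hclsyntax.TokenType, s string) *Token {
		return &Token{Type: ty, Bytes: []byte(s), SpacesBefore: 1}
	}
	return formatLine{
		lead:    Tokens{tok(hclsyntax.TokenIdent, "foo")},
		assign:  Tokens{tok(hclsyntax.TokenEqual, "="), tok(hclsyntax.TokenNumberLit, "1")},
		comment: Tokens{tok(hclsyntax.TokenComment, "# x\n")},
	}
}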


@ -0,0 +1,23 @@
package hclwrite
import (
"github.com/hashicorp/hcl2/hcl/hclsyntax"
)
type nativeNodeSorter struct {
Nodes []hclsyntax.Node
}
func (s nativeNodeSorter) Len() int {
return len(s.Nodes)
}
func (s nativeNodeSorter) Less(i, j int) bool {
rangeI := s.Nodes[i].Range()
rangeJ := s.Nodes[j].Range()
return rangeI.Start.Byte < rangeJ.Start.Byte
}
func (s nativeNodeSorter) Swap(i, j int) {
s.Nodes[i], s.Nodes[j] = s.Nodes[j], s.Nodes[i]
}

499
vendor/github.com/hashicorp/hcl2/hclwrite/parser.go generated vendored Normal file

@ -0,0 +1,499 @@
package hclwrite
import (
"sort"
"github.com/hashicorp/hcl2/hcl/hclsyntax"
"github.com/hashicorp/hcl2/hcl"
)
// Our "parser" here is actually not doing any parsing of its own. Instead,
// it leans on the native parser in hclsyntax, and then uses the source ranges
// from the AST to partition the raw token sequence to match the raw tokens
// up to AST nodes.
//
// This strategy feels somewhat counter-intuitive, since most of the work the
// parser does is thrown away here, but this strategy is chosen because the
// normal parsing work done by hclsyntax is considered to be the "main case",
// while modifying and re-printing source is more of an edge case, used only
// in ancillary tools, and so it's good to keep all the main parsing logic
// with the main case but keep all of the extra complexity of token wrangling
// out of the main parser, which is already rather complex just serving the
// use-cases it already serves.
//
// If the parsing step produces any errors, the returned File is nil because
// we can't reliably extract tokens from the partial AST produced by an
// erroneous parse.
func parse(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
file, diags := hclsyntax.ParseConfig(src, filename, start)
if diags.HasErrors() {
return nil, diags
}
// To do our work here, we use the "native" tokens (those from hclsyntax)
// to match against source ranges in the AST, but ultimately produce
// slices from our sequence of "writer" tokens, which contain only
// *relative* position information that is more appropriate for
// transformation/writing use-cases.
nativeTokens, diags := hclsyntax.LexConfig(src, filename, start)
if diags.HasErrors() {
// should never happen, since we would've caught these diags in
// the first call above.
return nil, diags
}
writerTokens := writerTokens(nativeTokens)
from := inputTokens{
nativeTokens: nativeTokens,
writerTokens: writerTokens,
}
before, root, after := parseBody(file.Body.(*hclsyntax.Body), from)
return &File{
Name: filename,
SrcBytes: src,
Body: root,
AllTokens: &TokenSeq{
before.Seq(),
root.AllTokens,
after.Seq(),
},
}, nil
}
type inputTokens struct {
nativeTokens hclsyntax.Tokens
writerTokens Tokens
}
func (it inputTokens) Partition(rng hcl.Range) (before, within, after inputTokens) {
start, end := partitionTokens(it.nativeTokens, rng)
before = it.Slice(0, start)
within = it.Slice(start, end)
after = it.Slice(end, len(it.nativeTokens))
return
}
// PartitionIncludingComments is like Partition except that the returned "within"
// range includes any lead and line comments associated with the range.
func (it inputTokens) PartitionIncludingComments(rng hcl.Range) (before, within, after inputTokens) {
start, end := partitionTokens(it.nativeTokens, rng)
start = partitionLeadCommentTokens(it.nativeTokens[:start])
_, afterNewline := partitionLineEndTokens(it.nativeTokens[end:])
end += afterNewline
before = it.Slice(0, start)
within = it.Slice(start, end)
after = it.Slice(end, len(it.nativeTokens))
return
}
// PartitionBlockItem is similar to PartitionIncludingComments but it returns
// the comments as separate token sequences so that they can be captured into
// AST attributes. It makes assumptions that apply only to block items, so
// should not be used for other constructs.
func (it inputTokens) PartitionBlockItem(rng hcl.Range) (before, leadComments, within, lineComments, newline, after inputTokens) {
before, within, after = it.Partition(rng)
before, leadComments = before.PartitionLeadComments()
lineComments, newline, after = after.PartitionLineEndTokens()
return
}
func (it inputTokens) PartitionLeadComments() (before, within inputTokens) {
start := partitionLeadCommentTokens(it.nativeTokens)
before = it.Slice(0, start)
within = it.Slice(start, len(it.nativeTokens))
return
}
func (it inputTokens) PartitionLineEndTokens() (comments, newline, after inputTokens) {
afterComments, afterNewline := partitionLineEndTokens(it.nativeTokens)
comments = it.Slice(0, afterComments)
newline = it.Slice(afterComments, afterNewline)
after = it.Slice(afterNewline, len(it.nativeTokens))
return
}
func (it inputTokens) Slice(start, end int) inputTokens {
// When we slice, we create a new slice with no additional capacity because
// we expect that these slices will be mutated in order to insert
// new code into the AST, and we want to ensure that a new underlying
// array gets allocated in that case, rather than writing into some
// following slice and corrupting it.
return inputTokens{
nativeTokens: it.nativeTokens[start:end:end],
writerTokens: it.writerTokens[start:end:end],
}
}
func (it inputTokens) Len() int {
return len(it.nativeTokens)
}
func (it inputTokens) Seq() *TokenSeq {
return &TokenSeq{it.writerTokens}
}
func (it inputTokens) Types() []hclsyntax.TokenType {
ret := make([]hclsyntax.TokenType, len(it.nativeTokens))
for i, tok := range it.nativeTokens {
ret[i] = tok.Type
}
return ret
}
// parseBody locates the given body within the given input tokens and returns
// the resulting *Body object as well as the tokens that appeared before and
// after it.
func parseBody(nativeBody *hclsyntax.Body, from inputTokens) (inputTokens, *Body, inputTokens) {
before, within, after := from.PartitionIncludingComments(nativeBody.SrcRange)
// The main AST doesn't retain the original source ordering of the
// body items, so we need to reconstruct that ordering by inspecting
// their source ranges.
nativeItems := make([]hclsyntax.Node, 0, len(nativeBody.Attributes)+len(nativeBody.Blocks))
for _, nativeAttr := range nativeBody.Attributes {
nativeItems = append(nativeItems, nativeAttr)
}
for _, nativeBlock := range nativeBody.Blocks {
nativeItems = append(nativeItems, nativeBlock)
}
sort.Sort(nativeNodeSorter{nativeItems})
body := &Body{
IndentLevel: 0, // TODO: deal with this
}
remain := within
for _, nativeItem := range nativeItems {
beforeItem, item, afterItem := parseBodyItem(nativeItem, remain)
if beforeItem.Len() > 0 {
body.AppendUnstructuredTokens(beforeItem.Seq())
}
body.AppendItem(item)
remain = afterItem
}
if remain.Len() > 0 {
body.AppendUnstructuredTokens(remain.Seq())
}
return before, body, after
}
func parseBodyItem(nativeItem hclsyntax.Node, from inputTokens) (inputTokens, Node, inputTokens) {
before, leadComments, within, lineComments, newline, after := from.PartitionBlockItem(nativeItem.Range())
var item Node
switch tItem := nativeItem.(type) {
case *hclsyntax.Attribute:
item = parseAttribute(tItem, within, leadComments, lineComments, newline)
case *hclsyntax.Block:
item = parseBlock(tItem, within, leadComments, lineComments, newline)
default:
// should never happen if caller is behaving
panic("unsupported native item type")
}
return before, item, after
}
func parseAttribute(nativeAttr *hclsyntax.Attribute, from, leadComments, lineComments, newline inputTokens) *Attribute {
var allTokens TokenSeq
attr := &Attribute{}
if leadComments.Len() > 0 {
attr.LeadCommentTokens = leadComments.Seq()
allTokens = append(allTokens, attr.LeadCommentTokens)
}
before, nameTokens, from := from.Partition(nativeAttr.NameRange)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
attr.NameTokens = nameTokens.Seq()
allTokens = append(allTokens, attr.NameTokens)
before, equalsTokens, from := from.Partition(nativeAttr.EqualsRange)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
attr.EqualsTokens = equalsTokens.Seq()
allTokens = append(allTokens, attr.EqualsTokens)
before, exprTokens, from := from.Partition(nativeAttr.Expr.Range())
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
attr.Expr = parseExpression(nativeAttr.Expr, exprTokens)
allTokens = append(allTokens, attr.Expr.AllTokens)
if lineComments.Len() > 0 {
attr.LineCommentTokens = lineComments.Seq()
allTokens = append(allTokens, attr.LineCommentTokens)
}
if newline.Len() > 0 {
attr.EOLTokens = newline.Seq()
allTokens = append(allTokens, attr.EOLTokens)
}
// Collect any stragglers, though there shouldn't be any
if from.Len() > 0 {
allTokens = append(allTokens, from.Seq())
}
attr.AllTokens = &allTokens
return attr
}
func parseBlock(nativeBlock *hclsyntax.Block, from, leadComments, lineComments, newline inputTokens) *Block {
var allTokens TokenSeq
block := &Block{}
if leadComments.Len() > 0 {
block.LeadCommentTokens = leadComments.Seq()
allTokens = append(allTokens, block.LeadCommentTokens)
}
before, typeTokens, from := from.Partition(nativeBlock.TypeRange)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
block.TypeTokens = typeTokens.Seq()
allTokens = append(allTokens, block.TypeTokens)
for _, rng := range nativeBlock.LabelRanges {
var labelTokens inputTokens
before, labelTokens, from = from.Partition(rng)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
seq := labelTokens.Seq()
block.LabelTokens = append(block.LabelTokens, seq)
*(block.LabelTokensFlat) = append(*(block.LabelTokensFlat), seq)
allTokens = append(allTokens, seq)
}
before, oBrace, from := from.Partition(nativeBlock.OpenBraceRange)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
block.OBraceTokens = oBrace.Seq()
allTokens = append(allTokens, block.OBraceTokens)
// We go a bit out of order here: we go hunting for the closing brace
// so that we have a delimited body, but then we'll deal with the body
// before we actually append the closing brace and any straggling tokens
// that appear after it.
bodyTokens, cBrace, from := from.Partition(nativeBlock.CloseBraceRange)
before, body, after := parseBody(nativeBlock.Body, bodyTokens)
if before.Len() > 0 {
allTokens = append(allTokens, before.Seq())
}
block.Body = body
allTokens = append(allTokens, body.AllTokens)
if after.Len() > 0 {
allTokens = append(allTokens, after.Seq())
}
block.CBraceTokens = cBrace.Seq()
allTokens = append(allTokens, block.CBraceTokens)
// stragglers
if from.Len() > 0 {
allTokens = append(allTokens, from.Seq())
}
if lineComments.Len() > 0 {
// blocks don't actually have line comments, so we'll just treat
// them as extra stragglers
allTokens = append(allTokens, lineComments.Seq())
}
if newline.Len() > 0 {
block.EOLTokens = newline.Seq()
allTokens = append(allTokens, block.EOLTokens)
}
block.AllTokens = &allTokens
return block
}
func parseExpression(nativeExpr hclsyntax.Expression, from inputTokens) *Expression {
// TODO: Populate VarRefs by analyzing the result of nativeExpr.Variables()
return &Expression{
AllTokens: from.Seq(),
}
}
// writerTokens takes a sequence of tokens as produced by the main hclsyntax
// package and transforms it into an equivalent sequence of tokens using
// this package's own token model.
//
// The resulting list contains the same number of tokens and uses the same
// indices as the input, allowing the two sets of tokens to be correlated
// by index.
func writerTokens(nativeTokens hclsyntax.Tokens) Tokens {
// Ultimately we want a slice of token _pointers_, but since we can
// predict how much memory we're going to devote to tokens we'll allocate
// it all as a single flat buffer and thus give the GC less work to do.
tokBuf := make([]Token, len(nativeTokens))
var lastByteOffset int
for i, mainToken := range nativeTokens {
// Create a copy of the bytes so that we can mutate without
// corrupting the original token stream.
bytes := make([]byte, len(mainToken.Bytes))
copy(bytes, mainToken.Bytes)
tokBuf[i] = Token{
Type: mainToken.Type,
Bytes: bytes,
// We assume here that spaces are always ASCII spaces, since
// that's what the scanner also assumes, and thus the number
// of bytes skipped is also the number of space characters.
SpacesBefore: mainToken.Range.Start.Byte - lastByteOffset,
}
lastByteOffset = mainToken.Range.End.Byte
}
// Now make a slice of pointers into the previous slice.
ret := make(Tokens, len(tokBuf))
for i := range ret {
ret[i] = &tokBuf[i]
}
return ret
}
// partitionTokens takes a sequence of tokens and a hcl.Range and returns
// two indices within the token sequence that correspond with the range
// boundaries, such that the slice operator could be used to produce
// three token sequences for before, within, and after respectively:
//
// start, end := partitionTokens(toks, rng)
// before := toks[:start]
// within := toks[start:end]
// after := toks[end:]
//
// This works best when the range is aligned with token boundaries (e.g.
// because it was produced in terms of the scanner's result) but if that isn't
// true then it will make a best effort that may produce strange results at
// the boundaries.
//
// Native hclsyntax tokens are used here, because they contain the necessary
// absolute position information. However, since writerTokens produces a
// correlatable sequence of writer tokens, the resulting indices can be
// used also to index into its result, allowing the partitioning of writer
// tokens to be driven by the partitioning of native tokens.
//
// The tokens are assumed to be in source order and non-overlapping, which
// will be true if the token sequence from the scanner is used directly.
func partitionTokens(toks hclsyntax.Tokens, rng hcl.Range) (start, end int) {
// We use a linear search here because we assume that in most cases our
// target range is close to the beginning of the sequence, and the sequences
// are generally small for most reasonable files anyway.
for i := 0; ; i++ {
if i >= len(toks) {
// No tokens for the given range at all!
return len(toks), len(toks)
}
if toks[i].Range.Start.Byte >= rng.Start.Byte {
start = i
break
}
}
for i := start; ; i++ {
if i >= len(toks) {
// The range "hangs off" the end of the token sequence
return start, len(toks)
}
if toks[i].Range.Start.Byte >= rng.End.Byte {
end = i // end marker is exclusive
break
}
}
return start, end
}
// partitionLeadCommentTokens takes a sequence of tokens that is assumed
// to immediately precede a construct that can have lead comment tokens,
// and returns the index into that sequence where the lead comments begin.
//
// Lead comments are defined as whole lines containing only comment tokens
// with no blank lines between. If no such lines are found, the returned
// index will be len(toks).
func partitionLeadCommentTokens(toks hclsyntax.Tokens) int {
// single-line comments (which is what we're interested in here)
// consume their trailing newline, so we can just walk backwards
// until we stop seeing comment tokens.
for i := len(toks) - 1; i >= 0; i-- {
if toks[i].Type != hclsyntax.TokenComment {
return i + 1
}
}
return 0
}
// partitionLineEndTokens takes a sequence of tokens that is assumed
// to immediately follow a construct that can have a line comment, and
// returns first the index where any line comments end and then second
// the index immediately after the trailing newline.
//
// Line comments are defined as comments that appear immediately after
// a construct on the same line where its significant tokens ended.
//
// Since single-line comment tokens (# and //) include the newline that
// terminates them, in the presence of these the two returned indices
// will be the same since the comment itself serves as the line end.
func partitionLineEndTokens(toks hclsyntax.Tokens) (afterComment, afterNewline int) {
for i := 0; i < len(toks); i++ {
tok := toks[i]
if tok.Type != hclsyntax.TokenComment {
switch tok.Type {
case hclsyntax.TokenNewline:
return i, i + 1
case hclsyntax.TokenEOF:
// Although this is valid, we mustn't include the EOF
// itself as our "newline" or else strange things will
// happen when we try to append new items.
return i, i
default:
// If we have well-formed input here then nothing else should be
// possible. This path should never happen, because we only try
// to extract tokens from the sequence if the parser succeeded,
// and it should catch this problem itself.
panic("malformed line trailers: expected only comments and newlines")
}
}
if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
// Newline at the end of a single-line comment serves both as
// the end of comments *and* the end of the line.
return i + 1, i + 1
}
}
return len(toks), len(toks)
}
// lexConfig uses the hclsyntax scanner to get a token stream and then
// rewrites it into this package's token model.
//
// Any errors produced during scanning are ignored, so the results of this
// function should be used with care.
func lexConfig(src []byte) Tokens {
mainTokens, _ := hclsyntax.LexConfig(src, "", hcl.Pos{Byte: 0, Line: 1, Column: 1})
return writerTokens(mainTokens)
}
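// lexExample is a minimal sketch added for illustration, not part of the
// upstream file: for well-formed input, lexConfig preserves token text
// exactly and records inter-token gaps as SpacesBefore.
func lexExample() Tokens {
	// The two spaces before "=" are recorded as SpacesBefore == 2 on the
	// "=" token.
	return lexConfig([]byte("a  = 1\n"))
}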

30
vendor/github.com/hashicorp/hcl2/hclwrite/public.go generated vendored Normal file

@ -0,0 +1,30 @@
package hclwrite
import (
"bytes"
"github.com/hashicorp/hcl2/hcl"
)
// ParseConfig interprets the given source bytes into a *hclwrite.File. The
// resulting AST can be used to perform surgical edits on the source code
// before turning it back into bytes again.
func ParseConfig(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
return parse(src, filename, start)
}
// Format takes source code and performs simple whitespace changes to transform
// it to a canonical layout style.
//
// Format skips constructing an AST and works directly with tokens, so it
// is less expensive than formatting via the AST for situations where no other
// changes will be made. It also ignores syntax errors and can thus be applied
// to partial source code, although the result in that case may not be
// desirable.
func Format(src []byte) []byte {
tokens := lexConfig(src)
format(tokens)
buf := &bytes.Buffer{}
(&TokenSeq{tokens}).WriteTo(buf)
return buf.Bytes()
}
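// formatExample is a minimal sketch added for illustration, not part of the
// upstream file: it shows the intended use of Format on raw source bytes.
// The input HCL is hypothetical.
func formatExample() []byte {
	src := []byte("a   =   1\n")
	// Inter-token spacing is normalized, so the result reads "a = 1\n".
	return Format(src)
}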

170
vendor/github.com/hashicorp/hcl2/hclwrite/tokens.go generated vendored Normal file

@ -0,0 +1,170 @@
package hclwrite
import (
"bytes"
"io"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/hashicorp/hcl2/hcl/hclsyntax"
)
// TokenGen is an abstract type that can append tokens to a list. It is the
// low-level foundation underlying the hclwrite AST; the AST provides a
// convenient abstraction over raw token sequences to facilitate common tasks,
// but it's also possible to directly manipulate the tree of token generators
// to make changes that the AST API doesn't directly allow.
type TokenGen interface {
EachToken(TokenCallback)
}
// TokenCallback is used with TokenGen implementations to specify the action
// that is to be taken for each token in the flattened token sequence.
type TokenCallback func(*Token)
// Token is a single sequence of bytes annotated with a type. It is similar
// in purpose to hclsyntax.Token, but discards the source position information
// since that is not useful in code generation.
type Token struct {
Type hclsyntax.TokenType
Bytes []byte
// We record the number of spaces before each token so that we can
// reproduce the exact layout of the original file when we're making
// surgical changes in-place. When _new_ code is created it will always
// be in the canonical style, but we preserve layout of existing code.
SpacesBefore int
}
// Tokens is a flat list of tokens.
type Tokens []*Token
func (ts Tokens) WriteTo(wr io.Writer) (int, error) {
seq := &TokenSeq{ts}
return seq.WriteTo(wr)
}
func (ts Tokens) Bytes() []byte {
buf := &bytes.Buffer{}
ts.WriteTo(buf)
return buf.Bytes()
}
// Columns returns the number of columns (grapheme clusters) the token sequence
// occupies. The result is not meaningful if there are newline or single-line
// comment tokens in the sequence.
func (ts Tokens) Columns() int {
ret := 0
for _, token := range ts {
ret += token.SpacesBefore // spaces are always worth one column each
ct, _ := textseg.TokenCount(token.Bytes, textseg.ScanGraphemeClusters)
ret += ct
}
return ret
}
// TokenSeq combines zero or more TokenGens together to produce a flat sequence
// of tokens from a tree of TokenGens.
type TokenSeq []TokenGen
func (t *Token) EachToken(cb TokenCallback) {
cb(t)
}
func (ts Tokens) EachToken(cb TokenCallback) {
for _, t := range ts {
cb(t)
}
}
func (ts *TokenSeq) EachToken(cb TokenCallback) {
if ts == nil {
return
}
for _, gen := range *ts {
gen.EachToken(cb)
}
}
// Tokens returns the flat list of tokens represented by the receiving
// token sequence.
func (ts *TokenSeq) Tokens() Tokens {
var tokens Tokens
ts.EachToken(func(token *Token) {
tokens = append(tokens, token)
})
return tokens
}
// WriteTo takes an io.Writer and writes the bytes for each token to it,
// along with the spacing that separates each token. In other words, this
// allows serializing the tokens to a file or other such byte stream.
func (ts *TokenSeq) WriteTo(wr io.Writer) (int, error) {
// We know we're going to be writing a lot of small chunks of repeated
// space characters, so we'll prepare a buffer of these that we can
// easily pass to wr.Write without any further allocation.
spaces := make([]byte, 40)
for i := range spaces {
spaces[i] = ' '
}
var n int
var err error
ts.EachToken(func(token *Token) {
if err != nil {
return
}
for spacesBefore := token.SpacesBefore; spacesBefore > 0; spacesBefore -= len(spaces) {
thisChunk := spacesBefore
if thisChunk > len(spaces) {
thisChunk = len(spaces)
}
var thisN int
thisN, err = wr.Write(spaces[:thisChunk])
n += thisN
if err != nil {
return
}
}
var thisN int
thisN, err = wr.Write(token.Bytes)
n += thisN
})
return n, err
}
// SoloToken returns the single token represented by the receiving sequence,
// or nil if the sequence does not represent exactly one token.
func (ts *TokenSeq) SoloToken() *Token {
var ret *Token
found := false
ts.EachToken(func(tok *Token) {
if ret == nil && !found {
ret = tok
found = true
} else if ret != nil && found {
ret = nil
}
})
return ret
}
// IsIdent returns true if and only if the token sequence represents a single
// ident token whose name matches the given string.
func (ts *TokenSeq) IsIdent(name []byte) bool {
tok := ts.SoloToken()
if tok == nil {
return false
}
if tok.Type != hclsyntax.TokenIdent {
return false
}
return bytes.Equal(tok.Bytes, name)
}
// TokenSeqEmpty is a TokenSeq that contains no tokens. It can be used anywhere,
// but its primary purpose is to be assigned as a replacement for a non-empty
// TokenSeq when eliminating a section of an input file.
var TokenSeqEmpty = TokenSeq([]TokenGen(nil))
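// seqExample is a minimal sketch added for illustration, not part of the
// upstream file: it shows the TokenGen tree being flattened. *Token, Tokens
// and *TokenSeq all implement EachToken, so they can nest freely.
func seqExample() Tokens {
	a := &Token{Type: hclsyntax.TokenIdent, Bytes: []byte("a")}
	eq := &Token{Type: hclsyntax.TokenEqual, Bytes: []byte("="), SpacesBefore: 1}
	inner := &TokenSeq{Tokens{eq}}
	// Tokens flattens the nested generators in order: ident "a", then "=".
	return (&TokenSeq{a, inner}).Tokens()
}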

21
vendor/github.com/mitchellh/go-wordwrap/LICENSE.md generated vendored Normal file

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Mitchell Hashimoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

39
vendor/github.com/mitchellh/go-wordwrap/README.md generated vendored Normal file

@ -0,0 +1,39 @@
# go-wordwrap
`go-wordwrap` (Golang package: `wordwrap`) is a package for Go that
automatically wraps words into multiple lines. The primary use case for this
is in formatting CLI output, but of course word wrapping is a generally useful
thing to do.
## Installation and Usage
Install using `go get github.com/mitchellh/go-wordwrap`.
Full documentation is available at
http://godoc.org/github.com/mitchellh/go-wordwrap
Below is an example of its usage:
```go
wrapped := wordwrap.WrapString("foo bar baz", 3)
fmt.Println(wrapped)
```
Would output:
```
foo
bar
baz
```
## Word Wrap Algorithm
This library doesn't use any clever algorithm for word wrapping. The wrapping
is very naive: it breaks lines only at whitespace or an explicit linebreak.
The goal of this library is for word wrapping CLI output, so the input is
typically pretty well controlled human language. Because of this, the naive
approach typically works just fine.
In the future, we'd like to make the algorithm more advanced. We would do
so without breaking the API.

73
vendor/github.com/mitchellh/go-wordwrap/wordwrap.go generated vendored Normal file

@ -0,0 +1,73 @@
package wordwrap
import (
"bytes"
"unicode"
)
// WrapString wraps the given string within lim width in characters.
//
// Wrapping is currently naive and only happens at white-space. A future
// version of the library will implement smarter wrapping. This means that
// pathological cases can dramatically reach past the limit, such as a very
// long word.
func WrapString(s string, lim uint) string {
// Initialize a buffer sized to the input; wrapping replaces whitespace
// with newlines, so the output never grows beyond the input.
init := make([]byte, 0, len(s))
buf := bytes.NewBuffer(init)
var current uint
var wordBuf, spaceBuf bytes.Buffer
for _, char := range s {
if char == '\n' {
if wordBuf.Len() == 0 {
if current+uint(spaceBuf.Len()) > lim {
current = 0
} else {
current += uint(spaceBuf.Len())
spaceBuf.WriteTo(buf)
}
spaceBuf.Reset()
} else {
current += uint(spaceBuf.Len() + wordBuf.Len())
spaceBuf.WriteTo(buf)
spaceBuf.Reset()
wordBuf.WriteTo(buf)
wordBuf.Reset()
}
buf.WriteRune(char)
current = 0
} else if unicode.IsSpace(char) {
if spaceBuf.Len() == 0 || wordBuf.Len() > 0 {
current += uint(spaceBuf.Len() + wordBuf.Len())
spaceBuf.WriteTo(buf)
spaceBuf.Reset()
wordBuf.WriteTo(buf)
wordBuf.Reset()
}
spaceBuf.WriteRune(char)
} else {
wordBuf.WriteRune(char)
if current+uint(spaceBuf.Len()+wordBuf.Len()) > lim && uint(wordBuf.Len()) < lim {
buf.WriteRune('\n')
current = 0
spaceBuf.Reset()
}
}
}
if wordBuf.Len() == 0 {
if current+uint(spaceBuf.Len()) <= lim {
spaceBuf.WriteTo(buf)
}
} else {
spaceBuf.WriteTo(buf)
wordBuf.WriteTo(buf)
}
return buf.String()
}
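// wrapExample is a minimal sketch added for illustration, not part of the
// upstream file: with lim = 3, every space is a break opportunity, so the
// input below wraps onto three lines.
func wrapExample() string {
	return WrapString("foo bar baz", 3) // "foo\nbar\nbaz"
}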

89
vendor/github.com/zclconf/go-cty/cty/capsule.go generated vendored Normal file

@ -0,0 +1,89 @@
package cty
import (
"fmt"
"reflect"
)
type capsuleType struct {
typeImplSigil
Name string
GoType reflect.Type
}
func (t *capsuleType) Equals(other Type) bool {
if otherP, ok := other.typeImpl.(*capsuleType); ok {
// capsule types compare by pointer identity
return otherP == t
}
return false
}
func (t *capsuleType) FriendlyName() string {
return t.Name
}
func (t *capsuleType) GoString() string {
// To get a useful representation of our native type requires some
// shenanigans.
victimVal := reflect.Zero(t.GoType)
return fmt.Sprintf("cty.Capsule(%q, reflect.TypeOf(%#v))", t.Name, victimVal.Interface())
}
// Capsule creates a new Capsule type.
//
// A Capsule type is a special type that can be used to transport arbitrary
// Go native values of a given type through the cty type system. A language
// that uses cty as its type system might, for example, provide functions
// that return capsule-typed values and then other functions that operate
// on those values.
//
// From cty's perspective, Capsule types have a few interesting characteristics,
// described in the following paragraphs.
//
// Each capsule type has an associated Go native type that it is able to
// transport. Capsule types compare by identity, so each call to the
// Capsule function creates an entirely-distinct cty Type, even if two calls
// use the same native type.
//
// Each capsule-typed value contains a pointer to a value of the given native
// type. A capsule-typed value supports no operations except equality, and
// equality is implemented by pointer identity of the encapsulated pointer.
//
// The given name is used as the new type's "friendly name". This can be any
// string in principle, but will usually be a short, all-lowercase name aimed
// at users of the embedding language (i.e. not mentioning Go-specific details)
// and will ideally not create ambiguity with any predefined cty type.
//
// Capsule types are never introduced by any standard cty operation, so a
// calling application opts in to including them within its own type system
// by creating them and introducing them via its own functions. At that point,
// the application is responsible for dealing with any capsule-typed values
// that might be returned.
func Capsule(name string, nativeType reflect.Type) Type {
return Type{
&capsuleType{
Name: name,
GoType: nativeType,
},
}
}
// IsCapsuleType returns true if this type is a capsule type, as created
// by cty.Capsule .
func (t Type) IsCapsuleType() bool {
_, ok := t.typeImpl.(*capsuleType)
return ok
}
// EncapsulatedType returns the encapsulated native type of a capsule type,
// or panics if the receiver is not a Capsule type.
//
// Use IsCapsuleType to determine whether this method is safe to call.
func (t Type) EncapsulatedType() reflect.Type {
impl, ok := t.typeImpl.(*capsuleType)
if !ok {
panic("not a capsule type")
}
return impl.GoType
}
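// capsuleExample is a minimal sketch added for illustration, not part of the
// upstream file: it demonstrates that capsule types compare by identity, so
// two Capsule calls with the same native type still produce distinct types.
func capsuleExample() bool {
	type widget struct{ name string }
	t1 := Capsule("widget", reflect.TypeOf(widget{}))
	t2 := Capsule("widget", reflect.TypeOf(widget{}))
	// false: each Capsule call creates an entirely-distinct type.
	return t1.Equals(t2)
}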

34
vendor/github.com/zclconf/go-cty/cty/collection.go generated vendored Normal file

@ -0,0 +1,34 @@
package cty
import (
"errors"
)
type collectionTypeImpl interface {
ElementType() Type
}
// IsCollectionType returns true if the given type supports the operations
// that are defined for all collection types.
func (t Type) IsCollectionType() bool {
_, ok := t.typeImpl.(collectionTypeImpl)
return ok
}
// ElementType returns the element type of the receiver if it is a collection
// type, or panics if it is not. Use IsCollectionType first to test whether
// this method will succeed.
func (t Type) ElementType() Type {
if ct, ok := t.typeImpl.(collectionTypeImpl); ok {
return ct.ElementType()
}
panic(errors.New("not a collection type"))
}
// ElementCallback is a callback type used for iterating over elements of
// collections and attributes of objects.
//
// The types of key and value depend on what type is being iterated over.
// Return true to stop iterating after the current element, or false to
// continue iterating.
type ElementCallback func(key Value, val Value) (stop bool)
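// elementTypeExample is a minimal sketch added for illustration, not part of
// the upstream file: collection types expose a single element type, while
// calling ElementType on a non-collection type would panic.
func elementTypeExample() Type {
	return List(String).ElementType() // String
}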


@ -0,0 +1,165 @@
package convert
import (
"github.com/zclconf/go-cty/cty"
)
// compareTypes implements a preference order for unification.
//
// The result of this method is not useful for anything other than unification
// preferences, since it assumes that the caller will verify that any suggested
// conversion is actually possible and it is thus able to make certain
// optimistic assumptions.
func compareTypes(a cty.Type, b cty.Type) int {
// DynamicPseudoType always has lowest preference, because anything can
// convert to it (it acts as a placeholder for "any type") and we want
// to optimistically assume that any dynamics will converge on matching
// their neighbors.
if a == cty.DynamicPseudoType || b == cty.DynamicPseudoType {
if a != cty.DynamicPseudoType {
return -1
}
if b != cty.DynamicPseudoType {
return 1
}
return 0
}
if a.IsPrimitiveType() && b.IsPrimitiveType() {
// String is a supertype of all primitive types, because we can
// represent all primitive values as specially-formatted strings.
if a == cty.String || b == cty.String {
if a != cty.String {
return 1
}
if b != cty.String {
return -1
}
return 0
}
}
if a.IsListType() && b.IsListType() {
return compareTypes(a.ElementType(), b.ElementType())
}
if a.IsSetType() && b.IsSetType() {
return compareTypes(a.ElementType(), b.ElementType())
}
if a.IsMapType() && b.IsMapType() {
return compareTypes(a.ElementType(), b.ElementType())
}
// From this point on we may have swapped the two items in order to
// simplify our cases. Therefore any non-zero return after this point
// must be multiplied by "swap" to potentially invert the return value
// if needed.
swap := 1
switch {
case a.IsTupleType() && b.IsListType():
fallthrough
case a.IsObjectType() && b.IsMapType():
fallthrough
case a.IsSetType() && b.IsTupleType():
fallthrough
case a.IsSetType() && b.IsListType():
a, b = b, a
swap = -1
}
if b.IsSetType() && (a.IsTupleType() || a.IsListType()) {
// We'll just optimistically assume that the element types are
// unifyable/convertible, and let a second recursive pass
// figure out how to make that so.
return -1 * swap
}
if a.IsListType() && b.IsTupleType() {
// We'll just optimistically assume that the tuple's element types
// can be unified into something compatible with the list's element
// type.
return -1 * swap
}
if a.IsMapType() && b.IsObjectType() {
// We'll just optimistically assume that the object's attribute types
// can be unified into something compatible with the map's element
// type.
return -1 * swap
}
// For object and tuple types, comparing two types doesn't really tell
// the whole story because it may be possible to construct a new type C
// that is the supertype of both A and B by unifying each attribute/element
// separately. That possibility is handled by Unify as a follow-up if
// type sorting is insufficient to produce a valid result.
//
// Here we will take care of the simple possibilities where no new type
// is needed.
if a.IsObjectType() && b.IsObjectType() {
atysA := a.AttributeTypes()
atysB := b.AttributeTypes()
if len(atysA) != len(atysB) {
return 0
}
hasASuper := false
hasBSuper := false
for k := range atysA {
if _, has := atysB[k]; !has {
return 0
}
cmp := compareTypes(atysA[k], atysB[k])
if cmp < 0 {
hasASuper = true
} else if cmp > 0 {
hasBSuper = true
}
}
switch {
case hasASuper && hasBSuper:
return 0
case hasASuper:
return -1 * swap
case hasBSuper:
return 1 * swap
default:
return 0
}
}
if a.IsTupleType() && b.IsTupleType() {
etysA := a.TupleElementTypes()
etysB := b.TupleElementTypes()
if len(etysA) != len(etysB) {
return 0
}
hasASuper := false
hasBSuper := false
for i := range etysA {
cmp := compareTypes(etysA[i], etysB[i])
if cmp < 0 {
hasASuper = true
} else if cmp > 0 {
hasBSuper = true
}
}
switch {
case hasASuper && hasBSuper:
return 0
case hasASuper:
return -1 * swap
case hasBSuper:
return 1 * swap
default:
return 0
}
}
return 0
}
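// compareExample is a minimal sketch added for illustration, not part of the
// upstream file: String is the supertype of the primitive types, so comparing
// Number against String yields a positive result, marking String as the more
// general of the two for unification purposes.
func compareExample() int {
	return compareTypes(cty.Number, cty.String) // 1
}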


@ -0,0 +1,95 @@
package convert
import (
"github.com/zclconf/go-cty/cty"
)
// conversion is an internal variant of Conversion that carries around
// a cty.Path to be used in error responses.
type conversion func(cty.Value, cty.Path) (cty.Value, error)
func getConversion(in cty.Type, out cty.Type, unsafe bool) conversion {
conv := getConversionKnown(in, out, unsafe)
if conv == nil {
return nil
}
// Wrap the conversion in some standard checks that we don't want to
// have to repeat in every conversion function.
return func(in cty.Value, path cty.Path) (cty.Value, error) {
if !in.IsKnown() {
return cty.UnknownVal(out), nil
}
if in.IsNull() {
// We'll pass through nulls, albeit type converted, and let
// the caller deal with whatever handling they want to do in
// case null values are considered valid in some applications.
return cty.NullVal(out), nil
}
return conv(in, path)
}
}
func getConversionKnown(in cty.Type, out cty.Type, unsafe bool) conversion {
switch {
case out == cty.DynamicPseudoType:
// Conversion *to* DynamicPseudoType means that the caller wishes
// to allow any type in this position, so we'll produce a do-nothing
// conversion that just passes through the value as-is.
return dynamicPassthrough
case unsafe && in == cty.DynamicPseudoType:
// Conversion *from* DynamicPseudoType means that we have a value
// whose type isn't yet known during type checking. For these we will
// assume that conversion will succeed and deal with any errors that
// result (which is why we can only do this when "unsafe" is set).
return dynamicFixup(out)
case in.IsPrimitiveType() && out.IsPrimitiveType():
conv := primitiveConversionsSafe[in][out]
if conv != nil {
return conv
}
if unsafe {
return primitiveConversionsUnsafe[in][out]
}
return nil
case out.IsListType() && (in.IsListType() || in.IsSetType()):
inEty := in.ElementType()
outEty := out.ElementType()
if inEty.Equals(outEty) {
// This indicates that we're converting from a list or set to a
// list with the same element type, so we don't need an element
// converter.
return conversionCollectionToList(outEty, nil)
}
convEty := getConversion(inEty, outEty, unsafe)
if convEty == nil {
return nil
}
return conversionCollectionToList(outEty, convEty)
case out.IsListType() && in.IsTupleType():
outEty := out.ElementType()
return conversionTupleToList(in, outEty, unsafe)
default:
return nil
}
}
// retConversion wraps a conversion (internal type) so it can be returned
// as a Conversion (public type).
func retConversion(conv conversion) Conversion {
if conv == nil {
return nil
}
return func(in cty.Value) (cty.Value, error) {
return conv(in, cty.Path(nil))
}
}


@ -0,0 +1,115 @@
package convert
import (
"github.com/zclconf/go-cty/cty"
)
// conversionCollectionToList returns a conversion that will apply the given
// conversion to all of the elements of a collection (something that supports
// ForEachElement and LengthInt) and then returns the result as a list.
//
// "conv" can be nil if the elements are expected to already be of the
// correct type and just need to be re-wrapped into a list. (For example,
// if we're converting from a set into a list of the same element type.)
func conversionCollectionToList(ety cty.Type, conv conversion) conversion {
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, val.LengthInt())
i := int64(0)
path = append(path, nil)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
if conv != nil {
val, err = conv(val, path)
if err != nil {
return cty.NilVal, err
}
}
elems = append(elems, val)
i++
}
if len(elems) == 0 {
return cty.ListValEmpty(ety), nil
}
return cty.ListVal(elems), nil
}
}
// conversionTupleToList returns a conversion that will take a value of the
// given tuple type and return a list of the given element type.
//
// Will panic if the given tupleType isn't actually a tuple type.
func conversionTupleToList(tupleType cty.Type, listEty cty.Type, unsafe bool) conversion {
tupleEtys := tupleType.TupleElementTypes()
if len(tupleEtys) == 0 {
// Empty tuple short-circuit
return func(val cty.Value, path cty.Path) (cty.Value, error) {
return cty.ListValEmpty(listEty), nil
}
}
if listEty == cty.DynamicPseudoType {
// This is a special case where the caller wants us to find
// a suitable single type that all elements can convert to, if
// possible.
listEty, _ = unify(tupleEtys, unsafe)
if listEty == cty.NilType {
return nil
}
}
elemConvs := make([]conversion, len(tupleEtys))
for i, tupleEty := range tupleEtys {
if tupleEty.Equals(listEty) {
// no conversion required
continue
}
elemConvs[i] = getConversion(tupleEty, listEty, unsafe)
if elemConvs[i] == nil {
// If any of our element conversions are impossible, then our
// whole conversion is impossible.
return nil
}
}
// If we fall out here then a conversion is possible, using the
// element conversions in elemConvs
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, len(elemConvs))
path = append(path, nil)
i := int64(0)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
conv := elemConvs[i]
if conv != nil {
val, err = conv(val, path)
if err != nil {
return cty.NilVal, err
}
}
elems = append(elems, val)
i++
}
return cty.ListVal(elems), nil
}
}


@ -0,0 +1,33 @@
package convert
import (
"github.com/zclconf/go-cty/cty"
)
// dynamicFixup deals with just-in-time conversions of values that were
// input-typed as cty.DynamicPseudoType during analysis, ensuring that
// we end up with the desired output type once the value is known, or
// failing with an error if that is not possible.
//
// This is in the spirit of the cty philosophy of optimistically assuming that
// DynamicPseudoType values will become the intended value eventually, and
// dealing with any inconsistencies during final evaluation.
func dynamicFixup(wantType cty.Type) conversion {
return func(in cty.Value, path cty.Path) (cty.Value, error) {
ret, err := Convert(in, wantType)
if err != nil {
// Re-wrap this error so that the returned path is relative
// to the caller's original value, rather than relative to our
// conversion value here.
return cty.NilVal, path.NewError(err)
}
return ret, nil
}
}
// dynamicPassthrough is an identity conversion that is used when the
// target type is DynamicPseudoType, indicating that the caller doesn't care
// which type is returned.
func dynamicPassthrough(in cty.Value, path cty.Path) (cty.Value, error) {
return in, nil
}


@ -0,0 +1,50 @@
package convert
import (
"math/big"
"github.com/zclconf/go-cty/cty"
)
var stringTrue = cty.StringVal("true")
var stringFalse = cty.StringVal("false")
var primitiveConversionsSafe = map[cty.Type]map[cty.Type]conversion{
cty.Number: {
cty.String: func(val cty.Value, path cty.Path) (cty.Value, error) {
f := val.AsBigFloat()
return cty.StringVal(f.Text('f', -1)), nil
},
},
cty.Bool: {
cty.String: func(val cty.Value, path cty.Path) (cty.Value, error) {
if val.True() {
return stringTrue, nil
} else {
return stringFalse, nil
}
},
},
}
var primitiveConversionsUnsafe = map[cty.Type]map[cty.Type]conversion{
cty.String: {
cty.Number: func(val cty.Value, path cty.Path) (cty.Value, error) {
f, _, err := (&big.Float{}).Parse(val.AsString(), 10)
if err != nil {
return cty.NilVal, path.NewErrorf("a number is required")
}
return cty.NumberVal(f), nil
},
cty.Bool: func(val cty.Value, path cty.Path) (cty.Value, error) {
switch val.AsString() {
case "true", "1":
return cty.True, nil
case "false", "0":
return cty.False, nil
default:
return cty.NilVal, path.NewErrorf("a bool is required")
}
},
},
}
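// primitiveExample is a minimal sketch added for illustration, not part of
// the upstream file: string-to-bool is an unsafe conversion that accepts
// only the spellings handled above.
func primitiveExample() (cty.Value, error) {
	conv := primitiveConversionsUnsafe[cty.String][cty.Bool]
	// "1" is accepted and yields cty.True; any other string is an error.
	return conv(cty.StringVal("1"), nil)
}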

15
vendor/github.com/zclconf/go-cty/cty/convert/doc.go generated vendored Normal file

@ -0,0 +1,15 @@
// Package convert contains some routines for converting between cty types.
// The intent of providing this package is to encourage applications using
// cty to have consistent type conversion behavior for maximal interoperability
// when Values pass from one application to another.
//
// The conversions are categorized into two categories. "Safe" conversions are
// ones that are guaranteed to succeed if given a non-null value of the
// appropriate source type. "Unsafe" conversions, on the other hand, are valid
// for only a subset of input values, and thus may fail with an error when
// called for values outside of that valid subset.
//
// The functions whose names end in Unsafe support all of the conversions that
// are supported by the corresponding functions whose names do not have that
// suffix, and then additional unsafe conversions as well.
package convert

83
vendor/github.com/zclconf/go-cty/cty/convert/public.go generated vendored Normal file

@ -0,0 +1,83 @@
package convert
import (
"fmt"
"github.com/zclconf/go-cty/cty"
)
// This file contains the public interface of this package, which is intended
// to be a small, convenient interface designed for easy integration into
// a hypothetical language type checker and interpreter.
// Conversion is a named function type representing a conversion from a
// value of one type to a value of another type.
//
// The source type for a conversion is always the source type given to
// the function that returned the Conversion, but there is no way to recover
// that from a Conversion value itself. If a Conversion is given a value
// that is not of its expected type (with the exception of DynamicPseudoType,
// which is always supported) then the function may panic or produce undefined
// results.
type Conversion func(in cty.Value) (out cty.Value, err error)
// GetConversion returns a Conversion between the given in and out Types if
// a safe one is available, or returns nil otherwise.
func GetConversion(in cty.Type, out cty.Type) Conversion {
return retConversion(getConversion(in, out, false))
}
// GetConversionUnsafe returns a Conversion between the given in and out Types
// if either a safe or unsafe one is available, or returns nil otherwise.
func GetConversionUnsafe(in cty.Type, out cty.Type) Conversion {
return retConversion(getConversion(in, out, true))
}
// Convert returns the result of converting the given value to the given type
// if a safe or unsafe conversion is available, or returns an error if such a
// conversion is impossible.
//
// This is a convenience wrapper around calling GetConversionUnsafe and then
// immediately passing the given value to the resulting function.
func Convert(in cty.Value, want cty.Type) (cty.Value, error) {
if in.Type().Equals(want) {
return in, nil
}
conv := GetConversionUnsafe(in.Type(), want)
if conv == nil {
return cty.NilVal, fmt.Errorf("incorrect type; %s required", want.FriendlyName())
}
return conv(in)
}
// Unify attempts to find the most general type that can be converted from
// all of the given types. If this is possible, that type is returned along
// with a slice of necessary conversions for some of the given types.
//
// If no common supertype can be found, this function returns cty.NilType and
// a nil slice.
//
// If a common supertype *can* be found, the returned slice will always be
// non-nil and will contain a non-nil conversion for each given type that
// needs to be converted, with indices corresponding to the input slice.
// Any given type that does *not* need conversion (because it is already of
// the appropriate type) will have a nil Conversion.
//
// cty.DynamicPseudoType is, as usual, a special case. If the given type list
// contains a mixture of dynamic and non-dynamic types, the dynamic types are
// disregarded for type selection and a conversion is returned for them that
// will attempt a late conversion of the given value to the target type,
// failing with a conversion error if the eventual concrete type is not
// compatible. If *all* given types are DynamicPseudoType, or in the
// degenerate case of an empty slice of types, the returned type is itself
// cty.DynamicPseudoType and no conversions are attempted.
func Unify(types []cty.Type) (cty.Type, []Conversion) {
return unify(types, false)
}
// UnifyUnsafe is the same as Unify except that it may return unsafe
// conversions in situations where a safe conversion isn't also available.
func UnifyUnsafe(types []cty.Type) (cty.Type, []Conversion) {
return unify(types, true)
}
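// convertExample is a minimal sketch added for illustration, not part of the
// upstream file: Convert falls back to unsafe conversions, so a numeric
// string converts to Number, while a non-numeric one would return an error.
func convertExample() (cty.Value, error) {
	return Convert(cty.StringVal("12"), cty.Number)
}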

Some files were not shown because too many files have changed in this diff.