terraform/builtin/providers/circonus/resource_circonus_graph.go

package circonus
import (
"fmt"
"regexp"
"strconv"
"strings"
"github.com/circonus-labs/circonus-gometrics/api"
"github.com/circonus-labs/circonus-gometrics/api/config"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/helper/schema"
)
const (
// circonus_graph.* resource attribute names
graphDescriptionAttr = "description"
graphLeftAttr = "left"
graphLineStyleAttr = "line_style"
graphMetricClusterAttr = "metric_cluster"
graphNameAttr = "name"
graphNotesAttr = "notes"
graphRightAttr = "right"
graphMetricAttr = "metric"
graphStyleAttr = "graph_style"
graphTagsAttr = "tags"
// circonus_graph.metric.* resource attribute names
graphMetricActiveAttr = "active"
graphMetricAlphaAttr = "alpha"
graphMetricAxisAttr = "axis"
graphMetricCAQLAttr = "caql"
graphMetricCheckAttr = "check"
graphMetricColorAttr = "color"
graphMetricFormulaAttr = "formula"
graphMetricFormulaLegendAttr = "legend_formula"
graphMetricFunctionAttr = "function"
graphMetricHumanNameAttr = "name"
graphMetricMetricTypeAttr = "metric_type"
graphMetricNameAttr = "metric_name"
graphMetricStackAttr = "stack"
// circonus_graph.metric_cluster.* resource attribute names
graphMetricClusterActiveAttr = "active"
graphMetricClusterAggregateAttr = "aggregate"
graphMetricClusterAxisAttr = "axis"
graphMetricClusterColorAttr = "color"
graphMetricClusterQueryAttr = "query"
graphMetricClusterHumanNameAttr = "name"
// circonus_graph.{left,right}.* resource attribute names
graphAxisLogarithmicAttr = "logarithmic"
graphAxisMaxAttr = "max"
graphAxisMinAttr = "min"
)
const (
apiGraphStyleLine = "line"
)
var graphDescriptions = attrDescrs{
// circonus_graph.* resource attribute names
graphDescriptionAttr: "",
graphLeftAttr: "",
graphLineStyleAttr: "How the line should change between point. A string containing either 'stepped', 'interpolated' or null.",
graphNameAttr: "",
graphNotesAttr: "",
graphRightAttr: "",
graphMetricAttr: "",
graphMetricClusterAttr: "",
graphStyleAttr: "",
graphTagsAttr: "",
}
var graphMetricDescriptions = attrDescrs{
// circonus_graph.metric.* resource attribute names
graphMetricActiveAttr: "",
graphMetricAlphaAttr: "",
graphMetricAxisAttr: "",
graphMetricCAQLAttr: "",
graphMetricCheckAttr: "",
graphMetricColorAttr: "",
graphMetricFormulaAttr: "",
graphMetricFormulaLegendAttr: "",
graphMetricFunctionAttr: "",
graphMetricMetricTypeAttr: "",
graphMetricHumanNameAttr: "",
graphMetricNameAttr: "",
graphMetricStackAttr: "",
}
var graphMetricClusterDescriptions = attrDescrs{
// circonus_graph.metric_cluster.* resource attribute names
graphMetricClusterActiveAttr: "",
graphMetricClusterAggregateAttr: "",
graphMetricClusterAxisAttr: "",
graphMetricClusterColorAttr: "",
graphMetricClusterQueryAttr: "",
graphMetricClusterHumanNameAttr: "",
}
// NOTE(sean@): There is no way to set a description on map inputs, but if that
// ever becomes possible:
//
// var graphMetricAxisOptionDescriptions = attrDescrs{
// // circonus_graph.if.value.over.* resource attribute names
// graphAxisLogarithmicAttr: "",
// graphAxisMaxAttr: "",
// graphAxisMinAttr: "",
// }
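
// resourceGraph returns the schema.Resource for circonus_graph. It wires up
// the CRUD, Exists, and Importer handlers and declares the top-level graph
// attributes along with the nested metric and metric_cluster blocks.
//
// A rough illustration of how this schema maps to HCL (the values below are
// placeholders, not working identifiers):
//
//	resource "circonus_graph" "example" {
//	  name        = "Example Graph"
//	  description = "CPU usage for the example check"
//	  graph_style = "line"
//	  line_style  = "stepped"
//
//	  metric {
//	    check       = "/check/12345"
//	    metric_name = "cpu_used"
//	    metric_type = "numeric"
//	    axis        = "left"
//	  }
//	}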
func resourceGraph() *schema.Resource {
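// makeConflictsWith prefixes each attribute name with "metric." so that the
// resulting ConflictsWith entries reference sibling attributes within the
// metric block.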
makeConflictsWith := func(in ...schemaAttr) []string {
out := make([]string, 0, len(in))
for _, attr := range in {
out = append(out, string(graphMetricAttr)+"."+string(attr))
}
return out
}
return &schema.Resource{
Create: graphCreate,
Read: graphRead,
Update: graphUpdate,
Delete: graphDelete,
Exists: graphExists,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Schema: convertToHelperSchema(graphDescriptions, map[schemaAttr]*schema.Schema{
graphDescriptionAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
StateFunc: suppressWhitespace,
},
graphLeftAttr: &schema.Schema{
Type: schema.TypeMap,
Elem: schema.TypeString,
Optional: true,
ValidateFunc: validateGraphAxisOptions,
},
graphLineStyleAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: defaultGraphLineStyle,
ValidateFunc: validateStringIn(graphLineStyleAttr, validGraphLineStyles),
},
graphNameAttr: &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateRegexp(graphNameAttr, `.+`),
},
graphNotesAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
},
graphRightAttr: &schema.Schema{
Type: schema.TypeMap,
Elem: schema.TypeString,
Optional: true,
ValidateFunc: validateGraphAxisOptions,
},
graphMetricAttr: &schema.Schema{
Type: schema.TypeList,
Optional: true,
MinItems: 1,
Elem: &schema.Resource{
Schema: convertToHelperSchema(graphMetricDescriptions, map[schemaAttr]*schema.Schema{
graphMetricActiveAttr: &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Default: true,
},
graphMetricAlphaAttr: &schema.Schema{
Type: schema.TypeFloat,
Optional: true,
ValidateFunc: validateFuncs(
validateFloatMin(graphMetricAlphaAttr, 0.0),
validateFloatMax(graphMetricAlphaAttr, 1.0),
),
},
graphMetricAxisAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "left",
ValidateFunc: validateStringIn(graphMetricAxisAttr, validAxisAttrs),
},
graphMetricCAQLAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricCAQLAttr, `.+`),
ConflictsWith: makeConflictsWith(graphMetricCheckAttr, graphMetricNameAttr),
},
graphMetricCheckAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricCheckAttr, config.CheckCIDRegex),
ConflictsWith: makeConflictsWith(graphMetricCAQLAttr),
},
graphMetricColorAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricColorAttr, `^#[0-9a-fA-F]{6}$`),
},
graphMetricFormulaAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricFormulaAttr, `^.+$`),
},
graphMetricFormulaLegendAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricFormulaLegendAttr, `^.+$`),
},
graphMetricFunctionAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: defaultGraphFunction,
ValidateFunc: validateStringIn(graphMetricFunctionAttr, validGraphFunctionValues),
},
graphMetricMetricTypeAttr: &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateStringIn(graphMetricMetricTypeAttr, validMetricTypes),
},
graphMetricHumanNameAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricHumanNameAttr, `.+`),
},
graphMetricNameAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricNameAttr, `^[\S]+$`),
},
graphMetricStackAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricStackAttr, `^[\d]*$`),
},
}),
},
},
graphMetricClusterAttr: &schema.Schema{
Type: schema.TypeList,
Optional: true,
MinItems: 1,
Elem: &schema.Resource{
Schema: convertToHelperSchema(graphMetricClusterDescriptions, map[schemaAttr]*schema.Schema{
graphMetricClusterActiveAttr: &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Default: true,
},
graphMetricClusterAggregateAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "none",
ValidateFunc: validateStringIn(graphMetricClusterAggregateAttr, validAggregateFuncs),
},
graphMetricClusterAxisAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "left",
ValidateFunc: validateStringIn(graphMetricClusterAxisAttr, validAxisAttrs),
},
graphMetricClusterColorAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricClusterColorAttr, `^#[0-9a-fA-F]{6}$`),
},
graphMetricClusterQueryAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateRegexp(graphMetricClusterQueryAttr, config.MetricClusterCIDRegex),
},
graphMetricClusterHumanNameAttr: &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateRegexp(graphMetricClusterHumanNameAttr, `.+`),
},
}),
},
},
graphStyleAttr: &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: defaultGraphStyle,
ValidateFunc: validateStringIn(graphStyleAttr, validGraphStyles),
},
graphTagsAttr: tagMakeConfigSchema(graphTagsAttr),
}),
}
}
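
// graphCreate builds a graph from the Terraform configuration, creates it via
// the Circonus API, records the returned CID as the resource ID, and then
// re-reads the graph to populate state.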
func graphCreate(d *schema.ResourceData, meta interface{}) error {
ctxt := meta.(*providerContext)
g := newGraph()
if err := g.ParseConfig(d); err != nil {
return errwrap.Wrapf("error parsing graph schema during create: {{err}}", err)
}
if err := g.Create(ctxt); err != nil {
return errwrap.Wrapf("error creating graph: {{err}}", err)
}
d.SetId(g.CID)
return graphRead(d, meta)
}
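
// graphExists reports whether the graph referenced by the resource ID still
// exists in Circonus. A 404 from the API is treated as "does not exist"
// rather than an error.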
func graphExists(d *schema.ResourceData, meta interface{}) (bool, error) {
ctxt := meta.(*providerContext)
cid := d.Id()
g, err := ctxt.client.FetchGraph(api.CIDType(&cid))
if err != nil {
if strings.Contains(err.Error(), defaultCirconus404ErrorString) {
return false, nil
}
return false, err
}
if g.CID == "" {
return false, nil
}
return true, nil
}
// graphRead pulls data out of the Graph object and stores it into the
// appropriate place in the statefile.
func graphRead(d *schema.ResourceData, meta interface{}) error {
ctxt := meta.(*providerContext)
cid := d.Id()
g, err := loadGraph(ctxt, api.CIDType(&cid))
if err != nil {
return err
}
d.SetId(g.CID)
metrics := make([]interface{}, 0, len(g.Datapoints))
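// Flatten each API datapoint into the map form used by the metric block in state.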
for _, datapoint := range g.Datapoints {
dataPointAttrs := make(map[string]interface{}, 13) // 13 == len(members in api.GraphDatapoint)
dataPointAttrs[string(graphMetricActiveAttr)] = !datapoint.Hidden
if datapoint.Alpha != nil && *datapoint.Alpha != 0 {
dataPointAttrs[string(graphMetricAlphaAttr)] = *datapoint.Alpha
}
switch datapoint.Axis {
case "l", "":
dataPointAttrs[string(graphMetricAxisAttr)] = "left"
case "r":
dataPointAttrs[string(graphMetricAxisAttr)] = "right"
default:
return fmt.Errorf("PROVIDER BUG: Unsupported axis type %q", datapoint.Axis)
}
if datapoint.CAQL != nil {
dataPointAttrs[string(graphMetricCAQLAttr)] = *datapoint.CAQL
}
if datapoint.CheckID != 0 {
dataPointAttrs[string(graphMetricCheckAttr)] = fmt.Sprintf("%s/%d", config.CheckPrefix, datapoint.CheckID)
}
if datapoint.Color != nil {
dataPointAttrs[string(graphMetricColorAttr)] = *datapoint.Color
}
if datapoint.DataFormula != nil {
dataPointAttrs[string(graphMetricFormulaAttr)] = *datapoint.DataFormula
}
switch datapoint.Derive.(type) {
case bool:
case string:
dataPointAttrs[string(graphMetricFunctionAttr)] = datapoint.Derive.(string)
default:
return fmt.Errorf("PROVIDER BUG: Unsupported type for derive: %T", datapoint.Derive)
}
if datapoint.LegendFormula != nil {
dataPointAttrs[string(graphMetricFormulaLegendAttr)] = *datapoint.LegendFormula
}
if datapoint.MetricName != "" {
dataPointAttrs[string(graphMetricNameAttr)] = datapoint.MetricName
}
if datapoint.MetricType != "" {
dataPointAttrs[string(graphMetricMetricTypeAttr)] = datapoint.MetricType
}
if datapoint.Name != "" {
dataPointAttrs[string(graphMetricHumanNameAttr)] = datapoint.Name
}
if datapoint.Stack != nil {
dataPointAttrs[string(graphMetricStackAttr)] = fmt.Sprintf("%d", *datapoint.Stack)
}
metrics = append(metrics, dataPointAttrs)
}
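// Flatten each API metric cluster into the map form used by the metric_cluster block in state.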
metricClusters := make([]interface{}, 0, len(g.MetricClusters))
for _, metricCluster := range g.MetricClusters {
metricClusterAttrs := make(map[string]interface{}, 8) // 8 == len(num struct attrs in api.GraphMetricCluster)
metricClusterAttrs[string(graphMetricClusterActiveAttr)] = !metricCluster.Hidden
if metricCluster.AggregateFunc != "" {
metricClusterAttrs[string(graphMetricClusterAggregateAttr)] = metricCluster.AggregateFunc
}
switch metricCluster.Axis {
case "l", "":
metricClusterAttrs[string(graphMetricClusterAxisAttr)] = "left"
case "r":
metricClusterAttrs[string(graphMetricClusterAxisAttr)] = "right"
default:
return fmt.Errorf("PROVIDER BUG: Unsupported axis type %q", metricCluster.Axis)
}
if metricCluster.Color != nil {
metricClusterAttrs[string(graphMetricClusterColorAttr)] = *metricCluster.Color
}
if metricCluster.DataFormula != nil {
metricClusterAttrs[string(graphMetricFormulaAttr)] = *metricCluster.DataFormula
}
if metricCluster.LegendFormula != nil {
metricClusterAttrs[string(graphMetricFormulaLegendAttr)] = *metricCluster.LegendFormula
}
if metricCluster.MetricCluster != "" {
metricClusterAttrs[string(graphMetricClusterQueryAttr)] = metricCluster.MetricCluster
}
if metricCluster.Name != "" {
metricClusterAttrs[string(graphMetricHumanNameAttr)] = metricCluster.Name
}
if metricCluster.Stack != nil {
metricClusterAttrs[string(graphMetricStackAttr)] = fmt.Sprintf("%d", *metricCluster.Stack)
}
metricClusters = append(metricClusters, metricClusterAttrs)
}
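// Axis options are stored in state as maps of stringified values.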
leftAxisMap := make(map[string]interface{}, 3)
if g.LogLeftY != nil {
leftAxisMap[string(graphAxisLogarithmicAttr)] = fmt.Sprintf("%d", *g.LogLeftY)
}
if g.MaxLeftY != nil {
leftAxisMap[string(graphAxisMaxAttr)] = strconv.FormatFloat(*g.MaxLeftY, 'f', -1, 64)
}
if g.MinLeftY != nil {
leftAxisMap[string(graphAxisMinAttr)] = strconv.FormatFloat(*g.MinLeftY, 'f', -1, 64)
}
rightAxisMap := make(map[string]interface{}, 3)
if g.LogRightY != nil {
rightAxisMap[string(graphAxisLogarithmicAttr)] = fmt.Sprintf("%d", *g.LogRightY)
}
if g.MaxRightY != nil {
rightAxisMap[string(graphAxisMaxAttr)] = strconv.FormatFloat(*g.MaxRightY, 'f', -1, 64)
}
if g.MinRightY != nil {
rightAxisMap[string(graphAxisMinAttr)] = strconv.FormatFloat(*g.MinRightY, 'f', -1, 64)
}
d.Set(graphDescriptionAttr, g.Description)
if err := d.Set(graphLeftAttr, leftAxisMap); err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphLeftAttr), err)
}
d.Set(graphLineStyleAttr, g.LineStyle)
d.Set(graphNameAttr, g.Title)
d.Set(graphNotesAttr, indirect(g.Notes))
if err := d.Set(graphRightAttr, rightAxisMap); err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphRightAttr), err)
}
if err := d.Set(graphMetricAttr, metrics); err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphMetricAttr), err)
}
if err := d.Set(graphMetricClusterAttr, metricClusters); err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphMetricClusterAttr), err)
}
d.Set(graphStyleAttr, g.Style)
if err := d.Set(graphTagsAttr, tagsToState(apiToTags(g.Tags))); err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to store graph %q attribute: {{err}}", graphTagsAttr), err)
}
return nil
}
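
// graphUpdate applies the current configuration to the existing graph via the
// Circonus API and then re-reads it to refresh state.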
func graphUpdate(d *schema.ResourceData, meta interface{}) error {
ctxt := meta.(*providerContext)
g := newGraph()
if err := g.ParseConfig(d); err != nil {
return err
}
g.CID = d.Id()
if err := g.Update(ctxt); err != nil {
return errwrap.Wrapf(fmt.Sprintf("unable to update graph %q: {{err}}", d.Id()), err)
}
return graphRead(d, meta)
}
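
// graphDelete removes the graph via the Circonus API and clears the resource
// ID from state.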
func graphDelete(d *schema.ResourceData, meta interface{}) error {
ctxt := meta.(*providerContext)
cid := d.Id()
if _, err := ctxt.client.DeleteGraphByCID(api.CIDType(&cid)); err != nil {
return errwrap.Wrapf(fmt.Sprintf("unable to delete graph %q: {{err}}", d.Id()), err)
}
d.SetId("")
return nil
}
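
// circonusGraph wraps api.Graph so the provider can attach its own helper
// methods (ParseConfig, Create, Update, Validate) to the API type.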
type circonusGraph struct {
api.Graph
}
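
// newGraph returns a circonusGraph wrapping a freshly initialized api.Graph.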
func newGraph() circonusGraph {
g := circonusGraph{
Graph: *api.NewGraph(),
}
return g
}
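
// loadGraph fetches an existing graph from the Circonus API by CID.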
func loadGraph(ctxt *providerContext, cid api.CIDType) (circonusGraph, error) {
var g circonusGraph
ng, err := ctxt.client.FetchGraph(cid)
if err != nil {
return circonusGraph{}, err
}
g.Graph = *ng
return g, nil
}
// ParseConfig reads Terraform config data and stores the information into a
// Circonus Graph object. ParseConfig and graphRead() must be kept in sync.
func (g *circonusGraph) ParseConfig(d *schema.ResourceData) error {
g.Datapoints = make([]api.GraphDatapoint, 0, defaultGraphDatapoints)
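// The left and right axis options arrive as maps of strings; convert them
// into the typed pointer fields on the API object.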
if v, found := d.GetOk(graphLeftAttr); found {
listRaw := v.(map[string]interface{})
leftAxisMap := make(map[string]interface{}, len(listRaw))
for k, v := range listRaw {
leftAxisMap[k] = v
}
if v, ok := leftAxisMap[string(graphAxisLogarithmicAttr)]; ok {
i64, _ := strconv.ParseInt(v.(string), 10, 64)
i := int(i64)
g.LogLeftY = &i
}
if v, ok := leftAxisMap[string(graphAxisMaxAttr)]; ok && v.(string) != "" {
f, _ := strconv.ParseFloat(v.(string), 64)
g.MaxLeftY = &f
}
if v, ok := leftAxisMap[string(graphAxisMinAttr)]; ok && v.(string) != "" {
f, _ := strconv.ParseFloat(v.(string), 64)
g.MinLeftY = &f
}
}
if v, found := d.GetOk(graphRightAttr); found {
listRaw := v.(map[string]interface{})
rightAxisMap := make(map[string]interface{}, len(listRaw))
for k, v := range listRaw {
rightAxisMap[k] = v
}
if v, ok := rightAxisMap[string(graphAxisLogarithmicAttr)]; ok {
i64, _ := strconv.ParseInt(v.(string), 10, 64)
i := int(i64)
g.LogRightY = &i
}
if v, ok := rightAxisMap[string(graphAxisMaxAttr)]; ok && v.(string) != "" {
f, _ := strconv.ParseFloat(v.(string), 64)
g.MaxRightY = &f
}
if v, ok := rightAxisMap[string(graphAxisMinAttr)]; ok && v.(string) != "" {
f, _ := strconv.ParseFloat(v.(string), 64)
g.MinRightY = &f
}
}
if v, found := d.GetOk(graphDescriptionAttr); found {
g.Description = v.(string)
}
if v, found := d.GetOk(graphLineStyleAttr); found {
switch v.(type) {
case string:
s := v.(string)
g.LineStyle = &s
case *string:
g.LineStyle = v.(*string)
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphLineStyleAttr, v)
}
}
if v, found := d.GetOk(graphNameAttr); found {
g.Title = v.(string)
}
if v, found := d.GetOk(graphNotesAttr); found {
s := v.(string)
g.Notes = &s
}
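// Convert each metric block from the configuration into an api.GraphDatapoint.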
if listRaw, found := d.GetOk(graphMetricAttr); found {
metricList := listRaw.([]interface{})
for _, metricListElem := range metricList {
metricAttrs := newInterfaceMap(metricListElem.(map[string]interface{}))
datapoint := api.GraphDatapoint{}
if v, found := metricAttrs[graphMetricActiveAttr]; found {
datapoint.Hidden = !(v.(bool))
}
if v, found := metricAttrs[graphMetricAlphaAttr]; found {
f := v.(float64)
if f != 0 {
datapoint.Alpha = &f
}
}
if v, found := metricAttrs[graphMetricAxisAttr]; found {
switch v.(string) {
case "left", "":
datapoint.Axis = "l"
case "right":
datapoint.Axis = "r"
default:
return fmt.Errorf("PROVIDER BUG: Unsupported axis attribute %q: %q", graphMetricAxisAttr, v.(string))
}
}
if v, found := metricAttrs[graphMetricCheckAttr]; found {
re := regexp.MustCompile(config.CheckCIDRegex)
matches := re.FindStringSubmatch(v.(string))
if len(matches) == 3 {
checkID, _ := strconv.ParseUint(matches[2], 10, 64)
datapoint.CheckID = uint(checkID)
}
}
if v, found := metricAttrs[graphMetricColorAttr]; found {
s := v.(string)
datapoint.Color = &s
}
if v, found := metricAttrs[graphMetricFormulaAttr]; found {
switch v.(type) {
case string:
s := v.(string)
datapoint.DataFormula = &s
case *string:
datapoint.DataFormula = v.(*string)
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricAttr, v)
}
}
if v, found := metricAttrs[graphMetricFunctionAttr]; found {
s := v.(string)
if s != "" {
datapoint.Derive = s
} else {
datapoint.Derive = false
}
} else {
datapoint.Derive = false
}
if v, found := metricAttrs[graphMetricFormulaLegendAttr]; found {
switch u := v.(type) {
case string:
datapoint.LegendFormula = &u
case *string:
datapoint.LegendFormula = u
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricAttr, v)
}
}
if v, found := metricAttrs[graphMetricNameAttr]; found {
s := v.(string)
if s != "" {
datapoint.MetricName = s
}
}
if v, found := metricAttrs[graphMetricMetricTypeAttr]; found {
s := v.(string)
if s != "" {
datapoint.MetricType = s
}
}
if v, found := metricAttrs[graphMetricHumanNameAttr]; found {
s := v.(string)
if s != "" {
datapoint.Name = s
}
}
if v, found := metricAttrs[graphMetricStackAttr]; found {
var stackStr string
switch u := v.(type) {
case string:
stackStr = u
case *string:
if u != nil {
stackStr = *u
}
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricStackAttr, v)
}
if stackStr != "" {
u64, _ := strconv.ParseUint(stackStr, 10, 64)
u := uint(u64)
datapoint.Stack = &u
}
}
g.Datapoints = append(g.Datapoints, datapoint)
}
}
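// Convert each metric_cluster block from the configuration into an api.GraphMetricCluster.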
if listRaw, found := d.GetOk(graphMetricClusterAttr); found {
metricClusterList := listRaw.([]interface{})
for _, metricClusterListRaw := range metricClusterList {
metricClusterAttrs := newInterfaceMap(metricClusterListRaw.(map[string]interface{}))
metricCluster := api.GraphMetricCluster{}
if v, found := metricClusterAttrs[graphMetricClusterActiveAttr]; found {
metricCluster.Hidden = !(v.(bool))
}
if v, found := metricClusterAttrs[graphMetricClusterAggregateAttr]; found {
metricCluster.AggregateFunc = v.(string)
}
if v, found := metricClusterAttrs[graphMetricClusterAxisAttr]; found {
switch v.(string) {
case "left", "":
metricCluster.Axis = "l"
case "right":
metricCluster.Axis = "r"
default:
return fmt.Errorf("PROVIDER BUG: Unsupported axis attribute %q: %q", graphMetricClusterAxisAttr, v.(string))
}
}
if v, found := metricClusterAttrs[graphMetricClusterColorAttr]; found {
s := v.(string)
if s != "" {
metricCluster.Color = &s
}
}
if v, found := metricClusterAttrs[graphMetricFormulaAttr]; found {
switch v.(type) {
case string:
s := v.(string)
metricCluster.DataFormula = &s
case *string:
metricCluster.DataFormula = v.(*string)
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricFormulaAttr, v)
}
}
if v, found := metricClusterAttrs[graphMetricFormulaLegendAttr]; found {
switch v.(type) {
case string:
s := v.(string)
metricCluster.LegendFormula = &s
case *string:
metricCluster.LegendFormula = v.(*string)
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricFormulaLegendAttr, v)
}
}
if v, found := metricClusterAttrs[graphMetricClusterQueryAttr]; found {
s := v.(string)
if s != "" {
metricCluster.MetricCluster = s
}
}
if v, found := metricClusterAttrs[graphMetricHumanNameAttr]; found {
s := v.(string)
if s != "" {
metricCluster.Name = s
}
}
if v, found := metricClusterAttrs[graphMetricStackAttr]; found {
var stackStr string
switch u := v.(type) {
case string:
stackStr = u
case *string:
if u != nil {
stackStr = *u
}
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphMetricStackAttr, v)
}
if stackStr != "" {
u64, _ := strconv.ParseUint(stackStr, 10, 64)
u := uint(u64)
metricCluster.Stack = &u
}
}
g.MetricClusters = append(g.MetricClusters, metricCluster)
}
}
if v, found := d.GetOk(graphStyleAttr); found {
switch v.(type) {
case string:
s := v.(string)
g.Style = &s
case *string:
g.Style = v.(*string)
default:
return fmt.Errorf("PROVIDER BUG: unsupported type for %q: %T", graphStyleAttr, v)
}
}
if v, found := d.GetOk(graphTagsAttr); found {
g.Tags = derefStringList(flattenSet(v.(*schema.Set)))
}
if err := g.Validate(); err != nil {
return err
}
return nil
}
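
// Create submits the graph to the Circonus API and records the CID assigned
// by the service.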
func (g *circonusGraph) Create(ctxt *providerContext) error {
ng, err := ctxt.client.CreateGraph(&g.Graph)
if err != nil {
return err
}
g.CID = ng.CID
return nil
}
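
// Update pushes the graph's current values to the Circonus API.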
func (g *circonusGraph) Update(ctxt *providerContext) error {
_, err := ctxt.client.UpdateGraph(&g.Graph)
if err != nil {
return errwrap.Wrapf(fmt.Sprintf("Unable to update graph %s: {{err}}", g.CID), err)
}
return nil
}
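
// Validate performs provider-side consistency checks on the graph's
// datapoints and metric clusters before the graph is sent to the API.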
func (g *circonusGraph) Validate() error {
for i, datapoint := range g.Datapoints {
if *g.Style == apiGraphStyleLine && datapoint.Alpha != nil && *datapoint.Alpha != 0 {
return fmt.Errorf("%s can not be set on graphs with style %s", graphMetricAlphaAttr, apiGraphStyleLine)
}
if datapoint.CheckID != 0 && datapoint.MetricName == "" {
return fmt.Errorf("Error with %s[%d] name=%q: %s is set, missing attribute %s must also be set", graphMetricAttr, i, datapoint.Name, graphMetricCheckAttr, graphMetricNameAttr)
}
if datapoint.CheckID == 0 && datapoint.MetricName != "" {
return fmt.Errorf("Error with %s[%d] name=%q: %s is set, missing attribute %s must also be set", graphMetricAttr, i, datapoint.Name, graphMetricNameAttr, graphMetricCheckAttr)
}
if datapoint.CAQL != nil && (datapoint.CheckID != 0 || datapoint.MetricName != "") {
return fmt.Errorf("Error with %s[%d] name=%q: %q attribute is mutually exclusive with attributes %s or %s", graphMetricAttr, i, datapoint.Name, graphMetricCAQLAttr, graphMetricNameAttr, graphMetricCheckAttr)
}
}
for i, mc := range g.MetricClusters {
if mc.AggregateFunc != "" && (mc.Color == nil || *mc.Color == "") {
return fmt.Errorf("Error with %s[%d] name=%q: %s is a required attribute for graphs with %s set", graphMetricClusterAttr, i, mc.Name, graphMetricClusterColorAttr, graphMetricClusterAggregateAttr)
}
}
return nil
}