Commit Graph

8 Commits

Author SHA1 Message Date
Raphael Randschau 41e14baa57 provider/scaleway: improve schema resources (#13522)
improve readability by removing unnecessary schema type declarations

this changeset does not include any semantic changes
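
A minimal sketch of the kind of cleanup this refers to, assuming the usual Terraform helper/schema map style; the attribute is taken from the scaleway_ip examples further down in this log, and the variable name is illustrative:

```
package scaleway

import "github.com/hashicorp/terraform/helper/schema"

// resourceScalewayIPSchema is an illustrative schema map. Because the map is
// already typed as map[string]*schema.Schema, the element type can be elided
// from each entry; spelling out &schema.Schema{...} again is unnecessary.
var resourceScalewayIPSchema = map[string]*schema.Schema{
	// before the cleanup: "server": &schema.Schema{Type: schema.TypeString, Optional: true},
	"server": {
		Type:     schema.TypeString,
		Optional: true,
	},
}
```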
2017-04-11 11:10:45 +03:00
Raphael Randschau 403ea9f6d7 provider/scaleway: work around parallel request limitation (#13045)
according to official scaleway support, requests within the same session cannot
be parallelized.

While I do not know for sure that this is a write-only limitation, I've
implemented it as a write-only limitation for now.

Previously requests like this would produce a 500 internal server error:

```
resource "scaleway_ip" "test_ip" {
  count = 2
}
```

now this limitation should be lifted for all scaleway resources
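
A minimal sketch of what serializing write requests can look like, assuming a provider-level mutex around mutating calls; the type and function names are illustrative, not the provider's actual code:

```
package scaleway

import "sync"

// Client wraps a single Scaleway API session. Mutating requests within one
// session are serialized behind a mutex, since parallel requests (e.g. two
// scaleway_ip resources created with count = 2) have triggered 500 errors.
type Client struct {
	writeMu sync.Mutex
}

// doWrite runs a create/patch/delete call while holding the write lock;
// read-only calls are left untouched.
func (c *Client) doWrite(call func() error) error {
	c.writeMu.Lock()
	defer c.writeMu.Unlock()
	return call()
}
```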
2017-03-27 13:00:11 +03:00
Raphael Randschau a422bf02f1 provider/scaleway: improve volume attachment (#10084)
* provider/scaleway: increase wait for server time

according to the scaleway community, shutdown/startup might actually take an
hour. since a regular shutdown transfers data, this is bound by the size of the
actual volumes in use.

https://community.online.net/t/solving-the-long-shutdown-boot-when-only-needed-to-attach-detach-a-volume/326

anyhow, 20 minutes seems quite optimistic, and we've seen some timeout errors in
the logs, too

* provider/scaleway: clear cache on volume attachment

the volume attachment errors quite often, and while I have no hard evidence
(yet), I suspect it might be related to the cache that the official scaleway SDK
includes.

for now this is just a tiny experiment, clearing the cache when creating/destroying
volume attachments. let's see if this improves anything, really

* provider/scaleway: guard against attaching already attached volumes (a sketch of such a guard follows after this list)

* provider/scaleway: use cheaper instance types for tests

Scaleway bills by the hour and C2S costs much more than C1; in the tests we just
spin up instances only to destroy them again later on...
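
A minimal sketch of the "already attached" guard mentioned above; the types and helper are illustrative stand-ins for the SDK's server/volume payloads, and the cache-clearing part is not shown:

```
package scaleway

import "fmt"

// server is an illustrative stand-in for the SDK's server payload,
// mapping volume slots to volume IDs.
type server struct {
	ID      string
	Volumes map[string]string
}

// ensureDetached fails early when the volume is already attached, instead of
// letting the attachment call error out later on.
func ensureDetached(s *server, volumeID string) error {
	for slot, id := range s.Volumes {
		if id == volumeID {
			return fmt.Errorf("volume %q is already attached to server %q in slot %s", volumeID, s.ID, slot)
		}
	}
	return nil
}
```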
2016-11-17 15:08:05 +02:00
Raphael Randschau e5ff0f0cb2 provider/scaleway: update sdk (#9976)
the SDK update includes a fix for the image cache, which otherwise might lead to
wrong image IDs being returned on lookup...
2016-11-09 09:32:56 +00:00
Raphael Randschau b0772b9131 provider/scaleway: retry volume attachment create/ destroy (#9972)
this PR fixes flakiness in the `scaleway_volume_attachment` resource, as
described below:

when attaching/detaching a volume from a `scaleway_server`, the server needs to
be stopped. even though the code already waits for the server to be stopped, the
`PatchServer` call gets a `400 server is being stopped or rebooted` error
response.

If the API returns the `400` we bail, leaving terraform in a broken state.

Assuming this is the only error that the API might return to us, as the payload
itself is correct, this retry behaviour should fix the issue.
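
A minimal sketch of the retry idea, assuming a plain deadline loop around a hypothetical attach callback; only the specific 400 described above is treated as retryable:

```
package scaleway

import (
	"strings"
	"time"
)

// retryAttachment keeps retrying attach while the API reports the server as
// being stopped or rebooted; any other error is returned immediately.
// attach is a hypothetical callback standing in for the PatchServer call.
func retryAttachment(attach func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := attach()
		if err == nil {
			return nil
		}
		transient := strings.Contains(err.Error(), "server is being stopped or rebooted")
		if !transient || time.Now().After(deadline) {
			return err
		}
		time.Sleep(5 * time.Second)
	}
}
```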

\cc @stack72 PTAL
2016-11-08 21:57:33 +00:00
Raphael Randschau 98d84680b7 provider/scaleway server volume property (#9695)
* provider/scaleway: extract volume validation helpers (a sketch of such a helper follows after this list)

* provider/scaleway: add server volume property

fixes #9499

* provider/scaleway: update `scaleway_server` docu

* provider/scaleway: fix volume handling

this actually broke when merging the latest SDK update :(

* provider/scaleway: fix volume attachment

* provider/scaleway: fix volume expectation
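
A minimal sketch of what an extracted validation helper can look like, using Terraform's SchemaValidateFunc shape; the accepted value is an assumption (it only reflects the l_ssd type used elsewhere in this log), not the provider's real list:

```
package scaleway

import "fmt"

// validateVolumeType is an illustrative helper in the SchemaValidateFunc
// shape; the accepted values here are an assumption.
func validateVolumeType(v interface{}, k string) (ws []string, errors []error) {
	value, _ := v.(string)
	if value != "l_ssd" {
		errors = append(errors, fmt.Errorf("%q must be a valid volume type, got %q", k, value))
	}
	return
}
```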
2016-10-29 12:07:35 +01:00
Raphael Randschau d9a2e0dbb3 provider/scaleway: fix scaleway_volume_attachment with count > 1 (#9493)
* provider/scaleway: fix scaleway_volume_attachment with count > 1

since scaleway requires servers to be powered off before volumes can be attached,
we need to make sure that we don't power down a server twice, or power up a
server while it's still being modified.

sadly terraform doesn't seem to offer serialization primitives for use cases like
this, but putting the code in question behind a `sync.Mutex` does the trick, too

fixes #9417

* provider/scaleway: use mutexkv to lock per-resource

following @dcharbonnier's suggestion, thanks! (a sketch of the mutexkv approach follows after this list)

* provider/scaleway: cleanup waitForServerState signature

* provider/scaleway: store serverID in var

* provider/scaleway: correct imports

* provider/scaleway: increase timeouts
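
A minimal sketch of per-server locking with mutexkv, assuming the lock is keyed on the server ID; the wrapper function is illustrative:

```
package scaleway

import "github.com/hashicorp/terraform/helper/mutexkv"

// serverLocks serializes operations per server, so attachments created with
// count > 1 never power-cycle the same server concurrently.
var serverLocks = mutexkv.NewMutexKV()

// withLockedServer holds the per-server lock around the power-off / attach /
// power-on sequence; different servers can still be handled in parallel.
func withLockedServer(serverID string, fn func() error) error {
	serverLocks.Lock(serverID)
	defer serverLocks.Unlock(serverID)
	return fn()
}
```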
2016-10-27 16:51:34 +01:00
Raphael Randschau 9081cabd6e Add scaleway provider (#7331)
* Add scaleway provider

this PR allows the entire scaleway stack to be managed with terraform

example usage looks like this:

```
provider "scaleway" {
  api_key = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"
  # ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type = "C2S"
}

resource "scaleway_volume" "test" {
  name = "test"
  size_in_gb = 20
  type = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action = "accept"
  direction = "inbound"
  ip_range = "0.0.0.0/0"
  protocol = "TCP"
  port = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action = "accept"
  direction = "outbound"
  ip_range = "0.0.0.0/0"
  protocol = "TCP"
  port = 80
}
```

Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use

* Update IP read to handle 404 gracefully (see the sketch after this list)

* Read back resource on update

* Ensure IP detachment works as expected

Sadly this is not part of the official scaleway api just yet

* Adjust detachIP helper

based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378

* Cleanup documentation

* Rename api_key to access_key

following @stack72's suggestion, rename the provider api_key to access_key for more clarity

* Make tests less chatty by using a custom logger
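
A minimal sketch of the graceful 404 handling, assuming a hypothetical lookup callback and a sentinel not-found error; the real provider checks the SDK's HTTP response instead:

```
package scaleway

import (
	"errors"

	"github.com/hashicorp/terraform/helper/schema"
)

// errNotFound and ipAddress are illustrative stand-ins for the SDK's 404
// error and IP payload.
var errNotFound = errors.New("ip not found (404)")

type ipAddress struct {
	Server string
}

// readScalewayIP drops the resource from state when the IP no longer exists,
// so terraform plans a re-creation instead of failing the refresh.
func readScalewayIP(d *schema.ResourceData, get func(id string) (*ipAddress, error)) error {
	ip, err := get(d.Id())
	if err == errNotFound {
		d.SetId("")
		return nil
	}
	if err != nil {
		return err
	}
	return d.Set("server", ip.Server)
}
```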
2016-07-13 21:03:41 +01:00