Initial Enterprise Doc File Move

Commit 6c981f2524 (parent a3ae38d1bb)
Chris Riley 2017-03-16 12:42:33 -07:00, committed by Seth Vargo
22 changed files with 953 additions and 0 deletions

---
title: "Atlas Artifact Provider"
---
# Atlas Artifact Provider
Terraform has a [provider](https://terraform.io/docs/providers/index.html) for managing Atlas artifacts called `atlas_artifact`.
This is used to make data stored in Atlas Artifacts available to
Terraform for interpolation. In the following example, an artifact
is defined and references an AMI ID stored in Atlas.
```
provider "atlas" {
  # You can also set the atlas token by exporting
  # ATLAS_TOKEN into your env
  token = "${var.atlas_token}"
}

resource "atlas_artifact" "web-worker" {
  name    = "%{DEFAULT_USERNAME}/web-worker"
  type    = "amazon.image"
  version = "latest"
}

resource "aws_instance" "worker-machine" {
  ami           = "${atlas_artifact.web-worker.metadata_full.region-us-east-1}"
  instance_type = "m1.small"
}
```
This automatically pulls the "latest" artifact version. After a new artifact
version is created via a Packer build, running `terraform plan` would generate
the following diff:
```
-/+ aws_instance.worker-machine
    ami:           "ami-168f9d7e" => "ami-2f3a9df2" (forces new resource)
    instance_type: "m1.small" => "m1.small"
```
This allows you to reference changing artifacts and trigger new deployments
upon pushing subsequent Packer builds.
Read more about artifacts in the [Terraform documentation](https://terraform.io/docs/providers/atlas/r/artifact.html).

---
title: "Creating AMI Artifacts with Packer and Atlas"
---
# Creating AMI Artifacts with Packer and Atlas
Currently, the best way to create AWS AMI artifacts is with Packer.
We detail how to do this in the [Packer section of the documentation](/help/packer/artifacts/creating-amis).

---
title: "About Terraform Artifacts in Atlas"
---
# About Terraform Artifacts in Atlas
Atlas can be used to store artifacts for use by Terraform. Typically,
artifacts are [stored with Packer](/help/packer/artifacts).
Artifacts can be used in Atlas to deploy and manage images
of configuration. Artifacts are generic, but can be of varying types
like `amazon.image`. See the Packer [`artifact_type`](https://packer.io/docs/post-processors/atlas.html#artifact_type)
docs for more information.
Packer can create artifacts both while running in Atlas and outside of Atlas'
network. This is possible due to the post-processor's use of the public
artifact API to store the artifacts.

---
title: "Managing Artifact Versions"
---
# Managing Artifact Versions
Artifacts stored in Atlas are versioned and assigned a version number.
Versions are useful for rolling back, auditing, and deploying specific
versions of images to certain environments in a targeted way.
This assumes you are familiar with the [Atlas artifact provider](https://terraform.io/docs/providers/atlas/index.html)
in Terraform.
### Finding the Version of an Artifact
Artifact versions can be found with the [`terraform show` command](https://terraform.io/docs/commands/show.html),
or by looking at the Packer logs generated during builds. After a
successful artifact upload, version numbers are displayed. Specifying
"latest" resolves to the most recent version of the artifact.
The following output is from `terraform show`.
```
atlas_artifact.web-worker:
  id = us-east-1:ami-3a0a1d52
  build = latest
  metadata_full.# = 1
  metadata_full.region-us-east-1 = ami-3a0a1d52
  name = %{DEFAULT_USERNAME}/web-worker
  slug = %{DEFAULT_USERNAME}/web-worker/amazon.image/7
  type = amazon.image
```
In this case, the version is 7 and can be found in the persisted slug
attribute.
### Pinning Artifacts to Specific Versions
You can pin artifacts to a specific version. This allows for a targeted
deploy.
```
resource "atlas_artifact" "web-worker" {
  name    = "%{DEFAULT_USERNAME}/web-worker"
  type    = "amazon.image"
  version = 7
}
```
This will use version 7 of the `web-worker` artifact.
### Pinning Artifacts to Specific Builds
Artifacts can also be pinned to an Atlas build number. This is only
possible if Atlas was used to build the artifact with Packer.
```
resource "atlas_artifact" "web-worker" {
  name  = "%{DEFAULT_USERNAME}/web-worker"
  type  = "amazon.image"
  build = 5
}
```
It's recommended to use versions instead of builds, as versions are
easier to track within Atlas and when building outside of the Atlas
environment.
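Putting the pieces together, a version-pinned artifact can feed the instance resource shown in the artifact provider example, so a deploy targets exactly version 7 (a sketch; resource names reuse the examples above):

```
resource "atlas_artifact" "web-worker" {
  name    = "%{DEFAULT_USERNAME}/web-worker"
  type    = "amazon.image"
  version = 7
}

resource "aws_instance" "worker-machine" {
  # The AMI no longer tracks "latest"; it only changes when the pinned
  # version attribute above is updated.
  ami           = "${atlas_artifact.web-worker.metadata_full.region-us-east-1}"
  instance_type = "m1.small"
}
```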

---
title: "Terraform Features in Atlas"
---
# Terraform Features in Atlas
[Terraform](https://terraform.io) is a tool for safely and
efficiently changing infrastructure across providers.
This is a list of features specific to Terraform that Atlas provides.
- [Terraform Plans and Applies](/help/terraform/runs)
- [Terraform Artifact Registry](/help/terraform/artifacts)
- [Terraform Remote State Storage](/help/terraform/state)
- [Terraform Run Notifications](/help/terraform/runs/notifications)

---
title: "Automatic Terraform Applies in Atlas"
---
# Automatic Terraform Applies in Atlas
<div class="alert-infos">
<div class="alert-info">
This is an unreleased beta feature. Please <a href="/help/support">contact support</a> if you are interested in helping us test this feature.
</div>
</div>
Atlas can optionally automatically apply successful Terraform plans to your
infrastructure. This option is disabled by default and can be enabled by an
organization owner on a per-[environment](/help/glossary#environment) basis.
<div class="alert-errors">
<div class="row alert-error">
This is an advanced feature that enables changes to active infrastructure
without user confirmation. Please understand the implications to your
infrastructure before enabling.
</div>
</div>
## Enabling Auto-Apply
To enable auto-apply for an environment, visit the environment settings page in
Atlas, check the box labeled "auto apply", and click the save button to
persist the changes. The next successful Terraform plan for the environment
will then apply automatically without user confirmation.

---
title: "How Terraform Runs Execute in Atlas"
---
# How Terraform Runs Execute in Atlas
This page briefly covers the internal process of running Terraform plans and
applies in Atlas. This information is not required, but it can help you
understand the implications of running in Atlas or debug failing runs.
## Steps of Execution
1. A set of Terraform configuration and directory of files is uploaded via Terraform Push or GitHub
1. Atlas creates a version of the Terraform configuration and waits for the upload
to complete. At this point, the version will be visible in the UI even if the upload has
not completed
1. Once the upload finishes, Atlas creates a run and queues a `terraform plan`
1. In the run environment, the package including the files and Terraform
configuration are downloaded
1. `terraform plan` is run against the configuration in the run environment
1. Logs are streamed into the UI and stored
1. The `.tfplan` file created in the plan is uploaded and stored
1. Once the plan completes, the environment is torn down and status is
updated in the UI
1. The plan then requires confirmation by an operator. It can optionally
be discarded and ignored at this stage
1. Once confirmed, the run then executes a `terraform apply` in a new
environment against the saved `.tfplan` file
1. The logs are streamed into the UI and stored
1. Once the apply completes, the environment is torn down, status is
updated in the UI and changed state is saved back to Atlas
Note: In the case of a failed apply, it's safe to re-run. This is possible
because Terraform saves partial state and can "pick up where it left off".
### Customizing Terraform Execution
As described in the steps above, Atlas will run Terraform against your configuration
when changes are pushed via GitHub, `terraform push`, or manually queued in the
Atlas UI. There are a few options available to customize the execution of Terraform.
These are:
- The directory that contains your environment's Terraform configuration can be customized
to support directory structures with more than one set of Terraform configuration files.
To customize the directory for your Atlas Environment, set the _Terraform Directory_
property in the _GitHub Integration_ settings for your environment. This is equivalent to
passing the `[dir]` argument when running Terraform in your local shell.
- The directory in which Terraform is executed from can be customized to support directory
structures with nested sub-directories or configurations that use Terraform modules with
relative paths. To customize the directory used for Terraform execution in your Atlas
Environment, set the `TF_ATLAS_DIR`
[environment variable](/help/terraform/runs/variables-and-configuration#environment-variables)
to the relative path of the directory, e.g. `terraform/production`. This is equivalent to
changing directories to the appropriate path in your local shell and then executing Terraform.
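The execution-directory setting can be pictured as a simple shell equivalence; a sketch, where `terraform/production` is just the example path from above and the command string is illustrative:

```shell
# Sketch: with TF_ATLAS_DIR set, Atlas effectively runs Terraform as if you
# had changed into that subdirectory first.
TF_ATLAS_DIR="terraform/production"
cmd="cd $TF_ATLAS_DIR && terraform plan"
echo "$cmd"
```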

---
title: "About Terraform Runs in Atlas"
---
# About Terraform Runs in Atlas
A "run" in Atlas represents the logical grouping of two Terraform steps: a
"plan" and an "apply". The distinction between these two phases of a Terraform
run is documented below.
When a [new run is created](/help/terraform/runs/starting), Atlas automatically
queues a Terraform plan. Because a plan does not change the state of
infrastructure, it is safe to execute a plan multiple times without
consequence. An apply executes the output of a plan and actively changes
infrastructure. To prevent race conditions, Atlas will only execute one
plan/apply at a time (plans for validating GitHub Pull Requests are allowed to
happen concurrently, as they do not modify state). You can read more about
Terraform plans and applies below.
## Plan
During the plan phase of a run, Atlas executes the command `terraform plan`.
Terraform performs a refresh and then determines what actions are necessary to
reach the desired state specified in the Terraform configuration files. A
successful plan outputs an executable file that is securely stored in Atlas
and may be used in the subsequent apply.
Terraform plans in Atlas do not change the state of infrastructure, so it is
safe to execute a plan multiple times. In fact, there are a number of components
in Atlas that can trigger a Terraform plan. You can read more about this in the
[starting runs](/help/terraform/runs/starting) section.
## Apply
During the apply phase of a run, Atlas executes the command `terraform apply`
with the executable result of the prior Terraform plan. This phase **can change
infrastructure** by applying the changes required to reach the desired state
specified in the Terraform configuration file.
While Terraform plans are safe to run multiple times, Terraform applies often
change active infrastructure. Because of this, the default behavior for Atlas
is to require user confirmation as part of the
[Terraform run execution](/help/terraform/runs/how-runs-execute). Upon
user confirmation, Atlas will queue and execute the Terraform apply. It is also
possible to configure Atlas to
[automatically apply](/help/terraform/runs/automatic-applies), but this option is
disabled by default.
## Environment Locking
During run execution, Atlas will lock the environment to prevent other plans
and applies from executing simultaneously. When the run completes, the next
pending run, if any, will be started.
An administrator of the environment can also manually lock the environment, for
example during a maintenance period.
You can see the lock status of an environment, and lock/unlock the environment
by visiting that environment's settings page.
## Notifications
To receive alerts when user confirmation is needed or for any other phase of the
run process, you can
[enable run notifications](/help/terraform/runs/notifications) for your
organization or environment.

---
title: "Installing Custom Software on the Terraform Runners"
---
# Installing Custom Software
The machines that run Terraform exist in an isolated environment and are
destroyed on each use. In some cases, it may be necessary to install certain
software on the Terraform runner, such as a configuration management tool like
Chef, Puppet, Ansible, or Salt.
The easiest way to install software on the Terraform runner is via the
`local-exec` provisioner. This executes commands on the host machine running
Terraform.
```
resource "null_resource" "local-software" {
  provisioner "local-exec" {
    command = <<EOH
sudo apt-get update
sudo apt-get install -y ansible
EOH
  }
}
```
Please note that nothing is persisted between Terraform runs, so you will need
to install custom software on each run.
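Since state is saved between runs, a plain `null_resource` only runs its provisioner when the resource is first created. One way to force the install on every run is a trigger that always changes; this is an assumption-labeled sketch, not part of the original docs:

```
resource "null_resource" "local-software" {
  # Hypothetical pattern: timestamp() yields a new value on every run,
  # forcing the resource to be recreated and the provisioner to re-execute.
  triggers {
    run_at = "${timestamp()}"
  }

  provisioner "local-exec" {
    command = "sudo apt-get update && sudo apt-get install -y ansible"
  }
}
```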
The Terraform runners use the latest version of Ubuntu LTS.

---
title: "Managing Terraform Versions"
---
# Managing Terraform Versions
Atlas does not automatically upgrade the version of Terraform
used to execute plans and applies. This is intentional, as occasionally
there can be backwards incompatible changes made to Terraform that cause state
and plans to differ based on the same configuration,
or new versions that produce some other unexpected behavior.
All upgrades must be performed by a user, but Atlas will display a notice
above any plans or applies run with out-of-date versions. We encourage using
the latest version when possible.
Note that regardless of when an upgrade is performed, the version of
Terraform used in a plan will be used in the subsequent apply.
### Upgrading Terraform
1. Go to the Settings tab of an environment
1. Go to the "Terraform Version" section and select the version you
wish to use
1. Review the changelog for that version and previous versions
1. Click the save button. At this point, future builds will use that
version

---
title: "AWS Multi-Factor Authentication for Terraform Runs in Atlas"
---
# AWS Multi-Factor Authentication for Terraform Runs in Atlas
You can optionally configure Terraform plans and applies to use multi-factor authentication using [AWS Secure Token Service](http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html).
This option is disabled by default and can be enabled by an organization owner.
<div class="alert-errors">
<div class="row alert-error">
This is an advanced feature that enables changes to active infrastructure
without user confirmation. Please understand the implications to your
infrastructure before enabling.
</div>
</div>
## Setting Up AWS Multi-Factor Authentication
Before you can set up multi-factor authentication in Atlas, you must set up an IAM user in AWS. More details about creating an IAM user can be found [here](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable.html). Setting up an AWS IAM user will provide you with the serial number and access keys that you will need in order to connect to AWS Secure Token Service.
In order to set up multi-factor authentication for your organization, you must have the following environment variables in your configuration: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_MFA_SERIAL_NUMBER`. You can set these variables at `/settings/organization_variables`.
## Enabling AWS Multi-Factor Authentication
To enable multi-factor authentication, visit the environment settings page in
Atlas: `terraform/<organization>/environments/<environment>/settings`. Use the
drop-down labeled "AWS Multi-Factor Authentication". There are currently three
levels available: "never", "applies only", and "plans and applies". Once you
have selected your desired level, save your settings. All subsequent runs on
the environment will require the selected level of authentication.
## Using AWS Multi-Factor Authentication
Once you have elected to use AWS MFA for your Terraform runs, you will be prompted to enter a token code each time you plan or apply, depending on your settings. Your one-time-use token code will be sent to you via the method you selected when setting up your [IAM account](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable.html).
If you have selected "applies only", you will be able to queue and run a plan without entering your token code. Once the run finishes, you will need to enter your token code and click "Authenticate" before applying the plan. Once you submit your token code, the apply will start, and you will see "Authenticated with MFA by `user`" in the UI. If there is an error when submitting your token code, the lock icon in the UI will turn red and an error will appear alerting you to the failure.
If you have selected "plans and applies", you will be prompted to enter your token before queueing your plan. Once you enter the token and click "Authenticate", you will see "Authenticated with MFA by `user`" appear in the UI logs. The plan will queue, and you may run it once it is queued. Then, before applying, you will be asked to authenticate with MFA again. Enter your token, click "Authenticate", and note that "Authenticated with MFA by `user`" appears in the UI log after the apply begins. If there is an error authenticating, the lock icon in the UI will turn red and an error will appear alerting you to the failure.
## Using AWS Multi-Factor Authentication with AWS STS AssumeRole
The AWS Secure Token Service can be used to return a set of temporary security credentials that a user can use to access resources that they might not normally have access to (known as AssumeRole). The AssumeRole workflow is compatible with AWS multi-factor authentication in Atlas.
To use AssumeRole, you first need to create an IAM role and edit the trust relationship policy document to contain the following:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[INT]:user/[USER]"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
```
You can then configure the Terraform AWS provider to assume a given role by specifying the role ARN within the nested `assume_role` block:
```
provider "aws" {
  # ...
  assume_role {
    role_arn = "arn:aws:iam::[INT]:role/[ROLE]"
  }
}
```

---
title: "Terraform Run Notifications"
---
# Terraform Run Notifications
Atlas can send run notifications to your organization via one of our [supported
notification methods](/help/consul/alerts/notification-methods). The following
events are configurable:
- **Needs Confirmation** - The plan phase has succeeded, and there are changes
that need to be confirmed before applying.
- **Confirmed** - A plan has been confirmed, and it will begin applying
shortly.
- **Discarded** - A user in Atlas has discarded the plan.
- **Applying** - The plan has begun to apply and make changes to your
infrastructure.
- **Applied** - The plan was applied successfully.
- **Errored** - An error has occurred during the plan or apply phase.
> Emails will include logs for the **Needs Confirmation**, **Applied**, and
> **Errored** events.
You can toggle notifications for each of these events on the "Integrations" tab
of an environment.

---
title: "Schedule Periodic Plans in Atlas"
---
# Schedule Periodic Plans in Atlas
<div class="alert-infos">
<div class="alert-info">
This is an unreleased beta feature. Please <a href="/help/support">contact support</a> if you are interested in helping us test this feature.
</div>
</div>
Atlas can automatically run a Terraform plan against
your infrastructure on a specified schedule. This option is disabled by default and can be enabled by an
organization owner on a per-[environment](/help/glossary#environment) basis.
On the specified interval, Atlas will automatically queue a plan that
runs Terraform for you, determining any changes and sending the appropriate
notifications.
When used with [automatic applies](/help/terraform/runs/automatic-applies), this feature can help converge
changes to infrastructure without human input.
Atlas will not queue new runs while another plan or apply is in progress, or if
the environment has been manually locked. See [Environment
Locking](/help/terraform/runs#environment-locking) for more information.
## Enabling Periodic Plans
To enable periodic plans for an environment, visit the environment settings
page in Atlas, select the desired interval, and click the save button to
persist the changes. An initial plan may run immediately, depending on the
state of your environment, and plans will then run automatically at the
specified interval.
If you have manually run a plan separately, Atlas will not queue a new
plan until the allotted time after the manual plan ran. This means that
Atlas simply ensures that a plan has been executed on the specified schedule.

---
title: "Starting Terraform Runs in Atlas"
---
# Starting Terraform Runs in Atlas
There are a variety of ways to queue a Terraform run in Atlas. In addition to
`terraform push`, you can connect your [environment](/help/glossary#environment)
to GitHub and have Atlas queue Terraform runs based on new commits. Atlas can
also intelligently queue new runs when linked artifacts are uploaded or changed.
Remember from the [previous section about Terraform runs](/help/terraform/runs)
that it is safe to trigger many plans without consequence since Terraform plans
do not change infrastructure.
## Terraform Push
Terraform `push` is a [Terraform command](https://terraform.io/docs/commands/push.html)
that packages and uploads a directory of Terraform configuration to Atlas. This
creates a run in Atlas, which performs a `terraform plan` and `terraform apply`
against the uploaded configuration.
The directory is included in order to run any associated provisioners that
might use local files, for example a `remote-exec` provisioner that executes
a shell script.
By default, everything in your directory is uploaded as part of the push.
However, the entire directory should not always be uploaded: temporary or
cache directories and files like `.git` and `.tmp` are often included by
default, which can push the upload past Atlas's size limits. You can
specify [exclusions](https://terraform.io/docs/commands/push.html) to avoid this situation.
Terraform also allows for a [VCS option](https://terraform.io/docs/commands/push.html#_vcs_true)
that will detect your VCS (if there is one) and only upload the files that are tracked by the VCS. This is
useful for automatically excluding ignored files. In a VCS like git, this
basically does a `git ls-files`.
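As a command sketch (the environment name `acme/production` here is a placeholder, not from the original text):

```
# Upload only VCS-tracked files; -vcs=true detects the VCS and packages
# roughly the output of `git ls-files`.
$ terraform push -name="acme/production" -vcs=true
```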
## GitHub Webhooks
Optionally, GitHub can be used to import Terraform configuration. When used
within an organization, this can be extremely valuable for preventing
environment drift and last-mile changes from slipping in before an upload to Atlas.
After you have [connected your GitHub account to Atlas](/settings/connections),
you can connect your [environment](/help/glossary#environment) to the target
GitHub repository. The GitHub repository will be linked to the Atlas Terraform
configuration, and GitHub will start sending webhooks to Atlas. Certain
GitHub webhook events, detailed below, will cause the repository to be
automatically ingressed into Atlas and stored, along with references to the
GitHub commits and authorship information.
Currently, an environment must already exist to be connected to GitHub. You can
create the environment with `terraform push`, detailed above, and then link it
to GitHub.
Each ingress will trigger a Terraform plan. If you have auto-apply enabled then
the plan will also be applied.
You can disable an ingress by adding the text `[atlas skip]` or `[ci skip]` to
your commit message.
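The skip markers amount to a substring check on the commit message; as an illustrative sketch (the exact matching Atlas performs is an assumption):

```shell
# Does a commit message opt out of Atlas ingress? Illustrative check only.
msg="Fix typo in docs [atlas skip]"
case "$msg" in
  *"[atlas skip]"*|*"[ci skip]"*) skipped=yes ;;
  *) skipped=no ;;
esac
echo "$skipped"  # prints "yes"
```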
Supported GitHub webhook events:
- `pull_request` (on by default)
  - ingress when opened or reopened
  - ingress when synchronized (new commits are pushed to the branch)
- `push` (on by default)
  - ingress when a tag is created
  - ingress when the default branch is updated
  - note: the default branch is either configured on your configuration's
    integrations tab in Atlas, or, if that is blank, it is the GitHub
    repository's default branch
- `create` (off by default)
  - ingress when a tag is created
  - note: if you want to only run on tag creation, turn on create events and
    turn off push events
## Artifact Uploads
Upon successful completion of a Terraform run, Atlas parses the remote state and
detects any [Atlas artifacts](/help/terraform/artifacts/artifact-provider) that
were referenced. When new versions of those referenced artifacts are uploaded
to Atlas, you have the option to automatically queue a new Terraform run.
For example, consider the following Terraform configuration which references an
Atlas artifact named "worker":
```
resource "aws_instance" "worker" {
  ami           = "${atlas_artifact.worker.metadata_full.region-us-east-1}"
  instance_type = "m1.small"
}
```
When a new version of the Atlas artifact "worker" is uploaded either manually
or as the output of a [Packer build](/help/packer/builds/starting.html), Atlas
can automatically trigger a Terraform plan with this new artifact version.
You can enable this feature on a per-environment basis from the
[environment](/help/glossary#environment) settings page in Atlas.
Combined with
[Terraform auto apply](/help/terraform/runs/automatic-applies), you can
continuously deliver infrastructure using Terraform and Atlas.
## Terraform Plugins
If you are using a custom [Terraform Plugin](https://www.terraform.io/docs/plugins/index.html)
binary for a provider or provisioner that's not currently in a released
version of Terraform, you can still use this in Atlas.
All you need to do is include a Linux AMD64 binary for the plugin in the
directory in which Terraform commands are run from; Atlas will then use
the plugin the next time you `terraform push` or ingress from GitHub.

---
title: "Terraform Variables and Configuration"
---
# Terraform Variables and Configuration
There are two ways to configure Terraform runs in Atlas: with Terraform
variables or with environment variables.
## Terraform Variables
Terraform variables are first-class configuration in Terraform. They
define the parameterization of Terraform configurations and are important
for sharing configuration and for keeping sensitive secrets out of
version control.
Variables are sent to Atlas with `terraform push`. Any variables in your local
`.tfvars` files are securely uploaded to Atlas. Once variables are uploaded to
Atlas, Terraform will prefer the Atlas-stored variables over any changes you
make locally. Please refer to the
[Terraform push documentation](https://www.terraform.io/docs/commands/push.html)
for more information.
You can also add, edit, and delete Terraform variables via Atlas. To update
Terraform variables in Atlas, visit the "variables" page on your
[environment](/help/glossary#environment).
The maximum size for the value of Terraform variables is `256kb`.
For detailed information about Terraform variables, please read the
[Terraform variables](https://terraform.io/docs/configuration/variables.html)
section of the Terraform documentation.
## Environment Variables
Environment variables are injected into the virtual environment that Terraform
executes in during the `plan` and `apply` phases.
You can add, edit, and delete environment variables from the "variables" page
on your [environment](/help/glossary#environment).
Additionally, the following environment variables are automatically injected by
Atlas. All Atlas-injected environment variables are prefixed with `ATLAS_`:
- `ATLAS_TOKEN` - This is a unique, per-run token that expires at the end of
run execution (e.g. `"abcd.atlasv1.ghjkl..."`).
- `ATLAS_RUN_ID` - This is a unique identifier for this run (e.g. `"33"`).
- `ATLAS_CONFIGURATION_NAME` - This is the name of the configuration used in
this run. Unless you have configured it differently, this will also be the
name of the environment (e.g. `"production"`).
- `ATLAS_CONFIGURATION_SLUG` - This is the full slug of the configuration used
in this run. Unless you have configured it differently, this will also be the
name of the environment (e.g. `"company/production"`).
- `ATLAS_CONFIGURATION_VERSION` - This is the unique, auto-incrementing version
for the Terraform configuration (e.g. `"34"`).
- `ATLAS_CONFIGURATION_VERSION_GITHUB_BRANCH` - This is the name of the branch
that the associated Terraform configuration version was ingressed from
(e.g. `master`).
- `ATLAS_CONFIGURATION_VERSION_GITHUB_COMMIT_SHA` - This is the full commit hash
of the commit that the associated Terraform configuration version was
ingressed from (e.g. `"abcd1234..."`).
- `ATLAS_CONFIGURATION_VERSION_GITHUB_TAG` - This is the name of the tag
that the associated Terraform configuration version was ingressed from
(e.g. `"v0.1.0"`).
For any of the `GITHUB_` attributes, the value of the environment variable will
be the empty string (`""`) if the resource is not connected to GitHub or if the
resource was created outside of GitHub (like using `terraform push`).
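A run wrapper script could branch on this empty-string convention; a minimal sketch (the wrapper itself and its output strings are illustrative, not part of Atlas):

```shell
# If the GITHUB_ attribute is unset or empty, the configuration version did
# not come from GitHub (e.g. it was uploaded with `terraform push`).
branch="${ATLAS_CONFIGURATION_VERSION_GITHUB_BRANCH:-}"
if [ -z "$branch" ]; then
  origin="not from GitHub"
else
  origin="GitHub branch: $branch"
fi
echo "$origin"
```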
## Managing Secret Multi-Line Files
Atlas has the ability to store multi-line files as variables. The recommended way to manage your secret/sensitive multi-line files (private key, SSL cert, SSL private key, CA, etc.) is to add them as [Terraform Variables](#terraform-variables) or [Environment Variables](#environment-variables) in Atlas.
Just like secret strings, it is recommended that you never check these multi-line secret files into version control; instead, follow the steps below.
Set the [variables](https://www.terraform.io/docs/configuration/variables.html) in your Terraform template that resources utilizing the secret file will reference:
```
variable "private_key" {}

resource "aws_instance" "example" {
  # ...
  provisioner "remote-exec" {
    connection {
      host        = "${self.private_ip}"
      private_key = "${var.private_key}"
    }
    # ...
  }
}
```
`terraform push` any "Terraform Variables" to Atlas:
```
$ terraform push -name $ATLAS_USERNAME/example -var "private_key=$MY_PRIVATE_KEY"
```
`terraform push` any "Environment Variables" to Atlas:
```
$ TF_VAR_private_key=$MY_PRIVATE_KEY terraform push -name $ATLAS_USERNAME/example
```
Alternatively, you can add or update variables manually by going to the "Variables" section of your Atlas Environment and pasting the contents of the file in as the value.
Now, any resource that consumes that variable will have access to the variable value, without having to check the file into version control. If you want to run Terraform locally, that file will still need to be passed in as a variable in the CLI. View the [Terraform Variable Documentation](https://www.terraform.io/docs/configuration/variables.html) for more info on how to accomplish this.
A few things to note:
The `.tfvars` file does not support multi-line values. You can still use `.tfvars` to define variables; however, you will not be able to set a variable in `.tfvars` to multi-line file contents as you would in a `.tf` file.
If you are running Terraform locally, you can pass in the variables at the command line:
```
$ terraform apply -var "private_key=$MY_PRIVATE_KEY"
$ TF_VAR_private_key=$MY_PRIVATE_KEY terraform apply
```
You can update variables locally by using the `-overwrite` flag with your `terraform push` command:
```
$ terraform push -name $ATLAS_USERNAME/example -var "private_key=$MY_PRIVATE_KEY" -overwrite=private_key
$ TF_VAR_private_key=$MY_PRIVATE_KEY terraform push -name $ATLAS_USERNAME/example -overwrite=private_key
```
- - -
## Notes on Security
Terraform variables and environment variables in Atlas are encrypted using
[Vault](https://vaultproject.io) and closely guarded and audited. If you have
questions or concerns about the safety of your configuration, please contact
our security team at [security@hashicorp.com](mailto:security@hashicorp.com).

---
title: "Collaborating on Terraform Remote State in Atlas"
---
# Collaborating on Terraform Remote State in Atlas
Atlas is one of a few options to store [remote state](/help/terraform/state).
Remote state gives you the ability to version and collaborate on Terraform changes. It
stores information about the changes Terraform makes based on configuration.
In order to collaborate safely on remote state, we recommend
[creating an organization](/help/organizations/create) to manage teams of users.
Then, following a [remote state push](/help/terraform/state), you can view state versions
in the changes tab of the [environment](/help/glossary#environment) created under the same name
as the remote state.

---
title: "About Remote State"
---
# About Remote State
Terraform stores the state of your managed infrastructure from the last
time Terraform was run. By default this state is stored in a local file
named `terraform.tfstate`, but it can also be stored remotely, which
works better in a team environment.
Atlas acts as a remote state provider, allowing you to store, version, and
collaborate on state.
Remote state gives you more than just easier version control and safer
storage. It also allows you to delegate the outputs to other teams.
This allows your infrastructure to be more easily broken down into
components that multiple teams can access.
Read [more about remote state](https://www.terraform.io/docs/state/remote/index.html).
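As a sketch of that delegation (the names here are illustrative, and this assumes a Terraform version where `terraform_remote_state` is configured as a resource), one team's configuration can read another team's outputs:

```
resource "terraform_remote_state" "network" {
    backend = "atlas"
    config {
        name = "my-org/network"
    }
}

resource "aws_instance" "app" {
    # Consume an output exported by the "network" team's state
    subnet_id = "${terraform_remote_state.network.output.subnet_id}"
}
```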

---
title: "Pushing Terraform Remote State to Atlas"
---
# Pushing Terraform Remote State to Atlas
Atlas is one of a few options to store [remote state](/help/terraform/state).
Remote state gives you the ability to version and collaborate on Terraform changes. It
stores information about the changes Terraform makes based on configuration.
To use Atlas to store remote state, you'll first need to set the
`ATLAS_TOKEN` environment variable and then run the following command:

    $ terraform remote config -backend-config="name=%{DEFAULT_USERNAME}/product"

---
title: "Resolving Conflicts in Atlas Remote State"
---
# Resolving Conflicts in Atlas Remote State
Resolving state conflicts can be time consuming and error prone, so
it's important to approach it carefully.
There are several tools provided by Atlas to help resolve conflicts
and fix remote state issues. First, you can navigate between state
versions in the changes view of your environment (after toggling on
the remote state checkbox) and view plain-text differences between
versions.
This allows you to pinpoint where things may have gone wrong and
make an educated decision about resolving the conflict.
### Rolling Back to a Specific State Version
The rollback feature allows you to choose a new version to set as the
"Head" version of the state. Rolling back to a version means it will
then return that state upon request from a client. It will not
increment the serial in the state, but perform a hard rollback to the
exact version of the state provided.
This allows you to reset the state to an older version, essentially
forgetting changes made in versions after that point.
To roll back to a specific version, navigate to it in the changes view
and use the rollback link. You'll need to confirm the version number
to perform the operation.
### Using Terraform Locally
Another way to resolve conflicts in Atlas remote state
is to merge any conflicted copies locally by inspecting the
raw state available at `.terraform/terraform.tfstate`.
When making state changes, it's important to make backup copies in
order to avoid losing any data.
Atlas will reject any state that is pushed with a serial that is lower
than the known serial when the MD5 of the state does not match.
The serial is embedded in the state file:
    {
      "version": 1,
      "serial": 555,
      "remote": {
        "type": "atlas",
        "config": {
          "name": "%{DEFAULT_USERNAME}/production"
        }
      },
      ...
    }
Once a conflict has been resolved locally by editing the state file,
the serial can be incremented past the current version in Atlas and
pushed:
    $ terraform remote push
This will upload the manually resolved state and set it as the head
version in Atlas.
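The serial bump described above can be sketched programmatically. This is an illustrative helper, not part of Terraform or Atlas, that increments the serial of a resolved state document before it is pushed:

```python
import json

def bump_serial(state_text):
    """Return the state JSON with its serial incremented by one."""
    state = json.loads(state_text)
    state["serial"] = state.get("serial", 0) + 1
    return json.dumps(state, indent=2)

# A minimal state document with serial 555 comes back with serial 556.
resolved = bump_serial('{"version": 1, "serial": 555}')
print(resolved)
```

In a real workflow you would read `.terraform/terraform.tfstate`, resolve the conflict by hand, bump the serial past the version Atlas knows about, and only then run `terraform remote push`.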

---
title: "Git Integration"
---
# Git Integration
Git repositories can be integrated with Atlas by using
[`terraform push`](https://www.terraform.io/docs/commands/push.html) to import
Terraform configuration when changes are committed. When Terraform
configuration is imported using `terraform push` a plan is automatically queued
in Atlas.
_**Note:** This integration is for Git repositories **not** hosted on GitHub.
For repositories on GitHub, there is a native [GitHub Integration](/help/terraform/vcs/github)._
## Setup
Terraform configuration can be imported manually by running `terraform push`:
```
$ terraform push -name=$ATLAS_USERNAME/ENV_NAME
```
Rather than running `terraform push` manually, you can run it from a Git
hook. A client-side `pre-push` hook works well: it pushes your Terraform
configuration whenever you push local changes to your Git server.
### Client-side Commit Hook
The script below executes `terraform push` when you push local changes to
your Git server. Place the script at `.git/hooks/pre-push` in your local Git
repository, set the necessary variables, and ensure the script is executable.
```
#!/bin/bash
#
# An example hook script to push Terraform configuration to Atlas.
#
# Set the following variables for your project:
# - ENV_NAME       - your Atlas environment name (e.g. org/env)
# - TERRAFORM_DIR  - the local directory containing the Terraform files to push
# - DEFAULT_BRANCH - the branch to push; other branches are ignored

ENV_NAME="YOUR_ORG/YOUR_ENV"
TERRAFORM_DIR="terraform"
DEFAULT_BRANCH=""

if [[ -z "$ENV_NAME" || -z "$TERRAFORM_DIR" || -z "$DEFAULT_BRANCH" ]]; then
  echo 'pre-push hook: One or more variables are undefined. Canceling push.'
  exit 1
fi

# Extract the current branch name from the symbolic ref (e.g. refs/heads/master -> master)
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')

if [ "$current_branch" == "$DEFAULT_BRANCH" ]; then
  echo "pre-push hook: Pushing branch [$current_branch] to Atlas environment [$ENV_NAME]."
  terraform push -name="$ENV_NAME" "$TERRAFORM_DIR"
else
  echo "pre-push hook: NOT pushing branch [$current_branch] to Atlas environment [$ENV_NAME]."
fi
```

---
title: "GitHub Integration"
---
# GitHub Integration
GitHub can be used to import Terraform configuration, automatically queuing
runs when changes are merged into a repository's default branch. Additionally,
plans are run when a pull request is created or updated. Atlas will update the
pull request with the result of the Terraform plan providing quick feedback on
proposed changes.
## Setup
Atlas environments are linked to individual GitHub repositories. However, a
single GitHub repository can be linked to multiple Atlas environments allowing
a single set of Terraform configuration to be used across multiple environments.
Atlas environments can be linked when they're initially created using the
[New Environment](https://atlas.hashicorp.com/configurations/import) process.
Existing environments can be linked by configuring GitHub details in their
**Integrations** settings.
To link an Atlas environment to a GitHub repository, you need three pieces of
information:
- **GitHub repository** - The location of the repository being imported in the
format _username/repository_.
- **GitHub branch** - The branch from which to ingress new versions. This
defaults to the value GitHub provides as the default branch for this repository.
- **Path to directory of Terraform files** - The subdirectory of the repository
that contains its Terraform files. This defaults to the root of the repository.

---
title: "Integration with Version Control Software"
---
# Integration with Version Control Software
Atlas can integrate with your version control software to automatically execute
Terraform with your latest Terraform configuration as you commit changes to
source control.
Different capabilities within Atlas are available depending on the integration
in use. The available integration options are below.
- [Git](/help/terraform/vcs/git)
- [GitHub](/help/terraform/vcs/github)