Added API docs to FTE

This commit is contained in:
Chris Riley 2017-03-21 15:01:58 -07:00 committed by Seth Vargo
parent ced173d248
commit 8a1d2cca14
No known key found for this signature in database
GPG Key ID: C921994F9C27E0FF
20 changed files with 775 additions and 5 deletions

View File

@ -17,7 +17,7 @@ to include the necessary variables for the Terraform configuration.
 ### Configuration Version Attributes
-<table>
+<table border="1" cellspacing="5">
 <tr>
 <th>Attribute</th>
 <th>Description</th>

View File

@ -17,7 +17,7 @@ configuration.
 ### Configuration Attributes
-<table>
+<table border="1" cellspacing="5">
 <tr>
 <th>Attribute</th>
 <th>Description</th>

View File

@ -15,7 +15,7 @@ This documentation covers the environment interactions with Terraform.
 ### Environment Attributes
-<table>
+<table border="1" cellspacing="5">
 <tr>
 <th>Attribute</th>
 <th>Description</th>

View File

@ -16,7 +16,7 @@ can be configured to auto-apply to avoid this.
 ### Run Attributes
-<table>
+<table border="1" cellspacing="5">
 <tr>
 <th>Attribute</th>
 <th>Description</th>

View File

@ -12,7 +12,7 @@ State represents the status of your infrastructure at the last time Terraform was run.
 ### State Attributes
-<table>
+<table border="1" cellspacing="5">
 <tr>
 <th>Attribute</th>
 <th>Description</th>

View File

@ -0,0 +1,57 @@
---
title: "Creating AMI Artifacts with Atlas"
---
# Creating AMI Artifacts with Atlas
In an [immutable infrastructure](/help/intro/use-cases/continuous-deployment-of-immutable-infrastructure)
workflow, it's important to version and store full images (artifacts)
to be deployed. This section covers storing [AWS AMI](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
images in Atlas to be queried and used later.
Note that the actual AMI does _not get stored in Atlas_. Atlas
simply keeps the AMI ID as a reference to the target image. Tools
like Terraform can then use this ID in a deployment.
### Steps
If you run Packer in Atlas, the following will happen after a [push](/help/packer/builds/starting):
1. Atlas will run `packer build` against your template in our infrastructure.
This spins up an AWS instance in your account and provisions it with
any specified provisioners.
1. Packer stops the instance and stores the result as an AMI in AWS
under your account. This returns an ID (the artifact) that it passes to the Atlas post-processor.
1. The Atlas post-processor creates and uploads the new artifact version with the
ID in Atlas of the type `amazon.image` for later use.
### Example
Below is a complete example Packer template that starts an AWS instance.
```
{
  "push": {
    "name": "%{DEFAULT_USERNAME}/frontend"
  },
  "provisioners": [],
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "",
      "secret_key": "",
      "region": "us-east-1",
      "source_ami": "ami-2ccc7a44",
      "instance_type": "c3.large",
      "ssh_username": "ubuntu",
      "ami_name": "Atlas Example {{ timestamp }}"
    }
  ],
  "post-processors": [
    {
      "type": "atlas",
      "artifact": "%{DEFAULT_USERNAME}/web-server",
      "artifact_type": "amazon.image"
    }
  ]
}
```

View File

@ -0,0 +1,134 @@
---
title: "Creating Vagrant Boxes with Packer"
---
# Creating Vagrant Boxes with Packer
We recommend using Packer to create boxes, as it is fully repeatable and keeps a strong
history of changes within Atlas.
## Getting Started
Using Packer requires more up front effort, but the repeatable and
automated builds will end any manual management of boxes. Additionally,
all boxes will be stored and served from Atlas, keeping a history along
the way.
Some useful Vagrant Boxes documentation will help you learn
about managing Vagrant boxes in Atlas.
- [Vagrant Box Lifecycle](/help/vagrant/boxes/lifecycle)
- [Distributing Vagrant Boxes with Atlas](/help/vagrant/boxes/distributing)
You can also read on to learn more about how Packer uploads and versions
the boxes with post-processors.
## Post-Processors
Packer uses [post-processors](https://packer.io/docs/templates/post-processors.html) to define how to process
images and artifacts after provisioning. Both the `vagrant` and `atlas` post-processors must be used in order
to upload Vagrant Boxes to Atlas via Packer.
It's important that they are [sequenced](https://packer.io/docs/templates/post-processors.html)
in the Packer template so they run in order. This is done by nesting arrays:
"post-processors": [
[
{
"type": "vagrant"
...
},
{
"type": "atlas"
...
}
]
]
Sequencing automatically passes the resulting artifact from one
post-processor to the next; in this case, the `.box` file.
### Vagrant Post-Processor
The [Vagrant post-processor](https://packer.io/docs/post-processors/vagrant.html) is required to package the image
from the build (an `.ovf` file, for example) into a `.box` file before
passing it to the `atlas` post-processor.
```
{
  "type": "vagrant",
  "keep_input_artifact": false
}
```
The input artifact (i.e., an `.ovf` file) does not need to be kept when building Vagrant Boxes,
as the resulting `.box` will contain it.
### Atlas Post-Processor
The [Atlas post-processor](https://packer.io/docs/post-processors/atlas.html) takes the resulting `.box` file and uploads
it to Atlas, adding metadata about the box version.
```
{
  "type": "atlas",
  "artifact": "%{DEFAULT_USERNAME}/dev-environment",
  "artifact_type": "vagrant.box",
  "metadata": {
    "provider": "vmware_desktop",
    "version": "0.0.1"
  }
}
```
#### Required Attributes
These are the attributes of the Atlas post-processor required
for uploading Vagrant Boxes. A complete example is shown below.
- `artifact`: The username and box name (`username/name`) you're creating the version
of the box under. If the box doesn't exist, it will be automatically
created.
- `artifact_type`: This must be `vagrant.box`. Atlas uses this to determine
how to treat this artifact.

For `vagrant.box` type artifacts, you can specify keys in the metadata block:
- `provider`: The Vagrant provider for the box. Common providers are
`virtualbox`, `vmware_desktop`, `aws`, and so on _(required)_
- `version`: This is the Vagrant box [version](/help/vagrant/boxes/lifecycle) and is constrained to the
same formatting as in the web UI: `*.*.*` _(optional, but required for boxes
with multiple providers; if left blank, the minor version increments automatically, e.g. the initial version will be set to 0.1.0 and the subsequent version to 0.2.0)_
- `description`: This is the description that will be shown with the
version of the box. You can use Markdown for links and style. _(optional)_
## Example
An example post-processor block for Atlas and Vagrant is below. In this example,
the build runs on both VMware and VirtualBox, creating two
different providers for the same box version (`0.0.1`).
"post-processors": [
[
{
"type": "vagrant",
"keep_input_artifact": false
},
{
"type": "atlas",
"only": ["vmware-iso"],
"artifact": "%{DEFAULT_USERNAME}/dev-environment",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "vmware_desktop",
"version": "0.0.1"
}
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "%{DEFAULT_USERNAME}/dev-environment",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1"
}
}
]
]

View File

@ -0,0 +1,36 @@
---
title: "About Packer and Artifacts"
---
# About Packer and Artifacts
Packer creates and uploads artifacts to Atlas. This is done
with the [Atlas post-processor](https://packer.io/docs/post-processors/atlas.html).
Artifacts can then be used in Atlas to deploy services or accessed
via Vagrant. Artifacts are generic, but can be of varying types.
These types define different behavior within Atlas.
For uploading artifacts to Atlas, `artifact_type` can be set to any
unique identifier; however, the following are recommended for consistency:
- `amazon.image`
- `azure.image`
- `digitalocean.image`
- `docker.image`
- `google.image`
- `openstack.image`
- `parallels.image`
- `qemu.image`
- `virtualbox.image`
- `vmware.image`
- `custom.image`
- `application.archive`
- `vagrant.box`
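As a sketch, one of the recommended types above is simply set as the `artifact_type` on the Atlas post-processor; the artifact name here is illustrative:

```json
{
  "type": "atlas",
  "artifact": "%{DEFAULT_USERNAME}/base-image",
  "artifact_type": "google.image"
}
```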
Packer can create artifacts when running in Atlas or locally.
This is possible due to the post-processor's use of the public
artifact API to store the artifacts.
You can read more about artifacts and their use in the [Terraform section](/help/terraform/features)
of the documentation.

View File

@ -0,0 +1,166 @@
---
title: "Packer Build Environment"
---
# Packer Build Environment
This page outlines the environment that Packer runs in within Atlas.
### Supported Builders
Atlas currently supports running the following Packer builders:
- amazon-chroot
- amazon-ebs
- amazon-instance
- digitalocean
- docker
- googlecompute
- null
- openstack
- qemu
- virtualbox-iso
- vmware-iso
### Files
All files in the uploaded package (via [Packer push or GitHub](/help/packer/builds/starting)),
as well as the application from the [build pipeline](/help/applications/build-pipeline), are available on the filesystem
of the build environment.
You can use the file icon on the running build to show a list of
available files.
Files can be copied to the destination image Packer is provisioning
with [Packer Provisioners](https://packer.io/docs/templates/provisioners.html).
An example of this with the Shell provisioner is below.
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/vagrant.sh",
"scripts/dependencies.sh",
"scripts/cleanup.sh"
]
}
]
We encourage use of relative paths over absolute paths to maintain portability
between Atlas and local builds.
The total size of all files in the package being uploaded via
[Packer push or GitHub](/help/packer/builds/starting) must be 5 GB or less.
If you need to upload objects that are larger, such as dmgs, see the
[`packer push` "Limits" documentation](https://packer.io/docs/command-line/push.html)
for ways around this limitation.
### Hardware Limitations
Currently, each builder defined in the Packer template receives
the following hardware resources. This is subject to change.
- 1 CPU core
- 2 GB of memory
- 20 GB of disk space
### Environment Variables
You can set any number of environment variables that will be injected
into your build environment at runtime. These variables can be
used to configure your build with secrets or other key value configuration.
Variables are encrypted and stored securely.
Additionally, the following environment variables are automatically injected by
Atlas. All Atlas-injected environment variables are prefixed with `ATLAS_`:
- `ATLAS_TOKEN` - This is a unique, per-build token that expires at the end of
build execution (e.g. `"abcd.atlasv1.ghjkl..."`)
- `ATLAS_BUILD_ID` - This is a unique identifier for this build (e.g. `"33"`)
- `ATLAS_BUILD_NUMBER` - This is a unique identifier for all builds in the same
scope (e.g. `"12"`)
- `ATLAS_BUILD_NAME` - This is the name of the build (e.g. `"mybuild"`).
- `ATLAS_BUILD_SLUG` - This is the full name of the build
(e.g. `"company/mybuild"`).
- `ATLAS_BUILD_USERNAME` - This is the username associated with the build
(e.g. `"sammy"`)
- `ATLAS_BUILD_CONFIGURATION_VERSION` - This is the unique, auto-incrementing
version for the [Packer build configuration](/help/glossary) (e.g. `"34"`).
- `ATLAS_BUILD_GITHUB_BRANCH` - This is the name of the branch
that the associated Packer build configuration version was ingressed from
(e.g. `master`).
- `ATLAS_BUILD_GITHUB_COMMIT_SHA` - This is the full commit hash
of the commit that the associated Packer build configuration version was
ingressed from (e.g. `"abcd1234..."`).
- `ATLAS_BUILD_GITHUB_TAG` - This is the name of the tag
that the associated Packer build configuration version was ingressed from
(e.g. `"v0.1.0"`).
If the build was triggered by a new application version, the following
environment variables are also available:
- `ATLAS_APPLICATION_NAME` - This is the name of the application connected to
the Packer build (e.g. `"myapp"`).
- `ATLAS_APPLICATION_SLUG` - This is the full name of the application connected
to the Packer build (e.g. `"company/myapp"`).
- `ATLAS_APPLICATION_USERNAME` - This is the username associated with the
application connected to the Packer build (e.g. `"sammy"`)
- `ATLAS_APPLICATION_VERSION` - This is the version of the application connected
to the Packer build (e.g. `"2"`).
- `ATLAS_APPLICATION_GITHUB_BRANCH` - This is the name of the branch that the
associated application version was ingressed from (e.g. `master`).
- `ATLAS_APPLICATION_GITHUB_COMMIT_SHA` - This is the full commit hash
of the commit that the associated application version was ingressed from
(e.g. `"abcd1234..."`).
- `ATLAS_APPLICATION_GITHUB_TAG` - This is the name of the tag that the
associated application version was ingressed from (e.g. `"v0.1.0"`).
For any of the `GITHUB_` attributes, the value of the environment variable will
be the empty string (`""`) if the resource is not connected to GitHub or if the
resource was created outside of GitHub (like using `packer push` or
`vagrant push`).
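As a sketch, a provisioning script might read these injected variables and fall back when the `GITHUB_` values are empty; the output format here is illustrative:

```shell
#!/bin/sh
# Stamp the image with Atlas build metadata during provisioning.
# ATLAS_BUILD_NUMBER is only set when running in Atlas, so fall back
# to "local"; the GITHUB_ variables are empty strings when the build
# is not connected to GitHub.
: "${ATLAS_BUILD_NUMBER:=local}"

if [ -n "$ATLAS_BUILD_GITHUB_COMMIT_SHA" ]; then
  source_ref="$ATLAS_BUILD_GITHUB_COMMIT_SHA"
else
  source_ref="unversioned"
fi

echo "build=$ATLAS_BUILD_NUMBER source=$source_ref"
```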
### Base Artifact Variable Injection
A base artifact can be selected on the "Settings" page for a build configuration.
During each build, the latest artifact version will have its external
ID (such as an AMI ID for AWS) injected as an environment variable into the
build environment.
The keys for the following artifact types will be injected:
- `aws.ami`: `ATLAS_BASE_ARTIFACT_AWS_AMI_ID`
- `amazon.ami`: `ATLAS_BASE_ARTIFACT_AMAZON_AMI_ID`
- `amazon.image`: `ATLAS_BASE_ARTIFACT_AMAZON_IMAGE_ID`
- `google.image`: `ATLAS_BASE_ARTIFACT_GOOGLE_IMAGE_ID`
You can then reference this artifact in your Packer template, like this
AWS example:
```
{
  "variables": {
    "base_ami": "{{env `ATLAS_BASE_ARTIFACT_AWS_AMI_ID`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "",
      "secret_key": "",
      "region": "us-east-1",
      "source_ami": "{{user `base_ami`}}"
    }
  ]
}
```
- - -
## Notes on Security
Packer environment variables in Atlas are encrypted using [Vault](https://vaultproject.io)
and closely guarded and audited. If you have questions or concerns
about the safety of your configuration, please contact our security team
at [security@hashicorp.com](mailto:security@hashicorp.com).

View File

@ -0,0 +1,28 @@
---
title: "How Packer Builds Run in Atlas"
---
# How Packer Builds Run in Atlas
This briefly covers the internal process of running builds in Atlas. It's
not necessary to know this information, but it may be valuable for
understanding the implications of running in Atlas or for debugging failing
builds.
### Steps of Execution
1. A Packer template and directory of files is uploaded via Packer Push or GitHub
1. Atlas creates a version of the build configuration and waits for the upload
to complete. At this point, the version will be visible in the UI even if the upload has
not completed
1. Once the upload finishes, Atlas queues the build. This is potentially
split across multiple machines for faster processing
1. In the build environment, the package including the files and Packer template
are downloaded
1. `packer build` is run against the template in the build environment
1. Logs are streamed into the UI and stored
1. Any artifacts created as part of the build are then uploaded via the public
Atlas artifact API, as they would be if Packer were executed locally
1. The build completes, the environment is torn down, and the status is
updated within Atlas

View File

@ -0,0 +1,28 @@
---
title: "About Builds"
---
# About Builds
Builds are instances of `packer build` being run within Atlas. Every
build belongs to a build configuration.
__Build configurations__ represent a set of Packer configuration versions and the
builds run against them. A build configuration is used as a namespace within Atlas, Packer
commands, and URLs. Packer configurations sent to Atlas are stored and versioned under
these build configurations.
These __versions__ of Packer configuration can contain:
- The Packer template, a JSON file which defines one or
more builds by configuring the various components of Packer
- Any provisioning scripts or packages used by the template
- Applications that use the build as part of the [pipeline](/help/applications/build-pipeline) and are merged into the version
prior to Atlas running Packer on it
When Atlas receives a new version of Packer configuration and associated
scripts from GitHub or `packer push`, it automatically starts a new
Packer build. That Packer build runs in an isolated machine environment with the contents
of that version available to it.
You can be alerted of build events with [Build Notifications](/help/packer/builds/notifications).

View File

@ -0,0 +1,28 @@
---
title: "Installing Software"
---
# Installing Software
Please review the [Packer Build Environment](/help/packer/builds/build-environment)
specification for important information on isolation, security, and hardware
limitations before continuing.
In some cases, it may be necessary to install custom software to build your
artifact using Packer. The easiest way to install software on the Packer builder
is via the `shell-local` provisioner. This will execute commands on the host
machine running Packer.
```
{
  "provisioners": [
    {
      "type": "shell-local",
      "command": "sudo apt-get install -y customsoftware"
    }
  ]
}
```
Please note that nothing is persisted between Packer builds, so you will need
to install custom software on each run.
The Packer builders run the latest version of Ubuntu LTS.

View File

@ -0,0 +1,7 @@
---
title: "Linked Applications"
---
# Linked Applications
Linked applications have been deprecated in favor of the [application build pipeline](/help/applications/build-pipeline).

View File

@ -0,0 +1,23 @@
---
title: "Managing Packer Versions"
---
# Managing Packer Versions
Atlas does not automatically upgrade the version of Packer
used to run builds or compiles. This is intentional, as occasionally
there can be backwards incompatible changes made to Packer that cause templates to stop
building properly, or new versions that produce some other unexpected behavior.
All upgrades must be performed by a user, but Atlas will display a notice
above any builds run with out-of-date versions. We encourage use
of the latest version when possible.
### Upgrading Packer
1. Go to the Settings tab of a build configuration or application
1. Go to the "Packer Version" section and select the version you
wish to use
1. Review the changelog for that version and previous versions
1. Click the save button. At this point, future builds will use that
version

View File

@ -0,0 +1,19 @@
---
title: "About Packer Build Notifications"
---
# About Packer Build Notifications
Atlas can send build notifications to your organization via one of our
[supported notification methods](/help/consul/alerts/notification-methods). The
following events are configurable:
- **Starting** - The build has begun.
- **Finished** - All build jobs have finished successfully.
- **Errored** - An error has occurred during one of the build jobs.
- **Canceled** - A user in Atlas has canceled the build.
> Emails will include logs for the **Finished** and **Errored** events.
You can toggle notifications for each of these events on the "Integrations" tab
of a build configuration.

View File

@ -0,0 +1,16 @@
---
title: "Rebuilding Builds"
---
# Rebuilding Builds
Sometimes builds fail due to temporary or remotely controlled conditions.
In this case, it may make sense to "rebuild" a Packer build. To do so,
visit the build you wish to run again and click the Rebuild button. This
will take that exact version of configuration and run it again.
You can rebuild at any point in history, but this may cause unwanted side
effects. For example, rebuilding an old version
of a build may create the next version of an artifact that is then released,
causing a rollback of your configuration.

View File

@ -0,0 +1,32 @@
---
title: "Schedule Periodic Builds in Atlas"
---
# Schedule Periodic Builds in Atlas
Atlas can automatically run a Packer build and
create artifacts on a specified schedule. This option is disabled by default and can be enabled by an
organization owner on a per-[environment](/help/glossary#environment) basis.
On the specified interval, Atlas will automatically queue a build that
runs Packer for you, creating any artifacts and sending the appropriate
notifications.
If your artifacts are used in any other environments and you have activated
the plan-on-artifact-upload feature, this may also queue Terraform
plans.
This feature is useful for maintenance of images and automatic updates,
or for building nightly-style images for staging or development environments.
## Enabling Periodic Builds
To enable periodic builds for a build, visit the build settings page in
Atlas, select the desired interval, and click the save button to
persist the changes. An initial build may run immediately, depending
on the build history, and builds will then run automatically at the specified interval.
If you have run a build separately, either manually or triggered from GitHub
or Packer configuration version uploads, Atlas will not queue a new
build until the allotted time after the manual build ran. This means that
Atlas simply ensures that a build has been executed at the specified schedule.

View File

@ -0,0 +1,68 @@
---
title: "Starting Packer Builds in Atlas"
---
# Starting Packer Builds in Atlas
Packer builds can be started in Atlas in two ways: `packer push`
to upload the template and directory or via a GitHub connection that retrieves
the contents of a repository after changes to the default branch (usually
master).
### Packer Push
`packer push` is a [Packer command](https://packer.io/docs/command-line/push.html)
that packages and uploads a Packer template and directory to Atlas. This creates a build
in Atlas, which performs `packer build` against the uploaded template
and packaged directory.
The directory is included in order to run any associated provisioners,
builders, or post-processors that might use local files. For example,
a shell script or set of Puppet modules used in a Packer build needs
to be part of the upload for Packer to be run remotely.
By default, everything in your directory is uploaded as part of the push.
However, the entire directory should not always be uploaded. Often,
temporary or cache directories and files like `.git` and `.tmp` will be included by default. This
can push the upload over Atlas's size limits and should be avoided. You can
specify [exclusions](https://packer.io/docs/templates/push.html#exclude) to avoid this situation.
Packer also allows for a [VCS option](https://packer.io/docs/templates/push.html#vcs)
that will detect your VCS (if there is one) and only upload the files that are tracked by the VCS.
This is useful for automatically excluding ignored files. In a VCS
like git, this basically does a `git ls-files`.
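As a sketch, a `push` block combining these options might look like the following; the exclude pattern is illustrative:

```json
"push": {
  "name": "%{DEFAULT_USERNAME}/frontend",
  "vcs": true,
  "exclude": [".tmp/*"]
}
```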
### GitHub Webhooks
Optionally, GitHub can be used to import Packer templates and configurations.
When used within an organization, this can be extremely valuable for keeping
environment differences and last-mile changes from creeping in before an
upload to Atlas.
After you have [connected your GitHub account to Atlas](/settings/connections),
you can connect your [Build Configuration](/help/glossary#build-configuration)
to the target GitHub repository. The GitHub repository will be linked to the
Atlas Packer configuration, and GitHub will start sending webhooks to Atlas.
Certain GitHub webhook events, detailed below, will cause the repository to be
automatically ingressed into Atlas and stored, along with references to the
GitHub commits and authorship information.
After each ingress, the configuration will automatically build.
You can disable an ingress by adding the text `[atlas skip]` or `[ci skip]` to
your commit message.
Supported GitHub webhook events:
- push (on by default)
- ingress when a tag is created
- ingress when the default branch is updated
- note: the default branch is either configured on your configuration's
integrations tab in Atlas, or if that is blank it is the GitHub
repository's default branch
- create (off by default)
- ingress when a tag is created
- note: if you want to only run on tag creation, turn on create events and
turn off push events

View File

@ -0,0 +1,112 @@
---
title: "Troubleshooting Failing Builds"
---
# Troubleshooting Failing Builds
Packer builds can fail in Atlas for a number of reasons: improper
configuration, transient networking errors, and hardware constraints
are all possible. Below is a list of debugging options you can use.
### Verbose Packer Logging
You can [set a variable](/help/packer/builds/build-environment#environment-variables) in the UI that increases the logging verbosity
in Packer. Set the `PACKER_LOG` key to a value of `1` to accomplish this.
After setting the variable, you'll need to [rebuild](/help/packer/builds/rebuilding).
Verbose logging will be much louder than normal Packer logs and isn't
recommended for day-to-day operations. Once enabled, you'll be able to
see in further detail why things failed or what operations Packer was performing.
This can also be used locally:
```
PACKER_LOG=1 packer build ...
```
### Hanging Builds
Some VM builds, such as VMware or VirtualBox, may hang at various stages,
most notably at `Waiting for SSH...`.
Things to pay attention to when this happens:
- SSH credentials must be properly configured. AWS keypairs should
match, SSH usernames should be correct, passwords should match, etc.
- Any VM preseed configuration should have the same SSH configuration
as your template defines
A good way to debug this is to manually attempt to use the same SSH
configuration locally, running with `packer build -debug`. See
more about [debugging Packer builds](https://packer.io/docs/other/debugging.html).
### Hardware Limitations
Your build may be failing because it requests more memory or
disk than is available. Read more about the [build environment](/help/packer/builds/build-environment#hardware-limitations).
_Typically_ Packer builds that fail due to requesting hardware limits
that exceed Atlas's [hardware limitations](/help/packer/builds/build-environment#hardware-limitations)
will fail with a _The operation was canceled_ error message as shown below:
```
...
==> vmware-iso: Starting virtual machine...
vmware-iso: The VM will be run headless, without a GUI. If you want to
vmware-iso: view the screen of the VM, connect via VNC without a password to
vmware-iso: 127.0.0.1:5918
==> vmware-iso: Error starting VM: VMware error: Error: The operation was canceled
==> vmware-iso: Waiting 4.604392397s to give VMware time to clean up...
==> vmware-iso: Deleting output directory...
Build 'vmware-iso' errored: Error starting VM: VMware error: Error: The operation was canceled
==> Some builds didn't complete successfully and had errors:
--> vmware-iso: Error starting VM: VMware error: Error: The operation was canceled
```
### Local Debugging
Sometimes it's faster to debug failing builds locally. In this case,
you'll want to [install Packer](/help/intro/updating-tools) and any necessary providers (like VirtualBox).
Because Atlas runs the open source version of Packer, there should be
no difference in execution between the two, other than the environment that
Packer is running in. For more on hardware constraints in the Atlas environment,
see the Hardware Limitations section above.
Once your builds are running smoothly locally, you can push them up to Atlas
for versioning and automated builds.
### Internal Errors
This is a short list of internal errors and what they mean.
- SIC-001: Your data was being ingressed from GitHub but failed
to properly unpack. This can be caused by bad permissions, the use of
symlinks, or very large repository sizes. Using symlinks inside the Packer
directory (or the root of the repository, if the Packer directory
is unspecified) will result in this internal error.
_**Note:** Most often this error occurs
when applications or builds are linked to a GitHub repository and the
directory and/or template paths are incorrect. Double check that the paths
specified when you linked the GitHub repository match the actual paths
to your template file._
- SEC-001: Your data was being unpacked from a tarball uploaded to Atlas
and encountered an error. This can be caused by bad permissions, the use of
symlinks, or very large tarball sizes.
### Community Resources
Packer is an open source project with an active community. If you're
having an issue specific to Packer, the best avenue for support is
the mailing list or IRC. All bug reports should go to GitHub.
- Website: [packer.io](https://packer.io)
- GitHub: [github.com/mitchellh/packer](https://github.com/mitchellh/packer)
- IRC: `#packer-tool` on Freenode
- Mailing list: [Google Groups](http://groups.google.com/group/packer-tool)
### Getting Support
If you believe your build is failing as a result of a bug in Atlas,
or would like other support, please [email us](mailto:support@hashicorp.com).

View File

@ -0,0 +1,16 @@
---
title: "Packer Features in Atlas"
---
# Packer Features in Atlas
[Packer](https://packer.io) is a tool for creating images for platforms such as Amazon AWS,
OpenStack, VMware, VirtualBox, Docker, and more — all from a single
source configuration.
This is a list of features specific to Packer
that Atlas provides.
- [Running Packer Builds](/help/packer/builds)
- [Creating and Uploading AMIs](/help/packer/artifacts/creating-amis)
- [Creating Vagrant Boxes](/help/packer/artifacts/creating-vagrant-boxes)