provider/azurerm: Example of OpenShift Origin (#15294)

* initial commit - 101-vm-from-user-image

* changed branch name

* not deploying - storage problems

* provisions vm but image not properly prepared

* storage not correct

* provisions properly

* changed main.tf to azuredeploy.tf

* added tfvars and info for README

* tfvars ignored and corrected file ext

* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing

* deploy.sh to be executable

* executable deploy files

* added CI files; changed vars

* prep for PR

* removal of old folder

* prep for PR

* wrong args for travis

* more PR prep

* updated README

* commented out variables in terraform.tfvars

* Topic 101 vm from user image (#2)

* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* prep for PR

* added new template

* oops, left off master

* prep for PR

* correct repository for destination

* renamed scripts to be more intuitive; added check for docker

* merge vm simple; vm from image

* initial commit

* deploys locally

* updated deploy

* consolidated deploy and after_deploy into a single script; simplified ci process; added os_profile_linux_config

* added terraform show

* changed to allow http & https (like ARM tmplt)

* changed host_name & host_name variable desc

* added az cli check

* on this branch, only build test_dir; master will aggregate all the examples

* merge master

* added new constructs/naming for deploy scripts, etc.

* suppress az login output

* suppress az login output

* forgot about line breaks

* breaking build as an example

* fixing broken build example

* merge of CI config

* fixed grammar in readme

* prep for PR

* took out armviz button and minor README changes

* changed host_name

* fixed merge conflicts

* changed host_name variable

* updating Hashicorp's changes to merged simple linux branch

* updating files to merge w/master and prep for Hashicorp pr

* Revert "updating files to merge w/master and prep for Hashicorp pr"

This reverts commit b850cd5d2a858eff073fc5a1097a6813d0f8b362.

* Revert "updating Hashicorp's changes to merged simple linux branch"

This reverts commit dbaf8d14a9cdfcef0281919671357f6171ebd4e6.

* removing vm from user image example from this branch

* removed old branch

* azure-2-vms-loadbalancer-lbrules (#13)

* initial commit

* need to change lb_rule & nic

* deploys locally

* updated README

* updated travis and deploy scripts for Hari's repo

* renamed deploy script

* clean up

* prep for PR

* updated readme

* fixing conflict in .travis.yml

* add CI build tag

* initial commit; in progress

* in progress; merged Hashicorp master into this branch

* in progress

* in progress; created nsg

* added vars to deploy; added vnet

* chmod on deploy

* edited vars

* added var in travis

* added var

* added var to deploy

* added storage accounts

* fixed storage typos

* removed storage tags

* added PIPs

* changed dns name vars

* corrected PIP naming convention

* added availability sets

* added master-lb & rules

* added infra lb & rules

* added nics

* added VMs, ready for VM extensions, can modularize in the future

* added vm exts.; nsg is possibly broken; can't ssh

* in progress

* master ext succeeds

* in progress, infra and nodes exts not succeeding

* infra and node extensions fail

* provisions with extensions

* disabled password auth; ssh config added

* changed ssh key vars

* adding ssh var to deploy

* commenting out validation

* in progress; building openshift ext

* troubleshooting openshift deploy script

* changed vm names; added container

* increased os disk size

* in progress; troubleshooting deploy opnshft script

* Updated the readme

* updated deployment scripts; cleaned up variables, use remote-exec

* more variable cleanup

* more cleanup

* simplified password; got rid of a needless comment

* merge conflicts resolved
Annie Hedgpeth 2017-06-15 11:26:59 -05:00, committed by Paul Stack
parent 956ab165bd
commit a37a70b133
6 changed files with 1239 additions and 0 deletions

View File

@ -0,0 +1,114 @@
# OpenShift Origin Deployment Template
This Terraform template was based on [this](https://github.com/Microsoft/openshift-origin) Azure Quickstart Template. Changes made to the ARM template after this example was created may not be reflected here.
## OpenShift Origin with Username / Password
This template currently deploys OpenShift Origin 1.5 RC0.
It deploys OpenShift Origin with basic username / password authentication to OpenShift. You can choose either CentOS or RHEL for the OS. The template includes the following resources:
|Resource |Properties |
|-------------------|------------------------------------------------------------------------------------------------------------------------------------|
|Virtual Network |**Address prefix:** 10.0.0.0/8<br />**Master subnet:** 10.1.0.0/16<br />**Node subnet:** 10.2.0.0/16 |
|Load Balancer |2 probes and 2 rules for TCP 80 and TCP 443 |
|Public IP Addresses|OpenShift Master public IP<br />OpenShift Router public IP attached to Load Balancer |
|Storage Accounts |2 Storage Accounts |
|Virtual Machines |Single master<br />User-defined number of nodes<br />All VMs include a single attached data disk for Docker thin pool logical volume|
If you have a Red Hat subscription and would like to deploy an OpenShift Container Platform (formerly OpenShift Enterprise) cluster, please visit: https://github.com/Microsoft/openshift-container-platform
### Generate SSH Keys
You'll need to generate an SSH key pair in order to provision this template. Ensure that you do not set a passphrase on the private key. <br/>
If you are using a Windows computer, you can download `puttygen.exe`. You will need to export the key to OpenSSH format (from the Conversions menu) to get a valid private key for use with the template.<br/>
On Linux or macOS, you can just use the `ssh-keygen` command. Once you are finished deploying the cluster, you can always generate a new key pair that uses a passphrase and replace the original one used during the initial deployment.
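On Linux/macOS, a minimal key-generation sketch (the output path and comment are placeholders; the empty passphrase is what the provisioners require):
```sh
# Generate a 4096-bit RSA key pair with no passphrase (-N "").
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N "" -C "you@example.com"
```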
### Create Key Vault to store SSH Private Key
You will need to create a Key Vault to store your SSH Private Key that will then be used as part of the deployment.
1. **Create Key Vault using PowerShell**<br/>
a. Create a new resource group: `New-AzureRmResourceGroup -Name 'ResourceGroupName' -Location 'West US'`<br/>
b. Create the Key Vault: `New-AzureRmKeyVault -VaultName 'KeyVaultName' -ResourceGroupName 'ResourceGroupName' -Location 'West US'`<br/>
c. Create a variable with the SSH private key: `$securesecret = ConvertTo-SecureString -String '[copy ssh Private Key here - including line feeds]' -AsPlainText -Force`<br/>
d. Create the Secret: `Set-AzureKeyVaultSecret -Name 'SecretName' -SecretValue $securesecret -VaultName 'KeyVaultName'`<br/>
e. Enable the Key Vault for template deployments: `Set-AzureRmKeyVaultAccessPolicy -VaultName 'KeyVaultName' -ResourceGroupName 'ResourceGroupName' -EnabledForTemplateDeployment`
2. **Create Key Vault using Azure CLI 1.0**<br/>
a. Create new Resource Group: azure group create \<name\> \<location\><br/>
Ex: `azure group create ResourceGroupName 'East US'`<br/>
b. Create Key Vault: azure keyvault create -u \<vault-name\> -g \<resource-group\> -l \<location\><br/>
Ex: `azure keyvault create -u KeyVaultName -g ResourceGroupName -l 'East US'`<br/>
c. Create Secret: azure keyvault secret set -u \<vault-name\> -s \<secret-name\> --file \<private-key-file-name\><br/>
Ex: `azure keyvault secret set -u KeyVaultName -s SecretName --file ~/.ssh/id_rsa`<br/>
d. Enable the Key Vault for template deployment: azure keyvault set-policy -u \<vault-name\> --enabled-for-template-deployment true<br/>
Ex: `azure keyvault set-policy -u KeyVaultName --enabled-for-template-deployment true`<br/>
3. **Create Key Vault using Azure CLI 2.0** (a combined sketch follows this list)<br/>
a. Create a new Resource Group: az group create -n \<name\> -l \<location\><br/>
Ex: `az group create -n ResourceGroupName -l 'East US'`<br/>
b. Create the Key Vault: az keyvault create -n \<vault-name\> -g \<resource-group\> -l \<location\> --enabled-for-template-deployment true<br/>
Ex: `az keyvault create -n KeyVaultName -g ResourceGroupName -l 'East US' --enabled-for-template-deployment true`<br/>
c. Create the Secret: az keyvault secret set --vault-name \<vault-name\> -n \<secret-name\> --file \<private-key-file-name\><br/>
Ex: `az keyvault secret set --vault-name KeyVaultName -n SecretName --file ~/.ssh/id_rsa`<br/>
4. **Clone the OpenShift repository [here](https://github.com/Microsoft/openshift-origin)**<br/>
a. Note the local script path; it will be needed for the remote-exec provisioners on the remote machines.<br/>
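A minimal end-to-end sketch of the Azure CLI 2.0 steps above (the resource group, vault, and secret names are placeholders):
```sh
#!/bin/bash
set -o errexit

# Placeholder names - substitute your own.
RESOURCE_GROUP=ResourceGroupName
VAULT_NAME=KeyVaultName
SECRET_NAME=SecretName

# Create the resource group that will hold the Key Vault.
az group create -n "$RESOURCE_GROUP" -l 'East US'

# Create a Key Vault that template deployments may read from.
az keyvault create -n "$VAULT_NAME" -g "$RESOURCE_GROUP" -l 'East US' \
  --enabled-for-template-deployment true

# Store the SSH private key as a secret in the vault.
az keyvault secret set --vault-name "$VAULT_NAME" -n "$SECRET_NAME" --file ~/.ssh/id_rsa
```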
## Deploy Template
Once you have collected all of the prerequisites for the template, you can deploy it with Terraform.
Monitor the deployment via Terraform; the console URL from the outputs of a successful deployment will look something like this (if using the sample parameters file and the "West US 2" location):
`https://me-master1.westus2.cloudapp.azure.com:8443/console`
The cluster will use self-signed certificates. Accept the warning and proceed to the login page.
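A hedged sketch of a local (non-Docker) deployment; every `-var` value here is a placeholder, and the variable names come from this template's `variables.tf` (the CI scripts below run the same commands inside a container):
```sh
# Fetch modules and validate the configuration first.
terraform get
terraform validate

# Plan with the required variables, then apply the saved plan.
terraform plan -out=out.tfplan \
  -var subscription_id=$ARM_SUBSCRIPTION_ID \
  -var tenant_id=$ARM_TENANT_ID \
  -var aad_client_id=$ARM_CLIENT_ID \
  -var aad_client_secret=$ARM_CLIENT_SECRET \
  -var resource_group_name=openshiftrg \
  -var openshift_cluster_prefix=mycluster \
  -var openshift_password='P4ssw0rd1' \
  -var ssh_public_key="$(cat ~/.ssh/id_rsa.pub)" \
  -var connection_private_ssh_key_path=$HOME/.ssh/id_rsa \
  -var key_vault_resource_group=permanent \
  -var key_vault_name=KeyVaultName \
  -var key_vault_secret=SecretName \
  -var openshift_script_path=../openshift-origin/scripts
terraform apply out.tfplan
```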
### NOTE
Ensure that the DNS label prefixes (derived in this template from `openshift_cluster_prefix`), combined with the deployment location, give you globally unique URLs for the cluster, or the deployment will fail when allocating public IPs with fully qualified domain names like the one above.
### NOTE
This template deploys a bastion host solely so the connection provisioner can run remote-exec commands on machines without public IPs; note the specific dependencies on the order in which VMs are created for this to work properly.
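For manual troubleshooting, a sketch of reaching a node that has no public IP by hopping through the bastion (the FQDN, username, and private IP are placeholders; the template's provisioners use Terraform's `bastion_host` connection settings to do the same thing):
```sh
# SSH to a private node via the bastion host.
ssh -o ProxyCommand="ssh -W %h:%p ocpadmin@mycluster-bastion.westus2.cloudapp.azure.com" \
    ocpadmin@10.2.0.4
```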
### NOTE
The OpenShift Ansible playbook does take a while to run when using VMs backed by Standard Storage. VMs backed by Premium Storage are faster. If you want Premium Storage, select a DS or GS series VM.
<hr />
Be sure to follow the OpenShift instructions to create the necessary DNS entry for the OpenShift Router for access to applications.
## Post-Deployment Operations
This template creates an OpenShift user but does not grant it cluster administrator rights. To do that, perform the following steps:
1. SSH in to the master node.
2. Execute the following command:
```sh
sudo oadm policy add-cluster-role-to-user cluster-admin <user>
```
### Additional OpenShift Configuration Options
You can configure additional settings per the official [OpenShift Origin Documentation](https://docs.openshift.org/latest/welcome/index.html).
A few options you have:
1. Deployment output<br/>
a. `openshift_console_url` - the OpenShift console URL<br/>
b. `openshift_master_ssh` - the SSH command for the master node<br/>
c. `openshift_infra_load_balancer_fqdn` - the infra (router) load balancer FQDN<br/>
To get the deployment output data:<br/>
a. Run `terraform output` from the template directory; it lists the values defined in `outputs.tf`.<br/>
b. In the Azure portal (portal.azure.com), choose 'Resource groups', select your group, then select 'Deployments' to inspect the deployment.<br/>
c. With the Azure CLI: `azure group deployment list <resource group name>`
2. Add additional users. You can find more detail about this in the openshift.org documentation under 'Cluster Administration' and 'Managing Users'. This installation uses htpasswd as the identity provider. To add more users, SSH in to the master node and execute the following command:
```sh
sudo htpasswd /etc/origin/master/htpasswd user1
```
This user can now log in with the `oc` CLI tool or at the OpenShift console URL.
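For example, a short sketch that pulls the console URL from this template's outputs (see `outputs.tf` below) and logs in as the new user; `user1` is a placeholder:
```sh
# Read the console URL from the Terraform outputs and strip the /console
# path to get the API endpoint.
CONSOLE_URL=$(terraform output openshift_console_url)

# The cluster uses self-signed certificates, hence the TLS flag.
oc login "${CONSOLE_URL%/console}" -u user1 --insecure-skip-tls-verify=true
```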

View File

@ -0,0 +1,46 @@
#!/bin/bash
set -o errexit -o nounset
# Run Terraform from the official container image; Azure credentials and
# deployment settings are passed in from the caller's environment via -e.
docker run --rm -it \
-e ARM_CLIENT_ID \
-e ARM_CLIENT_SECRET \
-e ARM_SUBSCRIPTION_ID \
-e ARM_TENANT_ID \
-e AAD_CLIENT_ID \
-e AAD_CLIENT_SECRET \
-e KEY_ENCRYPTION_KEY_URL \
-e SSH_PUBLIC_KEY \
-v /:/data \
--workdir=/data/$(pwd) \
--entrypoint "/bin/sh" \
hashicorp/terraform:light \
-c "/bin/terraform get; \
/bin/terraform validate; \
/bin/terraform plan -out=out.tfplan \
-var subscription_id=$ARM_SUBSCRIPTION_ID \
-var tenant_id=$ARM_TENANT_ID \
-var aad_client_id=$ARM_CLIENT_ID \
-var aad_client_secret=$ARM_CLIENT_SECRET \
-var resource_group_name=$KEY \
-var key_vault_name=$KEY_VAULT_NAME \
-var key_vault_resource_group=$KEY_VAULT_RESOURCE_GROUP \
-var key_vault_secret=$KEY_VAULT_SECRET \
-var openshift_cluster_prefix=$KEY \
-var openshift_password=$PASSWORD \
-var openshift_script_path=$LOCAL_SCRIPT_PATH \
-var ssh_public_key=\"$OS_PUBLIC_KEY\" \
-var connection_private_ssh_key_path=$CONTAINER_PRIVATE_KEY_PATH \
-var master_instance_count=$MASTER_COUNT \
-var infra_instance_count=$INFRA_COUNT \
-var node_instance_count=$NODE_COUNT; \
/bin/terraform apply out.tfplan;"
# cleanup deployed azure resources via azure-cli
# docker run --rm -it \
# azuresdk/azure-cli-python \
# sh -c "az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID > /dev/null; \
# az vm show -g $KEY -n $KEY; \
# az vm encryption show -g $KEY -n $KEY"
# cleanup deployed azure resources via terraform

View File

@ -0,0 +1,24 @@
#!/bin/bash
set -o errexit -o nounset
# Terraform is run via Docker, so make sure Docker is available first.
if docker -v; then
# generate a unique string for CI deployment
export KEY=$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-z' | head -c 12)
export PASSWORD="P4ssw0rd1"
export KEY_VAULT_RESOURCE_GROUP=permanent
export KEY_VAULT_NAME=TerraformVault
export KEY_VAULT_SECRET=OpenShiftSSH
export OS_PUBLIC_KEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCvdOGspeeBhsOZw6OK2WmP3bBUOeZj1yaz6Dw+lzsRmjwNSmJIoGZPzlbdy1lzlkXIm2JaT4h/cUi39w+Q2RZRjxmr7TbLyuidJfFLvRJ35RDullUYLWEPx3csBroPkCv+0qgmTW/MqqjqS4yhlJ01uc9RNx9Jt3XZN7LNr8SUoBzdLCWJa1rpCTtUckO1Jyzi4VwZ2ek+nYPJuJ8hG0KeHnyXDXV4hQZTFtGvtbmgoyoybppFQMbM3a31KZeaWXUeZkZczBsdNRkX8XCDjb6zUmUMQUzZpalFlL1O+rZD0kaXKr0uZWiYOKu2LjnWeDW9x4tig1mf+L84vniP+lLKFW8na3Lzx11ysEpuhIJGPMMI8sjTCnu51PmiwHW2U9OR06skPUO7ZGD0QHg7jKXdz5bHT+1OqXeAStULDiPVRIPrxxpurPXiJRm7JPbPvPqrMqZJ3K7J9W6OGHG3CoDR5RfYlPWURTaVH10stb4hKevasCd+YoLStB1XgMaL/cG9bM0TIWmODV/+pfn800PgxeBn1vABpL0NF8K2POLs37vGJoh/RyGCDVd0HEKArpZj0/g+fv7tr3tFFOCY5bHSuDTZcY8sWPhxKXSismoApM3a+USF5HkDkWSTEiETs2wgUdTSt4MuN2maRXOK2JboQth1Qw+vCOvqcls0dMa0NQ== you@example.com'
export CONTAINER_PRIVATE_KEY_PATH="/data/Users/$USER/.ssh/id_rsa"
export LOCAL_SCRIPT_PATH="/data/Users/$USER/Code/10thmagnitude/openshift-origin/scripts"
export MASTER_COUNT=1
export INFRA_COUNT=1
export NODE_COUNT=1
/bin/sh ./deploy.ci.sh
else
echo "Docker is used to run terraform commands, please install before run: https://docs.docker.com/docker-for-mac/install/"
fi

View File

@ -0,0 +1,826 @@
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.aad_client_id}"
client_secret = "${var.aad_client_secret}"
tenant_id = "${var.tenant_id}"
}
resource "azurerm_resource_group" "rg" {
name = "${var.resource_group_name}"
location = "${var.resource_group_location}"
}
# ******* NETWORK SECURITY GROUPS ***********
resource "azurerm_network_security_group" "master_nsg" {
name = "${var.openshift_cluster_prefix}-master-nsg"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
security_rule {
name = "allow_SSH_in_all"
description = "Allow SSH in from all locations"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_HTTPS_all"
description = "Allow HTTPS connections from all locations"
priority = 200
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_OpenShift_console_in_all"
description = "Allow OpenShift Console connections from all locations"
priority = 300
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "8443"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_security_group" "infra_nsg" {
name = "${var.openshift_cluster_prefix}-infra-nsg"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
security_rule {
name = "allow_SSH_in_all"
description = "Allow SSH in from all locations"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_HTTPS_all"
description = "Allow HTTPS connections from all locations"
priority = 200
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_HTTP_in_all"
description = "Allow HTTP connections from all locations"
priority = 300
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_security_group" "node_nsg" {
name = "${var.openshift_cluster_prefix}-node-nsg"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
security_rule {
name = "allow_SSH_in_all"
description = "Allow SSH in from all locations"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_HTTPS_all"
description = "Allow HTTPS connections from all locations"
priority = 200
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "allow_HTTP_in_all"
description = "Allow HTTP connections from all locations"
priority = 300
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# ******* STORAGE ACCOUNTS ***********
resource "azurerm_storage_account" "bastion_storage_account" {
name = "${var.openshift_cluster_prefix}bsa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type_map["${var.bastion_vm_size}"]}"
}
resource "azurerm_storage_account" "master_storage_account" {
name = "${var.openshift_cluster_prefix}msa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type_map["${var.master_vm_size}"]}"
}
resource "azurerm_storage_account" "infra_storage_account" {
name = "${var.openshift_cluster_prefix}infrasa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type_map["${var.infra_vm_size}"]}"
}
resource "azurerm_storage_account" "nodeos_storage_account" {
name = "${var.openshift_cluster_prefix}nodeossa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type_map["${var.node_vm_size}"]}"
}
resource "azurerm_storage_account" "nodedata_storage_account" {
name = "${var.openshift_cluster_prefix}nodedatasa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "${var.storage_account_type_map["${var.node_vm_size}"]}"
}
resource "azurerm_storage_account" "registry_storage_account" {
name = "${var.openshift_cluster_prefix}regsa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "Standard_LRS"
}
resource "azurerm_storage_account" "persistent_volume_storage_account" {
name = "${var.openshift_cluster_prefix}pvsa"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
account_type = "Standard_LRS"
}
# ******* AVAILABILITY SETS ***********
resource "azurerm_availability_set" "master" {
name = "masteravailabilityset"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
}
resource "azurerm_availability_set" "infra" {
name = "infraavailabilityset"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
}
resource "azurerm_availability_set" "node" {
name = "nodeavailabilityset"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
}
# ******* IP ADDRESSES ***********
resource "azurerm_public_ip" "bastion_pip" {
name = "bastionpip"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
public_ip_address_allocation = "Static"
domain_name_label = "${var.openshift_cluster_prefix}-bastion"
}
resource "azurerm_public_ip" "openshift_master_pip" {
name = "masterpip"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
public_ip_address_allocation = "Static"
domain_name_label = "${var.openshift_cluster_prefix}"
}
resource "azurerm_public_ip" "infra_lb_pip" {
name = "infraip"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
public_ip_address_allocation = "Static"
domain_name_label = "${var.openshift_cluster_prefix}infrapip"
}
# ******* VNETS / SUBNETS ***********
resource "azurerm_virtual_network" "vnet" {
name = "openshiftvnet"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_space = ["10.0.0.0/8"]
depends_on = ["azurerm_virtual_network.vnet"]
}
resource "azurerm_subnet" "master_subnet" {
name = "mastersubnet"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "10.1.0.0/16"
depends_on = ["azurerm_virtual_network.vnet"]
}
resource "azurerm_subnet" "node_subnet" {
name = "nodesubnet"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
resource_group_name = "${azurerm_resource_group.rg.name}"
address_prefix = "10.2.0.0/16"
}
# ******* MASTER LOAD BALANCER ***********
resource "azurerm_lb" "master_lb" {
name = "masterloadbalancer"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
depends_on = ["azurerm_public_ip.openshift_master_pip"]
frontend_ip_configuration {
name = "LoadBalancerFrontEnd"
public_ip_address_id = "${azurerm_public_ip.openshift_master_pip.id}"
}
}
resource "azurerm_lb_backend_address_pool" "master_lb" {
resource_group_name = "${azurerm_resource_group.rg.name}"
name = "loadBalancerBackEnd"
loadbalancer_id = "${azurerm_lb.master_lb.id}"
depends_on = ["azurerm_lb.master_lb"]
}
resource "azurerm_lb_probe" "master_lb" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.master_lb.id}"
name = "8443Probe"
port = 8443
interval_in_seconds = 5
number_of_probes = 2
protocol = "Tcp"
depends_on = ["azurerm_lb.master_lb"]
}
resource "azurerm_lb_rule" "master_lb" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.master_lb.id}"
name = "OpenShiftAdminConsole"
protocol = "Tcp"
frontend_port = 8443
backend_port = 8443
frontend_ip_configuration_name = "LoadBalancerFrontEnd"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.master_lb.id}"
load_distribution = "SourceIP"
idle_timeout_in_minutes = 30
probe_id = "${azurerm_lb_probe.master_lb.id}"
enable_floating_ip = false
depends_on = ["azurerm_lb_probe.master_lb", "azurerm_lb.master_lb", "azurerm_lb_backend_address_pool.master_lb"]
}
resource "azurerm_lb_nat_rule" "master_lb" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.master_lb.id}"
name = "${azurerm_lb.master_lb.name}-SSH-${count.index}"
protocol = "Tcp"
frontend_port = "${count.index + 2200}"
backend_port = 22
frontend_ip_configuration_name = "LoadBalancerFrontEnd"
count = "${var.master_instance_count}"
depends_on = ["azurerm_lb.master_lb"]
}
# ******* INFRA LOAD BALANCER ***********
resource "azurerm_lb" "infra_lb" {
name = "infraloadbalancer"
resource_group_name = "${azurerm_resource_group.rg.name}"
location = "${azurerm_resource_group.rg.location}"
depends_on = ["azurerm_public_ip.infra_lb_pip"]
frontend_ip_configuration {
name = "LoadBalancerFrontEnd"
public_ip_address_id = "${azurerm_public_ip.infra_lb_pip.id}"
}
}
resource "azurerm_lb_backend_address_pool" "infra_lb" {
resource_group_name = "${azurerm_resource_group.rg.name}"
name = "loadBalancerBackEnd"
loadbalancer_id = "${azurerm_lb.infra_lb.id}"
depends_on = ["azurerm_lb.infra_lb"]
}
resource "azurerm_lb_probe" "infra_lb_http_probe" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.infra_lb.id}"
name = "httpProbe"
port = 80
interval_in_seconds = 5
number_of_probes = 2
protocol = "Tcp"
depends_on = ["azurerm_lb.infra_lb"]
}
resource "azurerm_lb_probe" "infra_lb_https_probe" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.infra_lb.id}"
name = "httpsProbe"
port = 443
interval_in_seconds = 5
number_of_probes = 2
protocol = "Tcp"
}
resource "azurerm_lb_rule" "infra_lb_http" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.infra_lb.id}"
name = "OpenShiftRouterHTTP"
protocol = "Tcp"
frontend_port = 80
backend_port = 80
frontend_ip_configuration_name = "LoadBalancerFrontEnd"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.infra_lb.id}"
probe_id = "${azurerm_lb_probe.infra_lb_http_probe.id}"
depends_on = ["azurerm_lb_probe.infra_lb_http_probe", "azurerm_lb.infra_lb", "azurerm_lb_backend_address_pool.infra_lb"]
}
resource "azurerm_lb_rule" "infra_lb_https" {
resource_group_name = "${azurerm_resource_group.rg.name}"
loadbalancer_id = "${azurerm_lb.infra_lb.id}"
name = "OpenShiftRouterHTTPS"
protocol = "Tcp"
frontend_port = 443
backend_port = 443
frontend_ip_configuration_name = "LoadBalancerFrontEnd"
backend_address_pool_id = "${azurerm_lb_backend_address_pool.infra_lb.id}"
probe_id = "${azurerm_lb_probe.infra_lb_https_probe.id}"
depends_on = ["azurerm_lb_probe.infra_lb_https_probe", "azurerm_lb_backend_address_pool.infra_lb"]
}
# ******* NETWORK INTERFACES ***********
resource "azurerm_network_interface" "bastion_nic" {
name = "bastionnic${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.master_nsg.id}"
ip_configuration {
name = "bastionip${count.index}"
subnet_id = "${azurerm_subnet.master_subnet.id}"
private_ip_address_allocation = "Dynamic"
public_ip_address_id = "${azurerm_public_ip.bastion_pip.id}"
}
}
resource "azurerm_network_interface" "master_nic" {
name = "masternic${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.master_nsg.id}"
count = "${var.master_instance_count}"
ip_configuration {
name = "masterip${count.index}"
subnet_id = "${azurerm_subnet.master_subnet.id}"
private_ip_address_allocation = "Dynamic"
load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.master_lb.id}"]
load_balancer_inbound_nat_rules_ids = ["${element(azurerm_lb_nat_rule.master_lb.*.id, count.index)}"]
}
}
resource "azurerm_network_interface" "infra_nic" {
name = "infra_nic${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.infra_nsg.id}"
count = "${var.infra_instance_count}"
ip_configuration {
name = "infraip${count.index}"
subnet_id = "${azurerm_subnet.master_subnet.id}"
private_ip_address_allocation = "Dynamic"
load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.infra_lb.id}"]
}
}
resource "azurerm_network_interface" "node_nic" {
name = "node_nic${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_security_group_id = "${azurerm_network_security_group.node_nsg.id}"
count = "${var.node_instance_count}"
ip_configuration {
name = "nodeip${count.index}"
subnet_id = "${azurerm_subnet.node_subnet.id}"
private_ip_address_allocation = "Dynamic"
}
}
# ******* Bastion Host *******
resource "azurerm_virtual_machine" "bastion" {
name = "${var.openshift_cluster_prefix}-bastion-1"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
network_interface_ids = ["${azurerm_network_interface.bastion_nic.id}"]
vm_size = "${var.bastion_vm_size}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
tags {
displayName = "${var.openshift_cluster_prefix}-bastion VM Creation"
}
os_profile {
computer_name = "${var.openshift_cluster_prefix}-bastion-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.openshift_password}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/${var.admin_username}/.ssh/authorized_keys"
key_data = "${var.ssh_public_key}"
}
}
storage_image_reference {
publisher = "${lookup(var.os_image_map, join("_publisher", list(var.os_image, "")))}"
offer = "${lookup(var.os_image_map, join("_offer", list(var.os_image, "")))}"
sku = "${lookup(var.os_image_map, join("_sku", list(var.os_image, "")))}"
version = "${lookup(var.os_image_map, join("_version", list(var.os_image, "")))}"
}
storage_os_disk {
name = "${var.openshift_cluster_prefix}-master-osdisk${count.index}"
vhd_uri = "${azurerm_storage_account.bastion_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-bastion-osdisk.vhd"
caching = "ReadWrite"
create_option = "FromImage"
disk_size_gb = 60
}
}
# ******* Master VMs *******
resource "azurerm_virtual_machine" "master" {
name = "${var.openshift_cluster_prefix}-master-${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
availability_set_id = "${azurerm_availability_set.master.id}"
network_interface_ids = ["${element(azurerm_network_interface.master_nic.*.id, count.index)}"]
vm_size = "${var.master_vm_size}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
count = "${var.master_instance_count}"
depends_on = ["azurerm_virtual_machine.infra", "azurerm_virtual_machine.node"]
tags {
displayName = "${var.openshift_cluster_prefix}-master VM Creation"
}
connection {
host = "${azurerm_public_ip.openshift_master_pip.fqdn}"
user = "${var.admin_username}"
port = 2200
private_key = "${file(var.connection_private_ssh_key_path)}"
}
provisioner "file" {
source = "${var.openshift_script_path}/masterPrep.sh"
destination = "masterPrep.sh"
}
provisioner "file" {
source = "${var.openshift_script_path}/deployOpenShift.sh"
destination = "deployOpenShift.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x masterPrep.sh",
"chmod +x deployOpenShift.sh",
"sudo bash masterPrep.sh \"${azurerm_storage_account.persistent_volume_storage_account.name}\" \"${var.admin_username}\" && sudo bash deployOpenShift.sh \"${var.admin_username}\" \"${var.openshift_password}\" \"${var.key_vault_secret}\" \"${var.openshift_cluster_prefix}-master\" \"${azurerm_public_ip.openshift_master_pip.fqdn}\" \"${azurerm_public_ip.openshift_master_pip.ip_address}\" \"${var.openshift_cluster_prefix}-infra\" \"${var.openshift_cluster_prefix}-node\" \"${var.node_instance_count}\" \"${var.infra_instance_count}\" \"${var.master_instance_count}\" \"${var.default_sub_domain_type}\" \"${azurerm_storage_account.registry_storage_account.name}\" \"${azurerm_storage_account.registry_storage_account.primary_access_key}\" \"${var.tenant_id}\" \"${var.subscription_id}\" \"${var.aad_client_id}\" \"${var.aad_client_secret}\" \"${azurerm_resource_group.rg.name}\" \"${azurerm_resource_group.rg.location}\" \"${var.key_vault_name}\""
]
}
os_profile {
computer_name = "${var.openshift_cluster_prefix}-master-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.openshift_password}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/${var.admin_username}/.ssh/authorized_keys"
key_data = "${var.ssh_public_key}"
}
}
storage_image_reference {
publisher = "${lookup(var.os_image_map, join("_publisher", list(var.os_image, "")))}"
offer = "${lookup(var.os_image_map, join("_offer", list(var.os_image, "")))}"
sku = "${lookup(var.os_image_map, join("_sku", list(var.os_image, "")))}"
version = "${lookup(var.os_image_map, join("_version", list(var.os_image, "")))}"
}
storage_os_disk {
name = "${var.openshift_cluster_prefix}-master-osdisk${count.index}"
vhd_uri = "${azurerm_storage_account.master_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-master-osdisk${count.index}.vhd"
caching = "ReadWrite"
create_option = "FromImage"
disk_size_gb = 60
}
storage_data_disk {
name = "${var.openshift_cluster_prefix}-master-docker-pool${count.index}"
vhd_uri = "${azurerm_storage_account.master_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-master-docker-pool${count.index}.vhd"
disk_size_gb = "${var.data_disk_size}"
create_option = "Empty"
lun = 0
}
}
# ******* Infra VMs *******
resource "azurerm_virtual_machine" "infra" {
name = "${var.openshift_cluster_prefix}-infra-${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
availability_set_id = "${azurerm_availability_set.infra.id}"
network_interface_ids = ["${element(azurerm_network_interface.infra_nic.*.id, count.index)}"]
vm_size = "${var.infra_vm_size}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
count = "${var.infra_instance_count}"
tags {
displayName = "${var.openshift_cluster_prefix}-infra VM Creation"
}
connection {
type = "ssh"
bastion_host = "${azurerm_public_ip.bastion_pip.fqdn}"
bastion_user = "${var.admin_username}"
bastion_private_key = "${file(var.connection_private_ssh_key_path)}"
host = "${element(azurerm_network_interface.infra_nic.*.private_ip_address, count.index)}"
user = "${var.admin_username}"
private_key = "${file(var.connection_private_ssh_key_path)}"
}
provisioner "file" {
source = "${var.openshift_script_path}/nodePrep.sh"
destination = "nodePrep.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x nodePrep.sh",
"sudo bash nodePrep.sh",
]
}
os_profile {
computer_name = "${var.openshift_cluster_prefix}-infra-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.openshift_password}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/${var.admin_username}/.ssh/authorized_keys"
key_data = "${var.ssh_public_key}"
}
}
storage_image_reference {
publisher = "${lookup(var.os_image_map, join("_publisher", list(var.os_image, "")))}"
offer = "${lookup(var.os_image_map, join("_offer", list(var.os_image, "")))}"
sku = "${lookup(var.os_image_map, join("_sku", list(var.os_image, "")))}"
version = "${lookup(var.os_image_map, join("_version", list(var.os_image, "")))}"
}
storage_os_disk {
name = "${var.openshift_cluster_prefix}-infra-osdisk${count.index}"
vhd_uri = "${azurerm_storage_account.infra_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-infra-osdisk${count.index}.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}
storage_data_disk {
name = "${var.openshift_cluster_prefix}-infra-docker-pool"
vhd_uri = "${azurerm_storage_account.infra_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-infra-docker-pool${count.index}.vhd"
disk_size_gb = "${var.data_disk_size}"
create_option = "Empty"
lun = 0
}
}
# ******* Node VMs *******
resource "azurerm_virtual_machine" "node" {
name = "${var.openshift_cluster_prefix}-node-${count.index}"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
availability_set_id = "${azurerm_availability_set.node.id}"
network_interface_ids = ["${element(azurerm_network_interface.node_nic.*.id, count.index)}"]
vm_size = "${var.node_vm_size}"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
count = "${var.node_instance_count}"
tags {
displayName = "${var.openshift_cluster_prefix}-node VM Creation"
}
connection {
type = "ssh"
bastion_host = "${azurerm_public_ip.bastion_pip.fqdn}"
bastion_user = "${var.admin_username}"
bastion_private_key = "${file(var.connection_private_ssh_key_path)}"
host = "${element(azurerm_network_interface.node_nic.*.private_ip_address, count.index)}"
user = "${var.admin_username}"
private_key = "${file(var.connection_private_ssh_key_path)}"
}
provisioner "file" {
source = "${var.openshift_script_path}/nodePrep.sh"
destination = "nodePrep.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x nodePrep.sh",
"sudo bash nodePrep.sh",
]
}
os_profile {
computer_name = "${var.openshift_cluster_prefix}-node-${count.index}"
admin_username = "${var.admin_username}"
admin_password = "${var.openshift_password}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/${var.admin_username}/.ssh/authorized_keys"
key_data = "${var.ssh_public_key}"
}
}
storage_image_reference {
publisher = "${lookup(var.os_image_map, join("_publisher", list(var.os_image, "")))}"
offer = "${lookup(var.os_image_map, join("_offer", list(var.os_image, "")))}"
sku = "${lookup(var.os_image_map, join("_sku", list(var.os_image, "")))}"
version = "${lookup(var.os_image_map, join("_version", list(var.os_image, "")))}"
}
storage_os_disk {
name = "${var.openshift_cluster_prefix}-node-osdisk"
vhd_uri = "${azurerm_storage_account.nodeos_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-node-osdisk${count.index}.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}
storage_data_disk {
name = "${var.openshift_cluster_prefix}-node-docker-pool${count.index}"
vhd_uri = "${azurerm_storage_account.nodeos_storage_account.primary_blob_endpoint}vhds/${var.openshift_cluster_prefix}-node-docker-pool${count.index}.vhd"
disk_size_gb = "${var.data_disk_size}"
create_option = "Empty"
lun = 0
}
}
# ******* VM EXTENSIONS *******
# resource "azurerm_virtual_machine_extension" "deploy_open_shift_master" {
# name = "masterOpShExt${count.index}"
# location = "${azurerm_resource_group.rg.location}"
# resource_group_name = "${azurerm_resource_group.rg.name}"
# virtual_machine_name = "${element(azurerm_virtual_machine.master.*.name, count.index)}"
# publisher = "Microsoft.Azure.Extensions"
# type = "CustomScript"
# type_handler_version = "2.0"
# auto_upgrade_minor_version = true
# depends_on = ["azurerm_virtual_machine.master", "azurerm_virtual_machine_extension.node_prep", "azurerm_storage_container.vhds", "azurerm_virtual_machine_extension.deploy_infra"]
#
# settings = <<SETTINGS
# {
# "fileUris": [
# "${var.artifacts_location}scripts/masterPrep.sh",
# "${var.artifacts_location}scripts/deployOpenShift.sh"
# ]
# }
# SETTINGS
#
# protected_settings = <<SETTINGS
# {
# "commandToExecute": "bash masterPrep.sh ${azurerm_storage_account.persistent_volume_storage_account.name} ${var.admin_username} && bash deployOpenShift.sh \"${var.admin_username}\" '${var.openshift_password}' \"${var.key_vault_secret}\" \"${var.openshift_cluster_prefix}-master\" \"${azurerm_public_ip.openshift_master_pip.fqdn}\" \"${azurerm_public_ip.openshift_master_pip.ip_address}\" \"${var.openshift_cluster_prefix}-infra\" \"${var.openshift_cluster_prefix}-node\" \"${var.node_instance_count}\" \"${var.infra_instance_count}\" \"${var.master_instance_count}\" \"${var.default_sub_domain_type}\" \"${azurerm_storage_account.registry_storage_account.name}\" \"${azurerm_storage_account.registry_storage_account.primary_access_key}\" \"${var.tenant_id}\" \"${var.subscription_id}\" \"${var.aad_client_id}\" \"${var.aad_client_secret}\" \"${azurerm_resource_group.rg.name}\" \"${azurerm_resource_group.rg.location}\" \"${var.key_vault_name}\""
# }
# SETTINGS
# }
# resource "azurerm_virtual_machine_extension" "deploy_infra" {
# name = "infraOpShExt${count.index}"
# location = "${azurerm_resource_group.rg.location}"
# resource_group_name = "${azurerm_resource_group.rg.name}"
# virtual_machine_name = "${element(azurerm_virtual_machine.infra.*.name, count.index)}"
# publisher = "Microsoft.Azure.Extensions"
# type = "CustomScript"
# type_handler_version = "2.0"
# auto_upgrade_minor_version = true
# depends_on = ["azurerm_virtual_machine.infra"]
#
# settings = <<SETTINGS
# {
# "fileUris": [
# "${var.artifacts_location}scripts/nodePrep.sh"
# ]
# }
# SETTINGS
#
# protected_settings = <<SETTINGS
# {
# "commandToExecute": "bash nodePrep.sh"
# }
# SETTINGS
# }
# resource "azurerm_virtual_machine_extension" "node_prep" {
# name = "nodePrepExt${count.index}"
# location = "${azurerm_resource_group.rg.location}"
# resource_group_name = "${azurerm_resource_group.rg.name}"
# virtual_machine_name = "${element(azurerm_virtual_machine.node.*.name, count.index)}"
# publisher = "Microsoft.Azure.Extensions"
# type = "CustomScript"
# type_handler_version = "2.0"
# auto_upgrade_minor_version = true
# depends_on = ["azurerm_virtual_machine.node", "azurerm_storage_account.nodeos_storage_account"]
#
# settings = <<SETTINGS
# {
# "fileUris": [
# "${var.artifacts_location}scripts/nodePrep.sh"
# ]
# }
# SETTINGS
#
# protected_settings = <<SETTINGS
# {
# "commandToExecute": "bash nodePrep.sh"
# }
# SETTINGS
# }

View File

@ -0,0 +1,23 @@
output "openshift_console_url" {
value = "https://${azurerm_public_ip.openshift_master_pip.fqdn}:8443/console"
}
output "openshift_master_ssh" {
value = "ssh ${var.admin_username}@${azurerm_public_ip.openshift_master_pip.fqdn} -p 2200"
}
output "openshift_infra_load_balancer_fqdn" {
value = "${azurerm_public_ip.infra_lb_pip.fqdn}"
}
output "node_os_storage_account_name" {
value = "${azurerm_storage_account.nodeos_storage_account.name}"
}
output "node_data_storage_account_name" {
value = "${azurerm_storage_account.nodedata_storage_account.name}"
}
output "infra_storage_account_name" {
value = "${azurerm_storage_account.infra_storage_account.name}"
}

View File

@ -0,0 +1,206 @@
variable "resource_group_name" {
description = "Name of the azure resource group in which you will deploy this template."
}
variable "resource_group_location" {
description = "Location of the azure resource group."
default = "southcentralus"
}
variable "subscription_id" {
description = "Subscription ID of the key vault"
}
variable "tenant_id" {
description = "Tenant ID with access to your key vault and subscription"
}
variable "openshift_script_path" {
description = "Local path to openshift scripts to prep nodes and install openshift origin"
}
variable "os_image" {
description = "Select from CentOS (centos) or RHEL (rhel) for the Operating System"
default = "centos"
}
variable "bastion_vm_size" {
description = "Size of the Bastion Virtual Machine. Allowed values: Standard_A4, Standard_A5, Standard_A6, Standard_A7, Standard_A8, Standard_A9, Standard_A10, Standard_A11, Standard_D1, Standard_D2, Standard_D3, Standard_D4, Standard_D11, Standard_D12, Standard_D13, Standard_D14, Standard_D1_v2, Standard_D2_v2, Standard_D3_v2, Standard_D4_v2, Standard_D5_v2, Standard_D11_v2, Standard_D12_v2, Standard_D13_v2, Standard_D14_v2, Standard_G1, Standard_G2, Standard_G3, Standard_G4, Standard_G5, Standard_D1_v2, Standard_DS2, Standard_DS3, Standard_DS4, Standard_DS11, Standard_DS12, Standard_DS13, Standard_DS14, Standard_DS1_v2, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS11_v2, Standard_DS12_v2, Standard_DS13_v2, Standard_DS14_v2, Standard_GS1, Standard_GS2, Standard_GS3, Standard_GS4, Standard_GS5"
default = "Standard_D2_v2"
}
variable "master_vm_size" {
description = "Size of the Master Virtual Machine. Allowed values: Standard_A4, Standard_A5, Standard_A6, Standard_A7, Standard_A8, Standard_A9, Standard_A10, Standard_A11, Standard_D1, Standard_D2, Standard_D3, Standard_D4, Standard_D11, Standard_D12, Standard_D13, Standard_D14, Standard_D1_v2, Standard_D2_v2, Standard_D3_v2, Standard_D4_v2, Standard_D5_v2, Standard_D11_v2, Standard_D12_v2, Standard_D13_v2, Standard_D14_v2, Standard_G1, Standard_G2, Standard_G3, Standard_G4, Standard_G5, Standard_D1_v2, Standard_DS2, Standard_DS3, Standard_DS4, Standard_DS11, Standard_DS12, Standard_DS13, Standard_DS14, Standard_DS1_v2, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS11_v2, Standard_DS12_v2, Standard_DS13_v2, Standard_DS14_v2, Standard_GS1, Standard_GS2, Standard_GS3, Standard_GS4, Standard_GS5"
default = "Standard_DS4_v2"
}
variable "infra_vm_size" {
description = "Size of the Infra Virtual Machine. Allowed values: Standard_A4, Standard_A5, Standard_A6, Standard_A7, Standard_A8, Standard_A9, Standard_A10, Standard_A11,Standard_D1, Standard_D2, Standard_D3, Standard_D4,Standard_D11, Standard_D12, Standard_D13, Standard_D14,Standard_D1_v2, Standard_D2_v2, Standard_D3_v2, Standard_D4_v2, Standard_D5_v2,Standard_D11_v2, Standard_D12_v2, Standard_D13_v2, Standard_D14_v2,Standard_G1, Standard_G2, Standard_G3, Standard_G4, Standard_G5,Standard_D1_v2, Standard_DS2, Standard_DS3, Standard_DS4,Standard_DS11, Standard_DS12, Standard_DS13, Standard_DS14,Standard_DS1_v2, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2,Standard_DS11_v2, Standard_DS12_v2, Standard_DS13_v2, Standard_DS14_v2,Standard_GS1, Standard_GS2, Standard_GS3, Standard_GS4, Standard_GS5"
default = "Standard_DS3_v2"
}
variable "node_vm_size" {
description = "Size of the Node Virtual Machine. Allowed values: Standard_A4, Standard_A5, Standard_A6, Standard_A7, Standard_A8, Standard_A9, Standard_A10, Standard_A11, Standard_D1, Standard_D2, Standard_D3, Standard_D4, Standard_D11, Standard_D12, Standard_D13, Standard_D14, Standard_D1_v2, Standard_D2_v2, Standard_D3_v2, Standard_D4_v2, Standard_D5_v2, Standard_D11_v2, Standard_D12_v2, Standard_D13_v2, Standard_D14_v2, Standard_G1, Standard_G2, Standard_G3, Standard_G4, Standard_G5, Standard_D1_v2, Standard_DS2, Standard_DS3, Standard_DS4, Standard_DS11, Standard_DS12, Standard_DS13, Standard_DS14, Standard_DS1_v2, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS11_v2, Standard_DS12_v2, Standard_DS13_v2, Standard_DS14_v2, Standard_GS1, Standard_GS2, Standard_GS3, Standard_GS4, Standard_GS5"
default = "Standard_DS3_v2"
}
variable "storage_account_type_map" {
description = "This is the storage account type that you will need based on the vm size that you choose (value constraints)"
type = "map"
default = {
Standard_A4 = "Standard_LRS"
Standard_A5 = "Standard_LRS"
Standard_A6 = "Standard_LRS"
Standard_A7 = "Standard_LRS"
Standard_A8 = "Standard_LRS"
Standard_A9 = "Standard_LRS"
Standard_A10 = "Standard_LRS"
Standard_A11 = "Standard_LRS"
Standard_D1 = "Standard_LRS"
Standard_D2 = "Standard_LRS"
Standard_D3 = "Standard_LRS"
Standard_D4 = "Standard_LRS"
Standard_D11 = "Standard_LRS"
Standard_D12 = "Standard_LRS"
Standard_D13 = "Standard_LRS"
Standard_D14 = "Standard_LRS"
Standard_D1_v2 = "Standard_LRS"
Standard_D2_v2 = "Standard_LRS"
Standard_D3_v2 = "Standard_LRS"
Standard_D4_v2 = "Standard_LRS"
Standard_D5_v2 = "Standard_LRS"
Standard_D11_v2 = "Standard_LRS"
Standard_D12_v2 = "Standard_LRS"
Standard_D13_v2 = "Standard_LRS"
Standard_D14_v2 = "Standard_LRS"
Standard_G1 = "Standard_LRS"
Standard_G2 = "Standard_LRS"
Standard_G3 = "Standard_LRS"
Standard_G4 = "Standard_LRS"
Standard_G5 = "Standard_LRS"
Standard_DS1 = "Premium_LRS"
Standard_DS2 = "Premium_LRS"
Standard_DS3 = "Premium_LRS"
Standard_DS4 = "Premium_LRS"
Standard_DS11 = "Premium_LRS"
Standard_DS12 = "Premium_LRS"
Standard_DS13 = "Premium_LRS"
Standard_DS14 = "Premium_LRS"
Standard_DS1_v2 = "Premium_LRS"
Standard_DS2_v2 = "Premium_LRS"
Standard_DS3_v2 = "Premium_LRS"
Standard_DS4_v2 = "Premium_LRS"
Standard_DS5_v2 = "Premium_LRS"
Standard_DS11_v2 = "Premium_LRS"
Standard_DS12_v2 = "Premium_LRS"
Standard_DS13_v2 = "Premium_LRS"
Standard_DS14_v2 = "Premium_LRS"
Standard_DS15_v2 = "Premium_LRS"
Standard_GS1 = "Premium_LRS"
Standard_GS2 = "Premium_LRS"
Standard_GS3 = "Premium_LRS"
Standard_GS4 = "Premium_LRS"
Standard_GS5 = "Premium_LRS"
}
}
variable "os_image_map" {
description = "os image map"
type = "map"
default = {
centos_publisher = "Openlogic"
centos_offer = "CentOS"
centos_sku = "7.3"
centos_version = "latest"
rhel_publisher = "RedHat"
rhel_offer = "RHEL"
rhel_sku = "7.2"
rhel_version = "latest"
}
}
variable "disk_size_gb" {
description = "storage os disk size"
default = 60
}
variable "openshift_cluster_prefix" {
description = "Cluster Prefix used to configure domain name label and hostnames for all nodes - master, infra and nodes. Between 1 and 20 characters"
}
variable "master_instance_count" {
description = "Number of OpenShift Masters nodes to deploy. 1 is non HA and 3 is for HA."
default = 1
}
variable "infra_instance_count" {
description = "Number of OpenShift infra nodes to deploy. 1 is non HA. Choose 2 or 3 for HA."
default = 1
}
variable "node_instance_count" {
description = "Number of OpenShift nodes to deploy. Allowed values: 1-30"
default = 1
}
variable "data_disk_size" {
description = "Size of data disk to attach to nodes for Docker volume - valid sizes are 128 GB, 512 GB and 1023 GB"
default = 128
}
variable "admin_username" {
description = "Admin username for both OS login and OpenShift login"
default = "ocpadmin"
}
variable "openshift_password" {
description = "Password for OpenShift login"
}
variable "ssh_public_key" {
description = "Path to your SSH Public Key"
}
variable "connection_private_ssh_key_path" {
description = "Path to the private ssh key used to connect to machines within the OpenShift cluster."
}
variable "key_vault_resource_group" {
description = "The name of the Resource Group that contains the Key Vault"
}
variable "key_vault_name" {
description = "The name of the Key Vault you will use"
}
variable "key_vault_secret" {
description = "The Secret Name you used when creating the Secret (that contains the Private Key)"
}
variable "aad_client_id" {
description = "Azure Active Directory Client ID also known as Application ID for Service Principal"
}
variable "aad_client_secret" {
description = "Azure Active Directory Client Secret for Service Principal"
}
variable "default_sub_domain_type" {
description = "This will either be 'xipio' (if you don't have your own domain) or 'custom' if you have your own domain that you would like to use for routing"
default = "xipio"
}
variable "default_sub_domain" {
description = "The wildcard DNS name you would like to use for routing if you selected 'custom' above. If you selected 'xipio' above, then this field will be ignored"
default = "contoso.com"
}
variable "api_version_compute" {
default = "2015-06-15"
}
variable "api_version" {
default = "2015-01-01"
}