terraform/examples/azure-spark-and-cassandra-o...
Annie Hedgpeth 14a2c04ddf provider/azurerm: Azurerm example ci updates (#15310)

# Spark & Cassandra on CentOS 7.x

This Terraform template is based on an Azure Quickstart Template. Changes made to the ARM template since this example was created may not be reflected here.

This project configures a Spark cluster (1 master and n slave nodes) and a single-node Cassandra instance on Azure using CentOS 7.x. The base image starts with CentOS 7.3 and is updated to the latest version as part of the provisioning steps.

Please note that Azure Resource Manager is used to provision the environment.

## Software

| Category | Software | Version | Notes |
|---|---|---|---|
| Operating System | CentOS | 7.x | Based on CentOS 7.1, but auto-upgraded to the latest point release |
| Java | OpenJDK | 1.8.0 | Installed on all servers |
| Spark | Spark | 1.6.0 with Hadoop 2.6 | The installation contains the libraries needed for Hadoop 2.6 |
| Cassandra | Cassandra | 3.2 | Installed through DataStax's YUM repository |

## Defaults

| Component | Setting | Default | Notes |
|---|---|---|---|
| Spark - Master | VM Size | Standard D1 V2 | |
| Spark - Master | Storage | Standard LRS | |
| Spark - Master | Internal IP | 10.0.0.5 | |
| Spark - Master | Service User Account | spark | Password-less access |
| Spark - Slave | VM Size | Standard D3 V2 | |
| Spark - Slave | Storage | Standard LRS | |
| Spark - Slave | Internal IP Range | 10.0.1.5 - 10.0.1.255 | |
| Spark - Slave | # of Nodes | 2 | Maximum of 200 |
| Spark - Slave | Availability | 2 fault domains, 5 update domains | |
| Spark - Slave | Service User Account | spark | Password-less access |
| Cassandra | VM Size | Standard D3 V2 | |
| Cassandra | Storage | Standard LRS | |
| Cassandra | Internal IP | 10.2.0.5 | |
| Cassandra | Service User Account | cassandra | Password-less access |

## Prerequisites

1. Ensure you have an Azure subscription.
2. Ensure your subscription has enough available vCPU cores; otherwise deployment will fail with a quota error. The core quota can be increased through a support ticket in the Azure Portal.

## main.tf

The `main.tf` file contains the actual resources that will be deployed. It also contains the Azure Resource Group definition and any defined variables.
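An Azure Resource Group definition of the kind found in `main.tf` looks roughly like the following sketch; the resource label and variable names here are illustrative, not necessarily those used by this example:

```hcl
# Hypothetical sketch of a resource group definition (Terraform 0.9-era syntax).
resource "azurerm_resource_group" "rg" {
  name     = "${var.resource_group}"
  location = "${var.location}"
}
```

All other resources in the template reference this group through `${azurerm_resource_group.rg.name}`, so destroying the group tears down the whole deployment.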

## outputs.tf

This data is output when `terraform apply` is run, and can be queried using the `terraform output` command.
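An output is declared per value you want surfaced after an apply. The output and resource names below are assumptions for illustration; the real names are defined in `outputs.tf`:

```hcl
# Illustrative output; exposes the Spark master's public IP after apply.
output "spark_master_public_ip" {
  value = "${azurerm_public_ip.master.ip_address}"
}
```

After deployment, `terraform output spark_master_public_ip` prints just that value, which is convenient for scripting.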

## provider.tf

Azure requires that an application be added to Azure Active Directory to generate the `client_id`, `client_secret`, and `tenant_id` needed by Terraform (the `subscription_id` can be recovered from your Azure account details). See the Terraform AzureRM provider documentation for full instructions on how to create these credentials and populate your `provider.tf` file.
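A minimal `provider.tf` has the following shape; replace the placeholders with the credentials generated for your Azure Active Directory application:

```hcl
# Sketch of a provider.tf; never commit real values to version control.
provider "azurerm" {
  subscription_id = "REPLACE-WITH-YOUR-SUBSCRIPTION-ID"
  client_id       = "REPLACE-WITH-YOUR-CLIENT-ID"
  client_secret   = "REPLACE-WITH-YOUR-CLIENT-SECRET"
  tenant_id       = "REPLACE-WITH-YOUR-TENANT-ID"
}
```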

## terraform.tfvars

If a `terraform.tfvars` file is present in the current directory, Terraform automatically loads it to populate variables. We don't recommend saving usernames and passwords to version control; instead, you can create a local secret variables file and use `-var-file` to load it.

If you are committing this template to source control, please ensure that you add this file to your `.gitignore`.
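A secret variables file is plain key/value HCL; the variable names below are hypothetical and must match those declared in `variables.tf`:

```hcl
# Hypothetical secret.tfvars; keep this file out of version control.
admin_username = "sparkadmin"
admin_password = "ChangeMe-Str0ngPassw0rd"
```

It is then loaded explicitly with `terraform apply -var-file=secret.tfvars`.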

## variables.tf

The `variables.tf` file contains all of the input parameters that the user can specify when deploying this Terraform template.
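Each input parameter is a `variable` block, optionally with a description and a sane default. The example below is illustrative; the actual names and defaults are in `variables.tf`:

```hcl
# Illustrative variable declaration (0.9-era syntax).
variable "vm_size_slave" {
  description = "VM size for the Spark slave nodes"
  default     = "Standard_D3_v2"
}
```

Any variable without a default must be supplied on the command line, via a `.tfvars` file, or interactively at plan/apply time.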

## Post-Deployment

1. All servers have a public IP and the SSH port enabled by default. These can be disabled or modified in the template or through the Azure Portal.
2. All servers are configured with the same username and password. You may SSH into each server to verify connectivity.
3. The Spark Web UI runs on port 8080. Access it at `MASTER_WEB_UI_PUBLIC_IP:8080` in your browser; the public IP is available in the Terraform outputs as well as through the Azure Portal.
4. Delete the Resource Group that was created to stage the provisioning scripts.