Continuous deployment via a GitOps approach, using AWS Batch on Fargate along with AWS Step Functions to enqueue and orchestrate on-demand terraform/terragrunt jobs.
- Run terraform/terragrunt commands in a specific account
- TTL (time to live)
  - Install ephemeral infrastructure with a time to live (TTL)
  - Nuke an ephemeral account from chatops
  - Nuke ephemeral accounts in a nightly cron job
- Production
  - Blue (Canary) Green deployment
  - Clone into multiple regions
  - A/B test deployment
  - Rollback
  - Detect and alert on configuration drift
- Testing
  - Load tests
  - End-to-end tests
```
/run
  tgCommand=[apply|plan|destroy|validate|output|show]
  stack=[live|test|light|...]
  targetAwsRegion=[eu-west-1|...]
  workspaceId=[testing|staging|production-1|production-2|pipeline|database]
```

Note: all parameters must be given on a single line.
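For example, a fully qualified command with every parameter set (the values here are illustrative) would look like:

```
/run tgCommand=plan stack=light targetAwsRegion=eu-west-1 workspaceId=testing
```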
You don't need to have all accounts. Only two accounts, pipeline and testing, are required to test the pipeline. You can customize the organizational units, renaming or restructuring them for your requirements. That said, it is best to keep ephemeral accounts under the same organizational parent, since they are nuked every night in a cron job.
Fork the sf-pipeline repository along with the sf-infra and sf-app repositories, and clone your forks into the same parent directory in your local environment.
- parent-dir
- sf-pipeline
- sf-infra
- sf-app
- Change directory to `sf-pipeline/infra/pipeline/live/all/terraform`
- Duplicate `sample-custom-inputs.tfvars.json` and rename the copy to `custom-inputs.auto.tfvars.json`:
```json
{
  "pipeline_account": "<your-pipeline-account>"
}
```
- Run `terraform apply`
- In your forked repositories, enter the following secrets under Settings → Secrets → Actions:
  - AWS_ACCESS_KEY_ID (for an IAM user named `cicd` with admin rights in your pipeline account)
  - AWS_SECRET_ACCESS_KEY (for the same `cicd` user)
  - AWS_REGION (default region for the pipeline account)
  - PAT_WORKFLOW (personal access token with the minimum rights needed to execute workflows)
- In AWS Secrets Manager, create the following secrets for every environment you would like to use, saving them under exactly these names:
- PIPELINE_AWS_ACCESS (Required)
- TESTING_AWS_ACCESS (Required)
- STAGING_AWS_ACCESS (Optional)
- PRODUCTION1_AWS_ACCESS (Optional)
- PRODUCTION2_AWS_ACCESS (Optional)
- DATABASE_AWS_ACCESS (Optional)
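As a sketch, one of the required secrets could be created with the AWS CLI. The JSON keys inside the secret string are an assumption for illustration; check sf-pipeline for the exact payload it expects:

```shell
# Hypothetical secret payload: the keys the pipeline actually reads from
# each *_AWS_ACCESS secret may differ -- verify against sf-pipeline's code.
aws secretsmanager create-secret \
  --name PIPELINE_AWS_ACCESS \
  --secret-string '{"aws_access_key_id":"AKIA...","aws_secret_access_key":"..."}'
```

Repeat for TESTING_AWS_ACCESS and any optional environments you enable.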
In sf-infra, create a test branch, open a PR, and enter the following as a new comment:

```
/run tgCommand=apply stack=light
```

The other available stacks are `test`, which creates no real infrastructure, and `live`, which creates a real sample Kubernetes cluster with a Postgres RDS instance and other components.

```
/run tgCommand=apply stack=live
/run tgCommand=plan stack=light
```
- Create a hosted zone in your master account
- Create a hosted zone in each organizational sub-account you use in your GitOps pipeline
- Enter the sub-accounts' NS records in your master account's Route 53 configuration
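The delegation steps above can be sketched with the AWS CLI. Zone names and IDs are placeholders; run the first two commands with the sub-account's credentials and the last with the master account's:

```shell
# In the sub-account: create the hosted zone and read its NS records.
aws route53 create-hosted-zone --name sub.example.com \
  --caller-reference "$(date +%s)"
aws route53 get-hosted-zone --id <sub-zone-id> \
  --query 'DelegationSet.NameServers'

# In the master account: delegate the subdomain by upserting an NS record
# set containing those name servers (ns-delegation.json is your change batch).
aws route53 change-resource-record-sets --hosted-zone-id <master-zone-id> \
  --change-batch file://ns-delegation.json
```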
Homebrew:

```
brew install terraform terragrunt awscli jq [email protected] direnv gettext rdfind
```
Other (Debian-like systems):

```
# jq, direnv and python are available in the standard package repositories
sudo apt-get install jq direnv python3 awscli gettext uuid-runtime

# For terraform you can either add the hashicorp repo:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
# ...or manually download the binary and place it somewhere on your $PATH
# (similar to the terragrunt process below)
```
```
# For terragrunt you need to manually download it to an appropriate folder
# and make it executable:
# https://terragrunt.gruntwork.io/docs/getting-started/install/#download-from-releases-page
pushd /tmp/
wget https://github.com/gruntwork-io/terragrunt/releases/download/v0.36.3/terragrunt_linux_amd64
mv terragrunt_linux_amd64 ~/.local/bin/terragrunt  # move to a folder that's in your $PATH
chmod +x ~/.local/bin/terragrunt                   # make executable
popd
```
You'll need to install `envsubst`. On Debian-like systems it is part of the `gettext-base` package:

```
sudo apt-get install gettext-base
```