This repository has been archived by the owner on Oct 31, 2019. It is now read-only.

Merge pull request #4 from GSA/modularize-terraform-stack
Modularize terraform stack
Vermyndax authored Jan 4, 2018
2 parents 0e98d36 + 19a60b2 commit 988fe4c
Showing 20 changed files with 235 additions and 201 deletions.
7 changes: 2 additions & 5 deletions .circleci/config.yml
```diff
@@ -7,15 +7,12 @@ jobs:
       AWS_DEFAULT_REGION: us-east-1
     steps:
       - checkout
-      - run:
-          name: Set variables
-          command: cd terraform && cp backend.tfvars.example backend.tfvars && cp terraform.tfvars.example terraform.tfvars
       - run:
           name: EKK Stack - Set up Terraform
-          command: cd terraform && terraform init -backend=false
+          command: cd terraform/test && terraform init -backend=false
       - run:
           name: EKK Stack - Validate Terraform
-          command: cd terraform && terraform validate
+          command: cd terraform/test && terraform validate -check-variables=false
 
 workflows:
   version: 2
```
24 changes: 23 additions & 1 deletion README.md
@@ -6,12 +6,31 @@ This stack is based on [this CloudFormation example.](https://us-west-2.console.

The stack also creates a small EC2 instance (defined in ec2-test.tf) that will be configured with a Kinesis agent to test writing into the stream. If you do not wish to deploy this instance, move this file out of the terraform directory or change its extension.

## Deployment
## Usage

This stack is meant to be consumed as a module in your existing Terraform stack. You can consume it with code similar to this:

```hcl
module "ekk_stack" {
  source                     = "github.com/GSA/devsecops-ekk-stack//terraform"
  s3_logging_bucket_name     = "${var.s3_logging_bucket_name}"
  es_kinesis_delivery_stream = "${var.es_kinesis_delivery_stream}"
}
```

...where the variables referenced above are defined in your terraform.tfvars file. `var.s3_logging_bucket_name` should be set to the name of a bucket (which the stack will create) to hold copies of the Kinesis Firehose logs. `var.es_kinesis_delivery_stream` should be set to the name of the Firehose delivery stream that you wish to use; the EKK stack will create the delivery stream with this name.

The Kinesis stream will send to Elasticsearch and S3.
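
For illustration, a minimal terraform.tfvars might look like the following. The bucket and stream names are placeholders, not values from this repository:

```hcl
# Example values only — pick names appropriate to your environment.
# The module will create both the S3 bucket and the Firehose delivery stream.
s3_logging_bucket_name     = "my-ekk-firehose-logs"
es_kinesis_delivery_stream = "my-ekk-delivery-stream"
```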

## Test Deployment

Use these steps to deploy the test.

1. Create an S3 bucket for the terraform state.
1. Run the following commands:

````sh
cd terraform/test
cp backend.tfvars.example backend.tfvars
cp terraform.tfvars.example terraform.tfvars
````
@@ -35,3 +54,6 @@ The stack also creates a small EC2 instance (defined in ec2-test.tf) that will b
````sh
terraform apply
````
Following the steps above will emulate the intended behavior of the stack. You must execute the commands from the test directory just below the terraform directory. The test consumes the stack as a module and deploys it, then sets up an EC2 instance that installs the aws-kinesis-agent and configures it to stream to the Kinesis Firehose delivery stream.

The EC2 instance also configures itself with a cron job that curls its local apache2 daemon 5900 times every minute. This generates logs for the Kinesis agent to capture. To verify that everything is working, log in to the EC2 instance and tail the aws-kinesis agent log (`/var/log/aws-kinesis/aws-kinesis-agent.log`), or check the CloudWatch metrics for the Firehose delivery stream in the web console.
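
The commit does not show the exact cron entry, but a sketch of the idea might look like the following. The loop shape and URL are assumptions for illustration, not taken from the stack's actual user-data:

````sh
# Hypothetical crontab entry (assumed form): every minute, hit the local
# Apache daemon 5900 times so the access log fills with entries for the
# Kinesis agent to pick up and forward to Firehose.
* * * * * for i in $(seq 1 5900); do curl -s -o /dev/null http://localhost/; done
````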
9 changes: 0 additions & 9 deletions terraform/aws.tf

This file was deleted.

81 changes: 0 additions & 81 deletions terraform/ec2-test.tf

This file was deleted.

42 changes: 0 additions & 42 deletions terraform/elasticsearch.tf

This file was deleted.

36 changes: 0 additions & 36 deletions terraform/kinesis.tf

This file was deleted.

19 changes: 0 additions & 19 deletions terraform/logs.tf

This file was deleted.

108 changes: 107 additions & 1 deletion terraform/iam.tf → terraform/main.tf
@@ -1,3 +1,46 @@
resource "aws_elasticsearch_domain" "elasticsearch" {
  domain_name           = "${var.es_domain_name}"
  elasticsearch_version = "${var.es_version}"

  cluster_config {
    dedicated_master_enabled = "true"
    instance_type            = "${var.es_instance_type}"
    instance_count           = "${var.es_instance_count}"
    zone_awareness_enabled   = "true"
    dedicated_master_type    = "${var.es_dedicated_master_instance_type}"
    dedicated_master_count   = "${var.es_dedicated_master_count}"
  }

  advanced_options {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  ebs_options {
    ebs_enabled = "true"
    iops        = "0"
    volume_size = "20"
    volume_type = "gp2"
  }

  snapshot_options {
    automated_snapshot_start_hour = 0
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "es:*",
      "Principal": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
CONFIG
}

# IAM roles and policies for this stack

# Policies
@@ -101,4 +144,67 @@ resource "aws_iam_role" "elasticsearch_delivery_role" {
resource "aws_iam_role_policy_attachment" "es_delivery_full_access" {
  role       = "${aws_iam_role.elasticsearch_delivery_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonESFullAccess"
}

resource "aws_kinesis_firehose_delivery_stream" "extended_s3_stream" {
  name        = "${var.es_kinesis_delivery_stream}"
  destination = "elasticsearch"

  elasticsearch_configuration {
    buffering_interval = 60
    buffering_size     = 50

    cloudwatch_logging_options {
      enabled         = "true"
      log_group_name  = "${aws_cloudwatch_log_group.es_log_group.name}"
      log_stream_name = "${aws_cloudwatch_log_stream.es_log_stream.name}"
    }

    domain_arn            = "${aws_elasticsearch_domain.elasticsearch.arn}"
    role_arn              = "${aws_iam_role.elasticsearch_delivery_role.arn}"
    index_name            = "logmonitor"
    type_name             = "log"
    index_rotation_period = "NoRotation"
    retry_duration        = "60"
    s3_backup_mode        = "AllDocuments"
  }

  s3_configuration {
    role_arn           = "${aws_iam_role.s3_delivery_role.arn}"
    bucket_arn         = "${aws_s3_bucket.s3_logging_bucket.arn}"
    buffer_size        = 10
    buffer_interval    = 300
    compression_format = "UNCOMPRESSED"
    prefix             = "firehose/"

    cloudwatch_logging_options {
      enabled         = "true"
      log_group_name  = "${aws_cloudwatch_log_group.s3_log_group.name}"
      log_stream_name = "${aws_cloudwatch_log_stream.s3_log_stream.name}"
    }
  }
}

resource "aws_cloudwatch_log_group" "es_log_group" {
  name              = "${var.es_log_group_name}"
  retention_in_days = "${var.es_log_retention_in_days}"
}

resource "aws_cloudwatch_log_group" "s3_log_group" {
  name              = "${var.s3_log_group_name}"
  retention_in_days = "${var.s3_log_retention_in_days}"
}

resource "aws_cloudwatch_log_stream" "es_log_stream" {
  name           = "${var.es_log_stream_name}"
  log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}"
}

resource "aws_cloudwatch_log_stream" "s3_log_stream" {
  name           = "${var.s3_log_stream_name}"
  log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}"
}

resource "aws_s3_bucket" "s3_logging_bucket" {
  bucket = "${var.s3_logging_bucket_name}"
  acl    = "private"
}

4 changes: 4 additions & 0 deletions terraform/outputs.tf
@@ -5,3 +5,7 @@ output "es_domain_arn" {
output "es_domain_endpoint" {
  value = "${aws_elasticsearch_domain.elasticsearch.endpoint}"
}

output "elasticsearch_instance_profile_id" {
  value = "${aws_iam_instance_profile.elasticsearch_instance_profile.id}"
}
4 changes: 0 additions & 4 deletions terraform/s3.tf

This file was deleted.

3 changes: 3 additions & 0 deletions terraform/test/aws.tf
@@ -0,0 +1,3 @@
data "aws_region" "current" {
  current = true
}
File renamed without changes.
