This repository has been archived by the owner on Oct 31, 2019. It is now read-only.

Modularize terraform stack #4

Merged: 8 commits, Jan 4, 2018
Changes from 7 commits
7 changes: 2 additions & 5 deletions .circleci/config.yml
@@ -7,15 +7,12 @@ jobs:
      AWS_DEFAULT_REGION: us-east-1
    steps:
      - checkout
-     - run:
-         name: Set variables
-         command: cd terraform && cp backend.tfvars.example backend.tfvars && cp terraform.tfvars.example terraform.tfvars
      - run:
          name: EKK Stack - Set up Terraform
-         command: cd terraform && terraform init -backend=false
+         command: cd terraform/test && terraform init -backend=false
      - run:
          name: EKK Stack - Validate Terraform
-         command: cd terraform && terraform validate
+         command: cd terraform/test && terraform validate -check-variables=false

workflows:
  version: 2
21 changes: 21 additions & 0 deletions README.md
@@ -8,10 +8,31 @@ The stack also creates a small EC2 instance (defined in ec2-test.tf) that will b

## Deployment

This stack is meant to be consumed as a module in your existing Terraform stack, using code similar to this:

````
module "ekk_stack" {
  source                     = "github.com/GSA/devsecops-ekk-stack"
  s3_logging_bucket_name     = "${var.s3_logging_bucket_name}"
  es_kinesis_delivery_stream = "${var.es_kinesis_delivery_stream}"
}
````

Contributor: It would actually be github.com/GSA/devsecops-ekk-stack//terraform - confirmed by adding to devsecops-example: GSA/devsecops-example#65. Alternatively, can stick to the standard module structure and put the reusable module in the root directory.

Contributor (Author): Fixed.

Contributor: Minor: de-indent this block, use three backticks, and hcl after the top three to get syntax highlighting.

Contributor (Author): Fixed.


...where the variables referenced above are defined in your terraform.tfvars file.
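For reference, a minimal terraform.tfvars for the consuming stack might look like this sketch (both values are placeholders, not defaults shipped with this module):

```hcl
# Hypothetical values; use a globally unique bucket name and whatever
# delivery stream name you prefer.
s3_logging_bucket_name     = "my-org-ekk-logging-bucket"
es_kinesis_delivery_stream = "my-org-ekk-delivery-stream"
```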

Following the steps below will emulate this exact behavior. You must execute them from the test directory just below the terraform directory (terraform/test). The test consumes the stack as a module and deploys it, then sets up an EC2 instance that installs the aws-kinesis-agent and configures it to stream to the Kinesis Firehose delivery stream.
Contributor: Minor: I'd split the README by Usage and Development (or Test) headings, to more clearly delineate.


The Kinesis stream will send to Elasticsearch and S3.
Contributor: We'll probably want to add a bit about "you'll need to install a logging agent on your instances", or "here's how to forward logs", even if we just link elsewhere. Can take care of that separately though.
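As a sketch of that, installing and starting the agent on an Amazon Linux instance usually amounts to the following (standard aws-kinesis-agent package and service names; the linked docs would be authoritative):

```sh
# Hypothetical setup on Amazon Linux: install the agent, then point the
# flows in /etc/aws-kinesis/agent.json at the Firehose delivery stream.
sudo yum install -y aws-kinesis-agent
sudo service aws-kinesis-agent start
```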


The EC2 instance also configures itself with a cron job that curls its local apache2 daemon 5900 times every minute. This generates logs for the Kinesis agent to capture. To verify that everything is working properly, you can log in to the EC2 instance and tail the aws-kinesis-agent log (/var/log/aws-kinesis/aws-kinesis-agent.log), or look in the web console at the CloudWatch metrics for the Firehose delivery stream itself.
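For example, a quick check from a shell on the instance might look like this (log path from the paragraph above; the grep pattern is illustrative):

```sh
# Follow the agent log and watch for batches being shipped to Firehose.
tail -f /var/log/aws-kinesis/aws-kinesis-agent.log | grep -i "records sent"
```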

Use these steps to deploy the test.

1. Create an S3 bucket for the terraform state.
1. Run the following commands:

````sh
cd terraform/test
cp backend.tfvars.example backend.tfvars
cp terraform.tfvars.example terraform.tfvars
````
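Then fill in backend.tfvars with the state bucket you created. A hypothetical example, using the standard Terraform S3 backend keys (the committed backend.tfvars.example is authoritative):

```hcl
# Placeholder values; match these to the S3 bucket created in step 1.
bucket = "my-terraform-state-bucket"
key    = "devsecops-ekk-stack/terraform.tfstate"
region = "us-east-1"
```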
9 changes: 0 additions & 9 deletions terraform/aws.tf

This file was deleted.

81 changes: 0 additions & 81 deletions terraform/ec2-test.tf

This file was deleted.

42 changes: 0 additions & 42 deletions terraform/elasticsearch.tf

This file was deleted.

36 changes: 0 additions & 36 deletions terraform/kinesis.tf

This file was deleted.

19 changes: 0 additions & 19 deletions terraform/logs.tf

This file was deleted.

108 changes: 107 additions & 1 deletion terraform/iam.tf → terraform/main.tf
@@ -1,3 +1,46 @@
resource "aws_elasticsearch_domain" "elasticsearch" {
domain_name = "${var.es_domain_name}"
elasticsearch_version = "${var.es_version}"

cluster_config {
dedicated_master_enabled = "true"
instance_type = "${var.es_instance_type}"
instance_count = "${var.es_instance_count}"
zone_awareness_enabled = "true"
dedicated_master_type = "${var.es_dedicated_master_instance_type}"
dedicated_master_count = "${var.es_dedicated_master_count}"
}

advanced_options {
"rest.action.multi.allow_explicit_index" = "true"
}

ebs_options {
ebs_enabled = "true"
iops = "0"
volume_size = "20"
Contributor: Presumably we'd want this configurable, yeah?

Contributor (Author): I'd like to make a pass at parameterizing a lot of things in a separate PR.

    volume_type = "gp2"
  }

  snapshot_options {
    automated_snapshot_start_hour = 0
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "es:*",
      "Principal": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
CONFIG
}
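Picking up the review note above about making the volume size configurable, the parameterization might look like this sketch (the variable name and default are hypothetical, not part of this PR):

```hcl
# Hypothetical variable; ebs_options would then set
# volume_size = "${var.es_volume_size}" instead of the literal.
variable "es_volume_size" {
  description = "EBS volume size (in GB) for each Elasticsearch data node"
  default     = "20"
}
```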

# IAM roles and policies for this stack

# Policies
@@ -101,4 +144,67 @@ resource "aws_iam_role" "elasticsearch_delivery_role" {
resource "aws_iam_role_policy_attachment" "es_delivery_full_access" {
role = "${aws_iam_role.elasticsearch_delivery_role.name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonESFullAccess"
}
}

resource "aws_kinesis_firehose_delivery_stream" "extended_s3_stream" {
name = "${var.es_kinesis_delivery_stream}"
destination = "elasticsearch"

elasticsearch_configuration {
buffering_interval = 60
buffering_size = 50
cloudwatch_logging_options {
enabled = "true"
log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}"
log_stream_name = "${aws_cloudwatch_log_stream.es_log_stream.name}"
}
domain_arn = "${aws_elasticsearch_domain.elasticsearch.arn}"
role_arn = "${aws_iam_role.elasticsearch_delivery_role.arn}"
index_name = "logmonitor"
type_name = "log"
index_rotation_period = "NoRotation"
retry_duration = "60"
role_arn = "${aws_iam_role.elasticsearch_delivery_role.arn}"
s3_backup_mode = "AllDocuments"
}

s3_configuration {
role_arn = "${aws_iam_role.s3_delivery_role.arn}"
bucket_arn = "${aws_s3_bucket.s3_logging_bucket.arn}"
buffer_size = 10
buffer_interval = 300
compression_format = "UNCOMPRESSED"
prefix = "firehose/"
cloudwatch_logging_options {
enabled = "true"
log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}"
log_stream_name = "${aws_cloudwatch_log_stream.s3_log_stream.name}"
}
}
}

resource "aws_cloudwatch_log_group" "es_log_group" {
name = "${var.es_log_group_name}"
retention_in_days = "${var.es_log_retention_in_days}"
}

resource "aws_cloudwatch_log_group" "s3_log_group" {
name = "${var.s3_log_group_name}"
retention_in_days = "${var.s3_log_retention_in_days}"
}

resource "aws_cloudwatch_log_stream" "es_log_stream" {
name = "${var.es_log_stream_name}"
log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}"
}

resource "aws_cloudwatch_log_stream" "s3_log_stream" {
name = "${var.s3_log_stream_name}"
log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}"
}

resource "aws_s3_bucket" "s3_logging_bucket" {
bucket = "${var.s3_logging_bucket_name}"
acl = "private"
}

4 changes: 4 additions & 0 deletions terraform/outputs.tf
@@ -5,3 +5,7 @@ output "es_domain_arn" {
output "es_domain_endpoint" {
value = "${aws_elasticsearch_domain.elasticsearch.endpoint}"
}

output "elasticsearch_instance_profile_id" {
value = "${aws_iam_instance_profile.elasticsearch_instance_profile.id}"
}
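A consuming stack could then reference the new output; a hypothetical sketch (the resource name and AMI are placeholders):

```hcl
# Hypothetical consumer: attach the instance profile exported by the
# module to an EC2 instance in the calling stack.
resource "aws_instance" "log_source" {
  ami                  = "ami-12345678" # placeholder AMI
  instance_type        = "t2.micro"
  iam_instance_profile = "${module.ekk_stack.elasticsearch_instance_profile_id}"
}
```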
4 changes: 0 additions & 4 deletions terraform/s3.tf

This file was deleted.

3 changes: 3 additions & 0 deletions terraform/test/aws.tf
@@ -0,0 +1,3 @@
data "aws_region" "current" {
current = true
}
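Elsewhere in the test stack, this data source can be interpolated like so (the output name is illustrative):

```hcl
# Hypothetical use of the region data source defined above.
output "test_region" {
  value = "${data.aws_region.current.name}"
}
```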