Modularize terraform stack #4
@@ -6,12 +6,31 @@ This stack is based on [this CloudFormation example.](https://us-west-2.console.
The stack also creates a small EC2 instance (defined in ec2-test.tf) that will be configured with a Kinesis agent to test writing into the stream. If you do not wish to deploy this instance, move the file out of the terraform directory or change its extension.

## Deployment
## Usage

This stack is meant to be consumed as a module in your existing Terraform stack. You can consume it with code similar to this:
```hcl
module "ekk_stack" {
  source                     = "github.com/GSA/devsecops-ekk-stack//terraform"
  s3_logging_bucket_name     = "${var.s3_logging_bucket_name}"
  es_kinesis_delivery_stream = "${var.es_kinesis_delivery_stream}"
}
```
...where the variables referenced above are defined in your terraform.tfvars file. `s3_logging_bucket_name` should be set to the name of a bucket (which the stack will create) to hold copies of the Kinesis Firehose logs. `es_kinesis_delivery_stream` should be set to the name of the Firehose delivery stream you wish to use; the EKK stack will create the delivery stream with this name.
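For illustration, a terraform.tfvars file for a stack consuming this module might look like the sketch below; the bucket and stream names are placeholders, not values from this repo:

```hcl
# Hypothetical terraform.tfvars values -- names are examples only
s3_logging_bucket_name     = "my-ekk-firehose-backup-bucket"
es_kinesis_delivery_stream = "my-ekk-delivery-stream"
```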
The Kinesis Firehose delivery stream will send the logs to both Elasticsearch and S3.

> Review comment: We'll probably want to add a bit about "you'll need to install a logging agent on your instances", or "here's how to forward logs", even if we just link elsewhere. Can take care of that separately though.
## Test Deployment

Use these steps to deploy the test.

1. Create an S3 bucket for the terraform state.
1. Run the following commands (see the backend.tfvars sketch below):
````sh
cd terraform/test
cp backend.tfvars.example backend.tfvars
cp terraform.tfvars.example terraform.tfvars
````
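The copied backend.tfvars supplies the S3 backend settings for the test's Terraform state. As a rough, hypothetical sketch (the bucket name and key are placeholders and may not match backend.tfvars.example exactly):

```hcl
# Hypothetical backend.tfvars -- point these at the state bucket created in step 1
bucket = "my-terraform-state-bucket"
key    = "devsecops-ekk-stack/test/terraform.tfstate"
region = "us-east-1"
```

These values are typically passed to Terraform with `terraform init -backend-config=backend.tfvars`.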
@@ -35,3 +54,6 @@ The stack also creates a small EC2 instance (defined in ec2-test.tf) that will b
````sh
terraform apply
````
Following the steps above will emulate the intended behavior of the stack. You must run the commands from the test directory just below the terraform directory. The test consumes the stack as a module and deploys it, then sets up an EC2 instance that installs the aws-kinesis-agent and configures it to stream to the Kinesis Firehose delivery stream.
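For orientation, the test wrapper's consumption of the module might look roughly like the sketch below. This is an illustration only, assuming a relative module source and an S3 backend; it is not the repo's actual test code:

```hcl
# Rough sketch of a test wrapper (illustrative only, not the actual contents of terraform/test)
terraform {
  backend "s3" {}
}

module "ekk_stack" {
  source                     = "../"
  s3_logging_bucket_name     = "${var.s3_logging_bucket_name}"
  es_kinesis_delivery_stream = "${var.es_kinesis_delivery_stream}"
}
```

The empty backend "s3" block relies on the partial configuration supplied via backend.tfvars at `terraform init` time.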
The EC2 instance also configures itself with a cron job that curls its local apache2 daemon 5900 times every minute. This generates log entries for the Kinesis agent to capture. To verify that everything is working properly, you can log in to the EC2 instance and tail the aws-kinesis-agent log (/var/log/aws-kinesis/aws-kinesis-agent.log), or look at the CloudWatch metrics for the Firehose delivery stream in the web console.
@@ -1,3 +1,46 @@
resource "aws_elasticsearch_domain" "elasticsearch" { | ||
domain_name = "${var.es_domain_name}" | ||
elasticsearch_version = "${var.es_version}" | ||
|
||
cluster_config { | ||
dedicated_master_enabled = "true" | ||
instance_type = "${var.es_instance_type}" | ||
instance_count = "${var.es_instance_count}" | ||
zone_awareness_enabled = "true" | ||
dedicated_master_type = "${var.es_dedicated_master_instance_type}" | ||
dedicated_master_count = "${var.es_dedicated_master_count}" | ||
} | ||
|
||
advanced_options { | ||
"rest.action.multi.allow_explicit_index" = "true" | ||
} | ||
|
||
ebs_options { | ||
ebs_enabled = "true" | ||
iops = "0" | ||
volume_size = "20" | ||
> Review comment: Presumably we'd want this configurable, yeah?
>
> Reply: I'd like to make a pass at parameterizing a lot of things in a separate PR.
    volume_type = "gp2"
  }

  snapshot_options {
    automated_snapshot_start_hour = 0
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "es:*",
      "Principal": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
CONFIG
}
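On the review thread above about making the EBS volume size configurable: a minimal sketch of that parameterization might look like the following. The variable name es_volume_size is a hypothetical choice for illustration and is not part of this PR.

```hcl
# Hypothetical variable for parameterizing the EBS volume size (not part of this PR)
variable "es_volume_size" {
  description = "EBS volume size (GiB) for each Elasticsearch data node"
  default     = "20"
}

# ebs_options would then reference it:
#   volume_size = "${var.es_volume_size}"
```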
# IAM roles and policies for this stack

# Policies
@@ -101,4 +144,67 @@ resource "aws_iam_role" "elasticsearch_delivery_role" {
resource "aws_iam_role_policy_attachment" "es_delivery_full_access" { | ||
role = "${aws_iam_role.elasticsearch_delivery_role.name}" | ||
policy_arn = "arn:aws:iam::aws:policy/AmazonESFullAccess" | ||
} | ||
} | ||
|
||
resource "aws_kinesis_firehose_delivery_stream" "extended_s3_stream" { | ||
name = "${var.es_kinesis_delivery_stream}" | ||
destination = "elasticsearch" | ||
|
||
elasticsearch_configuration { | ||
buffering_interval = 60 | ||
buffering_size = 50 | ||
cloudwatch_logging_options { | ||
enabled = "true" | ||
log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}" | ||
log_stream_name = "${aws_cloudwatch_log_stream.es_log_stream.name}" | ||
} | ||
domain_arn = "${aws_elasticsearch_domain.elasticsearch.arn}" | ||
role_arn = "${aws_iam_role.elasticsearch_delivery_role.arn}" | ||
index_name = "logmonitor" | ||
type_name = "log" | ||
index_rotation_period = "NoRotation" | ||
retry_duration = "60" | ||
role_arn = "${aws_iam_role.elasticsearch_delivery_role.arn}" | ||
s3_backup_mode = "AllDocuments" | ||
} | ||
|
||
s3_configuration { | ||
role_arn = "${aws_iam_role.s3_delivery_role.arn}" | ||
bucket_arn = "${aws_s3_bucket.s3_logging_bucket.arn}" | ||
buffer_size = 10 | ||
buffer_interval = 300 | ||
compression_format = "UNCOMPRESSED" | ||
prefix = "firehose/" | ||
cloudwatch_logging_options { | ||
enabled = "true" | ||
log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}" | ||
log_stream_name = "${aws_cloudwatch_log_stream.s3_log_stream.name}" | ||
} | ||
} | ||
} | ||
|
||
resource "aws_cloudwatch_log_group" "es_log_group" { | ||
name = "${var.es_log_group_name}" | ||
retention_in_days = "${var.es_log_retention_in_days}" | ||
} | ||
|
||
resource "aws_cloudwatch_log_group" "s3_log_group" { | ||
name = "${var.s3_log_group_name}" | ||
retention_in_days = "${var.s3_log_retention_in_days}" | ||
} | ||
|
||
resource "aws_cloudwatch_log_stream" "es_log_stream" { | ||
name = "${var.es_log_stream_name}" | ||
log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}" | ||
} | ||
|
||
resource "aws_cloudwatch_log_stream" "s3_log_stream" { | ||
name = "${var.s3_log_stream_name}" | ||
log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}" | ||
} | ||
|
||
resource "aws_s3_bucket" "s3_logging_bucket" { | ||
bucket = "${var.s3_logging_bucket_name}" | ||
acl = "private" | ||
} | ||
|
@@ -0,0 +1,3 @@
data "aws_region" "current" { | ||
current = true | ||
} |
> Review comment: Move this part under the Test deployment section.