diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index dc38f789fb77..7e719c324c13 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,4 +1,4 @@ - + diff --git a/.github/workflows/changelog.yml b/.github/workflows/changelog.yml index 970e1704296d..41879ccc8e5e 100644 --- a/.github/workflows/changelog.yml +++ b/.github/workflows/changelog.yml @@ -49,7 +49,7 @@ jobs: body: |- Thank you for your contribution! :rocket: - Please note that the `CHANGELOG.md` file contents are handled by the maintainers during merge. This is to prevent pull request merge conflicts, especially for contributions which may not be merged immediately. Please see the [Contributing Guide](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/CONTRIBUTING.md) for additional pull request review items. + Please note that the `CHANGELOG.md` file contents are handled by the maintainers during merge. This is to prevent pull request merge conflicts, especially for contributions which may not be merged immediately. Please see the [Contributing Guide](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/contributing) for additional pull request review items. Remove any changes to the `CHANGELOG.md` file and commit them in this pull request to prevent delays with reviewing and potentially merging this pull request. misspell: diff --git a/.github/workflows/pull_requests.yml b/.github/workflows/pull_requests.yml index 94d195562175..fb25c93df65a 100644 --- a/.github/workflows/pull_requests.yml +++ b/.github/workflows/pull_requests.yml @@ -50,8 +50,8 @@ jobs: pr-message: |- Welcome @${{github.actor}} :wave: - It looks like this is your first Pull Request submission to the [Terraform AWS Provider](https://github.com/hashicorp/terraform-provider-aws)! If you haven’t already done so please make sure you have checked out our [CONTRIBUTING](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/CONTRIBUTING.md) guide and [FAQ](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/FAQ.md) to make sure your contribution is adhering to best practice and has all the necessary elements in place for a successful approval. + It looks like this is your first Pull Request submission to the [Terraform AWS Provider](https://github.com/hashicorp/terraform-provider-aws)! If you haven’t already done so please make sure you have checked out our [CONTRIBUTING](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/contributing) guide and [FAQ](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/contributing/faq.md) to make sure your contribution is adhering to best practice and has all the necessary elements in place for a successful approval. - Also take a look at our [FAQ](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/FAQ.md) which details how we prioritize Pull Requests for inclusion. + Also take a look at our [FAQ](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/contributing/faq.md) which details how we prioritize Pull Requests for inclusion. Thanks again, and welcome to the community! :smiley: diff --git a/README.md b/README.md index 743a3daecec7..ed83121ca97f 100644 --- a/README.md +++ b/README.md @@ -23,7 +23,7 @@ Please note: We take Terraform's security and our users' trust very seriously. 
I ## Quick Starts - [Using the provider](https://www.terraform.io/docs/providers/aws/index.html) -- [Provider development](docs/DEVELOPMENT.md) +- [Provider development](docs/contributing) ## Documentation @@ -37,10 +37,10 @@ Our roadmap for expanding support in Terraform for AWS resources can be found in ## Frequently Asked Questions -Responses to our most frequently asked questions can be found in our [FAQ](docs/FAQ.md ) +Responses to our most frequently asked questions can be found in our [FAQ](docs/contributing/faq.md ) ## Contributing The Terraform AWS Provider is the work of thousands of contributors. We appreciate your help! -To contribute, please read the contribution guidelines: [Contributing to Terraform - AWS Provider](docs/CONTRIBUTING.md) +To contribute, please read the contribution guidelines: [Contributing to Terraform - AWS Provider](docs/contributing) diff --git a/ROADMAP.md b/ROADMAP.md index 433c5fb55e31..450a02dea25c 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -2,7 +2,7 @@ Every few months, the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](docs/CORE_SERVICES.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available we will create the resources and implementation ourselves. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](docs/contributing/core-services.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available we will create the resources and implementation ourselves. Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md deleted file mode 100644 index 472ccfbdbd14..000000000000 --- a/docs/CONTRIBUTING.md +++ /dev/null @@ -1,25 +0,0 @@ -# Contributing to Terraform - AWS Provider - -**First:** if you're unsure or afraid of _anything_, ask for help! You can -submit a work in progress (WIP) pull request, or file an issue with the parts -you know. We'll do our best to guide you in the right direction, and let you -know if there are guidelines we will need to follow. We want people to be able -to participate without fear of doing the wrong thing. - -Below are our expectations for contributors. Following these guidelines gives us -the best opportunity to work with you, by making sure we have the things we need -in order to make it happen. Doing your best to follow it will speed up our -ability to merge PRs and respond to issues. 
- -- [Development Environment Setup](DEVELOPMENT.md) -- [Issue Reporting and Lifecycle](contributing/issue-reporting-and-lifecycle.md) -- [Pull Request Submission and Lifecycle](contributing/pullrequest-submission-and-lifecycle.md) -- [Contribution Types and Checklists](contributing/contribution-checklists.md) - -This documentation also contains reference material specific to certain functionality: - -- [Provider Design](contributing/provider-design.md) -- [Running and Writing Acceptance Tests](contributing/running-and-writing-acceptance-tests.md) -- [Data Handling and Conversion](contributing/data-handling-and-conversion.md) -- [Error Handling](contributing/error-handling.md) -- [Retries and Waiters](contributing/retries-and-waiters.md) diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 000000000000..f848cd8a48b9 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,7 @@ +# Terraform AWS Provider Engineering Documentation + +Looking for documentation? You've come to the right place. + +* Documentation on [_using_ the Terraform AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) +* [Road maps](roadmaps/README.md) showing the future direction of the AWS Provider +* Documentation on [contributing](contributing/) to the AWS Provider! diff --git a/docs/contributing/README.md b/docs/contributing/README.md new file mode 100644 index 000000000000..44583650f6d6 --- /dev/null +++ b/docs/contributing/README.md @@ -0,0 +1,30 @@ +# Contributing to the Terraform AWS Provider + +**First,** if you're unsure or afraid of _anything_, ask for help! You can open a draft pull request (PR) or an issue with what you know. We'll do our best to guide you in the right direction, and let you know if there are guidelines to follow. We want people to be able to participate without fear of doing the wrong thing. + +**Second,** not all of this documentation is up-to-date. If you see something that's not quite right, please submit a PR. Documentation-only PRs are often merged more quickly than code. + +**Third,** we don't always respond as quickly as we'd like. There's a lot going on. We prioritize certain aspects of the codebase, but those priorities do shift over time. If we haven't gotten to something you find important, it's likely on our radar; we're working through other priorities to get to it. + +To improve the review and merge process, please do your best to follow this documentation. Below are our expectations for contributors. Following these guidelines gives us the best opportunity to work with you by making sure we have what we need, and it speeds up our ability to merge PRs and respond to issues.
+ +## Getting Started + +- [Set Up Your Development Environment](development-environment.md) +- [FAQ](faq.md) +- [Core Services](core-services.md) +- [Provider Design](provider-design.md) + +## Process + +- [Contribution Types and Checklists](contribution-checklists.md) +- [Issue Reporting and Lifecycle](issue-reporting-and-lifecycle.md) +- [Maintainers and Maintaining the Repository](maintaining.md) +- [Pull Request Submission and Lifecycle](pullrequest-submission-and-lifecycle.md) + +## Reference + +- [Acceptance Tests, Running and Writing](running-and-writing-acceptance-tests.md) +- [Data Handling and Conversion](data-handling-and-conversion.md) +- [Error Handling](error-handling.md) +- [Retries and Waiters](retries-and-waiters.md) diff --git a/docs/contributing/contribution-checklists.md b/docs/contributing/contribution-checklists.md index d9802e79030e..de0e391c9979 100644 --- a/docs/contributing/contribution-checklists.md +++ b/docs/contributing/contribution-checklists.md @@ -14,7 +14,7 @@ each type of contribution. - [Resource Name Generation With Suffix](#resource-name-generation-with-suffix) - [Adding Resource Policy Support](#adding-resource-policy-support) - [Adding Resource Tagging Support](#adding-resource-tagging-support) - - [Adding Service to Tag Generating Code](#adding-service-to-tag-generating-code) + - [Generating Tag Code for a Service](#generating-tag-code-for-a-service) - [Resource Tagging Code Implementation](#resource-tagging-code-implementation) - [Resource Tagging Acceptance Testing Implementation](#resource-tagging-acceptance-testing-implementation) - [Resource Tagging Documentation Implementation](#resource-tagging-documentation-implementation) @@ -79,9 +79,9 @@ Comprehensive code examples and information about resource import support can be In addition to the below checklist and the items noted in the Extending Terraform documentation, please see the [Common Review Items](pullrequest-submission-and-lifecycle.md#common-review-items) sections for more specific coding and testing guidelines. -- [ ] _Resource Code Implementation_: In the resource code (e.g. `aws/resource_aws_service_thing.go`), implementation of `Importer` `State` function -- [ ] _Resource Acceptance Testing Implementation_: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of `TestStep`s with `ImportState: true` -- [ ] _Resource Documentation Implementation_: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of `Import` documentation section at the bottom of the page +- [ ] _Resource Code Implementation_: In the resource code (e.g., `internal/service/{service}/{thing}.go`), implementation of `Importer` `State` function +- [ ] _Resource Acceptance Testing Implementation_: In the resource acceptance testing (e.g., `internal/service/{service}/{thing}_test.go`), implementation of `TestStep`s with `ImportState: true` +- [ ] _Resource Documentation Implementation_: In the resource documentation (e.g., `website/docs/r/service_thing.html.markdown`), addition of `Import` documentation section at the bottom of the page ## Adding Resource Name Generation Support @@ -89,14 +89,14 @@ Terraform AWS Provider resources can use shared logic to support and test name g Implementing name generation support for Terraform AWS Provider resources requires the following, each with its own section below: -- [ ] _Resource Name Generation Code Implementation_: In the resource code (e.g. 
`aws/resource_aws_service_thing.go`), implementation of `name_prefix` attribute, along with handling in `Create` function. -- [ ] _Resource Name Generation Testing Implementation_: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of new acceptance test functions and configurations to exercise new naming logic. -- [ ] _Resource Name Generation Documentation Implementation_: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of `name_prefix` argument and update of `name` argument description. +- [ ] _Resource Name Generation Code Implementation_: In the resource code (e.g., `internal/service/{service}/{thing}.go`), implementation of `name_prefix` attribute, along with handling in `Create` function. +- [ ] _Resource Name Generation Testing Implementation_: In the resource acceptance testing (e.g., `internal/service/{service}/{thing}_test.go`), implementation of new acceptance test functions and configurations to exercise new naming logic. +- [ ] _Resource Name Generation Documentation Implementation_: In the resource documentation (e.g., `website/docs/r/service_thing.html.markdown`), addition of `name_prefix` argument and update of `name` argument description. ### Resource Name Generation Code Implementation -- In the resource Go file (e.g. `aws/resource_aws_service_thing.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/aws/internal/naming"` -- In the resource schema, add the new `name_prefix` attribute and adjust the `name` attribute to be `Optional`, `Computed`, and `ConflictsWith` the `name_prefix` attribute. Ensure to keep any existing schema fields on `name` such as `ValidateFunc`. e.g. +- In the resource Go file (e.g., `internal/service/{service}/{thing}.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/internal/create"` +- In the resource schema, add the new `name_prefix` attribute and adjust the `name` attribute to be `Optional`, `Computed`, and `ConflictsWith` the `name_prefix` attribute. Ensure to keep any existing schema fields on `name` such as `ValidateFunc`. E.g. ```go "name": { @@ -115,10 +115,10 @@ Implementing name generation support for Terraform AWS Provider resources requir }, ``` -- In the resource `Create` function, switch any calls from `d.Get("name").(string)` to instead use the `naming.Generate()` function, e.g. +- In the resource `Create` function, switch any calls from `d.Get("name").(string)` to instead use the `create.Name()` function, e.g. ```go -name := naming.Generate(d.Get("name").(string), d.Get("name_prefix").(string)) +name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) // ... in AWS Go SDK Input types, etc. use aws.String(name) ``` @@ -127,30 +127,30 @@ name := naming.Generate(d.Get("name").(string), d.Get("name_prefix").(string)) ```go d.Set("name", resp.Name) -d.Set("name_prefix", naming.NamePrefixFromName(aws.StringValue(resp.Name))) +d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(resp.Name))) ``` ### Resource Name Generation Testing Implementation -- In the resource testing (e.g. 
`aws/resource_aws_service_thing_test.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/aws/internal/naming"` -- In the resource testing, implement two new tests named `_Name_Generated` and `_NamePrefix` with associated configurations, that verifies creating the resource without `name` and `name_prefix` arguments (for the former) and with only the `name_prefix` argument (for the latter). e.g. +- In the resource testing (e.g., `internal/service/{service}/{thing}_test.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/internal/create"` +- In the resource testing, implement two new tests named `_Name_Generated` and `_NamePrefix` with associated configurations, that verifies creating the resource without `name` and `name_prefix` arguments (for the former) and with only the `name_prefix` argument (for the latter). E.g. ```go -func TestAccAWSServiceThing_Name_Generated(t *testing.T) { +func TestAccServiceThing_nameGenerated(t *testing.T) { var thing service.ServiceThing resourceName := "aws_service_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSServiceThingDestroy, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckThingDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSServiceThingConfigNameGenerated(), + Config: testAccThingNameGeneratedConfig(), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSServiceThingExists(resourceName, &thing), - naming.TestCheckResourceAttrNameGenerated(resourceName, "name"), + testAccCheckThingExists(resourceName, &thing), + create.TestCheckResourceAttrNameGenerated(resourceName, "name"), resource.TestCheckResourceAttr(resourceName, "name_prefix", "terraform-"), ), }, @@ -164,21 +164,21 @@ func TestAccAWSServiceThing_Name_Generated(t *testing.T) { }) } -func TestAccAWSServiceThing_NamePrefix(t *testing.T) { +func TestAccServiceThing_namePrefix(t *testing.T) { var thing service.ServiceThing resourceName := "aws_service_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSServiceThingDestroy, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckThingDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSServiceThingConfigNamePrefix("tf-acc-test-prefix-"), + Config: testAccThingNamePrefixConfig("tf-acc-test-prefix-"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSServiceThingExists(resourceName, &thing), - naming.TestCheckResourceAttrNameFromPrefix(resourceName, "name", "tf-acc-test-prefix-"), + testAccCheckThingExists(resourceName, &thing), + create.TestCheckResourceAttrNameFromPrefix(resourceName, "name", "tf-acc-test-prefix-"), resource.TestCheckResourceAttr(resourceName, "name_prefix", "tf-acc-test-prefix-"), ), }, @@ -192,7 +192,7 @@ func TestAccAWSServiceThing_NamePrefix(t *testing.T) { }) } -func testAccAWSServiceThingConfigNameGenerated() string { +func testAccThingNameGeneratedConfig() string { return fmt.Sprintf(` resource "aws_service_thing" "test" { # ... other configuration ... 
@@ -200,7 +200,7 @@ resource "aws_service_thing" "test" { `) } -func testAccAWSServiceThingConfigNamePrefix(namePrefix string) string { +func testAccThingNamePrefixConfig(namePrefix string) string { return fmt.Sprintf(` resource "aws_service_thing" "test" { # ... other configuration ... @@ -213,7 +213,7 @@ resource "aws_service_thing" "test" { ### Resource Name Generation Documentation Implementation -- In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), add the following to the arguments reference: +- In the resource documentation (e.g., `website/docs/r/service_thing.html.markdown`), add the following to the arguments reference: ```markdown * `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`. @@ -228,24 +228,24 @@ resource "aws_service_thing" "test" { ### Resource Name Generation With Suffix Some generated resource names require a fixed suffix (for example Amazon SNS FIFO topic names must end in `.fifo`). -In these cases use `naming.GenerateWithSuffix()` in the resource `Create` function and `naming.NamePrefixFromNameWithSuffix()` in the resource `Read` function, e.g. +In these cases use `create.NameWithSuffix()` in the resource `Create` function and `create.NamePrefixFromNameWithSuffix()` in the resource `Read` function, e.g. ```go -name := naming.GenerateWithSuffix(d.Get("name").(string), d.Get("name_prefix").(string), ".fifo") +name := create.NameWithSuffix(d.Get("name").(string), d.Get("name_prefix").(string), ".fifo") ``` and ```go d.Set("name", resp.Name) -d.Set("name_prefix", naming.NamePrefixFromNameWithSuffix(aws.StringValue(resp.Name), ".fifo")) +d.Set("name_prefix", create.NamePrefixFromNameWithSuffix(aws.StringValue(resp.Name), ".fifo")) ``` -There are also functions `naming.TestCheckResourceAttrNameWithSuffixGenerated` and `naming.TestCheckResourceAttrNameWithSuffixFromPrefix` for use in tests. +There are also functions `create.TestCheckResourceAttrNameWithSuffixGenerated` and `create.TestCheckResourceAttrNameWithSuffixFromPrefix` for use in tests. ## Adding Resource Policy Support -Some AWS components support [resource-based IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) to control permissions. When implementing this support in the Terraform AWS Provider, we typically prefer creating a separate resource, `aws_{SERVICE}_{THING}_policy` (e.g. `aws_s3_bucket_policy`). See the [New Resource section](#new-resource) for more information about implementing the separate resource and the [Provider Design page](provider-design.md) for rationale. +Some AWS components support [resource-based IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) to control permissions. When implementing this support in the Terraform AWS Provider, we typically prefer creating a separate resource, `aws_{SERVICE}_{THING}_policy` (e.g., `aws_s3_bucket_policy`). See the [New Resource section](#new-resource) for more information about implementing the separate resource and the [Provider Design page](provider-design.md) for rationale. ## Adding Resource Tagging Support @@ -255,45 +255,56 @@ As of version 3.38.0 of the Terraform AWS Provider, resources that previously im Thus, for in-flight and future contributions, implementing tagging support for Terraform AWS Provider resources requires the following, each with its own section below: -- [ ] _Generated Service Tagging Code_: In the internal code generators (e.g. 
`aws/internal/keyvaluetags`), implementation and customization of how a service handles tagging, which is standardized for the resources. -- [ ] _Resource Tagging Code Implementation_: In the resource code (e.g. `aws/resource_aws_service_thing.go`), implementation of `tags` and `tags_all` schema attributes, along with implementation of `CustomizeDiff` in the resource definition and handling in `Create`, `Read`, and `Update` functions. -- [ ] _Resource Tagging Acceptance Testing Implementation_: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of new acceptance test function and configurations to exercise new tagging logic. -- [ ] _Resource Tagging Documentation Implementation_: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of `tags` argument and `tags_all` attribute. +- [ ] _Generated Service Tagging Code_: Each service has a `generate.go` file where generator directives live. Through these directives and their flags, you can customize code generation for the service. You can find the code that the tagging generator generates in a `tags_gen.go` file in a service, such as `internal/service/ec2/tags_gen.go`. Unlike previously, you should generally _not_ need to edit the generator code (i.e., in `internal/generate/tags`). +- [ ] _Resource Tagging Code Implementation_: In the resource code (e.g., `internal/service/{service}/{thing}.go`), implementation of `tags` and `tags_all` schema attributes, along with implementation of `CustomizeDiff` in the resource definition and handling in `Create`, `Read`, and `Update` functions. +- [ ] _Resource Tagging Acceptance Testing Implementation_: In the resource acceptance testing (e.g., `internal/service/{service}/{thing}_test.go`), implementation of new acceptance test function and configurations to exercise new tagging logic. +- [ ] _Resource Tagging Documentation Implementation_: In the resource documentation (e.g., `website/docs/r/service_thing.html.markdown`), addition of `tags` argument and `tags_all` attribute. -See also a [full example pull request for implementing resource tags with default tags support](https://github.com/hashicorp/terraform-provider-aws/pull/18861). - -### Adding Service to Tag Generating Code +### Generating Tag Code for a Service This step is only necessary for the first implementation and may have been previously completed. If so, move on to the next section. -More details about this code generation, including fixes for potential error messages in this process, can be found in the [keyvaluetags documentation](../../aws/internal/keyvaluetags/README.md). +More details about this code generation, including fixes for potential error messages in this process, can be found in the [generate documentation](../../internal/generate/README.md). -- Open the AWS Go SDK documentation for the service, e.g. for [`service/eks`](https://docs.aws.amazon.com/sdk-for-go/api/service/eks/). Note: there can be a delay between the AWS announcement and the updated AWS Go SDK documentation. -- Determine the "type" of tagging implementation. Some services will use a simple map style (`map[string]*string` in Go) while others will have a separate structure shape (`[]service.Tag` struct with `Key` and `Value` fields). +- Open the AWS Go SDK documentation for the service, e.g., for [`service/eks`](https://docs.aws.amazon.com/sdk-for-go/api/service/eks/). Note: there can be a delay between the AWS announcement and the updated AWS Go SDK documentation. 
+- Use the AWS Go SDK to determine which types of tagging code to generate. There are three main types of tagging code you can generate: service tags, list tags, and update tags. These are not mutually exclusive and some services use more than one. +- Determine if a service already has a `generate.go` file (e.g., `internal/service/eks/generate.go`). If none exists, follow the example of other `generate.go` files in many other services. This is a very simple file, perhaps 3-5 lines long, and must _only_ contain generate directives at the very top of the file and a package declaration (e.g., `package eks`) -- _nothing else_. +- Check for a tagging code directive: `//go:generate go run -tags generate ../../generate/tags/main.go`. If one does not exist, add it. Note that without flags, the directive itself will not do anything useful. **WARNING:** You must never have more than one `generate/tags/main.go` directive in a `generate.go` file. Even if you want to generate all three types of tag code, you will use multiple flags but only one `generate/tags/main.go` directive! Including more than one directive will cause the generator to overwrite one set of generated code with whatever is specified in the next directive. +- If the service supports service tags, determine the service's "type" of tagging implementation. Some services will use a simple map style (`map[string]*string` in Go) while others will have a separate structure (`[]service.Tag` `struct` with `Key` and `Value` fields). - - If the type is a map, add the AWS Go SDK service name (e.g. `eks`) to `mapServiceNames` in `aws/internal/keyvaluetags/generators/servicetags/main.go` - - Otherwise, if the type is a struct, add the AWS Go SDK service name (e.g. `eks`) to `sliceServiceNames` in `aws/internal/keyvaluetags/generators/servicetags/main.go`. If the struct name is not exactly `Tag`, it can be customized via the `ServiceTagType` function. If the struct key field is not exactly `Key`, it can be customized via the `ServiceTagTypeKeyField` function. If the struct value field is not exactly `Value`, it can be customized via the `ServiceTagTypeValueField` function. + - If the type is a map, add a new flag to the tagging directive (see above): `-ServiceTagsMap=yes`. If the type is `struct`, add a `-ServiceTagsSlice=yes` flag. + - If you use the `-ServiceTagsSlice=yes` flag and if the `struct` name is not exactly `Tag`, you must include the `-TagType` flag with the name of the `struct` (e.g., `-TagType=S3Tag`). If the key and value elements of the `struct` are not exactly `Key` and `Value` respectively, you must include the `-TagTypeKeyElem` and/or `-TagTypeValElem` flags with the correct names. + - In summary, you may need to include one or more of the following flags with `-ServiceTagsSlice` in order to properly customize the generated code: `-TagKeyType`, `TagPackage`, `TagResTypeElem`, `TagType`, `TagType2`, `TagTypeAddBoolElem`, `TagTypeAddBoolElemSnake`, `TagTypeIDElem`, `TagTypeKeyElem`, and `TagTypeValElem`. -- Determine if the service API includes functionality for listing tags (usually a `ListTags` or `ListTagsForResource` API call) or updating tags (usually `TagResource` and `UntagResource` API calls). If so, add the AWS Go SDK service client information to `ServiceClientType` (along with the new required import) in `aws/internal/keyvaluetags/service_generation_customizations.go`, e.g. 
for EKS: - ```go - case "eks": - funcType = reflect.TypeOf(eks.New) - ``` +- If the service supports listing tags (usually a `ListTags` or `ListTagsForResource` API call), follow these guidelines. + + - Add a new flag to the tagging directive (see above): `-ListTags=yes`. + - If the API list operation is not exactly `ListTagsForResource`, include the `-ListTagsOp` flag with the name of the operation (e.g., `-ListTagsOp=DescribeTags`). + - If the API list tags operation identifying element is not exactly `ResourceArn`, include the `-ListTagsInIDElem` flag with the name of the element (e.g., `-ListTagsInIDElem=ResourceARN`). + - If the API list tags operation identifying element needs a slice, include the `-ListTagsInIDNeedSlice` flag with a `yes` value (e.g., `-ListTagsInIDNeedSlice=yes`). + - If the API list tags operation output element is not exactly `Tags`, include the `-ListTagsOutTagsElem` flag with the name of the element (e.g., `-ListTagsOutTagsElem=TagList`). + - In summary, you may need to include one or more of the following flags with `-ListTags` in order to properly customize the generated code: `ListTagsInFiltIDName`, `ListTagsInIDElem`, `ListTagsInIDNeedSlice`, `ListTagsOp`, `ListTagsOutTagsElem`, `TagPackage`, `TagResTypeElem`, and `TagTypeIDElem`. + +- If the service API supports updating tags (usually `TagResource` and `UntagResource` API calls), follow these guidelines. - - If the service API includes functionality for listing tags, add the AWS Go SDK service name (e.g. `eks`) to `serviceNames` in `aws/internal/keyvaluetags/generators/listtags/main.go`. - - If the service API includes functionality for updating tags, add the AWS Go SDK service name (e.g. `eks`) to `serviceNames` in `aws/internal/keyvaluetags/generators/updatetags/main.go`. + - Add a new flag to the tagging directive (see above): `-UpdateTags=yes`. + - If the API tag operation is not exactly `TagResource`, include the `-TagOp` flag with the name of the operation (e.g., `-TagOp=AddTags`). + - If the API untag operation is not exactly `UntagResource`, include the `-UntagOp` flag with the name of the operation (e.g., `-UntagOp=RemoveTags`). + - If the API operation identifying element is not exactly `ResourceArn`, include the `-TagInIDElem` flag with the name of the element (e.g., `-TagInIDElem=ResourceARN`). + - If the API untag operation tags input element is not exactly `TagKeys`, include the `-UntagInTagsElem` flag with the name of the element (e.g., `-UntagInTagsElem=Keys`). + - In summary, you may need to include one or more of the following flags with `-UpdateTags` in order to properly customize the generated code: `TagInCustomVal`, `TagInIDElem`, `TagInIDNeedSlice`, `TagInTagsElem`, `TagOp`, `TagOpBatchSize`, `TagPackage`, `TagResTypeElem`, `TagTypeAddBoolElem`, `TagTypeIDElem`, `UntagInCustomVal`, `UntagInNeedTagKeyType`, `UntagInNeedTagType`, `UntagInTagsElem`, and `UntagOp`. - Run `make gen` (`go generate ./...`) and ensure there are no errors via `make test` (`go test ./...`) ### Resource Tagging Code Implementation -- In the resource Go file (e.g. 
`aws/resource_aws_eks_cluster.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/aws/internal/keyvaluetags"` +- In the resource Go file (e.g., `internal/service/eks/cluster.go`), add the following Go import: `tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"` - In the resource schema, add `"tags": tagsSchema(),` and `"tags_all": tagsSchemaComputed(),` - In the `schema.Resource` struct definition, add the `CustomizeDiff: SetTagsDiff` handling essential to resource support for default tags: ```go - func resourceAwsEksCluster() *schema.Resource { + func ResourceCluster() *schema.Resource { return &schema.Resource{ /* ... other configuration ... */ CustomizeDiff: SetTagsDiff, @@ -304,27 +315,27 @@ More details about this code generation, including fixes for potential error mes If the resource already contains a `CustomizeDiff` function, append the `SetTagsDiff` via the `customdiff.Sequence` method: ```go - func resourceAwsExample() *schema.Resource { + func ResourceExample() *schema.Resource { return &schema.Resource{ /* ... other configuration ... */ CustomizeDiff: customdiff.Sequence( - resourceAwsExampleCustomizeDiff, - SetTagsDiff, + resourceExampleCustomizeDiff, + verify.SetTagsDiff, ), } } ``` -- If the API supports tagging on creation (the `Input` struct accepts a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service tags, e.g. with EKS Clusters: +- If the API supports tagging on creation (the `Input` struct accepts a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service tags, e.g., with EKS Clusters: ```go // Typically declared near conn := /* ... */ defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig - tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) input := &eks.CreateClusterInput{ /* ... other configuration ... */ - Tags: tags.IgnoreAws().EksTags(), + Tags: Tags(tags.IgnoreAws()), } ``` @@ -333,37 +344,37 @@ More details about this code generation, including fixes for potential error mes ```go // Typically declared near conn := /* ... */ defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig - tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) input := &eks.CreateClusterInput{ /* ... other configuration ... */ } if len(tags) > 0 { - input.Tags = tags.IgnoreAws().EksTags() + input.Tags = Tags(tags.IgnoreAws()) } ``` -- Otherwise if the API does not support tagging on creation (the `Input` struct does not accept a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service API call to tag a resource, e.g. with ElasticSearch Domain: +- Otherwise if the API does not support tagging on creation (the `Input` struct does not accept a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service API call to tag a resource, e.g., with ElasticSearch Domain: ```go // Typically declared near conn := /* ... 
*/ defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig - tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) if len(tags) > 0 { - if err := keyvaluetags.ElasticsearchserviceUpdateTags(conn, d.Id(), nil, tags); err != nil { + if err := UpdateTags(conn, d.Id(), nil, tags); err != nil { return fmt.Errorf("error adding Elasticsearch Cluster (%s) tags: %s", d.Id(), err) } } ``` -- Some EC2 resources (for example [`aws_ec2_fleet`](https://www.terraform.io/docs/providers/aws/r/ec2_fleet.html)) have a `TagsSpecification` field in the `InputStruct` instead of a `Tags` field. In these cases the `ec2TagSpecificationsFromKeyValueTags()` helper function should be used, e.g.: +- Some EC2 resources (e.g., [`aws_ec2_fleet`](https://www.terraform.io/docs/providers/aws/r/ec2_fleet.html)) have a `TagSpecifications` field in the `InputStruct` instead of a `Tags` field. In these cases the `ec2TagSpecificationsFromKeyValueTags()` helper function should be used. This example shows using `TagSpecifications`: ```go // Typically declared near conn := /* ... */ defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig - tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) input := &ec2.CreateFleetInput{ /* ... other configuration ... */ @@ -371,7 +382,7 @@ More details about this code generation, including fixes for potential error mes } ``` -- In the resource `Read` function, implement the logic to convert the service tags to save them into the Terraform state for drift detection, e.g. with EKS Clusters (which had the tags available in the DescribeCluster API call): +- In the resource `Read` function, implement the logic to convert the service tags to save them into the Terraform state for drift detection, e.g., with EKS Clusters (which had the tags available in the DescribeCluster API call): ```go // Typically declared near conn := /* ... */ @@ -391,7 +402,7 @@ More details about this code generation, including fixes for potential error mes } ``` - If the service API does not return the tags directly from reading the resource and requires a separate API call, its possible to use the `keyvaluetags` functionality like the following, e.g. with Athena Workgroups: + If the service API does not return the tags directly from reading the resource and requires a separate API call, its possible to use the `keyvaluetags` functionality like the following, e.g., with Athena Workgroups: ```go // Typically declared near conn := /* ... */ @@ -417,7 +428,7 @@ More details about this code generation, including fixes for potential error mes } ``` -- In the resource `Update` function (this may be the first functionality requiring the creation of the `Update` function), implement the logic to handle tagging updates, e.g. with EKS Clusters: +- In the resource `Update` function (this may be the first functionality requiring the creation of the `Update` function), implement the logic to handle tagging updates, e.g., with EKS Clusters: ```go if d.HasChange("tags_all") { @@ -428,7 +439,7 @@ More details about this code generation, including fixes for potential error mes } ``` - If the resource `Update` function applies specific updates to attributes regardless of changes to tags, implement the following e.g. 
with IAM Policy: + If the resource `Update` function applies specific updates to attributes regardless of changes to tags, implement the following e.g., with IAM Policy: ```go if d.HasChangesExcept("tags", "tags_all") { @@ -447,25 +458,25 @@ More details about this code generation, including fixes for potential error mes ### Resource Tagging Acceptance Testing Implementation -- In the resource testing (e.g. `aws/resource_aws_eks_cluster_test.go`), verify that existing resources without tagging are unaffected and do not have tags saved into their Terraform state. This should be done in the `_basic` acceptance test by adding one line similar to `resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),` and one similar to `resource.TestCheckResourceAttr(resourceName, "tags_all.%", "0"),` -- In the resource testing, implement a new test named `_Tags` with associated configurations, that verifies creating the resource with tags and updating tags. e.g. EKS Clusters: +- In the resource testing (e.g., `internal/service/eks/cluster_test.go`), verify that existing resources without tagging are unaffected and do not have tags saved into their Terraform state. This should be done in the `_basic` acceptance test by adding one line similar to `resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),` and one similar to `resource.TestCheckResourceAttr(resourceName, "tags_all.%", "0"),` +- In the resource testing, implement a new test named `_tags` with associated configurations, that verifies creating the resource with tags and updating tags. E.g., EKS Clusters: ```go - func TestAccAWSEksCluster_Tags(t *testing.T) { + func TestAccEKSCluster_tags(t *testing.T) { var cluster1, cluster2, cluster3 eks.Cluster rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_eks_cluster.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSEks(t) }, - ErrorCheck: testAccErrorCheck(t, eks.EndpointsID), - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSEksClusterDestroy, + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, eks.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEksClusterConfigTags1(rName, "key1", "value1"), + Config: testAccClusterConfigTags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSEksClusterExists(resourceName, &cluster1), + testAccCheckClusterExists(resourceName, &cluster1), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -476,18 +487,18 @@ More details about this code generation, including fixes for potential error mes ImportStateVerify: true, }, { - Config: testAccAWSEksClusterConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Config: testAccClusterConfigTags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSEksClusterExists(resourceName, &cluster2), + testAccCheckClusterExists(resourceName, &cluster2), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: testAccAWSEksClusterConfigTags1(rName, "key2", "value2"), + Config: testAccClusterConfigTags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - 
testAccCheckAWSEksClusterExists(resourceName, &cluster3), + testAccCheckClusterExists(resourceName, &cluster3), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -496,8 +507,8 @@ More details about this code generation, including fixes for potential error mes }) } - func testAccAWSEksClusterConfigTags1(rName, tagKey1, tagValue1 string) string { - return testAccAWSEksClusterConfig_Base(rName) + fmt.Sprintf(` + func testAccClusterConfigTags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose(testAccClusterConfig_base(rName), fmt.Sprintf(` resource "aws_eks_cluster" "test" { name = %[1]q role_arn = aws_iam_role.test.arn @@ -512,11 +523,11 @@ More details about this code generation, including fixes for potential error mes depends_on = [aws_iam_role_policy_attachment.test-AmazonEKSClusterPolicy] } - `, rName, tagKey1, tagValue1) + `, rName, tagKey1, tagValue1)) } - func testAccAWSEksClusterConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { - return testAccAWSEksClusterConfig_Base(rName) + fmt.Sprintf(` + func testAccClusterConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose(testAccClusterConfig_base(rName), fmt.Sprintf(` resource "aws_eks_cluster" "test" { name = %[1]q role_arn = aws_iam_role.test.arn @@ -532,24 +543,24 @@ More details about this code generation, including fixes for potential error mes depends_on = [aws_iam_role_policy_attachment.test-AmazonEKSClusterPolicy] } - `, rName, tagKey1, tagValue1, tagKey2, tagValue2) + `, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } ``` -- Verify all acceptance testing passes for the resource (e.g. `make testacc TESTARGS='-run=TestAccAWSEksCluster_'`) +- Verify all acceptance testing passes for the resource (e.g., `make testacc TESTARGS='-run=TestAccEKSCluster_'`) ### Resource Tagging Documentation Implementation -- In the resource documentation (e.g. `website/docs/r/eks_cluster.html.markdown`), add the following to the arguments reference: +- In the resource documentation (e.g., `website/docs/r/eks_cluster.html.markdown`), add the following to the arguments reference: ```markdown * `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ``` -- In the resource documentation (e.g. `website/docs/r/eks_cluster.html.markdown`), add the following to the attributes reference: +- In the resource documentation (e.g., `website/docs/r/eks_cluster.html.markdown`), add the following to the attributes reference: ```markdown - * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ``` ## Adding Resource Filtering Support @@ -559,9 +570,9 @@ See the [EC2 Listing and filtering your resources page](https://docs.aws.amazon. 
Implementing server-side filtering support for Terraform AWS Provider resources requires the following, each with its own section below: -- [ ] _Generated Service Filtering Code_: In the internal code generators (e.g. `aws/internal/namevaluesfilters`), implementation and customization of how a service handles filtering, which is standardized for the resources. -- [ ] _Resource Filtering Code Implementation_: In the resource's equivalent data source code (e.g. `aws/data_source_aws_service_thing.go`), implementation of `filter` schema attribute, along with handling in the `Read` function. -- [ ] _Resource Filtering Documentation Implementation_: In the resource's equivalent data source documentation (e.g. `website/docs/d/service_thing.html.markdown`), addition of `filter` argument +- [ ] _Generated Service Filtering Code_: In the internal code generators (e.g., `aws/internal/namevaluesfilters`), implementation and customization of how a service handles filtering, which is standardized for the resources. +- [ ] _Resource Filtering Code Implementation_: In the resource's equivalent data source code (e.g., `aws/data_source_aws_service_thing.go`), implementation of `filter` schema attribute, along with handling in the `Read` function. +- [ ] _Resource Filtering Documentation Implementation_: In the resource's equivalent data source documentation (e.g., `website/docs/d/service_thing.html.markdown`), addition of `filter` argument ### Adding Service to Filter Generating Code @@ -569,13 +580,13 @@ This step is only necessary for the first implementation and may have been previ More details about this code generation can be found in the [namevaluesfilters documentation](../../aws/internal/namevaluesfilters/README.md). -- Open the AWS Go SDK documentation for the service, e.g. for [`service/rds`](https://docs.aws.amazon.com/sdk-for-go/api/service/rds/). Note: there can be a delay between the AWS announcement and the updated AWS Go SDK documentation. -- Determine if the service API includes functionality for filtering resources (usually a `Filters` argument to a `DescribeThing` API call). If so, add the AWS Go SDK service name (e.g. `rds`) to `sliceServiceNames` in `aws/internal/namevaluesfilters/generators/servicefilters/main.go`. +- Open the AWS Go SDK documentation for the service, e.g., for [`service/rds`](https://docs.aws.amazon.com/sdk-for-go/api/service/rds/). Note: there can be a delay between the AWS announcement and the updated AWS Go SDK documentation. +- Determine if the service API includes functionality for filtering resources (usually a `Filters` argument to a `DescribeThing` API call). If so, add the AWS Go SDK service name (e.g., `rds`) to `sliceServiceNames` in `aws/internal/namevaluesfilters/generators/servicefilters/main.go`. - Run `make gen` (`go generate ./...`) and ensure there are no errors via `make test` (`go test ./...`) ### Resource Filter Code Implementation -- In the resource's equivalent data source Go file (e.g. 
`aws/data_source_aws_internet_gateway.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/aws/internal/namevaluesfilters"` +- In the resource's equivalent data source Go file (e.g., `aws/data_source_aws_internet_gateway.go`), add the following Go import: `"github.com/hashicorp/terraform-provider-aws/aws/internal/namevaluesfilters"` - In the resource schema, add `"filter": namevaluesfilters.Schema(),` - Implement the logic to build the list of filters: @@ -596,7 +607,7 @@ input.Filters = filters.Ec2Filters() ### Resource Filtering Documentation Implementation -- In the resource's equivalent data source documentation (e.g. `website/docs/d/internet_gateway.html.markdown`), add the following to the arguments reference: +- In the resource's equivalent data source documentation (e.g., `website/docs/d/internet_gateway.html.markdown`), add the following to the arguments reference: ```markdown * `filter` - (Optional) Custom filter block as described below. @@ -612,7 +623,7 @@ More complex filters can be expressed using one or more `filter` sub-blocks, whi ## New Resource -_Before submitting this type of contribution, it is highly recommended to read and understand the other pages of the [Contributing Guide](../CONTRIBUTING.md)._ +_Before submitting this type of contribution, it is highly recommended to read and understand the other pages of the [Contributing Guide](contributing.md)._ Implementing a new resource is a good way to learn more about how Terraform interacts with upstream APIs. There are plenty of examples to draw from in the @@ -626,8 +637,8 @@ guidelines. through long feedback cycles on a big PR with many resources. We ask you to only submit **1 resource at a time**. - [ ] __Acceptance tests__: New resources should include acceptance tests - covering their behavior. See [Writing An Acceptance - Test](running-and-writing-acceptance-tests.md#writing-an-acceptance-test) section for a detailed guide on how to + covering their behavior. See [Writing Acceptance + Tests](#writing-acceptance-tests) below for a detailed guide on how to approach these. - [ ] __Resource Naming__: Resources should be named `aws__`, using underscores (`_`) as the separator. Resources are namespaced with the @@ -665,7 +676,7 @@ Adding a tag resource, similar to the `aws_ecs_tag` resource, has its own implem - In `aws/tag_resources.go`: Add the new `//go:generate` call with the correct service name. Run `make gen` after any modifications. - In `aws/provider.go`: Add the new resource. - Run `make test` and ensure there are no failures. 
-- Create `aws/resource_aws_{service}_tag_test.go` with initial acceptance testing similar to the following (where the parent resource is simple to provision): +- Create `internal/service/{service}/tag_gen_test.go` with initial acceptance testing similar to the following (where the parent resource is simple to provision): ```go @@ -678,14 +689,14 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" ) -func TestAccAWS{Service}Tag_basic(t *testing.T) { +func TestAcc{Service}Tag_basic(t *testing.T) { rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_{service}_tag.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, {Service}.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, {Service}.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheck{Service}TagDestroy, Steps: []resource.TestStep{ { @@ -705,21 +716,21 @@ func TestAccAWS{Service}Tag_basic(t *testing.T) { }) } -func TestAccAWS{Service}Tag_disappears(t *testing.T) { +func TestAcc{Service}Tag_disappears(t *testing.T) { rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_{service}_tag.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, {Service}.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, {Service}.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheck{Service}TagDestroy, Steps: []resource.TestStep{ { Config: testAcc{Service}TagConfig(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheck{Service}TagExists(resourceName), - testAccCheckResourceDisappears(testAccProvider, resourceAws{Service}Tag(), resourceName), + acctest.CheckResourceDisappears(testAccProvider, resourceAws{Service}Tag(), resourceName), ), ExpectNonEmptyPlan: true, }, @@ -727,14 +738,14 @@ func TestAccAWS{Service}Tag_disappears(t *testing.T) { }) } -func TestAccAWS{Service}Tag_Value(t *testing.T) { +func TestAcc{Service}Tag_Value(t *testing.T) { rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_{service}_tag.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, {Service}.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, {Service}.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheck{Service}TagDestroy, Steps: []resource.TestStep{ { @@ -781,7 +792,7 @@ resource "aws_{service}_tag" "test" { } ``` -- Run `make testacc TEST=./aws TESTARGS='-run=TestAccAWS{Service}Tags_'` and ensure there are no failures. +- Run `make testacc TEST=./aws TESTARGS='-run=TestAcc{Service}Tags_'` and ensure there are no failures. - Create `website/docs/r/{service}_tag.html.markdown` with initial documentation similar to the following: ``````markdown @@ -795,7 +806,7 @@ description: |- # Resource: aws_{service}_tag -Manages an individual {SERVICE} resource tag. This resource should only be used in cases where {SERVICE} resources are created outside Terraform (e.g. {SERVICE} {THING}s implicitly created by {OTHER SERVICE THING}). +Manages an individual {SERVICE} resource tag. 
This resource should only be used in cases where {SERVICE} resources are created outside Terraform (e.g., {SERVICE} {THING}s implicitly created by {OTHER SERVICE THING}). ~> **NOTE:** This tagging resource should not be combined with the Terraform resource for managing the parent resource. For example, using `aws_{service}_{thing}` and `aws_{service}_tag` to manage tags of the same {SERVICE} {THING} will cause a perpetual difference where the `aws_{service}_{thing}` resource will try to remove the tag being added by the `aws_{service}_tag` resource. @@ -851,19 +862,19 @@ into Terraform. - In `aws/provider.go` Add a new service entry to `endpointServiceNames`. This service name should match the AWS Go SDK or AWS CLI service name. - - In `aws/config.go`: Add a new import for the AWS Go SDK code. e.g. + - In `aws/config.go`: Add a new import for the AWS Go SDK code. E.g. `github.com/aws/aws-sdk-go/service/quicksight` - In `aws/config.go`: Add a new `{SERVICE}conn` field to the `AWSClient` struct for the service client. The service name should match the name - in `endpointServiceNames`. e.g. `quicksightconn *quicksight.QuickSight` + in `endpointServiceNames`. E.g., `quicksightconn *quicksight.QuickSight` - In `aws/config.go`: Create the new service client in the `{SERVICE}conn` - field in the `AWSClient` instantiation within `Client()`. e.g. + field in the `AWSClient` instantiation within `Client()`. E.g. `quicksightconn: quicksight.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["quicksight"])})),` - In `website/allowed-subcategories.txt`: Add a name acceptable for the documentation navigation. - In `website/docs/guides/custom-service-endpoints.html.md`: Add the service name in the list of customizable endpoints. - In `infrastructure/repository/labels-service.tf`: Add the new service to create a repository label. - - In `.github/labeler-issue-triage.yml`: Add the new service to automated issue labeling. e.g. with the `quicksight` service + - In `.github/labeler-issue-triage.yml`: Add the new service to automated issue labeling. E.g., with the `quicksight` service ```yaml # ... other services ... @@ -872,7 +883,7 @@ into Terraform. # ... other services ... ``` - - In `.github/labeler-pr-triage.yml`: Add the new service to automated pull request labeling. e.g. with the `quicksight` service + - In `.github/labeler-pr-triage.yml`: Add the new service to automated pull request labeling. E.g., with the `quicksight` service ```yaml # ... other services ... diff --git a/docs/contributing/core-services.md b/docs/contributing/core-services.md index 459ebd3b5e42..07be27a77aed 100644 --- a/docs/contributing/core-services.md +++ b/docs/contributing/core-services.md @@ -1,6 +1,6 @@ -# TF AWS Provider Core Services +# Terraform AWS Provider Core Services -Core Services are AWS services we have identified as critical for a large majority of our users. Our goal is to continually increase the depth of coverage for these services. We will work to prioritize features and enhancements to these services in each weekly release, even if they are not necessarily highlighted in our quarterly roadmap. +Core Services are AWS services we have identified as critical for a large majority of our users. Our goal is to continually increase the depth of coverage for these services. We will work to prioritize features and enhancements to these services in each weekly release, even if they are not necessarily highlighted in our quarterly road map. 
The core services we have identified are: diff --git a/docs/contributing/data-handling-and-conversion.md b/docs/contributing/data-handling-and-conversion.md index 599ec4d01a6b..b6bad40e2bb8 100644 --- a/docs/contributing/data-handling-and-conversion.md +++ b/docs/contributing/data-handling-and-conversion.md @@ -58,7 +58,7 @@ As a generic walkthrough, the following data handling occurs when creating a Ter - Terraform CLI sends a Terraform Plugin Protocol request to create the new resource with its planned new state data - If the Terraform Plugin is using a higher level library, such as the Terraform Plugin SDK, that library receives the request and translates the Terraform Plugin Protocol data types into the expected library types - Terraform Plugin invokes the resource creation function with the planned new state data - - **The planned new state data is converted into an remote system request (e.g. API creation request) that is invoked** + - **The planned new state data is converted into an remote system request (e.g., API creation request) that is invoked** - **The remote system response is received and the data is converted into an applied new state** - If the Terraform Plugin is using a higher level library, such as the Terraform Plugin SDK, that library translates the library types back into Terraform Plugin Protocol data types - Terraform Plugin responds to Terraform Plugin Protocol request with the new state data @@ -86,10 +86,10 @@ To expand on the data handling that occurs specifically within the Terraform AWS - The `Create`/`CreateContext` function of a `schema.Resource` is invoked with `*schema.ResourceData` containing the planned new state data (conventionally named `d`) and an AWS API client (conventionally named `meta`). - Note: Before reaching this point, the `ResourceData` was already translated from the Terraform Plugin Protocol data types by the Terraform Plugin SDK so values can be read by invoking `d.Get()` and `d.GetOk()` receiver methods with Attribute and Block names from the `Schema` of the `schema.Resource`. -- An AWS Go SDK operation input type (e.g. `*ec2.CreateVpcInput`) is initialized -- For each necessary field to configure in the operation input type, the data is read from the `ResourceData` (e.g. `d.Get()`, `d.GetOk()`) and converted into the AWS Go SDK type for the field (e.g. `*string`) -- The AWS Go SDK operation is invoked and the output type (e.g. `*ec2.CreateVpcOutput`) is initialized -- For each necessary Attribute, Block, or resource identifier to be saved in the state, the data is read from the AWS Go SDK type for the field (`*string`), if necessary converted into a `ResourceData` compatible type, and saved into a mutated `ResourceData` (e.g. 
`d.Set()`, `d.SetId()`) +- An AWS Go SDK operation input type (e.g., `*ec2.CreateVpcInput`) is initialized +- For each necessary field to configure in the operation input type, the data is read from the `ResourceData` (e.g., `d.Get()`, `d.GetOk()`) and converted into the AWS Go SDK type for the field (e.g., `*string`) +- The AWS Go SDK operation is invoked and the output type (e.g., `*ec2.CreateVpcOutput`) is initialized +- For each necessary Attribute, Block, or resource identifier to be saved in the state, the data is read from the AWS Go SDK type for the field (`*string`), if necessary converted into a `ResourceData` compatible type, and saved into a mutated `ResourceData` (e.g., `d.Set()`, `d.SetId()`) - Function is returned ### Type Mapping @@ -155,12 +155,24 @@ _NOTE: While it is possible in certain type scenarios to deeply read and write R Given the various complexities around the Terraform Plugin SDK type system, this section contains recommended implementations for Terraform AWS Provider resource code based on the [Type Mapping section](#type-mapping) and the features of the Terraform Plugin SDK and AWS Go SDK. The eventual goal and styling for many of these recommendations is to ease static analysis of the codebase and future potential code generation efforts. -_Some of these coding patterns may not be well represented in the codebase, as refactoring the many older styles over years of community development is a large task, however this is meant to represent the most preferable implementations today. These will continue to evolve as this codebase and the Terraform Plugin ecosystem changes._ +_Some of these coding patterns may not be well represented in the codebase, as refactoring the many older styles over years of community development is a large task. However this is meant to represent the preferred implementations today. These will continue to evolve as this codebase and the Terraform Plugin ecosystem changes._ + +### Where to Define Flex Functions + +Define FLatten and EXpand (i.e., flex) functions at the _most local level_ possible. This table provides guidance on the preferred place to define flex functions based on usage. 
+ +| Where Used | Where to Define | Include Service in Name | +|---------------|------------|--------| +| One resource (e.g., `aws_instance`) | Resource file (e.g., `internal/service/ec2/instance.go`) | No | +| Few resources in one service (e.g., `EC2`) | Resource file or service flex file (e.g., `internal/service/ec2/flex.go`) | No | +| Widely used in one service (e.g., `EC2`) | Service flex file (e.g., `internal/service/ec2/flex.go`) | No | +| Two services (e.g., `EC2` and `EKS`) | Define a copy in each service | If helpful | +| 3+ services | `internal/flex/flex.go` | Yes | ### Expand Functions for Blocks ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { if tfMap == nil { return nil } @@ -172,7 +184,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { return apiObject } -func expandServiceStructures(tfList []interface{}) []*service.Structure { +func expandStructures(tfList []interface{}) []*service.Structure { if len(tfList) == 0 { return nil } @@ -186,7 +198,7 @@ func expandServiceStructures(tfList []interface{}) []*service.Structure { continue } - apiObject := expandServiceStructure(tfMap) + apiObject := expandStructure(tfMap) if apiObject == nil { continue @@ -202,7 +214,7 @@ func expandServiceStructures(tfList []interface{}) []*service.Structure { ### Flatten Functions for Blocks ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { if apiObject == nil { return nil } @@ -214,7 +226,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ return tfMap } -func flattenServiceStructures(apiObjects []*service.Structure) []interface{} { +func flattenStructures(apiObjects []*service.Structure) []interface{} { if len(apiObjects) == 0 { return nil } @@ -226,7 +238,7 @@ func flattenServiceStructures(apiObjects []*service.Structure) []interface{} { continue } - tfList = append(tfList, flattenServiceStructure(apiObject)) + tfList = append(tfList, flattenStructure(apiObject)) } return tfList @@ -303,14 +315,14 @@ To read: input := service.ExampleOperationInput{} if v, ok := d.GetOk("attribute_name"); ok && len(v.([]interface{})) > 0 { - input.AttributeName = expandServiceStructures(v.([]interface{})) + input.AttributeName = expandStructures(v.([]interface{})) } ``` To write: ```go -if err := d.Set("attribute_name", flattenServiceStructures(output.Thing.AttributeName)); err != nil { +if err := d.Set("attribute_name", flattenStructures(output.Thing.AttributeName)); err != nil { return fmt.Errorf("error setting attribute_name: %w", err) } ``` @@ -323,7 +335,7 @@ To read: input := service.ExampleOperationInput{} if v, ok := d.GetOk("attribute_name"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.AttributeName = expandServiceStructure(v.([]interface{})[0].(map[string]interface{})) + input.AttributeName = expandStructure(v.([]interface{})[0].(map[string]interface{})) } ``` @@ -331,7 +343,7 @@ To write (_likely to have helper function introduced soon_): ```go if output.Thing.AttributeName != nil { - if err := d.Set("attribute_name", []interface{}{flattenServiceStructure(output.Thing.AttributeName)}); err != nil { + if err := d.Set("attribute_name", []interface{}{flattenStructure(output.Thing.AttributeName)}); err != nil { return fmt.Errorf("error setting attribute_name: %w", err) } } else { @@ 
-383,14 +395,14 @@ To read: input := service.ExampleOperationInput{} if v, ok := d.GetOk("attribute_name"); ok && v.(*schema.Set).Len() > 0 { - input.AttributeName = expandServiceStructures(v.(*schema.Set).List()) + input.AttributeName = expandStructures(v.(*schema.Set).List()) } ``` To write: ```go -if err := d.Set("attribute_name", flattenServiceStructures(output.Thing.AttributeNames)); err != nil { +if err := d.Set("attribute_name", flattenStructures(output.Thing.AttributeNames)); err != nil { return fmt.Errorf("error setting attribute_name: %w", err) } ``` @@ -470,7 +482,7 @@ if output.Thing.AttributeName != nil { To read, if always sending the attribute value is correct: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(bool); ok { @@ -484,7 +496,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To read, if only sending the attribute value when `true` is preferred (`!v` for opposite): ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(bool); ok && v { @@ -498,7 +510,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -514,7 +526,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(int); ok && v != 0.0 { @@ -528,7 +540,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -544,7 +556,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(int); ok && v != 0 { @@ -558,7 +570,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -574,11 +586,11 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].([]interface{}); ok && len(v) > 0 { - apiObject.NestedAttributeName = expandServiceStructures(v) + apiObject.NestedAttributeName = expandStructures(v) } // ... 
@@ -588,11 +600,11 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { - tfMap["nested_attribute_name"] = flattenServiceNestedStructures(v) + tfMap["nested_attribute_name"] = flattenNestedStructures(v) } // ... @@ -604,11 +616,11 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].([]interface{}); ok && len(v) > 0 { - apiObject.NestedAttributeName = expandServiceStructure(v[0].(map[string]interface{})) + apiObject.NestedAttributeName = expandStructure(v[0].(map[string]interface{})) } // ... @@ -618,11 +630,11 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { - tfMap["nested_attribute_name"] = []interface{}{flattenServiceNestedStructure(v)} + tfMap["nested_attribute_name"] = []interface{}{flattenNestedStructure(v)} } // ... @@ -634,7 +646,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].([]interface{}); ok && len(v) > 0 { @@ -648,7 +660,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -674,7 +686,7 @@ if v, ok := tfMap["nested_attribute_name"].(map[string]interface{}); ok && len(v To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -690,11 +702,11 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(*schema.Set); ok && v.Len() > 0 { - apiObject.NestedAttributeName = expandServiceStructures(v.List()) + apiObject.NestedAttributeName = expandStructures(v.List()) } // ... @@ -704,11 +716,11 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { - tfMap["nested_attribute_name"] = flattenServiceNestedStructures(v) + tfMap["nested_attribute_name"] = flattenNestedStructures(v) } // ... 
@@ -720,7 +732,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(*schema.Set); ok && v.Len() > 0 { @@ -734,7 +746,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -750,7 +762,7 @@ func flattenServiceStructure(apiObject *service.Structure) map[string]interface{ To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(string); ok && v != "" { @@ -764,7 +776,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -790,7 +802,7 @@ To ensure that parsing the read string value does not fail, define `nested_attri To read: ```go -func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { +func expandStructure(tfMap map[string]interface{}) *service.Structure { // ... if v, ok := tfMap["nested_attribute_name"].(string); ok && v != "" { @@ -806,7 +818,7 @@ func expandServiceStructure(tfMap map[string]interface{}) *service.Structure { To write: ```go -func flattenServiceStructure(apiObject *service.Structure) map[string]interface{} { +func flattenStructure(apiObject *service.Structure) map[string]interface{} { // ... if v := apiObject.NestedAttributeName; v != nil { @@ -827,7 +839,7 @@ Certain resources may need to interact with binary (non UTF-8) data while the Te ### Destroy State Values -During resource destroy operations, _only_ previously applied Terraform State values are available to resource logic. Even if the configuration is updated in a manner where both the resource destroy is triggered (e.g. setting the resource meta-argument `count = 0`) and an attribute value is updated, the resource logic will only have the previously applied data values. +During resource destroy operations, _only_ previously applied Terraform State values are available to resource logic. Even if the configuration is updated in a manner where both the resource destroy is triggered (e.g., setting the resource meta-argument `count = 0`) and an attribute value is updated, the resource logic will only have the previously applied data values. Any usage of attribute values during destroy should explicitly note in the resource documentation that the desired value must be applied into the Terraform State before any apply to destroy the resource. @@ -885,14 +897,14 @@ Below is a listing of relevant terms and descriptions for data handling and conv - **AWS Go SDK**: Library that converts Go code into AWS Service API compatible operations and data types. Currently refers to version 1 (v1) available since 2015, however version 2 (v2) will reach general availability status soon. [Project](https://github.com/aws/aws-sdk-go). 
- **AWS Go SDK Model**: AWS Go SDK compatible format of AWS Service API Model. - **AWS Go SDK Service**: AWS Service API Go code generated from the AWS Go SDK Model. Generated by the AWS Go SDK code. -- **AWS Service API**: Logical boundary of an AWS service by API endpoint. Some large AWS services may be marketed with many different product names under the same service API (e.g. VPC functionality is part of the EC2 API) and vice-versa where some services may be marketed with one product name but are split into multiple service APIs (e.g. Single Sign-On functionality is split into the Identity Store and SSO Admin APIs). +- **AWS Service API**: Logical boundary of an AWS service by API endpoint. Some large AWS services may be marketed with many different product names under the same service API (e.g., VPC functionality is part of the EC2 API) and vice-versa where some services may be marketed with one product name but are split into multiple service APIs (e.g., Single Sign-On functionality is split into the Identity Store and SSO Admin APIs). - **AWS Service API Model**: Declarative description of the AWS Service API operations and data types. Generated by the AWS service teams. Used to operate the API and generate API clients such as the various AWS Software Development Kits (SDKs). - **Terraform Language** ("Configuration"): Configuration syntax interpreted by the Terraform CLI. An implementation of [HCL](https://github.com/hashicorp/hcl). [Full Documentation](https://www.terraform.io/docs/configuration/index.html). - **Terraform Plugin Protocol**: Description of Terraform Plugin operations and data types. Currently based on the Remote Procedure Call (RPC) library [`gRPC`](https://grpc.io/). - **Terraform Plugin Go**: Low-level library that converts Go code into Terraform Plugin Protocol compatible operations and data types. Not currently implemented in the Terraform AWS Provider. [Project](https://github.com/hashicorp/terraform-plugin-go). - **Terraform Plugin SDK**: High-level library that converts Go code into Terraform Plugin Protocol compatible operations and data types. [Project](https://github.com/hashicorp/terraform-plugin-sdk). - **Terraform Plugin SDK Schema**: Declarative description of types and domain specific behaviors for a Terraform provider, including resources and attributes. [Full Documentation](https://www.terraform.io/docs/extend/schemas/index.html). -- **Terraform State**: Bindings between objects in a remote system (e.g. an EC2 VPC) and a Terraform configuration (e.g. an `aws_vpc` resource configuration). [Full Documentation](https://www.terraform.io/docs/state/index.html). +- **Terraform State**: Bindings between objects in a remote system (e.g., an EC2 VPC) and a Terraform configuration (e.g., an `aws_vpc` resource configuration). [Full Documentation](https://www.terraform.io/docs/state/index.html). 
AWS Service API Models use specific terminology to describe data and types: diff --git a/docs/contributing/development-environment.md b/docs/contributing/development-environment.md index 855a5980f917..c5ce54920246 100644 --- a/docs/contributing/development-environment.md +++ b/docs/contributing/development-environment.md @@ -3,19 +3,19 @@ ## Requirements - [Terraform](https://www.terraform.io/downloads.html) 0.12.26+ (to run acceptance tests) -- [Go](https://golang.org/doc/install) 1.16 (to build the provider plugin) +- [Go](https://golang.org/doc/install) 1.16+ (to build the provider plugin) ## Quick Start If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (please check the [requirements](#requirements) before proceeding). -*Note:* This project uses [Go Modules](https://blog.golang.org/using-go-modules) making it safe to work with it outside of your existing [GOPATH](http://golang.org/doc/code.html#GOPATH). The instructions that follow assume a directory in your home directory outside of the standard GOPATH (i.e `$HOME/development/terraform-providers/`). +*Note:* This project uses [Go Modules](https://blog.golang.org/using-go-modules) making it safe to work with it outside of your existing [GOPATH](http://golang.org/doc/code.html#GOPATH). The instructions that follow assume a directory in your home directory outside of the standard GOPATH (i.e `$HOME/development/hashicorp/`). -Clone repository to: `$HOME/development/terraform-providers/` +Clone repository to: `$HOME/development/hashicorp/` ```sh -$ mkdir -p $HOME/development/terraform-providers/; cd $HOME/development/terraform-providers/ -$ git clone git@github.com:terraform-providers/terraform-provider-aws +$ mkdir -p $HOME/development/hashicorp/; cd $HOME/development/hashicorp/ +$ git clone git@github.com:hashicorp/terraform-provider-aws ... ``` diff --git a/docs/contributing/error-handling.md b/docs/contributing/error-handling.md index c8e8450fe254..b40c18133b91 100644 --- a/docs/contributing/error-handling.md +++ b/docs/contributing/error-handling.md @@ -63,7 +63,7 @@ For most use cases in this codebase, this means if code is receiving an error an return fmt.Errorf("adding some additional message: %w", err) ``` -This type of error wrapping should be applied to all Terraform resource logic. It should also be applied to any nested functions that contains two or more error conditions (e.g. a function that calls an update API and waits for the update to finish) so practitioners and code maintainers have a clear idea which generated the error. When returning errors in those situations, it is important to only include necessary additional context. Resource logic will typically include the information such as the type of operation and resource identifier (e.g. `error updating Service Thing (%s): %w`), so these messages can be more terse such as `error waiting for completion: %w`. +This type of error wrapping should be applied to all Terraform resource logic. It should also be applied to any nested functions that contains two or more error conditions (e.g., a function that calls an update API and waits for the update to finish) so practitioners and code maintainers have a clear idea which generated the error. When returning errors in those situations, it is important to only include necessary additional context. 
Resource logic will typically include the information such as the type of operation and resource identifier (e.g., `error updating Service Thing (%s): %w`), so these messages can be more terse such as `error waiting for completion: %w`. ### AWS Go SDK Errors @@ -71,7 +71,7 @@ The [AWS Go SDK documentation](https://docs.aws.amazon.com/sdk-for-go/) includes For the purposes of this documentation, the most important concepts with handling these errors are: -- Each response error (which eventually implements `awserr.Error`) has a `string` error code (`Code`) and `string` error message (`Message`). When printed as a string, they format as: `Code: Message`, e.g. `InvalidParameterValueException: IAM Role arn:aws:iam::123456789012:role/XXX cannot be assumed by AWS Backup`. +- Each response error (which eventually implements `awserr.Error`) has a `string` error code (`Code`) and `string` error message (`Message`). When printed as a string, they format as: `Code: Message`, e.g., `InvalidParameterValueException: IAM Role arn:aws:iam::123456789012:role/XXX cannot be assumed by AWS Backup`. - Error handling is almost exclusively done via those `string` fields and not other response information, such as HTTP Status Codes. - When the error code is non-specific, the error message should also be checked. Unfortunately, AWS APIs generally do not provide documentation or API modeling with the contents of these messages and often the Terraform AWS Provider code must rely on substring matching. - Not all errors are returned in the response error from an AWS Go SDK operation. This is service and sometimes API call specific. For example, the [EC2 `DeleteVpcEndpoints` API call](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DeleteVpcEndpoints.html) can return a "successful" response (in terms of no response error) but include information in an `Unsuccessful` field in the response body. diff --git a/docs/contributing/faq.md b/docs/contributing/faq.md index 51e80b4f3fd9..baf4e245793f 100644 --- a/docs/contributing/faq.md +++ b/docs/contributing/faq.md @@ -7,13 +7,13 @@ The HashiCorp Terraform AWS provider team is : * Mary Cutrali, Product Manager - GitHub [@maryelizbeth](https://github.com/maryelizbeth) Twitter [@marycutrali](https://twitter.com/marycutrali) -* Kit Ewbank, Engineer - GitHub [@ewbankkit](https://github.com/ewbankkit) -* Graham Davison, Engineer - GitHub [@gdavison](https://github.com/gdavison) +* Simon Davis, Engineering Manager - GitHub [@breathingdust](https://github.com/breathingdust) * Angie Pinilla, Engineer - GitHub [@angie44](https://github.com/angie44) -* Dirk Avery (Federal), Engineer - GitHub [@YakDriver](https://github.com/yakdriver) +* Dirk Avery, Engineer - GitHub [@YakDriver](https://github.com/yakdriver) +* Graham Davison, Engineer - GitHub [@gdavison](https://github.com/gdavison) +* Kerim Satirli, Developer Advocate - GitHub [@ksatirli](https://github.com/ksatirli) +* Kit Ewbank, Engineer - GitHub [@ewbankkit](https://github.com/ewbankkit) * Zoe Helding, Engineer - GitHub [@zhelding](https://github.com/zhelding) -* Simon Davis, Engineering Manager - GitHub [@breathingdust](https://github.com/breathingdust) -* Kerim Satirli, Developer Advocate - GitHub [@ksatirli](https://github.com/ksatirli) ### Why isn’t my PR merged yet? @@ -27,7 +27,7 @@ The number one factor we look at when deciding what issues to look at are your Once we have prioritized your contribution for review, we will let you know when to expect an engineer to get in touch. 
If changes are required, we will ask in the pull request. If you are unable to find the time, just let us know, and we will make the necessary changes required in order to merge. -We publish a [roadmap](../ROADMAP.md) every quarter which describes major themes or specific product areas of focus. +We publish a [road map](../../../ROADMAP.md) every quarter which describes major themes or specific product areas of focus. We also are investing time to improve the contributing experience by improving documentation, adding more linter coverage to ensure that incoming PR's can be in as good shape as possible. This will allow us to get through them quicker. @@ -83,7 +83,7 @@ provider "aws" { Great question, if you have contributed before check out issues with the `help-wanted` label. These are normally enhancement issues that will have a great impact, but the maintainers are unable to develop them in the near future. If you are just getting started, take a look at issues with the `good-first-issue` label. Items with these labels will always be given priority for response. -Check out the [Contributing Guide](CONTRIBUTING.md) for additional information. +Check out the [Contributing Guide](contributing.md) for additional information. ### How can I become a maintainer? diff --git a/docs/contributing/maintaining.md b/docs/contributing/maintaining.md index 43e9ecd62f70..b9c31d28f960 100644 --- a/docs/contributing/maintaining.md +++ b/docs/contributing/maintaining.md @@ -14,7 +14,9 @@ - [yaml.v2 Updates](#yaml-v2-updates) - [Pull Request Merge Process](#pull-request-merge-process) - [Breaking Changes](#breaking-changes) +- [Branch Dictionary](#branch-dictionary) - [Environment Variable Dictionary](#environment-variable-dictionary) +- [Label Dictionary](#label-dictionary) - [Release Process](#release-process) @@ -33,13 +35,13 @@ Incoming issues are classified using labels. These are assigned either by automa Throughout the review process our first priority is to interact with contributors with kindness, empathy and in accordance with the [Guidelines](https://www.hashicorp.com/community-guidelines) and [Principles](https://www.hashicorp.com/our-principles/) of Hashicorp. -Our contributors are often working within the provider as a hobby, or not in their main line of work so we need to give adequate time for response. By default this is a week, but it is worth considering taking on the work to complete the PR ourselves if the administrative effort of waiting for a response is greater than just resolving the issues ourselves (Don't wait the week, or add a context shift for yourself and the contributor to fix a typo). As long as we use their commits, contributions will be recorded by Github and as always ensure to thank the contributor for their work. Roadmap items are another area where we would consider taking on the work ourselves more quickly in order to meet the commitments made to our users. +Our contributors are often working within the provider as a hobby, or not in their main line of work so we need to give adequate time for response. By default this is a week, but it is worth considering taking on the work to complete the PR ourselves if the administrative effort of waiting for a response is greater than just resolving the issues ourselves (Don't wait the week, or add a context shift for yourself and the contributor to fix a typo). As long as we use their commits, contributions will be recorded by Github and as always ensure to thank the contributor for their work. 
Road map items are another area where we would consider taking on the work ourselves more quickly in order to meet the commitments made to our users. Notes for each type of pull request are (or will be) available in subsections below. - If you plan to be responsible for the pull request through the merge/closure process, assign it to yourself - Add `bug`, `enhancement`, `new-data-source`, `new-resource`, or `technical-debt` labels to match expectations from change -- Perform a quick scan of open issues and ensure they are referenced in the pull request description (e.g. `Closes #1234`, `Relates #5678`). Edit the description yourself and mention this to the author: +- Perform a quick scan of open issues and ensure they are referenced in the pull request description (e.g., `Closes #1234`, `Relates #5678`). Edit the description yourself and mention this to the author: ```markdown This pull request appears to be related to/solve #1234, so I have edited the pull request description to denote the issue reference. @@ -50,7 +52,7 @@ This pull request appears to be related to/solve #1234, so I have edited the pul - If the change is acceptable with modifications, leave a pull request review marked using the `Request Changes` option (for maintainer pull requests with minor modification requests, giving feedback with the `Approve` option is recommended so they do not need to wait for another round of review) - If the author is unresponsive for changes (by default we give two weeks), determine importance and level of effort to finish the pull request yourself including their commits or close the pull request - Run relevant acceptance testing ([locally](https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/contributing/running-and-writing-acceptance-tests.md) or in TeamCity) against AWS Commercial and AWS GovCloud (US) to ensure no new failures are being introduced -- Approve the pull request with a comment outlining what steps you took that ensure the change is acceptable, e.g. acceptance testing output +- Approve the pull request with a comment outlining what steps you took that ensure the change is acceptable, e.g., acceptance testing output ``````markdown Looks good, thanks @username! :rocket: @@ -84,7 +86,7 @@ Ensure that the following steps are tracked within the issue and completed withi - Verify `make test lint` works as expected - Verify `goreleaser build --snapshot` succeeds for all currently supported architectures - Verify `goenv` support for the new version -- Update `docs/DEVELOPMENT.md` +- Update `development-environment.md` - Update `.go-version` - Update `CHANGELOG.md` detailing the update and mention any notes practitioners need to be aware of. @@ -96,13 +98,13 @@ Almost exclusively, `github.com/aws/aws-sdk-go` updates are additive in nature. Authentication changes: -Occasionally, there will be changes listed in the authentication pieces of the AWS Go SDK codebase, e.g. changes to `aws/session`. The AWS Go SDK `CHANGELOG` should include a relevant description of these changes under a heading such as `SDK Enhancements` or `SDK Bug Fixes`. If they seem worthy of a callout in the Terraform AWS Provider `CHANGELOG`, then upon merging we should include a similar message prefixed with the `provider` subsystem, e.g. `* provider: ...`. +Occasionally, there will be changes listed in the authentication pieces of the AWS Go SDK codebase, e.g., changes to `aws/session`. 
The AWS Go SDK `CHANGELOG` should include a relevant description of these changes under a heading such as `SDK Enhancements` or `SDK Bug Fixes`. If they seem worthy of a callout in the Terraform AWS Provider `CHANGELOG`, then upon merging we should include a similar message prefixed with the `provider` subsystem, e.g., `* provider: ...`. Additionally, if a `CHANGELOG` addition seemed appropriate, this dependency and version should also be updated in the Terraform S3 Backend, which currently lives in Terraform Core. An example of this can be found with https://github.com/hashicorp/terraform-provider-aws/pull/9305 and https://github.com/hashicorp/terraform/pull/22055. CloudFront changes: -CloudFront service client updates have previously caused an issue when a new field introduced in the SDK was not included with Terraform and caused all requests to error (https://github.com/hashicorp/terraform-provider-aws/issues/4091). As a precaution, if you see CloudFront updates, run all the CloudFront resource acceptance testing before merging (`TestAccAWSCloudFront`). +CloudFront service client updates have previously caused an issue when a new field introduced in the SDK was not included with Terraform and caused all requests to error (https://github.com/hashicorp/terraform-provider-aws/issues/4091). As a precaution, if you see CloudFront updates, run all the CloudFront resource acceptance testing before merging (`TestAccCloudFront`). New Regions: @@ -172,7 +174,7 @@ provider "aws" { ```markdown NOTES: -* provider: Region validation now automatically supports the new `XX-XXXXX-#` (Location) region. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable). When the region is not enabled, the Terraform AWS Provider will return errors during credential validation (e.g. `error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid`) or AWS operations will throw their own errors (e.g. `data.aws_availability_zones.available: Error fetching Availability Zones: AuthFailure: AWS was not able to validate the provided access credentials`). [GH-####] +* provider: Region validation now automatically supports the new `XX-XXXXX-#` (Location) region. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable). When the region is not enabled, the Terraform AWS Provider will return errors during credential validation (e.g., `error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid`) or AWS operations will throw their own errors (e.g., `data.aws_availability_zones.available: Error fetching Availability Zones: AuthFailure: AWS was not able to validate the provided access credentials`). [GH-####] ENHANCEMENTS: @@ -269,7 +271,7 @@ terraform { ```markdown NOTES: -* backend/s3: Region validation now automatically supports the new `XX-XXXXX-#` (Location) region. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable). 
When the region is not enabled, the Terraform S3 Backend will return errors during credential validation (e.g. `error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid`). [GH-####] +* backend/s3: Region validation now automatically supports the new `XX-XXXXX-#` (Location) region. For AWS operations to work in the new region, the region must be explicitly enabled as outlined in the [AWS Documentation](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable). When the region is not enabled, the Terraform S3 Backend will return errors during credential validation (e.g., `error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid`). [GH-####] ENHANCEMENTS: @@ -294,7 +296,7 @@ Merge if CI passes. ##### yaml.v2 Updates -Run the acceptance testing pattern, `TestAccAWSCloudFormationStack(_dataSource)?_yaml`, and merge if passing. +Run the acceptance testing pattern, `TestAccCloudFormationStack(_dataSource)?_yaml`, and merge if passing. ### Pull Request Merge Process @@ -317,6 +319,17 @@ When breaking changes to the provider are necessary we release them in a major v - Add the issue/PR to the next major version milestone. - Leave a comment why this is a breaking change or otherwise only being considered for a major version update. If possible, detail any changes that might be made for the contributor to accomplish the task without a breaking change. +## Branch Dictionary + +The following branch conventions are used: + +| Branch | Example | Description | +|--------|---------|-------------| +| `main` | `main` | Main, unreleased code branch. | +| `release/*` | `release/2.x` | Backport branches for previous major releases. | + +Additional branch naming recommendations can be found in the [Pull Request Submission and Lifecycle documentation](contributing/pullrequest-submission-and-lifecycle.md#branch-prefixes). + ## Environment Variable Dictionary Environment variables (beyond standard AWS Go SDK ones) used by acceptance testing. See also the `aws/internal/envvar` package. @@ -395,6 +408,96 @@ Environment variables (beyond standard AWS Go SDK ones) used by acceptance testi | `TF_ACC_ASSUME_ROLE_ARN` | Amazon Resource Name of existing IAM Role to use for limited permissions acceptance testing. | | `TF_TEST_CLOUDFRONT_RETAIN` | Flag to disable but dangle CloudFront Distributions during testing to reduce feedback time (must be manually destroyed afterwards) | +## Label Dictionary + + + +| Label | Description | Automation | +|---------|-------------|----------| +| [![breaking-change][breaking-change-badge]][breaking-change]                                    | Introduces a breaking change in current functionality; breaking changes are usually deferred to the next major release. | None | +| [![bug][bug-badge]][bug] | Addresses a defect in current functionality. | None | +| [![crash][crash-badge]][crash] | Results from or addresses a Terraform crash or kernel panic. | None | +| [![dependencies][dependencies-badge]][dependencies] | Used to indicate dependency changes. | Added by Hashibot. | +| [![documentation][documentation-badge]][documentation] | Introduces or discusses updates to documentation. | None | +| [![enhancement][enhancement-badge]][enhancement] | Requests to existing resources that expand the functionality or scope. 
| None |
+| [![examples][examples-badge]][examples] | Introduces or discusses updates to examples. | None |
+| [![good first issue][good-first-issue-badge]][good-first-issue] | Call to action for new contributors looking for a place to start. Smaller or straightforward issues. | None |
+| [![hacktoberfest][hacktoberfest-badge]][hacktoberfest] | Call to action for Hacktoberfest (OSS Initiative). | None |
+| [![hashibot ignore][hashibot-ignore-badge]][hashibot-ignore] | Issues or PRs labelled with this are ignored by Hashibot. | None |
+| [![help wanted][help-wanted-badge]][help-wanted] | Call to action for contributors. Indicates an area of the codebase we’d like to expand/work on but don’t have the bandwidth inside the team. | None |
+| [![needs-triage][needs-triage-badge]][needs-triage] | Waiting for first response or review from a maintainer. | Added to all new issues or PRs by GitHub action in `.github/workflows/issues.yml` or PRs by Hashibot in `.hashibot.hcl` unless they were submitted by a maintainer. |
+| [![new-data-source][new-data-source-badge]][new-data-source] | Introduces a new data source. | None |
+| [![new-resource][new-resource-badge]][new-resource] | Introduces a new resource. | None |
+| [![proposal][proposal-badge]][proposal] | Proposes new design or functionality. | None |
+| [![provider][provider-badge]][provider] | Pertains to the provider itself, rather than any interaction with AWS. | Added by Hashibot when the code change is in an area configured in `.hashibot.hcl`. |
+| [![question][question-badge]][question] | Includes a question about existing functionality; most questions will be re-routed to discuss.hashicorp.com. | None |
+| [![regression][regression-badge]][regression] | Pertains to a degraded workflow resulting from an upstream patch or internal enhancement; usually categorized as a bug. | None |
+| [![reinvent][reinvent-badge]][reinvent] | Pertains to a service or feature announced at AWS re:Invent. | None |
+| ![service <*>][service-badge] | Indicates the service that is covered or introduced (i.e., service/s3). | Added by Hashibot when the code change matches a service definition in `.hashibot.hcl`. |
+| ![size%2F<*>][size-badge] | Managed by automation to categorize the size of a PR. | Added by Hashibot to indicate the size of the PR. |
+| [![stale][stale-badge]][stale] | Old or inactive issues managed by automation; if no further action is taken, these will get closed. | Added by a GitHub Action; configuration is found in `.github/workflows/stale.yml`. |
+| [![technical-debt][technical-debt-badge]][technical-debt] | Addresses areas of the codebase that need refactoring or redesign. | None |
+| [![tests][tests-badge]][tests] | On a PR this indicates expanded test coverage. On an Issue this proposes expanded coverage or enhancement to test infrastructure. | None |
+| [![thinking][thinking-badge]][thinking] | Requires additional research by the maintainers. | None |
+| [![upstream-terraform][upstream-terraform-badge]][upstream-terraform] | Addresses functionality related to the Terraform core binary. | None |
+| [![upstream][upstream-badge]][upstream] | Addresses functionality related to the cloud provider. | None |
+| [![waiting-response][waiting-response-badge]][waiting-response] | Maintainers are waiting on response from community or contributor.
| None | + +[breaking-change-badge]: https://img.shields.io/badge/breaking--change-d93f0b +[breaking-change]: https://github.com/hashicorp/terraform-provider-aws/labels/breaking-change +[bug-badge]: https://img.shields.io/badge/bug-f7c6c7 +[bug]: https://github.com/hashicorp/terraform-provider-aws/labels/bug +[crash-badge]: https://img.shields.io/badge/crash-e11d21 +[crash]: https://github.com/hashicorp/terraform-provider-aws/labels/crash +[dependencies-badge]: https://img.shields.io/badge/dependencies-fad8c7 +[dependencies]: https://github.com/hashicorp/terraform-provider-aws/labels/dependencies +[documentation-badge]: https://img.shields.io/badge/documentation-fef2c0 +[documentation]: https://github.com/hashicorp/terraform-provider-aws/labels/documentation +[enhancement-badge]: https://img.shields.io/badge/enhancement-d4c5f9 +[enhancement]: https://github.com/hashicorp/terraform-provider-aws/labels/enhancement +[examples-badge]: https://img.shields.io/badge/examples-fef2c0 +[examples]: https://github.com/hashicorp/terraform-provider-aws/labels/examples +[good-first-issue-badge]: https://img.shields.io/badge/good%20first%20issue-128A0C +[good-first-issue]: https://github.com/hashicorp/terraform-provider-aws/labels/good%20first%20issue +[hacktoberfest-badge]: https://img.shields.io/badge/hacktoberfest-2c0fad +[hacktoberfest]: https://github.com/hashicorp/terraform-provider-aws/labels/hacktoberfest +[hashibot-ignore-badge]: https://img.shields.io/badge/hashibot%2Fignore-2c0fad +[hashibot-ignore]: https://github.com/hashicorp/terraform-provider-aws/labels/hashibot-ignore +[help-wanted-badge]: https://img.shields.io/badge/help%20wanted-128A0C +[help-wanted]: https://github.com/hashicorp/terraform-provider-aws/labels/help-wanted +[needs-triage-badge]: https://img.shields.io/badge/needs--triage-e236d7 +[needs-triage]: https://github.com/hashicorp/terraform-provider-aws/labels/needs-triage +[new-data-source-badge]: https://img.shields.io/badge/new--data--source-d4c5f9 +[new-data-source]: https://github.com/hashicorp/terraform-provider-aws/labels/new-data-source +[new-resource-badge]: https://img.shields.io/badge/new--resource-d4c5f9 +[new-resource]: https://github.com/hashicorp/terraform-provider-aws/labels/new-resource +[proposal-badge]: https://img.shields.io/badge/proposal-fbca04 +[proposal]: https://github.com/hashicorp/terraform-provider-aws/labels/proposal +[provider-badge]: https://img.shields.io/badge/provider-bfd4f2 +[provider]: https://github.com/hashicorp/terraform-provider-aws/labels/provider +[question-badge]: https://img.shields.io/badge/question-d4c5f9 +[question]: https://github.com/hashicorp/terraform-provider-aws/labels/question +[regression-badge]: https://img.shields.io/badge/regression-e11d21 +[regression]: https://github.com/hashicorp/terraform-provider-aws/labels/regression +[reinvent-badge]: https://img.shields.io/badge/reinvent-c5def5 +[reinvent]: https://github.com/hashicorp/terraform-provider-aws/labels/reinvent +[service-badge]: https://img.shields.io/badge/service%2F<*>-bfd4f2 +[size-badge]: https://img.shields.io/badge/size%2F<*>-ffffff +[stale-badge]: https://img.shields.io/badge/stale-e11d21 +[stale]: https://github.com/hashicorp/terraform-provider-aws/labels/stale +[technical-debt-badge]: https://img.shields.io/badge/technical--debt-1d76db +[technical-debt]: https://github.com/hashicorp/terraform-provider-aws/labels/technical-debt +[tests-badge]: https://img.shields.io/badge/tests-DDDDDD +[tests]: https://github.com/hashicorp/terraform-provider-aws/labels/tests 
+[thinking-badge]: https://img.shields.io/badge/thinking-bfd4f2 +[thinking]: https://github.com/hashicorp/terraform-provider-aws/labels/thinking +[upstream-terraform-badge]: https://img.shields.io/badge/upstream--terraform-CCCCCC +[upstream-terraform]: https://github.com/hashicorp/terraform-provider-aws/labels/upstream-terraform +[upstream-badge]: https://img.shields.io/badge/upstream-fad8c7 +[upstream]: https://github.com/hashicorp/terraform-provider-aws/labels/upstream +[waiting-response-badge]: https://img.shields.io/badge/waiting--response-5319e7 +[waiting-response]: https://github.com/hashicorp/terraform-provider-aws/labels/waiting-response + ## Release Process - Create a milestone for the next release after this release (generally, the next milestone will be a minor version increase unless previously decided for a major or patch version) @@ -404,4 +507,4 @@ Environment variables (beyond standard AWS Go SDK ones) used by acceptance testi - Web interface: With the `DEPLOYMENT_TARGET_VERSION` matching the expected release milestone and `DEPLOYMENT_NEXT_VERSION` matching the next release milestone - Wait for the TeamCity release job to complete either by watching the build logs or Slack notifications - Close the release milestone -- Create a new GitHub release with the release title exactly matching the tag and milestone (e.g. `v2.22.0`) and copy the entries from the CHANGELOG to the release notes. +- Create a new GitHub release with the release title exactly matching the tag and milestone (e.g., `v2.22.0`) and copy the entries from the CHANGELOG to the release notes. diff --git a/docs/contributing/provider-design.md b/docs/contributing/provider-design.md index 17d62a6a31ec..28df5b86903f 100644 --- a/docs/contributing/provider-design.md +++ b/docs/contributing/provider-design.md @@ -92,11 +92,11 @@ When discussing data sources, they are typically classified by the intended numb #### Plural Data Sources -These data sources are intended to return zero, one, or many results, usually associated with a managed resource type. Typically results are a set unless ordering guarantees are provided by the remote system. These should be named with a plural suffix (e.g. `s` or `es`) and should not include any specific attribute in the naming (e.g. prefer `aws_ec2_transit_gateways` instead of `aws_ec2_transit_gateway_ids`). +These data sources are intended to return zero, one, or many results, usually associated with a managed resource type. Typically results are a set unless ordering guarantees are provided by the remote system. These should be named with a plural suffix (e.g., `s` or `es`) and should not include any specific attribute in the naming (e.g., prefer `aws_ec2_transit_gateways` instead of `aws_ec2_transit_gateway_ids`). #### Singular Data Sources -These data sources are intended to return one result or an error. These should not include any specific attribute in the naming (e.g. prefer `aws_ec2_transit_gateway` instead of `aws_ec2_transit_gateway_id`). +These data sources are intended to return one result or an error. These should not include any specific attribute in the naming (e.g., prefer `aws_ec2_transit_gateway` instead of `aws_ec2_transit_gateway_id`). 
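As a rough sketch of the plural convention (the `aws_example_things` name, package, and `ids` attribute below are placeholders, not existing provider code), such a data source exposes its matches as a computed set rather than naming a specific attribute:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// dataSourceThings sketches a plural data source: zero, one, or many results,
// returned as a set because the remote system provides no ordering guarantee.
func dataSourceThings() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceThingsRead,
		Schema: map[string]*schema.Schema{
			"ids": {
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
	}
}

func dataSourceThingsRead(d *schema.ResourceData, meta interface{}) error {
	// ... call the AWS API and collect zero or more matching identifiers ...
	ids := []string{"thing-1", "thing-2"} // placeholder results

	// Data sources must set an ID; real implementations commonly derive it
	// from the region or a hash of the results.
	d.SetId("example")

	if err := d.Set("ids", ids); err != nil {
		return fmt.Errorf("error setting ids: %w", err)
	}

	return nil
}
```

A singular `aws_example_thing` counterpart would instead return exactly one result or an error.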
### IAM Resource-Based Policy Resources diff --git a/docs/contributing/pullrequest-submission-and-lifecycle.md b/docs/contributing/pullrequest-submission-and-lifecycle.md index f49b02fc6f77..cd30223918a8 100644 --- a/docs/contributing/pullrequest-submission-and-lifecycle.md +++ b/docs/contributing/pullrequest-submission-and-lifecycle.md @@ -120,7 +120,7 @@ This Contribution Guide also includes separate sections on topics such as [Error - [ ] __Passes Testing__: All code and documentation changes must pass unit testing, code linting, and website link testing. Resource code changes must pass all acceptance testing for the resource. - [ ] __Avoids API Calls Across Account, Region, and Service Boundaries__: Resources should not implement cross-account, cross-region, or cross-service API calls. - [ ] __Avoids Optional and Required for Non-Configurable Attributes__: Resource schema definitions for read-only attributes should not include `Optional: true` or `Required: true`. -- [ ] __Avoids resource.Retry() without resource.RetryableError()__: Resource logic should only implement [`resource.Retry()`](https://godoc.org/github.com/hashicorp/terraform/helper/resource#Retry) if there is a retryable condition (e.g. `return resource.RetryableError(err)`). +- [ ] __Avoids resource.Retry() without resource.RetryableError()__: Resource logic should only implement [`resource.Retry()`](https://godoc.org/github.com/hashicorp/terraform/helper/resource#Retry) if there is a retryable condition (e.g., `return resource.RetryableError(err)`). - [ ] __Avoids Resource Read Function in Data Source Read Function__: Data sources should fully implement their own resource `Read` functionality including duplicating `d.Set()` calls. - [ ] __Avoids Reading Schema Structure in Resource Code__: The resource `Schema` should not be read in resource `Create`/`Read`/`Update`/`Delete` functions to perform looping or otherwise complex attribute logic. Use [`d.Get()`](https://godoc.org/github.com/hashicorp/terraform/helper/schema#ResourceData.Get) and [`d.Set()`](https://godoc.org/github.com/hashicorp/terraform/helper/schema#ResourceData.Set) directly with individual attributes instead. - [ ] __Avoids ResourceData.GetOkExists()__: Resource logic should avoid using [`ResourceData.GetOkExists()`](https://godoc.org/github.com/hashicorp/terraform/helper/schema#ResourceData.GetOkExists) as its expected functionality is not guaranteed in all scenarios. @@ -181,8 +181,8 @@ The below are style-based items that _may_ be noted during review and are recomm When the `arn` attribute is synthesized this way, add the resource to the [list](https://www.terraform.io/docs/providers/aws/index.html#argument-reference) of those affected by the provider's `skip_requesting_account_id` attribute. -- [ ] __Implements Warning Logging With Resource State Removal__: If a resource is removed outside of Terraform (e.g. via different tool, API, or web UI), `d.SetId("")` and `return nil` can be used in the resource `Read` function to trigger resource recreation. When this occurs, a warning log message should be printed beforehand: `log.Printf("[WARN] {SERVICE} {THING} (%s) not found, removing from state", d.Id())` -- [ ] __Uses American English for Attribute Naming__: For any ambiguity with attribute naming, prefer American English over British English. e.g. `color` instead of `colour`. 
+- [ ] __Implements Warning Logging With Resource State Removal__: If a resource is removed outside of Terraform (e.g., via different tool, API, or web UI), `d.SetId("")` and `return nil` can be used in the resource `Read` function to trigger resource recreation. When this occurs, a warning log message should be printed beforehand: `log.Printf("[WARN] {SERVICE} {THING} (%s) not found, removing from state", d.Id())` +- [ ] __Uses American English for Attribute Naming__: For any ambiguity with attribute naming, prefer American English over British English. e.g., `color` instead of `colour`. - [ ] __Skips Timestamp Attributes__: Generally, creation and modification dates from the API should be omitted from the schema. - [ ] __Uses Paginated AWS Go SDK Functions When Iterating Over a Collection of Objects__: When the API for listing a collection of objects provides a paginated function, use it instead of looping until the next page token is not set. For example, with the EC2 API, [`DescribeInstancesPages`](https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#EC2.DescribeInstancesPages) should be used instead of [`DescribeInstances`](https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#EC2.DescribeInstances) when more than one result is expected. - [ ] __Adds Paginated Functions Missing from the AWS Go SDK to Internal Service Package__: If the AWS Go SDK does not define a paginated equivalent for a function to list a collection of objects, it should be added to a per-service internal package using the [`listpages` generator](../../aws/internal/generators/listpages/README.md). A support case should also be opened with AWS to have the paginated functions added to the AWS Go SDK. @@ -241,7 +241,7 @@ aws_workspaces_workspace ``` `````` -##### New full-length documentation guides (e.g. EKS Getting Started Guide, IAM Policy Documents with Terraform) +##### New full-length documentation guides (e.g., EKS Getting Started Guide, IAM Policy Documents with Terraform) A new full length documentation entry gives the title of the documentation added, using the `release-note:new-guide` header. diff --git a/docs/contributing/retries-and-waiters.md b/docs/contributing/retries-and-waiters.md index 21b2523072c5..0df57fdfdcbd 100644 --- a/docs/contributing/retries-and-waiters.md +++ b/docs/contributing/retries-and-waiters.md @@ -74,7 +74,7 @@ In some situations, while handling a response, the AWS Go SDK automatically retr - Certain network errors. A common exception to this is connection reset errors. - HTTP status codes 429 and 5xx. -- Certain API error codes, which are common across various AWS services (e.g. `ThrottledException`). However, not all AWS services implement these error codes consistently. A common exception to this is certain expired credentials errors. +- Certain API error codes, which are common across various AWS services (e.g., `ThrottledException`). However, not all AWS services implement these error codes consistently. A common exception to this is certain expired credentials errors. By default, the Terraform AWS Provider sets the maximum number of AWS Go SDK retries based on the [`max_retries` provider configuration](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#max_retries). The provider configuration defaults to 25 and the exponential backoff roughly equates to one hour of retries. This very high default value was present before the Terraform AWS Provider codebase was split from Terraform CLI in version 0.10. 
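As a minimal sketch (not the provider's actual configuration wiring), a `max_retries`-style value is ultimately handed to the AWS Go SDK, which then applies its automatic exponential backoff up to that limit for every operation made with the resulting client:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// newEC2Client is illustrative only: the AWS Go SDK retries throttling, 5xx,
// and certain network errors automatically, up to the configured maximum.
func newEC2Client(maxRetries int) (*ec2.EC2, error) {
	sess, err := session.NewSession(&aws.Config{
		MaxRetries: aws.Int(maxRetries),
	})
	if err != nil {
		return nil, err
	}

	return ec2.New(sess), nil
}
```

The operation-specific retries discussed below are layered on top of this SDK-level behavior.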
@@ -122,7 +122,7 @@ const ( ``` ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go import ( // ... other imports ... @@ -147,7 +147,7 @@ import ( }) // This check is important - it handles when the AWS Go SDK operation retries without returning. - // e.g. any automatic retries due to network or throttling errors. + // e.g., any automatic retries due to network or throttling errors. if tfresource.TimedOut(err) { // The use of equals assignment (over colon equals) is also important here. // This overwrites the error variable to simplify logic. @@ -178,7 +178,7 @@ The last operation can receive varied API errors ranging from: Each AWS service API (and sometimes even operations within the same API) varies in the implementation of these errors. To handle them, it is recommended to use the [Operation Specific Error Retries](#operation-specific-error-retries) pattern. The Terraform AWS Provider implements a standard timeout constant of two minutes in the `aws/internal/service/iam/waiter` package which should be used for all retry timeouts associated with IAM errors. This timeout was derived from years of Terraform operational experience with all AWS APIs. ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go import ( // ... other imports ... @@ -219,7 +219,7 @@ Some remote system operations run asynchronously as detailed in the [Asynchronou The below code example highlights this situation for a resource creation that also exhibited IAM eventual consistency. ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go import ( // ... other imports ... @@ -297,7 +297,7 @@ const ( ``` ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go function ExampleThingCreate(d *schema.ResourceData, meta interface{}) error { // ... @@ -415,7 +415,7 @@ func ThingAttributeUpdated(conn *example.Example, id string, expectedValue strin ``` ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go function ExampleThingUpdate(d *schema.ResourceData, meta interface{}) error { // ... @@ -434,7 +434,7 @@ function ExampleThingUpdate(d *schema.ResourceData, meta interface{}) error { ## Asynchronous Operations -When you initiate a long-running operation, an AWS service may return a successful response immediately and continue working on the request asynchronously. A resource can track the status with a component-level field (e.g. `CREATING`, `UPDATING`, etc.) or an explicit tracking identifier. +When you initiate a long-running operation, an AWS service may return a successful response immediately and continue working on the request asynchronously. A resource can track the status with a component-level field (e.g., `CREATING`, `UPDATING`, etc.) or an explicit tracking identifier. Terraform resources should wait for these background operations to complete. Failing to do so can introduce incomplete state information and downstream errors in other resources. In rare scenarios involving very long-running operations, operators may request a flag to skip the waiting. However, these should only be implemented case-by-case to prevent those previously mentioned confusing issues. @@ -525,7 +525,7 @@ func ThingDeleted(conn *example.Example, id string) (*example.Thing, error) { ``` ```go -// aws/resource_example_thing.go +// internal/service/{service}/{thing}.go function ExampleThingCreate(d *schema.ResourceData, meta interface{}) error { // ... AWS Go SDK logic to create resource ... 
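The status-function/wait-function split used throughout these examples can also be sketched against a real API. The following is a minimal, illustrative version using DynamoDB table status (the function names, package name, and timeout argument are assumptions for the example, not the provider's actual waiter package):

```go
// Illustrative sketch only: a status function plus a waiter built on
// resource.StateChangeConf, polling a DynamoDB table until it is ACTIVE.
package waiter

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// tableStatus reports the current status string for the named table.
func tableStatus(conn *dynamodb.DynamoDB, name string) resource.StateRefreshFunc {
	return func() (interface{}, string, error) {
		output, err := conn.DescribeTable(&dynamodb.DescribeTableInput{
			TableName: aws.String(name),
		})

		if err != nil {
			return nil, "", err
		}

		if output == nil || output.Table == nil {
			return nil, "", nil
		}

		return output.Table, aws.StringValue(output.Table.TableStatus), nil
	}
}

// waitTableCreated blocks until the table leaves CREATING and reaches ACTIVE.
func waitTableCreated(conn *dynamodb.DynamoDB, name string, timeout time.Duration) (*dynamodb.TableDescription, error) {
	stateConf := &resource.StateChangeConf{
		Pending: []string{dynamodb.TableStatusCreating},
		Target:  []string{dynamodb.TableStatusActive},
		Refresh: tableStatus(conn, name),
		Timeout: timeout,
	}

	outputRaw, err := stateConf.WaitForState()

	if output, ok := outputRaw.(*dynamodb.TableDescription); ok {
		return output, err
	}

	return nil, err
}
```

A resource `Create` function would then call the waiter after the create API call returns, surfacing any timeout or terminal error to Terraform.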
@@ -548,4 +548,4 @@ function ExampleThingDelete(d *schema.ResourceData, meta interface{}) error { } ``` -Typically, the AWS Go SDK should include constants for various status field values (e.g. `StatusCreating` for `CREATING`). If not, create them in a file named `aws/internal/service/{SERVICE}/consts.go`. +Typically, the AWS Go SDK should include constants for various status field values (e.g., `StatusCreating` for `CREATING`). If not, create them in a file named `aws/internal/service/{SERVICE}/consts.go`. diff --git a/docs/contributing/running-and-writing-acceptance-tests.md b/docs/contributing/running-and-writing-acceptance-tests.md index 41c1d906822d..922673b63ac3 100644 --- a/docs/contributing/running-and-writing-acceptance-tests.md +++ b/docs/contributing/running-and-writing-acceptance-tests.md @@ -93,11 +93,11 @@ export AWS_DEFAULT_REGION=us-gov-west-1 Tests can then be run by specifying a regular expression defining the tests to run: ```sh -$ make testacc TESTARGS='-run=TestAccAWSCloudWatchDashboard_update' +$ make testacc TESTARGS='-run=TestAccCloudWatchDashboard_update' ==> Checking that code complies with gofmt requirements... -TF_ACC=1 go test ./aws -v -run=TestAccAWSCloudWatchDashboard_update -timeout 120m -=== RUN TestAccAWSCloudWatchDashboard_update ---- PASS: TestAccAWSCloudWatchDashboard_update (26.56s) +TF_ACC=1 go test ./aws -v -run=TestAccCloudWatchDashboard_update -timeout 120m +=== RUN TestAccCloudWatchDashboard_update +--- PASS: TestAccCloudWatchDashboard_update (26.56s) PASS ok github.com/hashicorp/terraform-provider-aws/aws 26.607s ``` @@ -108,15 +108,15 @@ write the regular expression. For example, to run all tests of the testing like this: ```sh -$ make testacc TESTARGS='-run=TestAccAWSCloudWatchDashboard' +$ make testacc TESTARGS='-run=TestAccCloudWatchDashboard' ==> Checking that code complies with gofmt requirements... -TF_ACC=1 go test ./aws -v -run=TestAccAWSCloudWatchDashboard -timeout 120m -=== RUN TestAccAWSCloudWatchDashboard_importBasic ---- PASS: TestAccAWSCloudWatchDashboard_importBasic (15.06s) -=== RUN TestAccAWSCloudWatchDashboard_basic ---- PASS: TestAccAWSCloudWatchDashboard_basic (12.70s) -=== RUN TestAccAWSCloudWatchDashboard_update ---- PASS: TestAccAWSCloudWatchDashboard_update (27.81s) +TF_ACC=1 go test ./aws -v -run=TestAccCloudWatchDashboard -timeout 120m +=== RUN TestAccCloudWatchDashboard_importBasic +--- PASS: TestAccCloudWatchDashboard_importBasic (15.06s) +=== RUN TestAccCloudWatchDashboard_basic +--- PASS: TestAccCloudWatchDashboard_basic (12.70s) +=== RUN TestAccCloudWatchDashboard_update +--- PASS: TestAccCloudWatchDashboard_update (27.81s) PASS ok github.com/hashicorp/terraform-provider-aws/aws 55.619s ``` @@ -132,12 +132,12 @@ Please Note: On macOS 10.14 and later (and some Linux distributions), the defaul Certain testing requires multiple AWS accounts. 
This additional setup is not typically required and the testing will return an error (shown below) if your current setup does not have the secondary AWS configuration: ```console -$ make testacc TEST=./aws TESTARGS='-run=TestAccAWSDBInstance_DbSubnetGroupName_RamShared' -=== RUN TestAccAWSDBInstance_DbSubnetGroupName_RamShared -=== PAUSE TestAccAWSDBInstance_DbSubnetGroupName_RamShared -=== CONT TestAccAWSDBInstance_DbSubnetGroupName_RamShared - TestAccAWSDBInstance_DbSubnetGroupName_RamShared: provider_test.go:386: AWS_ALTERNATE_ACCESS_KEY_ID or AWS_ALTERNATE_PROFILE must be set for acceptance tests ---- FAIL: TestAccAWSDBInstance_DbSubnetGroupName_RamShared (2.22s) +$ make testacc TEST=./aws TESTARGS='-run=TestAccDBInstance_DbSubnetGroupName_RamShared' +=== RUN TestAccDBInstance_DbSubnetGroupName_RamShared +=== PAUSE TestAccDBInstance_DbSubnetGroupName_RamShared +=== CONT TestAccDBInstance_DbSubnetGroupName_RamShared + TestAccDBInstance_DbSubnetGroupName_RamShared: provider_test.go:386: AWS_ALTERNATE_ACCESS_KEY_ID or AWS_ALTERNATE_PROFILE must be set for acceptance tests +--- FAIL: TestAccDBInstance_DbSubnetGroupName_RamShared (2.22s) FAIL FAIL github.com/hashicorp/terraform-provider-aws/aws 4.305s FAIL @@ -193,13 +193,13 @@ readable, and allows reuse of assertion functions across different tests of the same type of resource. The definition of a complete test looks like this: ```go -func TestAccAWSCloudWatchDashboard_basic(t *testing.T) { +func TestAccCloudWatchDashboard_basic(t *testing.T) { var dashboard cloudwatch.GetDashboardOutput rInt := acctest.RandInt() resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, cloudwatch.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, cloudwatch.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, Steps: []resource.TestStep{ { @@ -320,7 +320,7 @@ When executing the test, the following steps are taken for each `TestStep`: Most resources that implement standard Create, Read, Update, and Delete functionality should follow the pattern below. Each test type has a section that describes them in more detail: - **basic**: This represents the bare minimum verification that the resource can be created, read, deleted, and optionally imported. -- **disappears**: A test that verifies Terraform will offer to recreate a resource if it is deleted outside of Terraform (e.g. via the Console) instead of returning an error that it cannot be found. +- **disappears**: A test that verifies Terraform will offer to recreate a resource if it is deleted outside of Terraform (e.g., via the Console) instead of returning an error that it cannot be found. - **Per Attribute**: A test that verifies the resource with a single additional argument can be created, read, optionally updated (or force resource recreation), deleted, and optionally imported. The leading sections below highlight additional recommended patterns. @@ -439,7 +439,7 @@ Typically the `rName` is always the first argument to the test configuration fun #### Other Recommended Variables -We also typically recommend saving a `resourceName` variable in the test that contains the resource reference, e.g. `aws_example_thing.test`, which is repeatedly used in the checks. 
+We also typically recommend saving a `resourceName` variable in the test that contains the resource reference, e.g., `aws_example_thing.test`, which is repeatedly used in the checks. For example: @@ -484,7 +484,7 @@ resource "aws_example_thing" "test" { Usually this test is implemented first. The test configuration should contain only required arguments (`Required: true` attributes) and it should check the values of all read-only attributes (`Computed: true` without `Optional: true`). If the resource supports it, it verifies import. It should _NOT_ perform other `TestStep` such as updates or verify recreation. -These are typically named `TestAccAws{SERVICE}{THING}_basic`, e.g. `TestAccAwsCloudWatchDashboard_basic` +These are typically named `TestAccAws{SERVICE}{THING}_basic`, e.g., `TestAccAwsCloudWatchDashboard_basic` For example: @@ -494,9 +494,9 @@ func TestAccAwsExampleThing_basic(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAwsExampleThingDestroy, Steps: []resource.TestStep{ { @@ -540,7 +540,7 @@ func TestAccAwsExampleThing_basic(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, + PreCheck: func() { acctest.PreCheck(t) }, // ... additional checks follow ... }) } @@ -567,7 +567,7 @@ func TestAccAwsExampleThing_basic(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(waf.EndpointsID, t) }, + PreCheck: func() { acctest.PreCheck(t); testAccPartitionHasServicePreCheck(waf.EndpointsID, t) }, // ... additional checks follow ... }) } @@ -585,7 +585,7 @@ func TestAccAwsExampleThing_basic(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t), testAccPreCheckAwsExample(t) }, + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckAwsExample(t) }, // ... additional checks follow ... }) } @@ -620,7 +620,7 @@ func TestAccAwsExampleThing_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ // PreCheck - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), // ... additional checks follow ... }) } @@ -658,9 +658,9 @@ func testAccErrorCheckSkipService(t *testing.T) resource.ErrorCheckFunc { #### Disappears Acceptance Tests -This test is generally implemented second. It is straightforward to setup once the basic test is passing since it can reuse that test configuration. It prevents a common bug report with Terraform resources that error when they can not be found (e.g. deleted outside Terraform). +This test is generally implemented second. It is straightforward to set up once the basic test is passing since it can reuse that test configuration. It prevents a common bug report with Terraform resources that error when they cannot be found (e.g., deleted outside Terraform). -These are typically named `TestAccAws{SERVICE}{THING}_disappears`, e.g.
`TestAccAwsCloudWatchDashboard_disappears` +These are typically named `TestAccAws{SERVICE}{THING}_disappears`, e.g., `TestAccAwsCloudWatchDashboard_disappears` For example: @@ -670,16 +670,16 @@ func TestAccAwsExampleThing_disappears(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAwsExampleThingDestroy, Steps: []resource.TestStep{ { Config: testAccAwsExampleThingConfigName(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsExampleThingExists(resourceName, &job), - testAccCheckResourceDisappears(testAccProvider, resourceAwsExampleThing(), resourceName), + acctest.CheckResourceDisappears(testAccProvider, resourceAwsExampleThing(), resourceName), ), ExpectNonEmptyPlan: true, }, @@ -704,7 +704,7 @@ if err != nil { } ``` -For children resources that are encapsulated by a parent resource, it is also preferable to verify that removing the parent resource will not generate an error either. These are typically named `TestAccAws{SERVICE}{THING}_disappears_{PARENT}`, e.g. `TestAccAwsRoute53ZoneAssociation_disappears_Vpc` +For children resources that are encapsulated by a parent resource, it is also preferable to verify that removing the parent resource will not generate an error either. These are typically named `TestAccAws{SERVICE}{THING}_disappears_{PARENT}`, e.g., `TestAccAwsRoute53ZoneAssociation_disappears_Vpc` ```go func TestAccAwsExampleChildThing_disappears_ParentThing(t *testing.T) { @@ -713,16 +713,16 @@ func TestAccAwsExampleChildThing_disappears_ParentThing(t *testing.T) { resourceName := "aws_example_child_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAwsExampleChildThingDestroy, Steps: []resource.TestStep{ { Config: testAccAwsExampleThingConfigName(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsExampleThingExists(resourceName), - testAccCheckResourceDisappears(testAccProvider, resourceAwsExampleParentThing(), parentResourceName), + acctest.CheckResourceDisappears(testAccProvider, resourceAwsExampleParentThing(), parentResourceName), ), ExpectNonEmptyPlan: true, }, @@ -733,7 +733,7 @@ func TestAccAwsExampleChildThing_disappears_ParentThing(t *testing.T) { #### Per Attribute Acceptance Tests -These are typically named `TestAccAws{SERVICE}{THING}_{ATTRIBUTE}`, e.g. 
`TestAccAwsCloudWatchDashboard_Name` +These are typically named `TestAccAws{SERVICE}{THING}_{ATTRIBUTE}`, e.g., `TestAccAwsCloudWatchDashboard_Name` For example: @@ -743,9 +743,9 @@ func TestAccAwsExampleThing_Description(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAwsExampleThingDestroy, Steps: []resource.TestStep{ { @@ -789,7 +789,7 @@ When testing requires AWS infrastructure in a second AWS account, the below chan - In the `PreCheck` function, include `testAccAlternateAccountPreCheck(t)` to ensure a standardized set of information is required for cross-account testing credentials - Declare a `providers` variable at the top of the test function: `var providers []*schema.Provider` -- Switch usage of `Providers: testAccProviders` to `ProviderFactories: testAccProviderFactoriesAlternate(&providers)` +- Switch usage of `Providers: acctest.Providers` to `ProviderFactories: testAccProviderFactoriesAlternate(&providers)` - Add `testAccAlternateAccountProviderConfig()` to the test configuration and use `provider = awsalternate` for cross-account resources. The resource that is the focus of the acceptance test should _not_ use the alternate provider identification to simplify the testing setup. - For any `TestStep` that includes `ImportState: true`, add the `Config` that matches the previous `TestStep` `Config` @@ -802,10 +802,10 @@ func TestAccAwsExample_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { - testAccPreCheck(t) + acctest.PreCheck(t) testAccAlternateAccountPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), ProviderFactories: testAccProviderFactoriesAlternate(&providers), CheckDestroy: testAccCheckAwsExampleDestroy, Steps: []resource.TestStep{ @@ -853,7 +853,7 @@ When testing requires AWS infrastructure in a second or third AWS region, the be - In the `PreCheck` function, include `testAccMultipleRegionPreCheck(t, ###)` to ensure a standardized set of information is required for cross-region testing configuration. If the infrastructure in the second AWS region is also in a second AWS account also include `testAccAlternateAccountPreCheck(t)` - Declare a `providers` variable at the top of the test function: `var providers []*schema.Provider` -- Switch usage of `Providers: testAccProviders` to `ProviderFactories: testAccProviderFactoriesMultipleRegion(&providers, 2)` (where the last parameter is number of regions) +- Switch usage of `Providers: acctest.Providers` to `ProviderFactories: testAccProviderFactoriesMultipleRegion(&providers, 2)` (where the last parameter is number of regions) - Add `testAccMultipleRegionProviderConfig(###)` to the test configuration and use `provider = awsalternate` (and potentially `provider = awsthird`) for cross-region resources. The resource that is the focus of the acceptance test should _not_ use the alternative providers to simplify the testing setup. 
If the infrastructure in the second AWS region is also in a second AWS account use `testAccAlternateAccountAlternateRegionProviderConfig()` instead - For any `TestStep` that includes `ImportState: true`, add the `Config` that matches the previous `TestStep` `Config` @@ -866,10 +866,10 @@ func TestAccAwsExample_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { - testAccPreCheck(t) + acctest.PreCheck(t) testAccMultipleRegionPreCheck(t, 2) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), ProviderFactories: testAccProviderFactoriesMultipleRegion(&providers, 2), CheckDestroy: testAccCheckAwsExampleDestroy, Steps: []resource.TestStep{ @@ -1003,10 +1003,10 @@ func testAccGetPricingRegion() string { For the resource or data source acceptance tests, the key items to adjust are: -* Ensure `TestCase` uses `ProviderFactories: testAccProviderFactories` instead of `Providers: testAccProviders` -* Add the call for the new `PreCheck` function (keeping `testAccPreCheck(t)`), e.g. `PreCheck: func() { testAccPreCheck(t); testAccPreCheckPricing(t) },` -* If the testing is for a managed resource with a `CheckDestroy` function, ensure it uses the new provider instance, e.g. `testAccProviderPricing`, instead of `testAccProvider`. -* If the testing is for a managed resource with a `Check...Exists` function, ensure it uses the new provider instance, e.g. `testAccProviderPricing`, instead of `testAccProvider`. +* Ensure `TestCase` uses `ProviderFactories: testAccProviderFactories` instead of `Providers: acctest.Providers` +* Add the call for the new `PreCheck` function (keeping `acctest.PreCheck(t)`), e.g., `PreCheck: func() { acctest.PreCheck(t); testAccPreCheckPricing(t) },` +* If the testing is for a managed resource with a `CheckDestroy` function, ensure it uses the new provider instance, e.g., `testAccProviderPricing`, instead of `testAccProvider`. +* If the testing is for a managed resource with a `Check...Exists` function, ensure it uses the new provider instance, e.g., `testAccProviderPricing`, instead of `testAccProvider`. * In each `TestStep` configuration, ensure the new provider configuration function is called, e.g. ```go @@ -1029,7 +1029,7 @@ When encountering these types of components, the acceptance testing can be setup To convert to serialized (one test at a time) acceptance testing: -- Convert all existing capital `T` test functions with the limited component to begin with a lowercase `t`, e.g. `TestAccSagemakerDomain_basic` becomes `testAccSagemakerDomain_basic`. This will prevent the test framework from executing these tests directly as the prefix `Test` is required. +- Convert all existing capital `T` test functions with the limited component to begin with a lowercase `t`, e.g., `TestAccSagemakerDomain_basic` becomes `testAccSagemakerDomain_basic`. This will prevent the test framework from executing these tests directly as the prefix `Test` is required. - In each of these test functions, convert `resource.ParallelTest` to `resource.Test` - Create a capital `T` `TestAcc{Service}{Thing}_serial` test function that then references all the lowercase `t` test functions. If multiple test files are referenced, this new test be created in a new shared file such as `aws/{Service}_test.go`. 
The contents of this test can be set up like the following: @@ -1065,10 +1065,10 @@ _NOTE: Future iterations of these acceptance testing concurrency instructions wi Writing acceptance testing for data sources is similar to resources, with the biggest changes being: - Adding `DataSource` to the test and configuration naming, such as `TestAccAwsExampleThingDataSource_Filter` -- The basic test _may_ be named after the easiest lookup attribute instead, e.g. `TestAccAwsExampleThingDataSource_Name` +- The basic test _may_ be named after the easiest lookup attribute instead, e.g., `TestAccAwsExampleThingDataSource_Name` - No disappears testing - Almost all checks should be done with [`resource.TestCheckResourceAttrPair()`](https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource?tab=doc#TestCheckResourceAttrPair) to compare the data source attributes to the resource attributes -- The usage of an additional `dataSourceName` variable to store a data source reference, e.g. `data.aws_example_thing.test` +- The usage of an additional `dataSourceName` variable to store a data source reference, e.g., `data.aws_example_thing.test` Data sources testing should still utilize the `CheckDestroy` function of the resource, just to continue verifying that there are no dangling AWS resources after a test is run. @@ -1083,9 +1083,9 @@ func TestAccAwsExampleThingDataSource_Name(t *testing.T) { resourceName := "aws_example_thing.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, service.EndpointsID), - Providers: testAccProviders, + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID), + Providers: acctest.Providers, CheckDestroy: testAccCheckAwsExampleThingDestroy, Steps: []resource.TestStep{ { @@ -1309,7 +1309,7 @@ The below are required items that will be noted during submission review and pre - [ ] __Implements Exists Check Function__: Resource testing should include a `TestCheckFunc` function (typically named `testAccCheckAws{SERVICE}{RESOURCE}Exists`) that calls the API to verify that the Terraform resource has been created or associated as appropriate. Preferably, this function will also accept a pointer to an API object representing the Terraform resource from the API response that can be set for potential usage in later `TestCheckFunc`. More information about these functions can be found in the [Extending Terraform Custom Check Functions documentation](https://www.terraform.io/docs/extend/testing/acceptance-tests/testcase.html#checkdestroy). - [ ] __Excludes Provider Declarations__: Test configurations should not include `provider "aws" {...}` declarations. If necessary, only the provider declarations in `provider_test.go` should be used for multiple account/region or otherwise specialized testing. - [ ] __Passes in us-west-2 Region__: Tests default to running in `us-west-2` and at a minimum should pass in that region or include necessary `PreCheck` functions to skip the test when run outside an expected environment. -- [ ] __Includes ErrorCheck__: All acceptance tests should include a call to the common ErrorCheck (`ErrorCheck: testAccErrorCheck(t, service.EndpointsID),`). +- [ ] __Includes ErrorCheck__: All acceptance tests should include a call to the common ErrorCheck (`ErrorCheck: acctest.ErrorCheck(t, service.EndpointsID),`).
- [ ] __Uses resource.ParallelTest__: Tests should utilize [`resource.ParallelTest()`](https://godoc.org/github.com/hashicorp/terraform/helper/resource#ParallelTest) instead of [`resource.Test()`](https://godoc.org/github.com/hashicorp/terraform/helper/resource#Test) except where serialized testing is absolutely required. - [ ] __Uses fmt.Sprintf()__: Test configurations preferably should be separated into their own functions (typically named `testAccAws{SERVICE}{RESOURCE}Config{PURPOSE}`) that call [`fmt.Sprintf()`](https://golang.org/pkg/fmt/#Sprintf) for variable injection or a string `const` for completely static configurations. Test configurations should avoid `var` or other variable injection functionality such as [`text/template`](https://golang.org/pkg/text/template/). - [ ] __Uses Randomized Infrastructure Naming__: Test configurations that utilize resources where a unique name is required should generate a random name. Typically this is created via `rName := acctest.RandomWithPrefix("tf-acc-test")` in the acceptance test function before generating the configuration. @@ -1321,14 +1321,14 @@ For resources that support import, the additional item below is required that wi The below are style-based items that _may_ be noted during review and are recommended for simplicity, consistency, and quality assurance: -- [ ] __Uses Builtin Check Functions__: Tests should utilize already available check functions, e.g. `resource.TestCheckResourceAttr()`, to verify values in the Terraform state over creating custom `TestCheckFunc`. More information about these functions can be found in the [Extending Terraform Builtin Check Functions documentation](https://www.terraform.io/docs/extend/testing/acceptance-tests/teststep.html#builtin-check-functions). +- [ ] __Uses Builtin Check Functions__: Tests should utilize already available check functions, e.g., `resource.TestCheckResourceAttr()`, to verify values in the Terraform state over creating custom `TestCheckFunc`. More information about these functions can be found in the [Extending Terraform Builtin Check Functions documentation](https://www.terraform.io/docs/extend/testing/acceptance-tests/teststep.html#builtin-check-functions). - [ ] __Uses TestCheckResourceAttrPair() for Data Sources__: Tests should utilize [`resource.TestCheckResourceAttrPair()`](https://godoc.org/github.com/hashicorp/terraform/helper/resource#TestCheckResourceAttrPair) to verify values in the Terraform state for data sources attributes to compare them with their expected resource attributes. - [ ] __Excludes Timeouts Configurations__: Test configurations should not include `timeouts {...}` configuration blocks except for explicit testing of customizable timeouts (typically very short timeouts with `ExpectError`). -- [ ] __Implements Default and Zero Value Validation__: The basic test for a resource (typically named `TestAccAws{SERVICE}{RESOURCE}_basic`) should utilize available check functions, e.g. `resource.TestCheckResourceAttr()`, to verify default and zero values in the Terraform state for all attributes.
Empty/missing configuration blocks can be verified with `resource.TestCheckResourceAttr(resourceName, "{ATTRIBUTE}.#", "0")` and empty maps with `resource.TestCheckResourceAttr(resourceName, "{ATTRIBUTE}.%", "0")` +- [ ] __Implements Default and Zero Value Validation__: The basic test for a resource (typically named `TestAccAws{SERVICE}{RESOURCE}_basic`) should utilize available check functions, e.g., `resource.TestCheckResourceAttr()`, to verify default and zero values in the Terraform state for all attributes. Empty/missing configuration blocks can be verified with `resource.TestCheckResourceAttr(resourceName, "{ATTRIBUTE}.#", "0")` and empty maps with `resource.TestCheckResourceAttr(resourceName, "{ATTRIBUTE}.%", "0")` ### Avoid Hard Coding -Avoid hard coding values in acceptance test checks and configurations for consistency and testing flexibility. Resource testing is expected to pass across multiple AWS environments supported by the Terraform AWS Provider (e.g. AWS Standard and AWS GovCloud (US)). Contributors are not expected or required to perform testing outside of AWS Standard, e.g. running only in the `us-west-2` region is perfectly acceptable. However, contributors are expected to avoid hard coding with these guidelines. +Avoid hard coding values in acceptance test checks and configurations for consistency and testing flexibility. Resource testing is expected to pass across multiple AWS environments supported by the Terraform AWS Provider (e.g., AWS Standard and AWS GovCloud (US)). Contributors are not expected or required to perform testing outside of AWS Standard, e.g., running only in the `us-west-2` region is perfectly acceptable. However, contributors are expected to avoid hard coding with these guidelines. #### Hardcoded Account IDs @@ -1339,7 +1339,7 @@ Avoid hard coding values in acceptance test checks and configurations for consis - [`aws_sagemaker_prebuilt_ecr_image` data source](https://www.terraform.io/docs/providers/aws/d/sagemaker_prebuilt_ecr_image.html). - [ ] __Uses Account Test Checks__: Any check required to verify an AWS Account ID of the current testing account or another account should use one of the following available helper functions over the usage of `resource.TestCheckResourceAttrSet()` and `resource.TestMatchResourceAttr()`: - `testAccCheckResourceAttrAccountID()`: Validates the state value equals the AWS Account ID of the current account running the test. This is the most common implementation. - - `testAccMatchResourceAttrAccountID()`: Validates the state value matches any AWS Account ID (e.g. a 12 digit number). This is typically only used in data source testing of AWS managed components. + - `testAccMatchResourceAttrAccountID()`: Validates the state value matches any AWS Account ID (e.g., a 12 digit number). This is typically only used in data source testing of AWS managed components. Here's an example of using `aws_caller_identity`: @@ -1355,7 +1355,7 @@ resource "aws_backup_selection" "test" { #### Hardcoded AMI IDs -- [ ] __Uses aws_ami Data Source__: Any hardcoded AMI ID configuration, e.g. `ami-12345678`, should be replaced with the [`aws_ami` data source](https://www.terraform.io/docs/providers/aws/d/ami.html) pointing to an Amazon Linux image. 
The codebase includes test configuration helper functions to simplify these lookups: +- [ ] __Uses aws_ami Data Source__: Any hardcoded AMI ID configuration, e.g., `ami-12345678`, should be replaced with the [`aws_ami` data source](https://www.terraform.io/docs/providers/aws/d/ami.html) pointing to an Amazon Linux image. The codebase includes test configuration helper functions to simplify these lookups: - `testAccLatestAmazonLinuxHvmEbsAmiConfig()`: The recommended AMI for most situations, using Amazon Linux, HVM virtualization, and EBS storage. To reference the AMI ID in the test configuration: `data.aws_ami.amzn-ami-minimal-hvm-ebs.id`. - `testAccLatestAmazonLinuxHvmInstanceStoreAmiConfig()`: AMI lookup using Amazon Linux, HVM virtualization, and Instance Store storage. Should only be used in testing that requires Instance Store storage rather than EBS. To reference the AMI ID in the test configuration: `data.aws_ami.amzn-ami-minimal-hvm-instance-store.id`. - `testAccLatestAmazonLinuxPvEbsAmiConfig()`: AMI lookup using Amazon Linux, Paravirtual virtualization, and EBS storage. Should only be used in testing that requires Paravirtual over Hardware Virtual Machine (HVM) virtualization. To reference the AMI ID in the test configuration: `data.aws_ami.amzn-ami-minimal-pv-ebs.id`. @@ -1378,7 +1378,7 @@ resource "aws_launch_configuration" "test" { #### Hardcoded Availability Zones -- [ ] __Uses aws_availability_zones Data Source__: Any hardcoded AWS Availability Zone configuration, e.g. `us-west-2a`, should be replaced with the [`aws_availability_zones` data source](https://www.terraform.io/docs/providers/aws/d/availability_zones.html). Use the convenience function called `testAccAvailableAZsNoOptInConfig()` (defined in `resource_aws_instance_test.go`) to declare `data "aws_availability_zones" "available" {...}`. You can then reference the data source via `data.aws_availability_zones.available.names[0]` or `data.aws_availability_zones.available.names[count.index]` in resources utilizing `count`. +- [ ] __Uses aws_availability_zones Data Source__: Any hardcoded AWS Availability Zone configuration, e.g., `us-west-2a`, should be replaced with the [`aws_availability_zones` data source](https://www.terraform.io/docs/providers/aws/d/availability_zones.html). Use the convenience function called `testAccAvailableAZsNoOptInConfig()` (defined in `resource_aws_instance_test.go`) to declare `data "aws_availability_zones" "available" {...}`. You can then reference the data source via `data.aws_availability_zones.available.names[0]` or `data.aws_availability_zones.available.names[count.index]` in resources utilizing `count`. Here's an example of using `testAccAvailableAZsNoOptInConfig()` and `data.aws_availability_zones.available.names[0]`: @@ -1396,7 +1396,7 @@ resource "aws_subnet" "test" { #### Hardcoded Database Versions -- [ ] __Uses Database Version Data Sources__: Hardcoded database versions, e.g. RDS MySQL Engine Version `5.7.42`, should be removed (which means the AWS-defined default version will be used) or replaced with a list of preferred versions using a data source. Because versions change over times and version offerings vary from region to region and partition to partition, using the default version or providing a list of preferences ensures a version will be available. 
Depending on the situation, there are several data sources for versions, including: +- [ ] __Uses Database Version Data Sources__: Hardcoded database versions, e.g., RDS MySQL Engine Version `5.7.42`, should be removed (which means the AWS-defined default version will be used) or replaced with a list of preferred versions using a data source. Because versions change over time and version offerings vary from region to region and partition to partition, using the default version or providing a list of preferences ensures a version will be available. Depending on the situation, there are several data sources for versions, including: - [`aws_rds_engine_version` data source](https://www.terraform.io/docs/providers/aws/d/rds_engine_version.html), - [`aws_docdb_engine_version` data source](https://www.terraform.io/docs/providers/aws/d/docdb_engine_version.html), and - [`aws_neptune_engine_version` data source](https://www.terraform.io/docs/providers/aws/d/neptune_engine_version.html). @@ -1425,7 +1425,7 @@ resource "aws_db_instance" "bar" { #### Hardcoded Direct Connect Locations -- [ ] __Uses aws_dx_locations Data Source__: Hardcoded AWS Direct Connect locations, e.g. `EqSe2`, should be replaced with the [`aws_dx_locations` data source](https://www.terraform.io/docs/providers/aws/d/dx_locations.html). +- [ ] __Uses aws_dx_locations Data Source__: Hardcoded AWS Direct Connect locations, e.g., `EqSe2`, should be replaced with the [`aws_dx_locations` data source](https://www.terraform.io/docs/providers/aws/d/dx_locations.html). Here's an example using `data.aws_dx_locations.test.location_codes`: @@ -1485,7 +1485,7 @@ resource "aws_db_instance" "test" { #### Hardcoded Partition DNS Suffix -- [ ] __Uses aws_partition Data Source__: Any hardcoded DNS suffix configuration, e.g. the `amazonaws.com` in a `ec2.amazonaws.com` service principal, should be replaced with the [`aws_partition` data source](https://www.terraform.io/docs/providers/aws/d/partition.html). A common pattern is declaring `data "aws_partition" "current" {}` and referencing it via `data.aws_partition.current.dns_suffix`. +- [ ] __Uses aws_partition Data Source__: Any hardcoded DNS suffix configuration, e.g., the `amazonaws.com` in an `ec2.amazonaws.com` service principal, should be replaced with the [`aws_partition` data source](https://www.terraform.io/docs/providers/aws/d/partition.html). A common pattern is declaring `data "aws_partition" "current" {}` and referencing it via `data.aws_partition.current.dns_suffix`. Here's an example of using `aws_partition` and `data.aws_partition.current.dns_suffix`: @@ -1513,7 +1513,7 @@ POLICY #### Hardcoded Partition in ARN -- [ ] __Uses aws_partition Data Source__: Any hardcoded AWS Partition configuration, e.g. the `aws` in a `arn:aws:SERVICE:REGION:ACCOUNT:RESOURCE` ARN, should be replaced with the [`aws_partition` data source](https://www.terraform.io/docs/providers/aws/d/partition.html). A common pattern is declaring `data "aws_partition" "current" {}` and referencing it via `data.aws_partition.current.partition`. +- [ ] __Uses aws_partition Data Source__: Any hardcoded AWS Partition configuration, e.g., the `aws` in an `arn:aws:SERVICE:REGION:ACCOUNT:RESOURCE` ARN, should be replaced with the [`aws_partition` data source](https://www.terraform.io/docs/providers/aws/d/partition.html). A common pattern is declaring `data "aws_partition" "current" {}` and referencing it via `data.aws_partition.current.partition`.
- [ ] __Uses Builtin ARN Check Functions__: Tests should utilize available ARN check functions to validate ARN attribute values in the Terraform state over `resource.TestCheckResourceAttrSet()` and `resource.TestMatchResourceAttr()`: - `testAccCheckResourceAttrRegionalARN()` verifies an ARN matches the account ID and region of the test execution and an exact resource value @@ -1534,7 +1534,7 @@ resource "aws_iam_role_policy_attachment" "test" { #### Hardcoded Region -- [ ] __Uses aws_region Data Source__: Any hardcoded AWS Region configuration, e.g. `us-west-2`, should be replaced with the [`aws_region` data source](https://www.terraform.io/docs/providers/aws/d/region.html). A common pattern is declaring `data "aws_region" "current" {}` and referencing it via `data.aws_region.current.name` +- [ ] __Uses aws_region Data Source__: Any hardcoded AWS Region configuration, e.g., `us-west-2`, should be replaced with the [`aws_region` data source](https://www.terraform.io/docs/providers/aws/d/region.html). A common pattern is declaring `data "aws_region" "current" {}` and referencing it via `data.aws_region.current.name` Here's an example of using `aws_region` and `data.aws_region.current.name`: @@ -1551,7 +1551,7 @@ resource "aws_route53_zone" "test" { #### Hardcoded Spot Price -- [ ] __Uses aws_ec2_spot_price Data Source__: Any hardcoded spot prices, e.g. `0.05`, should be replaced with the [`aws_ec2_spot_price` data source](https://www.terraform.io/docs/providers/aws/d/ec2_spot_price.html). A common pattern is declaring `data "aws_ec2_spot_price" "current" {}` and referencing it via `data.aws_ec2_spot_price.current.spot_price`. +- [ ] __Uses aws_ec2_spot_price Data Source__: Any hardcoded spot prices, e.g., `0.05`, should be replaced with the [`aws_ec2_spot_price` data source](https://www.terraform.io/docs/providers/aws/d/ec2_spot_price.html). A common pattern is declaring `data "aws_ec2_spot_price" "current" {}` and referencing it via `data.aws_ec2_spot_price.current.spot_price`. Here's an example of using `aws_ec2_spot_price` and `data.aws_ec2_spot_price.current.spot_price`: @@ -1578,7 +1578,7 @@ resource "aws_spot_fleet_request" "test" { Here's an example using `aws_key_pair` ```go -func TestAccAWSKeyPair_basic(t *testing.T) { +func TestAccKeyPair_basic(t *testing.T) { ... rName := acctest.RandomWithPrefix("tf-acc-test") @@ -1617,7 +1617,7 @@ Using `testAccDefaultEmailAddress` is preferred when using a single email addres Here's an example using `testAccDefaultEmailAddress` ```go -func TestAccAWSSNSTopicSubscription_email(t *testing.T) { +func TestAccSNSTopicSubscription_email(t *testing.T) { ... rName := acctest.RandomWithPrefix("tf-acc-test") @@ -1640,7 +1640,7 @@ func TestAccAWSSNSTopicSubscription_email(t *testing.T) { Here's an example using `testAccRandomEmailAddress()` ```go -func TestAccAWSPinpointEmailChannel_basic(t *testing.T) { +func TestAccPinpointEmailChannel_basic(t *testing.T) { ... domain := testAccRandomDomainName() diff --git a/docs/roadmaps/2020_August_to_October.md b/docs/roadmaps/2020_August_to_October.md index 7408976a71cd..0e71ec182fd6 100644 --- a/docs/roadmaps/2020_August_to_October.md +++ b/docs/roadmaps/2020_August_to_October.md @@ -2,7 +2,7 @@ Every few months, the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [core services](../CORE_SERVICES.md), and internal priorities. 
When community pull requests exist for a given item, we will prioritize working with the original authors to include their contributions. If the author can no longer take on the implementation, HashiCorp will complete any additional work needed. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [core services](../contributing/core-services.md), and internal priorities. When community pull requests exist for a given item, we will prioritize working with the original authors to include their contributions. If the author can no longer take on the implementation, HashiCorp will complete any additional work needed. Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). diff --git a/docs/roadmaps/2020_May_to_July.md b/docs/roadmaps/2020_May_to_July.md index 40651bdaa9fd..fe4f8b9cea4b 100644 --- a/docs/roadmaps/2020_May_to_July.md +++ b/docs/roadmaps/2020_May_to_July.md @@ -2,7 +2,7 @@ Each quarter the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [core services](../CORE_SERVICES.md), and internal priorities. When community pull requests exist for a given item, we will prioritize working with the original authors to include their contributions. If the author can no longer take on the implementation, HashiCorp will complete any additional work needed. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [core services](../contributing/core-services.md), and internal priorities. When community pull requests exist for a given item, we will prioritize working with the original authors to include their contributions. If the author can no longer take on the implementation, HashiCorp will complete any additional work needed. Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. diff --git a/docs/roadmaps/2020_November_to_January.md b/docs/roadmaps/2020_November_to_January.md index 0e11c19e3e81..9984d295bde2 100644 --- a/docs/roadmaps/2020_November_to_January.md +++ b/docs/roadmaps/2020_November_to_January.md @@ -2,7 +2,7 @@ Every few months, the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../CORE_SERVICES.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors, are not available we will create the resources and implementation ourselves. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../contributing/core-services.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors, are not available we will create the resources and implementation ourselves. 
Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). diff --git a/docs/roadmaps/2021_February_to_April.md b/docs/roadmaps/2021_February_to_April.md index 86e694fd2e49..0a0b01d6ff92 100644 --- a/docs/roadmaps/2021_February_to_April.md +++ b/docs/roadmaps/2021_February_to_April.md @@ -2,7 +2,7 @@ Every few months, the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../CORE_SERVICES.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors, are not available we will create the resources and implementation ourselves. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../contributing/core-services.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors, are not available we will create the resources and implementation ourselves. Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). diff --git a/docs/roadmaps/2021_May_to_July.md b/docs/roadmaps/2021_May_to_July.md index 3957b9053572..d79934bcd9f8 100644 --- a/docs/roadmaps/2021_May_to_July.md +++ b/docs/roadmaps/2021_May_to_July.md @@ -2,7 +2,7 @@ Every few months, the team will highlight areas of focus for our work and upcoming research. -We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../CORE_SERVICES.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available we will create the resources and implementation ourselves. +We select items for inclusion in the roadmap from the Top 10 Community Issues, [Core Services](../contributing/core-services.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available we will create the resources and implementation ourselves. Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). diff --git a/docs/roadmaps/README.md b/docs/roadmaps/README.md new file mode 100644 index 000000000000..17f1c51b7381 --- /dev/null +++ b/docs/roadmaps/README.md @@ -0,0 +1,16 @@ +# Terraform AWS Provider Road Maps + +## What is a road map? 
+ +Every few months, the team will highlight areas of focus for our work and upcoming research. + +We select items for inclusion in the road map from the Top 10 Community Issues, [Core Services](../contributing/core-services.md), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available, we will create the resources and implementation ourselves. + +## Road Maps + +* [Latest Road Map](../ROADMAP.md) +* [May-July 2021](2021_May_to_July.md) +* [February-April 2021](2021_February_to_April.md) +* [November 2020-January 2021](2020_November_to_January.md) +* [May-July 2020](2020_May_to_July.md) +* [August-October 2020](2020_August_to_October.md)