[FullDOC] Terraform SDKv2 provider development

01-index


page_title: 'Home - Plugin Development: SDKv2'
description: Learn about version 2 of the Terraform Plugin SDK.

Terraform Plugin SDKv2

Terraform Plugin SDKv2 is an established way to develop Terraform Plugins on protocol version 5.

~> Important: Which SDK Should I Use? explains the differences between Terraform Plugin SDKv2 and Terraform Plugin Framework to help you decide which option is right for your provider.


Key Concepts

  • Schemas define the available fields for provider, resource, or provisioner configuration blocks, and give Terraform metadata about those fields (a minimal sketch follows this list).
  • Resources are an abstraction that allows Terraform to manage infrastructure objects, such as a compute instance, an access policy, or a disk. Providers act as a translation layer between Terraform and an API, offering one or more resources for practitioners to define in a configuration.
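
A minimal, hypothetical SDKv2 schema definition illustrating these concepts (the attribute name and settings are invented for the example):

"name": {
    Type:        schema.TypeString,
    Required:    true, // the practitioner must set this field
    ForceNew:    true, // changing the value replaces the object
    Description: "Unique name of the widget.",
},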


Migrate to Terraform Plugin Framework

The terraform-plugin-framework is a new way to develop Terraform providers, offering improvements and new features over Terraform Plugin SDKv2. You can refactor individual resources and data sources over time, combining the two SDKs in a single provider while you migrate.


02-debugging


page_title: Plugin Development - Debugging SDKv2 Providers
description: How to implement debugger support in SDKv2 Terraform providers.

Debugging SDKv2 Providers

This page contains implementation details for inspecting runtime information of a Terraform provider developed with SDKv2 via a debugger tool. Review the top level Debugging page for information pertaining to the overall Terraform provider debugging process and other inspection options, such as log-based debugging.

Code Implementation

Update the main function for the project to conditionally enable the plugin/ServeOpts.Debug field. Conventionally, a -debug flag is used to control the Debug value.

This example uses a -debug flag to enable debugging and otherwise starts the provider normally:

package main

import (
	"flag"

	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"

	// hypothetical import path for this project's provider package
	"example.com/example-namespace/terraform-provider-example/internal/provider"
)

func main() {
	var debug bool

	flag.BoolVar(&debug, "debug", false, "set to true to run the provider with support for debuggers like delve")
	flag.Parse()

	opts := &plugin.ServeOpts{
		Debug:        debug,
		ProviderAddr: "registry.terraform.io/example-namespace/example",
		ProviderFunc: provider.New(),
	}

	plugin.Serve(opts)
}
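
When started with -debug, the provider process keeps running and prints reattach instructions, including a TF_REATTACH_PROVIDERS environment variable. Exporting that variable makes the Terraform CLI connect to the already-running process, for example one launched under delve, instead of starting its own copy.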

11-best_practices-index


page_title: Plugin Development - Best Practices
description: >-
  Patterns that ensure a consistent user experience, including naming,
  deprecation, beta features, testing, and versioning.

Terraform Plugin Best Practices

A key feature of Terraform is its plugin system, which separates the details of specific vendor APIs from the shared logic for managing state, managing configuration, and providing a safe plan and apply lifecycle. Plugins are responsible for implementing the functionality for provisioning resources for a specific cloud provider, allowing each provider to fully support its unique resources and lifecycles rather than settling for the lowest common denominator across all provider resources of that type (virtual machines, networks, configuration management systems, et al.).

While each provider is unique, over the years we’ve accumulated some patterns that should be followed to ensure a consistent user experience when using Terraform with any given provider. Listed below are a few best practices we’ve found that generally apply to most providers, with a brief description of each and a link to read more. Each practice is also linked in the navigation on the left.

This section is a work in progress, with more sections to come.

Naming

Naming resources, data sources, and attributes in plugins is how plugin authors expose their functionality to operators, and using patterns common to other plugins lays the foundation for a good user experience.

Deprecations, Removals, and Renames

Over time, remote services evolve and better workflows are designed. Terraform's plugin system has functionality to aid in iterative improvements. In Deprecations, Removals, and Renames, we cover procedures for backwards-compatible code and documentation updates to ensure that operators are well informed of changes before functionality is removed or renamed.

Enabling beta features

As a provider developer, you might want to ship new resources that are still in beta and might change later on. As a general practice, you can gate those beta features behind an environment variable such as PROVIDERX_ENABLE_BETA, as sketched below. Once your resources are out of beta and reach a stable status, you can enable them by default without requiring an environment variable.
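
A minimal sketch of this gating pattern, assuming the hypothetical PROVIDERX_ENABLE_BETA variable and invented resource names (os and the SDK's schema package must be imported):

func Provider() *schema.Provider {
    resources := map[string]*schema.Resource{
        "providerx_stable_thing": resourceStableThing(),
    }

    // Register beta resources only when the operator explicitly opts in.
    if os.Getenv("PROVIDERX_ENABLE_BETA") != "" {
        resources["providerx_beta_thing"] = resourceBetaThing()
    }

    return &schema.Provider{
        // ... other configuration ...
        ResourcesMap: resources,
    }
}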

Detecting Drift

Terraform is a declarative tool designed to be the source of truth for infrastructure. In order to safely and predictably change and iterate infrastructure, Terraform needs to be able to detect changes made outside of its configuration and provide a means of reconciliation. In Detecting Drift, we cover some best practices to ensure Terraform's state file is an accurate reflection of reality, in order to provide accurate plan and apply functionality.

Testing Patterns

Terraform developers are encouraged to write acceptance tests that create real resources to verify the behavior of plugins, ensuring a reliable and safe way to manage infrastructure. In Testing Patterns we cover some basic acceptance tests that almost all resources should have, validating not only the functionality of the resource but also that it behaves as Terraform expects.

Versioning and Changelog

Terraform development serves two distinct audiences: those writing plugin code and those implementing it. Clear and consistent version numbering, together with documented changes, lets operators easily understand what changed in a plugin release and builds trust between the two audiences. In Versioning and Changelog we cover some guidelines for deciding release versions and how to relay changes through documentation.


12-best_practices-depending-on-providers


page_title: Plugin Development - Depending on Providers
description: How to safely depend on providers and understand their interfaces.

Depending on Providers

Terraform's providers contain a substantial amount of code, and occasionally it makes sense to depend on their functionality. The most straightforward and obvious way to depend on a provider is through the Terraform CLI, but occasionally it makes sense to rely on a provider in a different context.

This guide lays out the supported ways to interface with and depend on Terraform's providers. Unless the provider explicitly states otherwise, no other compatibility guarantees are provided.

Do Not Import Providers as Go Modules

Terraform's providers are written as Go packages, and they mostly use Go modules as their dependency management solution. This makes it tempting to import the provider as a dependency of your Go code, and to call its exposed interface. This is explicitly an unsupported way to interact with providers and provider maintainers make no guarantees around backwards compatibility or the continued functioning of code that does this.

Providers cannot reliably be imported as Go modules because their versioning scheme is intended to convey information about the Terraform configuration interface the provider presents. That scheme cannot usefully capture both the configuration interface and the Go API interface, as a compatible change in one may be incompatible in the other. Rather than give the impression that the package should be imported by adopting the /vX suffix mandated for major versions v2 and above, providers have chosen to make their incompatibility with being imported into Go code explicit.

If you find yourself needing to do this, perhaps one of the methods below will work for you, and is explicitly supported and covered under versioning policies. If not, please reach out and open an issue outlining your use case, and we'll work with you to find an appropriate way to interface with Terraform to meet your use case.

Exporting the Schema

Some projects just care about the schema and resources a provider presents. As of Terraform 0.12, the terraform providers schema -json command can be used to export a JSON representation of the schemas for the providers used in a workspace.

Using the RPC Protocol

For projects that actually want to drive the provider, the supported option is to use the gRPC protocol and the RPC calls the protocol supplies. This protocol is the same protocol that drives Terraform's CLI interface, and it is versioned using a protocol version. It changes relatively infrequently.


12-best_practices-deprecations


page_title: 'Plugin Development - Deprecations, Removals, and Renames Best Practices'
description: 'Recommendations for deprecations, removals, and renames.'

Deprecations, Removals, and Renames

Terraform is trusted for managing many facets of infrastructure across many organizations. Part of that trust is due to consistent versioning guidelines and setting expectations for various levels of upgrades. Ensuring backwards compatibility for all patch and minor releases, potentially in concert with any upcoming major changes, is recommended and supported by the Terraform development framework. This allows operators to iteratively update their Terraform configurations rather than require massive refactoring.

This guide is designed to walk through various scenarios where existing Terraform functionality requires future removal, while maintaining backwards compatibility. Further information about the versioning terminology (e.g. MAJOR.MINOR.PATCH) in this guide can be found in the versioning guidelines documentation.

~> NOTE: Removals should only ever occur in MAJOR version upgrades.


Provider Attribute Removal

The recommended process for removing an attribute from a data source or resource in a provider is as follows:

  1. Add Deprecated to the attribute schema definition (see the sketch after this list). After an operator upgrades to this version, they will be shown a warning with the provided message when using the attribute, but the Terraform run will still complete.
  2. Ensure the changelog has an entry noting the deprecation.
  3. Release a MINOR version with the deprecation.
  4. In the next MAJOR version, remove all code associated with the attribute, including the schema definition.
  5. Ensure the changelog has an entry noting the removal.
  6. Release the MAJOR version.
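
For step 1, a minimal sketch of the deprecated schema definition (the attribute name and message are invented for the example):

"existing_attribute": {
    Type:       schema.TypeString,
    Optional:   true,
    // Deprecated warns operators who still use this attribute; the run still completes.
    Deprecated: "this attribute will be removed in the next major version",
},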

Provider Attribute Rename

When renaming an attribute from one name to another, it is important to keep backwards compatibility with both existing Terraform configurations and the Terraform state while operators migrate. To accomplish this, there will be some duplicated logic to support both attributes until the next MAJOR release. Once both attributes are appropriately handled, the process for deprecating and removing the old attribute is the same as noted in the Provider Attribute Removal section.

The procedure for renaming an attribute depends on what type of attribute it is:

Renaming a Required Attribute

~> NOTE: If the schema definition does not contain Optional or Required, see the Renaming a Computed Attribute section instead. If the schema definition contains Optional instead of Required, see the Renaming an Optional Attribute section.

-> Required attributes are also referred to as required "arguments" throughout the Terraform documentation.

In general, the procedure here does two things:

  • Prevents the operator from needing to define two attributes with the same value.
  • Allows the operator to migrate the configuration to the new attribute while requiring that any other references work only with the new attribute. This prevents a situation where Terraform shows a difference because the existing attribute is configured but the new attribute is what is saved into the Terraform state. For example, in terraform plan output format:
existing_attribute: "" => "value"
new_attribute:      "value" => ""

The recommended process is as follows:

  1. Replace Required: true with Optional: true in the existing attribute schema definition.
  2. Replace Required with Optional in the existing attribute documentation.
  3. Duplicate the schema definition of the existing attribute, renaming one of them with the new attribute name.
  4. Duplicate the documentation of the existing attribute, renaming one of them with the new attribute name.
  5. Add Deprecated to the schema definition of the existing (now the "old") attribute, noting to use the new attribute in the message.
  6. Add **Deprecated** to the documentation of the existing (now the "old") attribute, noting to use the new attribute.
  7. Add a note to the documentation that either the existing (now the "old") attribute or new attribute must be configured.
  8. Add ConflictsWith to the schema definitions of both the old and new attributes so they will present an error to the operator if both are configured at the same time.
  9. Add conditional logic in the Create, Read, and Update functions of the data source or resource to handle both attributes. Generally, this involves using ResourceData.GetOk() (commonly d.GetOk() in HashiCorp maintained providers).
  10. Add conditional logic in the Create and Update functions that returns an error if neither the old nor the new attribute is defined.
  11. Follow the rest of the procedures in the Provider Attribute Removal section. When the old attribute is removed, update the schema definition and documentation of the new attribute back to Required.

Example Renaming of a Required Attribute

Given this sample resource:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    existingAttribute := d.Get("existing_attribute").(string)
    // add attribute to provider create API call

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("existing_attribute", /* ... */)

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    existingAttribute := d.Get("existing_attribute").(string)
    // add attribute to provider update API call

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

To support renaming existing_attribute to new_attribute, this sample can be rewritten as follows, handling both attributes simultaneously until existing_attribute is removed:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"new_attribute"},
                Deprecated:    "use new_attribute instead",
            },
            "new_attribute": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"existing_attribute"},
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    existingAttribute, existingAttributeOk := d.GetOk("existing_attribute")
    newAttribute, newAttributeOk := d.GetOk("new_attribute")
    if !existingAttributeOk && !newAttributeOk {
        return errors.New("one of existing_attribute or new_attribute must be configured")
    }
    if existingAttributeOk {
        // add existingAttribute to provider create API call
    } else {
        // add newAttribute to provider create API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if _, ok := d.GetOk("existing_attribute"); ok {
        d.Set("existing_attribute", /* ... */)
    } else {
        d.Set("new_attribute", /* ... */)
    }

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    existingAttribute, existingAttributeOk := d.GetOk("existing_attribute")
    newAttribute, newAttributeOk := d.GetOk("new_attribute")
    if !existingAttributeOk && !newAttributeOk {
        return errors.New("one of existing_attribute or new_attribute must be configured")
    }
    if existingAttributeOk {
        // add existingAttribute to provider update API call
    } else {
        // add newAttribute to provider update API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

When existing_attribute is ready for removal, this can be written as:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "new_attribute": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    newAttribute := d.Get("new_attribute").(string)
    // add attribute to provider create API call

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("new_attribute", /* ... */)

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    newAttribute := d.Get("new_attribute").(string)
    // add attribute to provider update API call

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

Renaming an Optional Attribute

~> NOTE: If the schema definition does not contain Optional or Required, see the Renaming a Computed Attribute section instead. If the schema definition contains Required instead of Optional, see the Renaming a Required Attribute section.

-> Optional attributes are also referred to as optional "arguments" throughout the Terraform documentation.

In general, the procedure here allows the operator to migrate the configuration to the new attribute while requiring that any other references work only with the new attribute. This prevents a situation where Terraform shows a difference because the existing attribute is configured but the new attribute is what is saved into the Terraform state. For example, in terraform plan output format:

existing_attribute: "" => "value"
new_attribute:      "value" => ""

The recommended process is as follows:

  1. Duplicate the schema definition of the existing attribute, renaming one of them with the new attribute name.
  2. Duplicate the documentation of the existing attribute, renaming one of them with the new attribute name.
  3. Add Deprecated to the schema definition of the existing (now the "old") attribute, noting to use the new attribute in the message.
  4. Add **Deprecated** to the documentation of the existing (now the "old") attribute, noting to use the new attribute.
  5. Add ConflictsWith to the schema definitions of both the old and new attributes so they will present an error to the operator if both are configured at the same time.
  6. Add conditional logic in the Create, Read, and Update functions of the data source or resource to handle both attributes. Generally, this involves using ResourceData.GetOk() (commonly d.GetOk() in HashiCorp maintained providers).
  7. Follow the rest of the procedures in the Provider Attribute Removal section.

Example Renaming of an Optional Attribute

Given this sample resource:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:     schema.TypeString,
                Optional: true,
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("existing_attribute"); ok {
        // add attribute to provider create API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("existing_attribute", /* ... */)

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("existing_attribute"); ok {
        // add attribute to provider update API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

To support renaming existing_attribute to new_attribute, this sample can be rewritten as follows, handling both attributes simultaneously until existing_attribute is removed:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"new_attribute"},
                Deprecated:    "use new_attribute instead",
            },
            "new_attribute": {
                Type:          schema.TypeString,
                Optional:      true,
                ConflictsWith: []string{"existing_attribute"},
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("existing_attribute"); ok {
        // add attribute to provider create API call
    } else if v, ok := d.GetOk("new_attribute"); ok {
        // add attribute to provider create API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("existing_attribute"); ok {
        d.Set("existing_attribute", /* ... */)
    } else {
        d.Set("new_attribute", /* ... */)
    }

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("existing_attribute"); ok {
        // add attribute to provider update API call
    } else if v, ok := d.GetOk("new_attribute"); ok {
        // add attribute to provider update API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

When existing_attribute is ready for removal, this can be written as:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: resourceExampleWidgetCreate,
        Read:   resourceExampleWidgetRead,
        Update: resourceExampleWidgetUpdate,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "new_attribute": {
                Type:     schema.TypeString,
                Optional: true,
            },
        },
    }
}

func resourceExampleWidgetCreate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("new_attribute"); ok {
        // add attribute to provider create API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("new_attribute", /* ... */)

    // ... other logic ...
    return nil
}

func resourceExampleWidgetUpdate(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    if v, ok := d.GetOk("new_attribute"); ok {
        // add attribute to provider update API call
    }

    // ... other logic ...
    return resourceExampleWidgetRead(d, meta)
}

Renaming a Computed Attribute

~> NOTE: If the schema definition contains Optional, see the Renaming an Optional Attribute section instead. If the schema definition contains Required, see the Renaming a Required Attribute section instead.

The recommended process is as follows:

  1. Duplicate the schema definition of the existing attribute, renaming one of them with the new attribute name.
  2. Duplicate the documentation of the existing attribute, renaming one of them with the new attribute name.
  3. Add Deprecated to the schema definition of the existing (now the "old") attribute, noting to use the new attribute in the message.
  4. Add **Deprecated** to the documentation of the existing (now the "old") attribute, noting to use the new attribute.
  5. Set both attributes in the Terraform state in the Read functions of the data source or resource.
  6. Follow the rest of the procedures in the Provider Attribute Removal section.

Example Renaming of a Computed Attribute

Given this sample resource:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Read: resourceExampleWidgetRead,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:     schema.TypeString,
                Computed: true,
            },
        },
    }
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("existing_attribute", /* ... */)

    // ... other logic ...
    return nil
}

To support renaming existing_attribute to new_attribute, this sample can be rewritten as follows, handling both attributes simultaneously until existing_attribute is removed:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Read: resourceExampleWidgetRead,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "existing_attribute": {
                Type:       schema.TypeString,
                Computed:   true,
                Deprecated: "use new_attribute instead",
            },
            "new_attribute": {
                Type:     schema.TypeString,
                Computed: true,
            },
        },
    }
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("existing_attribute", /* ... */)
    d.Set("new_attribute", /* ... */)

    // ... other logic ...
    return nil
}

When existing_attribute is ready for removal, this can be written as:

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Read: resourceExampleWidgetRead,

        Schema: map[string]*schema.Schema{
            // ... other attributes ...

            "new_attribute": {
                Type:     schema.TypeString,
                Computed: true,
            },
        },
    }
}

func resourceExampleWidgetRead(d *schema.ResourceData, meta interface{}) error {
    // ... other logic ...

    d.Set("new_attribute", /* ... */)

    // ... other logic ...
    return nil
}

Provider Data Source or Resource Removal

The recommended process for removing a data source or resource from a provider is as follows:

  1. Add DeprecationMessage to the data source or resource schema definition. After an operator upgrades to this version, they will be shown a warning with the provided message when using the deprecated data source or resource, but the Terraform run will still complete.
  2. Ensure the changelog has an entry noting the deprecation.
  3. Release a MINOR version with the deprecation.
  4. In the next MAJOR version, remove all code associated with the deprecated data source or resource except for the schema and replace the Create and Read functions to always return an error. Remove the documentation sidebar link and update the resource or data source documentation page to include information about the removal and any potential migration information. After an operator upgrades to this version, they will be shown an error about the missing data source or resource.
  5. Ensure the changelog has an entry noting the removal.
  6. Release the MAJOR version.
  7. In the next MAJOR version, remove all code associated with the removed data source or resource. Remove the resource or data source documentation page.
  8. Release the MAJOR version.

Example Resource Removal

Given this sample provider and resource:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...
            "example_widget": resourceExampleWidget(),
        },
    }
}

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...
    }
}

In order to deprecate example_widget, this sample can be written as:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...
            "example_widget": resourceExampleWidget(),
        },
    }
}

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        DeprecationMessage: "use example_thing resource instead",
    }
}

To soft-remove example_widget with a friendly error message, this sample can be written as:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...
            "example_widget": resourceExampleWidget(),
        },
    }
}

func resourceExampleWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: func(d *schema.ResourceData, meta interface{}) error {
            return errors.New("use example_thing resource instead")
        },
        Read: func(d *schema.ResourceData, meta interface{}) error {
            return errors.New("use example_thing resource instead")
        },
    }
}

To remove example_widget:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...
        },
    }
}

Provider Data Source or Resource Rename

When renaming a resource from one name to another, it is important to keep backwards compatibility with both existing Terraform configurations and the Terraform state while operators migrate. To accomplish this, there will be some duplicated logic to support both resources until the next MAJOR release. Once both resources are appropriately handled, the process for deprecating and removing the old resource is the same as noted in the Provider Data Source or Resource Removal section.

The recommended process is as follows:

  1. Duplicate the code of the existing resource, renaming (and potentially modifying) functions as necessary.
  2. Duplicate the documentation of the existing resource, renaming (and potentially modifying) as necessary.
  3. Add DeprecationMessage to the schema definition of the existing (now the "old") resource, noting to use the new resource in the message.
  4. Add !> **WARNING:** This resource is deprecated and will be removed in the next major version to the documentation of the existing (now the "old") resource, noting to use the new resource.
  5. Add the new resource to the provider ResourcesMap.
  6. Follow the rest of the procedures in the Provider Data Source or Resource Removal section.

Example Resource Renaming

Given this sample provider and resource:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...

            "example_existing_widget": resourceExampleExistingWidget(),
        },
    }
}

func resourceExampleExistingWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...
    }
}

To support renaming example_existing_widget to example_new_widget, this sample can be rewritten as follows, handling both resources simultaneously until example_existing_widget is removed:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...

            "example_existing_widget": resourceExampleExistingWidget(),
            "example_new_widget":      resourceExampleNewWidget(),
        },
    }
}

func resourceExampleExistingWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        DeprecationMessage: "use example_new_widget resource instead",
    }
}

func resourceExampleNewWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...
    }
}

To soft-remove example_existing_widget with a friendly error message:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...

            "example_existing_widget": resourceExampleExistingWidget(),
            "example_new_widget":      resourceExampleNewWidget(),
        },
    }
}

func resourceExampleExistingWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...

        Create: func(d *schema.ResourceData, meta interface{}) error {
            return errors.New("use example_new_widget resource instead")
        },
        Read: func(d *schema.ResourceData, meta interface{}) error {
            return errors.New("use example_new_widget resource instead")
        },
    }
}

func resourceExampleNewWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...
    }
}

To remove example_existing_widget:

func Provider() *schema.Provider {
    return &schema.Provider{
        // ... other configuration ...

        ResourcesMap: map[string]*schema.Resource{
            // ... other resources ...

            "example_new_widget": resourceExampleNewWidget(),
        },
    }
}

func resourceExampleNewWidget() *schema.Resource {
    return &schema.Resource{
        // ... other configuration ...
    }
}

12-best_practices-detecting-drift


page_title: Plugin Development - Detecting Drift
description: |-
  "Drift" describes changes to infrastructure outside of Terraform. Learn how
  to ensure that Terraform detects drift so that users will know when their
  infrastructure has changed.

Detecting Drift

One of the core challenges of infrastructure as code is keeping an up-to-date record of all deployed resources and their properties. Terraform manages this by maintaining state information in a single file, called the state file.

Terraform uses declarative configuration files to define the infrastructure resources to provision. This configuration serves as the target source of truth for what exists on the backend API. Changes to infrastructure outside of Terraform will be detected as deviation and shown as a diff in future runs of terraform plan. This type of change is referred to as "drift", and detecting it is an important responsibility of Terraform, informing users of changes in their infrastructure. Here are a few techniques for developers to ensure drift is detected.

Capture all state in READ

A provider's READ method is where state is synchronized from the remote API to Terraform state. It's essential that all attributes defined in the schema are recorded and kept up-to-date in state. Consider this provider code:

// resource_example_simple.go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleSimple() *schema.Resource {
    return &schema.Resource{
        Read:   resourceExampleSimpleRead,
        Create: resourceExampleSimpleCreate,
        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
                ForceNew: true,
            },
            "type": {
                Type:     schema.TypeString,
                Optional: true,
            },
        },
    }
}

func resourceExampleSimpleRead(d *schema.ResourceData, meta interface{}) error {
   client := meta.(*ProviderApi).client
   resource, _ := client.GetResource(d.Id())
   d.Set("name", resource.Name)
   d.Set("type", resource.Type)
   return nil
}

As defined in the schema, the type attribute is optional. Now consider this config:

# config.tf
resource "simple" "ex" {
   name = "example"
}

Even though type is omitted from the config, it is vital that we record it into state in the READ function, as the backend API could set it to a default value. To illustrate the importance of capturing all state, consider a configuration that interpolates the optional value into another resource:

resource "simple" "ex" {
   name = "example"
}

resource "another" "ex" {
  name = "${simple.ex.type}"
}

Update state after modification

A provider's CREATE and UPDATE functions will create or modify resources on the remote API. APIs might provide default values for unspecified attributes (as described in the example config/provider code above), or normalize inputs (for example, lowercasing or uppercasing all characters in a string). The end result is a backend API containing modified versions of the values Terraform has in its state locally. Immediately after creating or updating a resource, Terraform will have a stale state, which will result in a detected deviation on subsequent plan or apply runs as Terraform refreshes its state and wants to reconcile the diff. Because of this, it is standard practice to call READ at the end of all modifications to synchronize immediately and avoid that diff.

func resourceExampleSimpleRead(d *schema.ResourceData, meta interface{}) error {
   client := meta.(*ProviderApi).client
   resource, _ := client.GetResource(d.Id())
   d.Set("name", resource.Name)
   d.Set("type", resource.Type)
   return nil
}

func resourceExampleSimpleCreate(d *schema.ResourceData, meta interface{}) error {
   client := meta.(*ProviderApi).client
   name := d.Get("name").(string)
   client.CreateResource(name)
   d.SetId(name)
   return resourceExampleSimpleRead(d, meta)
}

Error checking aggregate types

Terraform schemas are defined using primitive types and aggregate types. The preceding examples featured primitive types, which don't require error checking. Aggregate types, on the other hand (schema.TypeList, schema.TypeSet, and schema.TypeMap), are converted to key/value pairs when set into state. As a result, the Set method must be error-checked; otherwise Terraform will think its operation was successful despite having broken state. The same can be said for error checking API responses.

# config.tf
resource "simple" "ex" {
   name = "example"
   type = "simple"
   tags = {
      name = "example"
   }
}
// resource_example_simple.go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleSimple() *schema.Resource {
    return &schema.Resource{
        Read:   resourceExampleSimpleRead,
        Create: resourceExampleSimpleCreate,
        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
                ForceNew: true,
            },
            "type": {
                Type:     schema.TypeString,
                Optional: true,
            },
            "tags": {
                Type:     schema.TypeMap,
                Optional: true,
            },
        },
    }
}

func resourceExampleSimpleRead(d *schema.ResourceData, meta interface{}) error {
   client := meta.(*ProviderApi).client
   resource, err := client.GetResource(d.Id())
   if err != nil {
      return fmt.Errorf("error getting resource %s: %s", d.Id(), err)
   }
   d.Set("name", resource.Name)
   d.Set("type", resource.Type)
   if err := d.Set("tags", resource.TagMap); err != nil {
      return fmt.Errorf("error setting tags for resource %s: %s", d.Id(), err)
   }
   return nil
}

Use Schema Helper methods

As mentioned, remote APIs can often perform mutations to the attributes of a resource outside of Terraform's control. Common examples include data containing uppercase letters being normalized to lowercase, or complex defaults being set for unset attributes. These situations understandably result in drift, but can be reconciled by using Terraform's schema functions, such as DiffSuppressFunc or DefaultFunc, as in the sketch below.
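
A minimal sketch of suppressing a case-only diff with DiffSuppressFunc, assuming a hypothetical type attribute whose value the API normalizes to lowercase (the strings package must be imported):

"type": {
    Type:     schema.TypeString,
    Optional: true,
    // Treat values differing only in letter casing as equal, so the API's
    // normalization does not surface as perpetual drift.
    DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
        return strings.EqualFold(old, new)
    },
},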


12-best_practices-naming


page_title: Plugin Development - Naming Best Practices
description: |-
  Our recommendations for naming resources, data sources, and attributes in
  providers.

Naming

Most names in a Terraform provider will be drawn from the upstream API/SDK that the provider is using. The upstream API names will likely need to be modified for casing, or changed between plural and singular, to make the provider more consistent with the common Terraform practices below.

Resource Names

Resource names are nouns, since resource blocks each represent a single object Terraform is managing. Resource names must always start with their containing provider's name followed by an underscore, so a resource from the provider postgresql might be named postgresql_database.

It is preferable to use resource names that will be familiar to those with prior experience using the service in question, e.g. via a web UI it provides.

Data Source Names

Similar to resource names, data source names should be nouns. The main difference is that in some cases data sources are used to return a list and can in those cases be plural. For example the data source aws_availability_zones in the AWS provider returns a list of availability zones.

Attribute Names

Below is an example of a resource configuration block which illustrates some general design patterns that can apply across all plugin object types:

resource "aws_instance" "example" {
  ami                    = "ami-408c7f28"
  instance_type          = "t1.micro"
  monitoring             = true
  vpc_security_group_ids = [
      "sg-1436abcf",
  ]
  tags          = {
    Name        = "Application Server"
    Environment = "production"
  }
  root_block_device {
    delete_on_termination = false
  }
}

Attribute names within Terraform configuration blocks are conventionally named as all-lowercase with underscores separating words, as shown above.

Simple single-value attributes, like ami and instance_type in the above example, are given names that are singular nouns, to reflect that only one value is required and allowed.

Boolean attributes like monitoring are also usually written as nouns describing what is being enabled. However, they can sometimes be named as verbs if the attribute specifies whether to take some action, as with the delete_on_termination flag within the root_block_device block.

Boolean attributes are ideally oriented so that true means to do something and false means not to do it; it can be confusing to have "negative" flags that prevent something from happening, since they require the user to follow a double negative in order to reason about what value should be provided.

Some attributes expect list, set or map values. In the above example, vpc_security_group_ids is a set of strings, while tags is a map from strings to strings. Such attributes should be named with plural nouns, to reflect that multiple values may be provided.

List and set attributes use the same bracket syntax, and differ only in how they are described to and used by the user. In lists, the ordering is significant and duplicate values are often accepted. In sets, the ordering is not significant and duplicated values are usually not accepted, since presence or absence is what is important.

Map blocks use the same syntax as other configuration blocks, but the keys in maps are arbitrary and not explicitly named by the plugin, so in some cases (as in this tags example) they will not conform to the usual "lowercase with underscores" naming convention.

Configuration blocks may contain other sub-blocks, such as root_block_device in the above example. The patterns described above can also apply to such sub-blocks. Sub-blocks are usually introduced by a singular noun, even if multiple instances of the same-named block are accepted, since each distinct instance represents a single object.


12-best_practices-other-languages


page_title: Plugin Development - Non-Go Providers
description: Information about writing providers in programming languages other than Go.

Writing Non-Go Providers

There has been a lot of interest in writing providers using languages other than Go, and people frequently ask for information on how to go about doing that. The Terraform team's policy at this time is that while it is technically possible to write providers in languages other than Go, our tooling, documentation, and ecosystem will all assume your provider is being written in and distributed as Go code for the time being. This means we will not be writing any documentation on how to build a non-Go provider, nor will we be providing support or answering questions about it.

While it is possible to write a non-Go provider, thanks to Terraform's use of the gRPC protocol, it is harder than it may appear at first glance. Multiple packages, from encoders and decoders to Terraform's type system, would all need to be reimplemented in that new language. The Plugin SDK would also need to be reimplemented, which is not a trivial challenge. And the way non-Go providers would interact with the Registry, terraform init, and other pieces of the Terraform ecosystem is unclear.

At this point, our efforts are focused on providing the best development experience for Terraform providers written in Go that we can. The Terraform provider development experience is still evolving aggressively, as is Terraform's interface for providers. We may reconsider this policy once there is a more stable interface to build on and our development experience with Go has matured and evolved sufficiently.


12-best_practices-sensitive-state


page_title: Plugin Development - Sensitive State Best Practices
description: Recommendations for handling sensitive information in state.

Handling Sensitive Values in State

Many organizations use Terraform to manage their entire infrastructure, and it's inevitable that sensitive information will find its way into Terraform in these circumstances. There are a couple of recommended approaches for managing sensitive state in Terraform.

Using the Sensitive Flag

When working with a field that contains information likely to be considered sensitive, it is best to set the Sensitive property of its schema to true, as in the sketch below. This will prevent the field's values from showing up in CLI output and in Terraform Cloud. It will not, however, encrypt or obscure the value in the state.
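
A minimal sketch, using an invented password attribute:

"password": {
    Type:      schema.TypeString,
    Required:  true,
    // Sensitive redacts the value in CLI output and Terraform Cloud; it does
    // not encrypt the value in the state file.
    Sensitive: true,
},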

Don't Encrypt State

One experiment that has been attempted is allowing the user to provide a PGP key and a ciphertext, and decrypting the value in the provider code before using it, storing only the ciphertext in state. Another variation on this approach was providing a PGP key that data from an API would be encrypted with before being set in state, with nothing being set in the config.

Both of these approaches are discouraged and will be removed from the HashiCorp-supported providers over time. This strategy was tailored to a time when Terraform's state had to be stored in cleartext on any machine running terraform apply, and was meant to provide a bit of security in that scenario. With the introduction and use of remote backends and especially the availability of Terraform Cloud, there are now a variety of backends that will encrypt state at rest and will not store the state in cleartext on machines running terraform apply. This means the original problem the PGP key pattern was intended to solve has a better-supported solution, and we're deprecating it in favor of that solution.

Even without comparing it to full state encryption, PGP key encryption has major drawbacks. Values encrypted with a PGP key can't be reliably interpolated, Terraform isn't built to provide a good user experience around a missing PGP key right now, and the approach needs serious modification to not violate protocol requirements for Terraform 0.12 and into the future.

In light of these shortcomings, the encouraged solution at this time is to use a state backend that supports operations and encryption, and for users whose security needs cannot be met by that strategy to weigh in on the issue about this to help outline the gaps in this strategy, so appropriate solutions can be designed for them.


12-best_practices-testing


page_title: Plugin Development - Testing Patterns
description: |-
  Testing Patterns covers essential acceptance test patterns to implement for
  Terraform resources.

Testing Patterns

In Testing Terraform Plugins we introduce Terraform’s Testing Framework, providing a reference for its functionality and introducing the basic parts of writing acceptance tests. In this section we’ll cover some test patterns that are common and considered best practice when developing and verifying your Terraform plugins. At the time of writing, these guides are particular to Terraform resources, but other testing best practices may be added later.


Built-in Patterns

Acceptance tests use TestCases to construct scenarios that can be evaluated with Terraform’s lifecycle of plan, apply, refresh, and destroy. The test framework has some behaviors built in that provide very basic workflow assurance tests, such as verifying configurations apply with no diff generated by the next plan.

Each TestCase will run any PreCheck function provided before running the test, and then any CheckDestroy functions after the test concludes. These functions allow developers to verify the state of the resource and of the test environment both before and after the test runs.

When a test is run, Terraform runs plan, apply, refresh, and then a final plan for each TestStep in the TestCase. If the last plan results in a non-empty plan, Terraform will exit with an error. This enables developers to ensure that configurations apply cleanly. When introducing regression tests or otherwise testing specific error behavior, TestStep offers a boolean ExpectNonEmptyPlan field as well as an ExpectError regular expression field to specify ways the test framework can handle expected failures. If these properties are omitted and either a non-empty plan occurs or an error is encountered, Terraform will fail the test.

After all TestSteps have been run, Terraform then runs destroy, and ends by running any CheckDestroy function provided.


Basic test to verify attributes

The most basic resource acceptance test should use what is likely to be a common configuration for the resource under test, and verify that Terraform can correctly create the resource and that the resource's attributes are what Terraform expects them to be. At a high level, the first basic test for a resource should establish the following:

  • Terraform can plan and apply a common resource configuration without error.
  • Verify the expected attributes are saved to state, and contain the values expected.
  • Verify the values in the remote API/Service for the resource match what is stored in state.
  • Verify that a subsequent terraform plan does not produce a diff/change.

The first and last items are provided by the test framework, as described above in Built-in Patterns. The middle items are implemented by composing a series of check functions, as described in Acceptance Tests: TestSteps.

To verify attributes are saved to the state file correctly, use a combination of the built-in check functions provided by the testing framework. See Built-in Check Functions to see available functions.

Checking the values in a remote API generally consists of two parts: a function to verify the corresponding object exists remotely, and a separate function to verify the values of the object. By separating the check used to verify the object exists into its own function, developers are free to re-use it for all TestCases as a means of retrieving its values, and can provide custom check functions per TestCase to verify different attributes or scenarios specific to that TestCase.

Here’s an example test, with in-line comments to demonstrate the key parts of a basic test.

package example

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
)

// example.Widget represents a concrete Go type that represents an API resource
func TestAccExampleWidget_basic(t *testing.T) {
	var widget example.Widget

	// generate a random name for each widget test run, to avoid
	// collisions from multiple concurrent tests.
	// the acctest package includes many helpers such as RandStringFromCharSet
	// See https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/helper/acctest
	rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleResourceDestroy,
		Steps: []resource.TestStep{
			{
				// use a dynamic configuration with the random name from above
				Config: testAccExampleResource(rName),
				// compose a basic test, checking both remote and local values
				Check: resource.ComposeTestCheckFunc(
					// query the API to retrieve the widget object
					testAccCheckExampleResourceExists("example_widget.foo", &widget),
					// verify remote values
					testAccCheckExampleWidgetValues(widget, rName),
					// verify local values
					resource.TestCheckResourceAttr("example_widget.foo", "active", "true"),
					resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
				),
			},
		},
	})
}

func testAccCheckExampleWidgetValues(widget *example.Widget, name string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		if *widget.Active != true {
			return fmt.Errorf("bad active state, expected \"true\", got: %#v", *widget.Active)
		}
		if *widget.Name != name {
			return fmt.Errorf("bad name, expected \"%s\", got: %#v", name, *widget.Name)
		}
		return nil
	}
}

// testAccCheckExampleResourceExists queries the API and retrieves the matching Widget.
func testAccCheckExampleResourceExists(n string, widget *example.Widget) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		// find the corresponding state object
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("Not found: %s", n)
		}

		// retrieve the configured client from the test setup
		conn := testAccProvider.Meta().(*ExampleClient)
		resp, err := conn.DescribeWidget(&example.DescribeWidgetsInput{
			WidgetIdentifier: rs.Primary.ID,
		})

		if err != nil {
			return err
		}

		if resp.Widget == nil {
			return fmt.Errorf("Widget (%s) not found", rs.Primary.ID)
		}

		// assign the response Widget attribute to the widget pointer
		*widget = *resp.Widget

		return nil
	}
}

// testAccExampleResource returns a configuration for an Example Widget with the provided name
func testAccExampleResource(name string) string {
	return fmt.Sprintf(`
resource "example_widget" "foo" {
  active = true
  name = "%s"
}`, name)
}

This example covers all the items needed for a basic test, and will be referenced or added to in the other test cases to come.


Update test: verify configuration changes

A basic test covers a simple configuration that should apply successfully with no follow-up differences in state. To verify that a resource correctly applies updates, the second most common test is an extension of the basic test that simply applies an additional TestStep with a modified version of the original configuration.

Below is an example test, copied and modified from the basic test. Here we preserve the TestStep from the basic test, but add an additional TestStep that changes the configuration and rechecks the values, using a different configuration function (testAccExampleResourceUpdated) and check function (testAccCheckExampleWidgetValuesUpdated) to verify the values.

func TestAccExampleWidget_update(t *testing.T) {
	var widget example.Widget
	rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleResourceDestroy,
		Steps: []resource.TestStep{
			{
				// use a dynamic configuration with the random name from above
				Config: testAccExampleResource(rName),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckExampleResourceExists("example_widget.foo", &widget),
					testAccCheckExampleWidgetValues(widget, rName),
					resource.TestCheckResourceAttr("example_widget.foo", "active", "true"),
					resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
				),
			},
			{
				// use a dynamic configuration with the random name from above
				Config: testAccExampleResourceUpdated(rName),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckExampleResourceExists("example_widget.foo", &widget),
					testAccCheckExampleWidgetValuesUpdated(widget, rName),
					resource.TestCheckResourceAttr("example_widget.foo", "active", "false"),
					resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
				),
			},
		},
	})
}

func testAccCheckExampleWidgetValuesUpdated(widget *example.Widget, name string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		if *widget.Active != false {
			return fmt.Errorf("bad active state, expected \"false\", got: %#v", *widget.Active)
		}
		if *widget.Name != name {
			return fmt.Errorf("bad name, expected \"%s\", got: %#v", name, *widget.Name)
		}
		return nil
	}
}

// testAccExampleResourceUpdated returns a configuration for an Example Widget with the provided name, with active set to false
func testAccExampleResourceUpdated(name string) string {
	return fmt.Sprintf(`
resource "example_widget" "foo" {
  active = false
  name = "%s"
}`, name)
}

It’s common for resources to have only the update test above, since it is a superset of the basic test. So long as the basics are covered, combining the two into a single test is sufficient, as opposed to maintaining two separate tests.


Expecting errors or non-empty plans

The number of acceptance tests for a given resource typically starts small, with the basic and update scenarios covered. Other tests should then be added to demonstrate common expected configurations or behavior scenarios for the resource, such as typical updates or changes to configuration, or exercising logic that polls for updates, such as an autoscaling group adding or draining instances.

It is possible for scenarios to exist where a valid configuration (no errors during plan) results in a non-empty plan after successfully running terraform apply. This is typically caused by a valid but nonetheless incomplete or incorrect configuration of the resource, and is generally undesirable. Occasionally it is useful to intentionally create this scenario in an early TestStep in order to demonstrate correcting the state with proper configuration in a follow-up TestStep. Normally a TestStep that results in a non-empty plan would fail the test after apply; however, developers can use the ExpectNonEmptyPlan attribute to prevent failure and allow the TestCase to continue:

func TestAccExampleWidget_expectPlan(t *testing.T) {
	rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleResourceDestroy,
		Steps: []resource.TestStep{
			{
				// use an incomplete configuration that we expect
				// to result in a non-empty plan after apply
				Config: testAccExampleResourceIncomplete(rName),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
				),
				ExpectNonEmptyPlan: true,
			},
			{
				// apply the complete configuration
				Config: testAccExampleResourceComplete(rName),
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
				),
			},
		},
	})
}

In addition to ExpectNonEmptyPlan, TestStep also exposes an ExpectError hook, allowing developers to test configuration that they expect to produce an error, such as configuration that fails schema validators:

func TestAccExampleWidget_expectError(t *testing.T) {
	rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleResourceDestroy,
		Steps: []resource.TestStep{
			{
				// use a configuration that we expect to fail a validator
				// on the resource Name attribute, which only allows alphanumeric
				// characters
				Config:      testAccExampleResourceError(rName + "*$%%^"),
				// No check function is given because we expect this configuration
				// to fail before any infrastructure is created
				ExpectError: regexp.MustCompile("Widget names may only contain alphanumeric characters"),
			},
		},
	})
}

ExpectError expects a valid regular expression, and the error message must match it for the error to be considered expected and the test to pass. If the regular expression does not match, the TestStep fails, explaining that the configuration did not produce the expected error.


Regression tests

As resources are put into use, bugs can arise that need to be fixed and released in a new version. Developers are encouraged to introduce regression tests that not only demonstrate any reported bugs, but also verify that the code modified to address a bug actually fixes the issue. These regression tests should be named and documented appropriately to identify the issue(s) they demonstrate fixes for. When possible, the documentation for a regression test should include a link to the original bug report.

An ideal bug fix would include at least 2 commits to source control:

  1. A single commit introducing the regression test, verifying the issue(s)
  2. One or more commits that modify code to fix the issue(s)

This allows other developers to independently verify that a regression test indeed reproduces the issue by checking out the source at the commit that introduces the test, and then advancing the revisions to evaluate the fix.
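
As an illustration, a regression test might look like the following minimal sketch, following the naming and documentation guidance above. The bug described and the issue reference are hypothetical, and the configuration helpers are reused from the earlier examples:

// TestAccExampleWidget_mixedCaseName is a regression test for a
// hypothetical bug report (GH-1234): widgets created with mixed-case
// names produced a non-empty plan on every subsequent terraform plan.
func TestAccExampleWidget_mixedCaseName(t *testing.T) {
	rName := "TeSt-" + acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleResourceDestroy,
		Steps: []resource.TestStep{
			{
				// the mixed-case name previously triggered the spurious diff;
				// the framework's implicit follow-up plan check verifies the fix
				Config: testAccExampleResource(rName),
				Check:  resource.TestCheckResourceAttr("example_widget.foo", "name", rName),
			},
		},
	})
}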


Conclusion

Terraform’s Testing Framework allows for powerful, iterative acceptance tests that enable developers to fully test the behavior of Terraform plugins. By following the above best practices, developers can ensure their plugin behaves correctly across the most common use cases and everyday operations users will perform with it, and ensure that Terraform remains a world-class tool for safely managing infrastructure.


12-best_practices-versioning


page_title: Plugin Development - Versioning Best Practices description: Recommendations for version numbering and documentation.

Versioning and Changelog

Given the breadth of available Terraform plugins, ensuring a consistent experience across them requires a standard guideline for compatibility promises. These guidelines are enforced for plugins released by HashiCorp and are recommended for all community plugins.

Versioning Specification

Observing that Terraform plugins are in many ways analogous to shared libraries in a programming language, we adopted a version numbering scheme that follows the guidelines of Semantic Versioning. In summary, this means that with a version number of the form MAJOR.MINOR.PATCH, the following meanings apply:

  • Increasing only the patch number suggests that the release includes only bug fixes, and is intended to be functionally equivalent.
  • Increasing the minor number suggests that new features have been added but that existing functionality remains broadly compatible.
  • Increasing the major number indicates that significant breaking changes have been made, and thus extra care or attention is required during an upgrade. To allow practitioners sufficient time and opportunity to upgrade to the latest version of the provider, we recommend releasing major versions no more than once per year. Releasing major versions more frequently could present a barrier to adoption due to the effort required to upgrade.

Version numbers above 1.0.0 signify stronger compatibility guarantees, based on the rules above. Each increasing level can also contain changes of the lower level (e.g. MINOR can contain PATCH changes).

Example Major Number Increments

Increasing the MAJOR number is intended to signify potentially breaking changes.

Within Terraform provider development, some examples include:

  • Removing a resource or data source
  • Removing an attribute (e.g. switching to Removed on an attribute or removing the attribute definition altogether)
  • Renaming a resource or data source
  • Renaming an attribute
  • Changing fundamental provider behaviors (e.g. authentication or configuration precedence)
  • Changing resource import ID format
  • Changing resource ID format
  • Changing attribute type where the new type is functionally incompatible (including but not limited to changing TypeSet to TypeList and TypeList to TypeSet)
  • Changing attribute format (e.g. changing a timestamp from epoch time to a string)
  • Changing attribute default value that is incompatible with previous Terraform states (e.g. Default: "one" to Default: "two")
  • Adding an attribute default value that does not match the API default

Example Minor Number Increments

MINOR increments are intended to signify the availability of new functionality or deprecations of existing functionality without breaking changes to the previous version.

Within Terraform provider development, some examples include:

  • Marking a resource or data source as deprecated
  • Marking an attribute as deprecated
  • Adding a new resource or data source
  • Aliasing an existing resource or data source
  • Implementing new attributes within the provider configuration or an existing resource or data source
  • Implementing new validation within an existing resource or data source
  • Changing attribute type where the new type is functionally compatible (e.g. TypeInt to TypeFloat)

Example Patch Number Increments

Increasing the PATCH number is intended to signify mainly bug fixes and to be functionally equivalent with the previous version.

Within Terraform provider development, some examples include:

  • Fixing an interaction with the remote API or Terraform state drift detection (e.g. broken create, read, update, or delete functionality)
  • Fixing attributes to match behavior with resource code (e.g. removing Optional when an attribute can not be configured in the remote API)
  • Fixing attributes to match behavior with the remote API (e.g. changing Required to Optional, fixing validation)

Changelog Specification

For better operator experience, we provide a standardized format so development information is available across all providers consistently. The changelog should live in a top level file in the project, named CHANGELOG or CHANGELOG.md. We generally recommend that the changelog is updated outside of pull requests unless a clear process is set up for handling merge conflicts.

Version Headers

The upcoming release version number is always at the top of the file and is marked specifically as (Unreleased), with other previously released versions below.

~> NOTE: For HashiCorp released providers, the release process will replace the "Unreleased" header with the current date. This line must be present with the target release version to successfully release that version.

## X.Y.Z (Unreleased)

...


## A.B.C (Month Day, Year)

...

Categorization

Information in the changelog should be broken down as follows:

  • BACKWARDS INCOMPATIBILITIES or BREAKING CHANGES: This section documents in brief any incompatible changes and how to handle them. This should only be present in major version upgrades.
  • NOTES: Additional information for potentially unexpected upgrade behavior, upcoming deprecations, or to highlight very important crash fixes (e.g. due to upstream API changes)
  • FEATURES: These are major new improvements that deserve a special highlight, such as a new resource or data source.
  • IMPROVEMENTS or ENHANCEMENTS: Smaller features added to the project such as a new attribute for a resource.
  • BUG FIXES: Any bugs that were fixed.

These should be displayed as left aligned text with new lines above and below:


CATEGORY:

Entry Format

Each entry under a category should use the following format:

* subsystem: Descriptive message [GH-1234]

For provider development, the "subsystem" is typically the resource or data source affected, e.g. resource/load_balancer, or provider if the change affects the whole provider (e.g. authentication logic). Each bullet also references the corresponding pull request number that contained the code changes, in the format [GH-####] (for HashiCorp released plugins, this is automatically updated on release).

Entry Ordering

To order entries, these basic rules should be followed:

  1. If large cross-cutting changes are present, list them first (e.g. provider)
  2. Order other entries lexicographically based on subsystem (e.g. resource/load_balancer then resource/subnet)

Example Changelog

## 1.0.0 (Unreleased)

BREAKING CHANGES:

* Resource `network_port` has been removed [GH-1]

FEATURES:

* **New Resource:** `cluster` [GH-43]

IMPROVEMENTS:

* resource/load_balancer: Add `ATTRIBUTE` argument (support X new functionality) [GH-12]
* resource/subnet: Now better [GH-22, GH-32]

## 0.2.0 (Month Day, Year)

FEATURES:

...

21-resources-index


page_title: Resources - Guides description: >- Resources are a key component to provider development. Learn to use advanced resource APIs.

Resources

A key component to Terraform Provider development is defining the creation, read, update, and deletion functionality of a resource to map those API operations into the Terraform lifecycle. While the basic aspects of developing Terraform resources have already been covered in the Call APIs with Terraform Providers Learn collection and Schemas, this section covers more advanced features of resource development.

Import

Many operators migrating to Terraform will have previously existing infrastructure they want to bring under the management of Terraform. Terraform allows resources to implement Import Support to begin managing those existing infrastructure components.

Retries and Customizable Timeouts

The reality of cloud infrastructure is that it typically takes time to perform operations such as booting operating systems, discovering services, and replicating state across network edges. Terraform implements functionality to retry API requests or specifically declare state change criteria, while allowing customizable timeouts for operators. More information can be found in the Retries and Customizable Timeouts section.

Customizing Differences

Terraform tracks the state of provisioned resources in its state file, and compares the user-passed configuration against that state. When Terraform detects a discrepancy, it presents the user with the differences between the configuration and the state. Sometimes these scenarios require special handling, which is where Customizing Differences can help.

State Migrations

Resources define the data types and API interactions required to create, update, and destroy infrastructure with a cloud vendor, while the Terraform state stores mapping and metadata information for those remote objects.

When resource implementations change (due to bug fixes, improvements, or changes to the backend APIs Terraform interacts with), they can sometimes become incompatible with existing state. When this happens, a migration is needed for resources provisioned in the wild with old schema configurations. Terraform resources support migrating state values in these scenarios via State Migration.


22-resources-customizing-differences


page_title: Resources - Customizing Differences description: Difference customization within Resources.

Resources - Customizing Differences

Terraform tracks the state of provisioned resources in its state file, and compares the user-passed configuration against that state. When Terraform detects a discrepancy, it presents the user with the differences between the configuration and the state.

Sometimes determining the differences between state and configuration requires special handling, which can be managed with the CustomizeDiff function.

CustomizeDiff is passed a *schema.ResourceDiff. This is a structure similar to schema.ResourceData: it lacks most write functions (like Set), but adds functions for working with the difference, such as SetNew, SetNewComputed, and ForceNew.

~> NOTE: CustomizeDiff does not currently support computed/"known after apply" values from other resource attributes.

Any function can be provided for difference customization. For the majority of simple cases, we recommend that you first try to compose the behavior using the customdiff helper package, which allows for a more declarative configuration. However, for highly custom requirements, a custom-made function is usually easier and more maintainable than working around the helper's limitations.

package example

import (
    "context"
    "fmt"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        Create: resourceExampleInstanceCreate,
        Read:   resourceExampleInstanceRead,
        Update: resourceExampleInstanceUpdate,
        Delete: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "size": {
                Type:     schema.TypeInt,
                Required: true,
            },
        },
        CustomizeDiff: customdiff.All(
            customdiff.ValidateChange("size", func(ctx context.Context, old, new, meta interface{}) error {
                // If we are increasing "size" then the new value must be
                // a multiple of the old value.
                if new.(int) <= old.(int) {
                    return nil
                }
                if (new.(int) % old.(int)) != 0 {
                    return fmt.Errorf("new size value must be an integer multiple of old value %d", old.(int))
                }
                return nil
            }),
            customdiff.ForceNewIfChange("size", func(ctx context.Context, old, new, meta interface{}) bool {
                // "size" can only increase in-place, so we must create a new resource
                // if it is decreased.
                return new.(int) < old.(int)
            }),
        ),
    }
}

In this example we use the helpers to ensure the size can only be increased to multiples of the original size, and that if it is ever decreased it forces a new resource. The customdiff.All helper will run all the customization functions, collecting any errors as a multierror. To have the functions short-circuit on error, use customdiff.Sequence instead.
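
When the helpers do not fit, a hand-written function can work with the *schema.ResourceDiff directly. Below is a minimal sketch, assuming the SDKv2 CustomizeDiffFunc signature (which receives a context.Context) and a hypothetical computed fingerprint attribute that the remote service regenerates whenever size changes:

        CustomizeDiff: func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
            if d.HasChange("size") {
                // the service issues a new fingerprint on resize, so mark the
                // computed attribute as unknown until after apply
                if err := d.SetNewComputed("fingerprint"); err != nil {
                    return err
                }
            }
            return nil
        },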


22-resources-import


page_title: Resources - Import description: Implementing resource import support.

Resources - Import

Adding import support for Terraform resources will allow existing infrastructure to be managed within Terraform. This type of enhancement generally requires a small to moderate amount of code changes.

~> Note: Operators are responsible for writing the appropriate configuration that will be associated with the resource import. This restriction may be removed in a future version of Terraform.

When importing, the operator will specify the Terraform configuration address for the resource they wish to import, along with an identifier for import. The import identifier may be different than the resource identifier (ResourceData.SetId()) for compatibility reasons outlined below in the Importer State Function section.

$ terraform import example_thing.foo abc123

Overview of Implementation

Implementing import support requires three changes: an Importer State function in the resource code, a TestStep with ImportState: true in the acceptance tests, and documentation of the import ID format.

Hands-on: Try the Implement Import tutorial on HashiCorp Learn. In this tutorial, you will implement the import functionality on an example Terraform provider.

Resource Code Implementation

In the resource code (e.g. resource_example_thing.go), implement an Importer State function:

func resourceExampleThing() *schema.Resource {
  return &schema.Resource{
    /* ... existing Resource functions ... */
    Importer: &schema.ResourceImporter{
      State: /* ... */,
    },
  }
}

Resource Acceptance Testing Implementation

In the resource acceptance testing (e.g. resource_example_thing_test.go), implement TestSteps with ImportState: true:

func TestAccExampleThing_basic(t *testing.T) {
  /* ... potentially existing acceptance testing logic ... */

  resource.ParallelTest(t, resource.TestCase{
    /* ... existing TestCase functions ... */
    Steps: []resource.TestStep{
      /* ... existing TestStep ... */
      {
        ResourceName:      "example_thing.test",
        ImportState:       true,
        ImportStateVerify: true,
      },
    },
  })
}

Resource Documentation Implementation

In the resource documentation (e.g. website/docs/r/example_thing.html.markdown), add an Import documentation section at the bottom of the page:

## Import

Service Thing can be imported using the id, e.g.

```
$ terraform import example_thing.example abc123
```

Additional Information

Recommendations for Import

The items below are coding/testing styles that should generally be followed when implementing import support.

  • The TestStep including ImportState testing should not be performed solely in a separate acceptance test. This duplicates testing infrastructure/time and does not check that all resource configurations import into Terraform properly.
  • The TestStep including ImportState should be included in all applicable resource acceptance tests (except those that delete the resource in question, e.g. _disappears tests)
  • Import implementations should not change existing Create function d.SetId() calls. Versioning best practices for Terraform Provider development notes that changing the resource ID is considered a breaking change for a major version upgrade as it makes the id attribute ambiguous between provider versions.
  • ImportStateVerifyIgnore should only be used where it's not possible to d.Set() the attribute in the Read function (preferable) or the Importer State function.

Importer State Function

Where possible, prefer using schema.ImportStatePassthrough as the Importer State function:

func resourceExampleThing() *schema.Resource {
  return &schema.Resource{
    /* ... existing Resource functions ... */
    Importer: &schema.ResourceImporter{
      State: schema.ImportStatePassthrough,
    },
  }
}

This function requires the Read function to be able to refresh the entire resource with d.Id() ONLY. Sometimes it is possible to adjust the resource Read function to replace d.Get() usage with d.Id() if they exactly match, or to add a function that parses the resource ID into the necessary attributes:

// Illustrative example of parsing a resource ID into two parts to match requirements for Read function
// In this example, the resource ID is a combination of attribute1 and attribute2, separated by a colon (:) character

func resourceServiceThingExampleThingParseId(id string) (string, string, error) {
  parts := strings.SplitN(id, ":", 2)

  if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
    return "", "", fmt.Errorf("unexpected format of ID (%s), expected attribute1:attribute2", id)
  }

  return parts[0], parts[1], nil
}

// In the resource Read function:

attribute1, attribute2, err := resourceServiceThingExampleThingParseId(d.Id())

if err != nil {
  return err
}

More likely though, if the resource requires multiple attributes and they are not already in the resource ID, the Importer State will require a custom function implementation beyond schema.ImportStatePassthrough, as seen below. The ID passed into terraform import should be parsed so d.Set() can be called with the required attributes to make the Read function operate properly. The resource ID should also match the ID set during the resource Create function via d.SetId().

// Illustrative example of parsing the import ID during terraform import
// This should only be used where the resource ID cannot be solely used
// during the resource Read function.
func resourceExampleThing() *schema.Resource {
  return &schema.Resource{
    /* ... other Resource functions ... */
    Importer: &schema.ResourceImporter{
      State:  func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
        // d.Id() here is the last argument passed to the `terraform import RESOURCE_TYPE.RESOURCE_NAME RESOURCE_ID` command
        // Here we use a function to parse the import ID (like the example above) to simplify our logic
        attribute1, attribute2, err := resourceServiceThingExampleThingParseId(d.Id())

        if err != nil {
          return nil, err
        }

        d.Set("attribute1", attribute1)
        d.Set("attribute2", attribute2)
        d.SetId(fmt.Sprintf("%s:%s", attribute1, attribute2))

        return []*schema.ResourceData{d}, nil
      },
    },
  }
}

ImportStateVerifyIgnore

~> NOTE: ImportStateVerifyIgnore should be used sparingly, as it means Terraform will require a follow-up apply to the resource after import, or operators must configure the lifecycle configuration block's ignore_changes argument (especially for attributes that are ForceNew).

Some resource attributes only exist within the context of the Terraform resource or are only used to modify an API request during resource Create, Update, and Delete functions. In these cases, if the implementation of the resource cannot obtain the value for the attribute in the Read function, or it's not determined/defaulted to the correct value during the Importer State function, the acceptance testing may return an error like the following:

--- FAIL: TestAccExampleThing_namePrefix (18.56s)
    testing.go:568: Step 2 error: ImportStateVerify attributes not equivalent. Difference is shown below. Top is actual, bottom is expected.

        (map[string]string) {
        }


        (map[string]string) (len=1) {
         (string) (len=11) "name_prefix": (string) (len=24) "test-7166041588452991103"
        }

To have the import testing ignore this attribute's value being missing during import, use the ImportStateVerifyIgnore field with a list containing the name(s) of the attributes, e.g.

func TestAccExampleThing_basic(t *testing.T) {
  /* ... potentially existing acceptance testing logic ... */

  resource.ParallelTest(t, resource.TestCase{
    /* ... existing TestCase functions ... */
    Steps: []resource.TestStep{
      /* ... existing TestStep ... */
      {
        ResourceName:            "example_thing.test",
        ImportState:             true,
        ImportStateVerify:       true,
        ImportStateVerifyIgnore: []string{"name_prefix"},
      },
    },
  })
}

Multiple Resource Import

~> NOTE: Multiple resource import is generally discouraged due to the implementation/testing complexity and since the resource addresses saved into the Terraform state will likely not align with the operator's configuration.

The Terraform import framework supports importing multiple resources from a single state import function (sometimes referred to as "complex" imports), by adding elements to the returned []*schema.ResourceData. Each of those new elements must have ResourceData.SetType() and ResourceData.SetId() called.

Given our fictitious example resource, if the API supported many associations with it, we could perform an API lookup during the resource import function to find those associations and add them to the Terraform state during import.

func resourceExampleThingImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
  // Perform API lookup using the import ID (d.Id()) and save those into a variable named associations

  results := []*schema.ResourceData{d}
  for _, association := range associations {
    d := resourceExampleThingAssociation().Data(nil)
    d.SetType("example_thing_association")
    d.SetId(/* ... dependent on example_thing_association implementation ... */)
    results = append(results, d)
  }

  return results, nil
}

22-resources-retries-and-customizable-timeouts


page_title: Resources - Retries and Customizable Timeouts description: Helpers for handling retries within Resources.

Resources - Retries and Customizable Timeouts

The reality of cloud infrastructure is that it typically takes time to perform operations such as booting operating systems, discovering services, and replicating state across network edges. As the provider developer you should take known delays in resource APIs into account in the CRUD functions of the resource. Terraform supports configurable timeouts to assist in these situations.

package example

import (
    "time"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        CreateContext: resourceExampleInstanceCreate,
        ReadContext:   resourceExampleInstanceRead,
        UpdateContext: resourceExampleInstanceUpdate,
        DeleteContext: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
        Timeouts: &schema.ResourceTimeout{
            Create: schema.DefaultTimeout(45 * time.Minute),
        },
    }
}

In the above example, the timeouts are configured in the schema for what is deemed the appropriate amount of time for the Create function. Read, Update, and Delete are also configurable, as well as a Default. These configured timeouts can be fetched in the CRUD function logic using the (*schema.ResourceData).Timeout() method, such as d.Timeout(schema.TimeoutCreate). Practitioners can override these timeout values with a resource timeouts configuration block, such as:

resource "example_thing" "example" {
  # ...

  timeouts {
    create = "60m"
  }
}
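
As a sketch of the fetch side, a context-aware create function might read the effective timeout like this (the logging and the polling placeholder are illustrative):

func resourceExampleInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    // the schema default (45 minutes above) unless the practitioner
    // overrides it in a timeouts block
    createTimeout := d.Timeout(schema.TimeoutCreate)

    log.Printf("[DEBUG] waiting up to %s for instance creation", createTimeout)

    // ... use createTimeout to bound any polling or retry logic ...

    return nil
}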

Default Timeouts and Deadline Exceeded Errors

The SDK imposes the following default timeout behaviors for CRUD functions:

| CRUD Function        | Default Timeout |
|----------------------|-----------------|
| Create               | 20 minutes      |
| CreateContext        | 20 minutes      |
| CreateWithoutTimeout | N/A             |
| Delete               | 20 minutes      |
| DeleteContext        | 20 minutes      |
| DeleteWithoutTimeout | N/A             |
| Read                 | 20 minutes      |
| ReadContext          | 20 minutes      |
| ReadWithoutTimeout   | N/A             |
| Update               | 20 minutes      |
| UpdateContext        | 20 minutes      |
| UpdateWithoutTimeout | N/A             |

The Timeouts field on *schema.Resource can be used to customize these defaults for the CRUD functions that have them.

If a CRUD function timeout is exceeded, the SDK will automatically return a context.DeadlineExceeded error. To practitioners, this is shown in the Terraform CLI output as a context: deadline exceeded error. Since the context timeout and associated error handling occur outside CRUD logic in the SDK, it is not possible to capture or change this error behavior. If it is unclear how long CRUD operations may take, it is recommended to either increase the default timeout using the Timeouts field, or switch to using the WithoutTimeout CRUD functions.
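
As a sketch, opting out of the SDK-imposed deadline looks like the following, assuming an SDK version that provides the WithoutTimeout fields. The function names reuse the earlier examples; these functions receive a context without an SDK-imposed deadline, so the provider is responsible for honoring cancellation itself:

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        CreateWithoutTimeout: resourceExampleInstanceCreate,
        ReadWithoutTimeout:   resourceExampleInstanceRead,
        UpdateWithoutTimeout: resourceExampleInstanceUpdate,
        DeleteWithoutTimeout: resourceExampleInstanceDelete,

        /* ... Schema ... */
    }
}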

Retry

The retry helper takes a timeout and a retry function.

  • The timeout value specifies the maximum time Terraform will invoke the retry function. You can retrieve the timeout from the *schema.ResourceData struct by passing the timeout key (schema.TimeoutCreate) to the Timeout method.
  • The retry function returns either a resource.NonRetryableError for unexpected errors/states or a resource.RetryableError for expected errors/states. If the function returns a resource.RetryableError, the helper re-runs it.

In the context of a CREATE function, once the backend responds with the desired state, invoke the READ function. If READ errors, return that error wrapped with resource.NonRetryableError. Otherwise, return nil (no error) from the retry function.

func resourceExampleInstanceCreate(d *schema.ResourceData, meta interface{}) error {
    name := d.Get("name").(string)
    client := meta.(*ExampleClient)
    _, err := client.CreateInstance(name)

    if err != nil {
        return fmt.Errorf("Error creating instance: %s", err)
    }

    return resource.Retry(d.Timeout(schema.TimeoutCreate) - time.Minute, func() *resource.RetryError {
        resp, err := client.DescribeInstance(name)

        if err != nil {
            return resource.NonRetryableError(fmt.Errorf("Error describing instance: %s", err))
        }

        if resp.Status != "CREATED" {
            return resource.RetryableError(fmt.Errorf("Expected instance to be created but was in state %s", resp.Status))
        }

        err = resourceExampleInstanceRead(d, meta)
        if err != nil {
            return resource.NonRetryableError(err)
        } else {
            return nil
        }
    })
}

~> Important If using a CRUD function with a timeout, any Retry() or RetryContext() function timeouts should be configured below that duration to avoid returning the SDK context: deadline exceeded error instead of the retry logic error.

StateChangeConf

resource.Retry is useful for simple scenarios, particularly when the API response is either success or failure, but sometimes handling an API's latency or eventual consistency requires more fine-tuning. resource.Retry is in fact a wrapper around another helper: resource.StateChangeConf.

Use resource.StateChangeConf when your resource has multiple states to progress through, when you require fine-grained control of retry and delay timing, or when you want to ensure a minimum number of occurrences of a target state (this is very common when dealing with eventually consistent APIs, where a response can reply with an old state between calls before becoming consistent).

func resourceExampleInstanceCreate(d *schema.ResourceData, meta interface{}) error {
    name := d.Get("name").(string)
    client := meta.(*ExampleClient)
    _, err := client.CreateInstance(name)

    if err != nil {
        return fmt.Errorf("Error creating instance: %s", err)
    }

    createStateConf := &resource.StateChangeConf{
        Pending: []string{
            client.ExampleInstanceStateRequesting,
            client.ExampleInstanceStatePending,
            client.ExampleInstanceStateCreating,
            client.ExampleInstanceStateVerifying,
        },
        Target: []string{
            client.ExampleInstanceStateCreateComplete,
        },
        Refresh: func() (interface{}, string, error) {
            resp, err := client.DescribeInstance(name)
            if err != nil {
                0, "", err
            }
            return resp, resp.Status, nil
        },
        Timeout:    d.Timeout(schema.TimeoutCreate) - time.Minute,
        Delay:      10 * time.Second,
        MinTimeout: 5 * time.Second,
        ContinuousTargetOccurence: 5,
    }
    _, err = createStateConf.WaitForState()
    if err != nil {
        return fmt.Errorf("Error waiting for example instance (%s) to be created: %s", d.Id(), err)
    }

    return resourceExampleInstanceRead(d, meta)
}

~> Important If using a CRUD function with a timeout, any StateChangeConf timeouts should be configured below that duration to avoid returning the SDK context: deadline exceeded error instead of the retry logic error.


22-resources-state-migration


page_title: Resources - State Migration description: Migrating state values within resources.

Resources - State Migration

Resources define the data types and API interactions required to create, update, and destroy infrastructure with a cloud vendor, while the Terraform state stores mapping and metadata information for those remote objects. There are several reasons why a resource implementation may need to change: backend APIs Terraform interacts with will change over time, or the current implementation might be incorrect or unmaintainable. Some of these changes may not be backward compatible, and a migration is needed for resources provisioned in the wild with old schema configurations.

The mechanism that is used for state migrations changed between v0.11 and v0.12 of the SDK bundled with Terraform core. Be sure to choose the method that matches your Terraform dependency.

Terraform v0.12 SDK State Migrations

~> Note: This method of state migration does not work if the provider has a dependency on the Terraform v0.11 SDK. See the Terraform v0.11 SDK State Migrations section for details on using MigrateState instead.

For this task, provider developers should use a resource's SchemaVersion and StateUpgraders fields. Resources typically do not have these fields configured unless state migrations have been performed in the past.

When Terraform encounters a newer resource SchemaVersion during planning, it will automatically migrate the state through each StateUpgrader function until it matches the current SchemaVersion.

State migrations performed with StateUpgraders are compatible with the Terraform 0.11 runtime, if the provider still supports the Terraform 0.11 protocol. Additional MigrateState implementation is not necessary and any existing MigrateState implementations do not need to be converted to StateUpgraders.

The general overview of this process is:

  • Create a new function that copies the existing schema.Resource, but only includes the Schema field. Terraform needs the type information of each attribute in the previous schema version to successfully migrate the state.
  • Change the existing resource Schema as necessary.
  • If the SchemaVersion field for the resource is already defined, increase its value by one. If SchemaVersion is not defined for the resource, add SchemaVersion: 1 to the resource (resources default to SchemaVersion: 0 if undefined).
  • Implement the StateUpgraders field for the resource, which is a list of schema.StateUpgrader. Each new StateUpgrader should be configured with the following:
    • Type set to CoreConfigSchema().ImpliedType() of the saved schema.Resource function above.
    • Upgrade set to a function that modifies the attribute(s) appropriately for the migration.
    • Version set to the version of the schema before this migration. If no previous state migrations were performed, this should be set to 0.

For example, with a resource without previous state migrations:

package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        Create: resourceExampleInstanceCreate,
        Read:   resourceExampleInstanceRead,
        Update: resourceExampleInstanceUpdate,
        Delete: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    }
}

Say the instance resource API now requires the name attribute to end with a period ("."):

package example

import (
    "fmt"
    "strings"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        Create: resourceExampleInstanceCreate,
        Read:   resourceExampleInstanceRead,
        Update: resourceExampleInstanceUpdate,
        Delete: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
                ValidateFunc: func(v interface{}, k string) (warns []string, errs []error) {
                    if !strings.HasSuffix(v.(string), ".") {
                        errs = append(errs, fmt.Errorf("%q must end with a period '.'", k))
                    }
                    return
                },
            },
        },
        SchemaVersion: 1,
        StateUpgraders: []schema.StateUpgrader{
            {
                Type:    resourceExampleInstanceResourceV0().CoreConfigSchema().ImpliedType(),
                Upgrade: resourceExampleInstanceStateUpgradeV0,
                Version: 0,
            },
        },
    }
}

func resourceExampleInstanceResourceV0() *schema.Resource {
    return &schema.Resource{
        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    }
}

func resourceExampleInstanceStateUpgradeV0(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
    rawState["name"] = rawState["name"] + "."

    return rawState, nil
}

To unit test this migration, the following can be written:

func testResourceExampleInstanceStateDataV0() map[string]interface{} {
    return map[string]interface{}{
        "name": "test",
    }
}

func testResourceExampleInstanceStateDataV1() map[string]interface{} {
    v0 := testResourceExampleInstanceStateDataV0()
    return map[string]interface{}{
        "name": v0["name"] + ".",
    }
}

func TestResourceExampleInstanceStateUpgradeV0(t *testing.T) {
    expected := testResourceExampleInstanceStateDataV1()
    actual, err := resourceExampleInstanceStateUpgradeV0(testResourceExampleInstanceStateDataV0(), nil)
    if err != nil {
        t.Fatalf("error migrating state: %s", err)
    }

    if !reflect.DeepEqual(expected, actual) {
        t.Fatalf("\n\nexpected:\n\n%#v\n\ngot:\n\n%#v\n\n", expected, actual)
    }
}

Terraform v0.11 SDK State Migrations

~> NOTE: This method of state migration does not work if the provider has a dependency on the Terraform v0.12 SDK. See the Terraform v0.12 SDK State Migrations section for details on using StateUpgraders instead.

For this task, provider developers should use a resource's SchemaVersion and MigrateState function. Resources do not have these options set on first implementation; SchemaVersion defaults to 0.

package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        Create: resourceExampleInstanceCreate,
        Read:   resourceExampleInstanceRead,
        Update: resourceExampleInstanceUpdate,
        Delete: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
            },
        },
    }
}

Say the instance resource API now requires the name attribute to end with a period ("."):

package example

import (
    "fmt"
    "strings"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleInstance() *schema.Resource {
    return &schema.Resource{
        Create: resourceExampleInstanceCreate,
        Read:   resourceExampleInstanceRead,
        Update: resourceExampleInstanceUpdate,
        Delete: resourceExampleInstanceDelete,

        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
                ValidateFunc: func(v interface{}, k string) (warns []string, errs []error) {
                    if !strings.HasSuffix(v.(string), ".") {
                        errs = append(errs, fmt.Errorf("%q must end with a period '.'", k))
                    }
                    return
                },
            },
        },
        SchemaVersion: 1,
        MigrateState: resourceExampleInstanceMigrateState,
    }
}

To trigger the migration, we set SchemaVersion to 1. When Terraform saves state, it also records the SchemaVersion at that time. That way, when differences are calculated, if the saved SchemaVersion is less than what the Resource is currently set to, the state is run through the MigrateState function.

func resourceExampleInstanceMigrateState(v int, inst *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
    switch v {
    case 0:
        log.Println("[INFO] Found Example Instance State v0; migrating to v1")
        return migrateExampleInstanceStateV0toV1(inst)
    default:
        return inst, fmt.Errorf("Unexpected schema version: %d", v)
    }
}

func migrateExampleInstanceStateV0toV1(inst *terraform.InstanceState) (*terraform.InstanceState, error) {
    if inst.Empty() {
        log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
        return inst, nil
    }

    if !strings.HasSuffix(inst.Attributes["name"], ".") {
        log.Printf("[DEBUG] Attributes before migration: %#v", inst.Attributes)
        inst.Attributes["name"] = inst.Attributes["name"] + "."
        log.Printf("[DEBUG] Attributes after migration: %#v", inst.Attributes)
    }

    return inst, nil
}

Although not required, it's a good idea to break the migration function up into version jumps. As the provider developer, you will have to account for migrations that span more than one version upgrade; the switch/case pattern above lets you create code paths for states coming from all the versions of state in the wild. Be careful to allow all legacy versions to migrate to the latest schema. Consider the code now, where the name attribute has moved to an attribute called fqdn.

func resourceExampleInstanceMigrateState(v int, inst *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
    var err error
    switch v {
    case 0:
        log.Println("[INFO] Found Example Instance State v0; migrating to v1")
        inst, err = migrateExampleInstanceV0toV1(inst)
        if err != nil {
            return inst, err
        }
        fallthrough
    case 1:
        log.Println("[INFO] Found Example Instance State v1; migrating to v2")
        return migrateExampleInstanceStateV1toV2(inst)
    default:
        return inst, fmt.Errorf("Unexpected schema version: %d", v)
    }
}

func migrateExampleInstanceStateV1toV2(inst *terraform.InstanceState) (*terraform.InstanceState, error) {
    if inst.Empty() {
        log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
        return inst, nil
    }

    if inst.Attributes["name"] != "" {
        inst.Attributes["fqdn"] = inst.Attributes["name"]
        delete(inst.Attributes, "name")
    }
    return inst, nil
}

The fallthrough allows a very old state to move from 0 to 1 and then to 2. Sometimes state migrations are more complicated and require making API calls; to allow this, the configured meta interface{} is also passed to the MigrateState function.
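
Below is a minimal sketch of such a migration that uses the meta client; the ExampleClient type, the DescribeInstance call, and the zone attribute are hypothetical:

func migrateExampleInstanceStateV2toV3(inst *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
    if inst.Empty() {
        log.Println("[DEBUG] Empty InstanceState; nothing to migrate.")
        return inst, nil
    }

    // look up a value that was never stored in older states
    client := meta.(*ExampleClient)
    resp, err := client.DescribeInstance(inst.Attributes["fqdn"])
    if err != nil {
        return inst, err
    }

    inst.Attributes["zone"] = resp.Zone
    return inst, nil
}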


31-schemas-index


page_title: Plugin Development - Schemas description: |- Schemas define plugin attributes and behaviors. Learn how to create schemas in SDKv2.

Terraform Schemas

Terraform Plugins are expressed using schemas to define attributes and their behaviors, using a high level package named schema exposed by this SDK. Providers, Resources, and Provisioners all contain schemas, and Terraform Core uses them to produce plan and apply executions based on the behaviors described.

Below is an example provider.go file, detailing a hypothetical ExampleProvider implementation:

package exampleprovider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func Provider() *schema.Provider {
	// Example Provider requires an API Token.
	// The Email is optional
	return &schema.Provider{
		Schema: map[string]*schema.Schema{
			"api_token": {
				Type:        schema.TypeString,
				Required:    true,
			},
			"email": {
				Type:        schema.TypeString,
				Optional:    true,
				Default:     "",
			},
		},
	}
}

In this example we’re creating a Provider and setting its schema. This schema is a collection of key value pairs of schema elements that define the attributes a user can specify in their configuration. The keys are strings, and the values are schema.Schema structs that define the behavior.

Schemas can be thought of as a type, paired with one or more properties that describe its behavior.

Schema Types

Schema items must be defined using one of the builtin types, such as TypeString, TypeBool, or TypeInt. The type defines what is considered valid input for a given schema item in a user's configuration.

See Schema Types for more information on the types available to schemas.

Schema Behaviors

Schema items can have various properties that can be combined to match the behaviors represented by their API. Some items are Required, others Optional, while others may be Computed, such that they are useful to be tracked in state but cannot be configured by users.

See Schema Behaviors for more information on the properties a schema can have.


32-schemas-schema-behaviors


page_title: Plugin Development - Schema Behaviors description: |- Schemas define plugin attributes and behaviors. Learn about the fields that you can use to define element behaviors in SDKv2.

Schema Behaviors

Schema fields that can have an effect at plan or apply time are collectively referred to as "Behavioral fields", or an element's behaviors. These fields are often combined in several ways to create different behaviors, depending on the need of the element in question, typically customized to match the behavior of a cloud service API. For example, at time of writing, AWS Launch Configurations cannot be updated through the AWS API. As a result, all of the schema elements in the corresponding Terraform Provider resource aws_launch_configuration are marked as ForceNew: true. This behavior instructs Terraform to first destroy and then recreate the resource if any of the attributes change in the configuration, as opposed to trying to update the existing resource.

Primitive Behaviors

-> Note: The primitive behavior fields cannot be set to false. You can opt out of a behavior by omitting it.

Optional

Data structure: bool

Values: true

Restrictions:

  • Cannot be used if Required is true
  • Must be set if Required is omitted and element is not Computed

Indicates that this element is optional to include in the configuration. Note that Optional does not itself establish a default value. See Default below.

Schema example:

"encrypted": {
  Type:     schema.TypeBool,
  Optional: true,
},

Configuration example:

resource "example_volume" "ex" {
  encrypted = true
}

Required

Data structure: bool

Values: true

Restrictions:

  • Cannot be used if Optional is true
  • Cannot be used if Computed is true
  • Must be set if Optional is omitted and element is not Computed

Indicates that this element must be provided in the configuration. Omitting this attribute from configuration, or later removing it, will result in a plan-time error.

Schema example:

"name": {
  Type:     schema.TypeString,
  Required: true,
},

Configuration example:

resource "example_volume" "ex" {
  name = "swap volume"
}

Default

Data structure: interface

Value: any value of an element's Type for primitive types, or the type defined by Elem for complex types.

Restrictions:

  • Cannot be used if Required is true
  • Cannot be used with DefaultFunc

If Default is specified, Terraform will use that value when this item is not set in the configuration.

Schema example:

"encrypted": {
  Type:     schema.TypeBool,
  Optional: true,
  Default: false,
},

Configuration example (specified):

resource "example_volume" "ex" {
  name = "swap volume"
  encrypted = true
}

Configuration example (omitted):

resource "example_volume" "ex" {
  name = "swap volume"
  # encrypted receives its default value, false
}

Computed

Data structure: bool

Value: true

Restrictions:

  • Cannot be used when Required is true
  • Cannot be used when Default is specified
  • Cannot be used with DefaultFunc

Computed is often used to represent values that are not user configurable or can not be known at time of terraform plan or apply, such as a date of creation or a service-specific UUID. Computed can be combined with other attributes to achieve specific behaviors, and can be used as output for interpolation into other resources.

Schema example:

"uuid": {
  Type:     schema.TypeString,
  Computed: true,
},

Configuration example:

resource "example_volume" "ex" {
  name = "swap volume"
  encrypted = true
}

output "volume_uuid" {
  value = "${example_volume.ex.uuid}"
}

ForceNew

Data structure: bool

Value: true

ForceNew indicates that any change in this field requires the resource to be destroyed and recreated.

Schema example:

"base_image": {
  Type:     schema.TypeString,
  Required: true,
  ForceNew: true,
},

Configuration example:

resource "example_instance" "ex" {
  name = "bastion host"
  base_image = "ubuntu_17.10"
}

Function Behaviors

DiffSuppressFunc

Data structure: SchemaDiffSuppressFunc

When provided, DiffSuppressFunc will be used by Terraform to calculate the diff of this field. Common use cases are capitalization differences in string names, or logical equivalences in JSON values.

Schema example:

"base_image": {
  Type:     schema.TypeString,
  Required: true,
  ForceNew: true,
  // Suppress the diff shown if the base_image names are equal when compared in lower case.
  DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
    if strings.ToLower(old) == strings.ToLower(new) {
      return true
    }
    return false
  },
},

Configuration example:

Here we assume the service API accepts capitalizations of the base_image name and converts it to a lowercase string. The API then returns the lower case value in its responses.

resource "example_instance" "ex" {
  name = "bastion host"
  base_image = "UBunTu_17.10"
}

DefaultFunc

Data structure: SchemaDefaultFunc

Restrictions:

  • Cannot be used if Default is specified

When provided, DefaultFunc will be used to compute a dynamic default for this element. The return value of this function should be "stable", such that it is uncommon to return different values in subsequent plans without any other changes being made, to avoid unnecessary diffs in terraform plan.

DefaultFunc is most commonly used in Provider schemas to allow elements to have defaults that are read from the environment.

Schema example:

In this example, Terraform will attempt to read region from the environment if it is omitted from configuration. If it’s not found in the environment, a default value of us-west is given.

"region": {
  Type:     schema.TypeString,
  Required: true,
  DefaultFunc: func() (interface{}, error) {
    if v := os.Getenv("PROVIDER_REGION"); v != "" {
      return v, nil
    }

    return "us-west", nil
  },
},

Configuration example (provided):

provider "example" {
  api_key = "somesecretkey"
  region = "us-east"
}

Configuration example (default func with PROVIDER_REGION set to us-east in the environment):

provider "example" {
  api_key = "somesecretkey"
  # region is "us-east"
}

Configuration example (default func with PROVIDER_REGION unset in the environment):

provider "example" {
  api_key = "somesecretkey"
  # region is "us-west"
}
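
The SDK also provides the schema.EnvDefaultFunc helper, which implements this exact pattern of reading an environment variable and falling back to a default value, so the example above can be shortened:

"region": {
  Type:        schema.TypeString,
  Required:    true,
  DefaultFunc: schema.EnvDefaultFunc("PROVIDER_REGION", "us-west"),
},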

StateFunc

Data structure: SchemaStateFunc

StateFunc is a function used to convert the value of this element to a string to be stored in the state.

Schema example:

In this example, the StateFunc converts a string value to all lower case.

"name": &schema.Schema{
  Type:     schema.TypeString,
  ForceNew: true,
  Required: true,
  StateFunc: func(val interface{}) string {
    return strings.ToLower(val.(string))
  },
},

Configuration example (provided):

resource "example" "ex_instance" {
  name = "SomeValueCASEinsensitive"
}

Value in statefile:

"name": "somevaluecaseinsensitive"

ValidateFunc

Deprecated: Use ValidateDiagFunc instead.

Data structure: SchemaValidateFunc

Restrictions:

  • Only works with primitive types

ValidateFunc is a function used to validate the value of a primitive type. Common use cases include ensuring an integer falls within a range or a string value is present in a list of valid options. The function returns two slices: the first for warnings, the second for errors, which allows a single validator to report multiple invalid cases. Terraform will only halt execution if an error is returned. Returning warnings will warn the user, but the data provided is still considered valid.

Terraform includes a number of validators for use in plugins in the validation package. A full list can be found here: https://pkg.go.dev/github.com/hashicorp/terraform-plugin-sdk/helper/validation

Schema example:

In this example, the ValidateFunc ensures the integer provided is a value between 0 and 10.

"amount": &schema.Schema{
 Type:     schema.TypeInt,
 Required: true,
 ValidateFunc: func(val interface{}, key string) (warns []string, errs []error) {
   v := val.(int)
   if v < 0 || v > 10 {
     errs = append(errs, fmt.Errorf("%q must be between 0 and 10 inclusive, got: %d", key, v))
   }
   return
 },
},

Configuration example:

resource "example" "ex_instance" {
 amount = "-1"
}
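
The same range check can also be expressed with a builtin validator from the helper/validation package, assuming that package is imported:

"amount": &schema.Schema{
  Type:         schema.TypeInt,
  Required:     true,
  ValidateFunc: validation.IntBetween(0, 10),
},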

ValidateDiagFunc

Data structure: SchemaValidateDiagFunc

Restrictions:

  • Only works with primitive types

ValidateDiagFunc is a function used to validate the value of a primitive type. Common use cases include ensuring an integer falls within a range or a string value is present in a list of valid options. The function returns a collection of Diagnostics. Developers should append and build the list of diagnostics up until a fatal error is reached, at which point they should return the Diagnostics. Terraform will only halt execution if an error is returned. Warnings will display a warning message to the practitioner, but continue execution.

The SDK includes some basic validators in the helper/validation package.

Schema example:

In this example, the ValidateDiagFunc ensures the string is abc.

"sample": &schema.Schema{
 Type:             schema.TypeString,
 Required:         true,
 ValidateDiagFunc: func(v interface{}, p cty.Path) diag.Diagnostics {
		value := v.(string)
    expected := "abc"
		var diags diag.Diagnostics
		if value != expected {
			diag := diag.Diagnostic{
				Severity: diag.Error,
				Summary:  "wrong value",
				Detail:   fmt.Sprintf("%q is not %q", value, expected),
			}
			diags = append(diags, diag)
		}
		return diags
	},
},

Configuration example:

resource "example" "ex_instance" {
 sample = "efg"
}
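
Builtin validators can be adapted as well: in terraform-plugin-sdk v2, validation.ToDiagFunc wraps an existing SchemaValidateFunc as a SchemaValidateDiagFunc. A sketch equivalent to the example above:

"sample": &schema.Schema{
  Type:             schema.TypeString,
  Required:         true,
  ValidateDiagFunc: validation.ToDiagFunc(validation.StringInSlice([]string{"abc"}, false)),
},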

32-schemas-schema-methods


page_title: Home - Plugin Development description: |- Plugin Development is a section for content dedicated to developing Plugins to extend Terraform's core offering.

Terraform Schemas Methods

-> NOTE: This page should distinguish between schema.Provider, schema.Resource, and schema.Schema.

Schema fields from schema.Schema:

// If one of these is set, then this item can come from the configuration.
// Both cannot be set. If Optional is set, the value is optional. If
// Required is set, the value is required.
//
// One of these must be set if the value is not computed. That is:
// value either comes from the config, is computed, or is both.
Optional bool
Required bool

// If this is non-nil, the provided function will be used during diff
// of this field. If this is nil, a default diff for the type of the
// schema will be used.
//
// This allows comparison based on something other than primitive, list
// or map equality - for example SSH public keys may be considered
// equivalent regardless of trailing whitespace.
DiffSuppressFunc SchemaDiffSuppressFunc

// If this is non-nil, then this will be a default value that is used
// when this item is not set in the configuration.
//
// DefaultFunc can be specified to compute a dynamic default.
// Only one of Default or DefaultFunc can be set. If DefaultFunc is
// used then its return value should be stable to avoid generating
// confusing/perpetual diffs.
//
// Changing either Default or the return value of DefaultFunc can be
// a breaking change, especially if the attribute in question has
// ForceNew set. If a default needs to change to align with changing
// assumptions in an upstream API then it may be necessary to also use
// the MigrateState function on the resource to change the state to match,
// or have the Read function adjust the state value to align with the
// new default.
//
// If Required is true above, then Default cannot be set. DefaultFunc
// can be set with Required. If the DefaultFunc returns nil, then there
// will be no default and the user will be asked to fill it in.
//
// If either of these is set, then the user won't be asked for input
// for this key if the default is not nil.
Default     interface{}
DefaultFunc SchemaDefaultFunc

// Description is used as the description for docs or asking for user
// input. It should be relatively short (a few sentences max) and should
// be formatted to fit a CLI.
Description string

// InputDefault is the default value to use for when inputs are requested.
// This differs from Default in that if Default is set, no input is
// asked for. If Input is asked, this will be the default value offered.
InputDefault string

// The fields below relate to diffs.
//
// If Computed is true, then the result of this value is computed
// (unless specified by config) on creation.
//
// If ForceNew is true, then a change in this resource necessitates
// the creation of a new resource.
//
// StateFunc is a function called to change the value of this before
// storing it in the state (and likewise before comparing for diffs).
// The use for this is for example with large strings, you may want
// to simply store the hash of it.
Computed  bool
ForceNew  bool
StateFunc SchemaStateFunc

// The following fields are only set for a TypeList, TypeSet, or TypeMap.
//
// Elem represents the element type. For a TypeMap, it must be a *Schema
// with a Type of TypeString, otherwise it may be either a *Schema or a
// *Resource. If it is *Schema, the element type is just a simple value.
// If it is *Resource, the element type is a complex structure,
// potentially with its own lifecycle.
Elem interface{}

// The following fields are only set for a TypeList or TypeSet.
//
// MaxItems defines the maximum number of items that can exist within a
// TypeSet or TypeList. Specific use cases would be if a TypeSet is being
// used to wrap a complex structure where more than one instance would
// cause instability.
//
// MinItems defines the minimum number of items that can exist within a
// TypeSet or TypeList. Specific use cases would be if a TypeSet is being
// used to wrap a complex structure where less than one instance would
// cause instability.
//
// PromoteSingle, if true, will allow single elements to be standalone
// and promote them to a list. For example "foo" would be promoted to
// ["foo"] automatically. This is primarily for legacy reasons and the
// ambiguity is not recommended for new usage. Promotion is only allowed
// for primitive element types.
MaxItems      int
MinItems      int
PromoteSingle bool

// The following fields are only valid for a TypeSet type.
//
// Set defines a function to determine the unique ID of an item so that
// a proper set can be built.
Set SchemaSetFunc

// ComputedWhen is a set of queries on the configuration. Whenever any
// of these things is changed, it will require a recompute (this requires
// that Computed is set to true).
//
// NOTE: This currently does not work.
ComputedWhen []string

// ConflictsWith is a set of schema keys that conflict with this schema.
// This will only check that they're set in the _config_. This will not
// raise an error for a malfunctioning resource that sets a conflicting
// key.
ConflictsWith []string

// When Deprecated is set, this attribute is deprecated.
//
// A deprecated field still works, but will probably stop working in near
// future. This string is the message shown to the user with instructions on
// how to address the deprecation.
Deprecated string

// When Removed is set, this attribute has been removed from the schema
//
// Removed attributes can be left in the Schema to generate informative error
// messages for the user when they show up in resource configurations.
// This string is the message shown to the user with instructions on
// what to do about the removed attribute.
Removed string

// ValidateFunc allows individual fields to define arbitrary validation
// logic. It is yielded the provided config value as an interface{} that is
// guaranteed to be of the proper Schema type, and it can yield warnings or
// errors based on inspection of that value.
//
// ValidateFunc currently only works for primitive types.
ValidateFunc SchemaValidateFunc

// Sensitive ensures that the attribute's value does not get displayed in
// logs or regular output. It should be used for passwords or other
// secret fields. Future versions of Terraform may encrypt these
// values.
Sensitive bool
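
To make the distinction in the note above concrete, the sketch below shows how the three levels nest: a *schema.Provider holds *schema.Resource values in its ResourcesMap, and each resource describes its fields with map[string]*schema.Schema, whose fields are listed here. CRUD functions and other required fields are omitted:

// A minimal sketch of the nesting of Provider, Resource, and Schema.
func Provider() *schema.Provider {
  return &schema.Provider{
    ResourcesMap: map[string]*schema.Resource{
      "example_volume": {
        Schema: map[string]*schema.Schema{
          "name": {
            Type:     schema.TypeString,
            Required: true,
          },
        },
      },
    },
  }
}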

32-schemas-schema-types


page_title: Home - Plugin Development description: |- Schemas define plugin behavior and attributes. The schema type attribute defines what kind of values users can provide in their configuration for an element.

Schema Attributes and Types

Almost every Terraform Plugin offers user-configurable parameters, such as a Provider's region or a Resource's name. Each parameter is defined in the item's schema, which is a map of string names to schema structs.

In the example Resource implementation below, the parameters uuid and name are defined:

func resourceExampleResource() *schema.Resource {
	return &schema.Resource{
		// ... //
		Schema: map[string]*schema.Schema{
			"uuid": {
				Type:     schema.TypeString,
				Computed: true,
			},

			"name": {
				Type:         schema.TypeString,
				Required:     true,
				ForceNew:     true,
				ValidateFunc: validateName,
			},
			// ... //
		},
	}
}

The Schema attribute Type defines what kind of values users can provide in their configuration for this element. This page describes the schema types that are available. See Schema Behaviors for more information on configuring element behaviors.

Types

The schema attribute Type determines what data is valid in configuring the element, as well as the type of data returned when used in an expression. Schema attributes must be one of the types defined below, and can be loosely categorized as either Primitive or Aggregate types:

Primitive types

Primitive types are simple values such as integers, booleans, and strings. Primitives are stored in the state file as "key": "value" string pairs, where both key and value are string representations.

Aggregate types

Aggregate types form more complicated data types by combining primitive types. Aggregate types may define the types of elements they contain by using the Elem property. If the Elem property is omitted, the default element data type is a string.

Aggregate types are stored in state as a key.index and value pair for each element of the property, with a unique index appended to the key based on the type. An additional item is included in the state (key.# for lists and sets, key.% for maps) that tracks the number of items the property contains.

Primitive Types

TypeBool

Data structure: bool

Example: true or false

Schema example:

"encrypted": {
  Type:     schema.TypeBool,
},

Configuration example:

resource "example_volume" "ex" {
  encrypted = true
}

State representation:

"encrypted": "true",

TypeInt

Data structure: int

Example: -9, 0, 1, 2, 9

Schema example:

"cores": {
  Type:     schema.TypeInt,
},

Configuration example:

resource "example_compute_instance" "ex" {
  cores = 16
}

State representation:

"cores": "16",

TypeFloat

Data structure: float64

Example: 1.0, 7.19009

Schema example:

"price": {
  Type:     schema.TypeFloat,
},

Configuration example:

resource "example_spot_request" "ex" {
  price = 0.37
}

State representation:

"price": "0.37",

TypeString

Data structure: string

Example: "Hello, world!"

Schema example:

"name": {
  Type:     schema.TypeString,
},

Configuration example:

resource "example_spot_request" "ex" {
  description = "Managed by Terraform"
}

State representation:

"description": "Managed by Terraform",

Date & Time Data

TypeString is also used for date/time data; the preferred format is RFC 3339 (you can use the provided validation function).

Example: 2006-01-02T15:04:05+07:00

Schema example:

"expiration": {
  Type:         schema.TypeString,
  ValidateFunc: validation.IsRFC3339Time,
},

Configuration example:

resource "example_resource" "ex" {
  expiration = "2006-01-02T15:04:05+07:00"
}

State representation:

"expiration": "2006-01-02T15:04:05+07:00",

Aggregate Types

TypeMap

Data structure: map: map[string]interface{}

Example: key = value

A key-based map (also known as a dictionary), with string keys and values defined by the Elem property.

~> NOTE: Using the Elem block to define specific keys for the map is currently not possible. A potential workaround would be to confirm the required keys are set when expanding the Map object inside the resource code.
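
A sketch of that workaround: a hypothetical expandTags helper that rejects configurations missing a required key while converting the raw map. The helper name and the required env key are illustrative, and the fmt package is assumed to be imported:

// expandTags converts the raw value from d.Get("tags") into a
// map[string]string, verifying that the required "env" key is present.
func expandTags(raw map[string]interface{}) (map[string]string, error) {
  if _, ok := raw["env"]; !ok {
    return nil, fmt.Errorf("tags must contain an %q key", "env")
  }
  tags := make(map[string]string, len(raw))
  for k, v := range raw {
    tags[k] = v.(string)
  }
  return tags, nil
}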

Schema example:

"tags": {
  Type:     schema.TypeMap,
  Elem: &schema.Schema{
    Type: schema.TypeString,
  },
},

Configuration example:

resource "example_compute_instance" "ex" {
  tags = {
    env  = "development"
    name = "example tag"
  }
}

State representation:

TypeMap items are stored in state with the key as the index. The count of items in a map is denoted by the % index:

"tags.%": "2",
"tags.env": "development",
"tags.name": "example tag",

TypeList

Data structure: Slice: []interface{}

Example: []interface{"2", "3", "4"}

Used to represent an ordered collection of items, where the order the items are presented can impact the behavior of the resource being modeled. An example of ordered items would be network routing rules, where rules are examined in the order they are given until a match is found. The items are all of the same type defined by the Elem property.

Schema example:

"termination_policies": {
  Type:     schema.TypeList,
  Elem: &schema.Schema{
    Type: schema.TypeString,
  },
},

Configuration example:

resource "example_compute_instance" "ex" {
  termination_policies = ["OldestInstance","ClosestToNextInstanceHour"]
}

State representation:

TypeList items are stored in state in a zero-based index data structure. For example, a list attribute named name_servers with four elements would be stored as:

"name_servers.#": "4",
"name_servers.0": "ns-1508.awsdns-60.org",
"name_servers.1": "ns-1956.awsdns-52.co.uk",
"name_servers.2": "ns-469.awsdns-58.com",
"name_servers.3": "ns-564.awsdns-06.net",

TypeSet

Data structure: *schema.Set

Example: []string{"one", "two", "three"}

TypeSet implements set behavior and is used to represent an unordered collection of items, meaning the order in which items are specified does not need to be consistent and has no impact on the behavior of the resource.

The elements of a set can be any of the other types allowed by Terraform, including another schema. Set items cannot be repeated.
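
Item uniqueness is determined by hashing each element. For a set of plain strings, the SDK's schema.HashString can be supplied through the Set field, as in this minimal sketch (the zones attribute is illustrative):

"zones": {
  Type:     schema.TypeSet,
  Optional: true,
  Elem:     &schema.Schema{Type: schema.TypeString},
  Set:      schema.HashString,
},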

Schema example:

"ingress": {
  Type:     schema.TypeSet,
  Elem: &schema.Resource{
    Schema: map[string]*schema.Schema{
      "from_port": {
        Type:     schema.TypeInt,
        Required: true,
      },

      "to_port": {
        Type:     schema.TypeInt,
        Required: true,
      },

      "protocol": {
        Type:      schema.TypeString,
        Required:  true,
        StateFunc: protocolStateFunc,
      },

      "cidr_blocks": {
        Type:     schema.TypeList,
        Optional: true,
        Elem: &schema.Schema{
          Type: schema.TypeString,
        },
      },
    },
  },
}

Configuration example:

resource "example_security_group" "ex" {
  name        = "sg_test"
  description = "managed by Terraform"

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 9000
    cidr_blocks = ["10.0.0.0/8"]
  }

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 8000
    cidr_blocks = ["0.0.0.0/0", "10.0.0.0/8"]
  }
}

State representation:

TypeSet items are stored in state with an index value calculated from the hash of the element's attributes.

"ingress.#": "2",
"ingress.1061987227.cidr_blocks.#": "1",
"ingress.1061987227.cidr_blocks.0": "10.0.0.0/8",
"ingress.1061987227.description": "",
"ingress.1061987227.from_port": "80",
"ingress.1061987227.ipv6_cidr_blocks.#": "0",
"ingress.1061987227.protocol": "tcp",
"ingress.1061987227.security_groups.#": "0",
"ingress.1061987227.self": "false",
"ingress.1061987227.to_port": "9000",
"ingress.493694946.cidr_blocks.#": "2",
"ingress.493694946.cidr_blocks.0": "0.0.0.0/0",
"ingress.493694946.cidr_blocks.1": "10.0.0.0/8",
"ingress.493694946.description": "",
"ingress.493694946.from_port": "80",
"ingress.493694946.ipv6_cidr_blocks.#": "0",
"ingress.493694946.protocol": "tcp",
"ingress.493694946.security_groups.#": "0",
"ingress.493694946.self": "false",
"ingress.493694946.to_port": "8000",

Next Steps

Check out Schema Behaviors to learn how to customize each schema element's behavior.


41-testing-index


page_title: Plugin Development - Testing description: |- Learn how to write successful acceptance and unit tests for Terraform plugins.

Testing Terraform Plugins

Here we cover information needed to write successful tests for Terraform Plugins. Tests are a vital part of the Terraform ecosystem, verifying we can deliver on our mission to safely and predictably create, change, and improve infrastructure. Documentation for Terraform tests is broken into the categories briefly described below. Each category is covered in more detail on the matching page, linked in the left navigation.

-> Note: Recent versions of Terraform CLI also support developer overrides in the CLI configuration, which can be useful for manually testing providers. The acceptance testing framework uses real Terraform CLI executions, so developer overrides are only recommended as a last resort option for missing functionality.

Acceptance Tests

In order to deliver on our promise to be safe and predictable, we need to be able to easily and routinely verify that Terraform Plugins produce the expected outcome. The most common usage of an acceptance test is in Terraform Providers, where each Resource is tested with configuration files and the resulting infrastructure is verified. Terraform includes a framework for constructing acceptance tests that imitate the execution of one or more steps of applying one or more configuration files, allowing multiple scenarios to be tested.

It’s important to reiterate that acceptance tests in resources create actual cloud infrastructure, with possible expenses incurred, and are the responsibility of the user running the tests. Creating real infrastructure in tests verifies the described behavior of Terraform Plugins in real world use cases against the actual APIs, and verifies both local state and remote values match. Acceptance tests require a network connection and often require credentials to access an account for the given API. When writing and testing plugins, it is highly recommended to use an account dedicated to testing, to ensure no infrastructure is created in error in any environment that cannot be completely and safely destroyed.

HashiCorp runs nightly acceptance tests of providers found in the Terraform Providers GitHub Organization to ensure each Provider is working correctly.

For a given plugin, Acceptance Tests can be run from the root of the project by using a common make task:

$ make testacc

See Acceptance Testing to learn more.

Unit Tests

Testing plugin code in small, isolated units is distinct from Acceptance Tests, and does not require network connections. Unit tests are commonly used for testing helper methods that expand or flatten API response data into data structures for storage into state by Terraform. This section covers the specifics of writing Unit Tests for Terraform Plugin code.

For a given plugin, Unit Tests can be run from the root of the project by using a common make task:

$ make test

See Unit Testing to learn more.

Next Steps

See the navigation on the left of this page for documentation and guides on writing tests for Terraform Plugins.


42-testing-testing-api


page_title: Plugin Development - Testing API description: |- Plugin Development is a section for content dedicated to developing Plugins to extend Terraform's core offering.

Testing API


42-testing-testing-patterns


page_title: Plugin Development - Testing Patterns description: |- Plugin Development is a section for content dedicated to developing Plugins to extend Terraform's core offering.

Testing Patterns


42-testing-unit-testing


page_title: Plugin Development - Unit Testing description: |- Unit tests are commonly used for testing helper methods that expand or flatten API responses into data structures that Terraform stores as state.

Unit Testing

Testing plugin code in small, isolated units is distinct from Acceptance Tests, and does not require network connections. Unit tests are commonly used for testing helper methods that expand or flatten API responses into data structures for storage into state by Terraform. This section covers the specifics of writing Unit Tests for Terraform Plugin code.

The procedure for writing unit tests for Terraform follows the same setup and conventions of writing any Go unit tests. We recommend naming tests to follow the same convention as our acceptance tests, Test<Provider>_<Test Name>. For more information on Go tests, see the official Golang docs on testing.

Below is an example unit test used in flattening AWS security group rules, demonstrating a typical flattener-type method that is commonly used to convert structures returned from APIs into the data structures Terraform stores in state. This example is truncated for brevity, but you can see the full test in aws/structure_test.go in the Terraform AWS Provider repository on GitHub.

func TestFlattenSecurityGroups(t *testing.T) {
	cases := []struct {
		ownerId  *string
		pairs    []*ec2.UserIdGroupPair
		expected []*GroupIdentifier
	}{
		// simple, no user id included
		{
			ownerId: aws.String("user1234"),
			pairs: []*ec2.UserIdGroupPair{
				&ec2.UserIdGroupPair{
					GroupId: aws.String("sg-12345"),
				},
			},
			expected: []*GroupIdentifier{
				&GroupIdentifier{
					GroupId: aws.String("sg-12345"),
				},
			},
		},
		// include the owner id, but keep it consistent with the same account.
		// Tests the EC2 Classic situation
		{
			ownerId: aws.String("user1234"),
			pairs: []*ec2.UserIdGroupPair{
				&ec2.UserIdGroupPair{
					GroupId: aws.String("sg-12345"),
					UserId:  aws.String("user1234"),
				},
			},
			expected: []*GroupIdentifier{
				&GroupIdentifier{
					GroupId: aws.String("sg-12345"),
				},
			},
		},

		// include the owner id, but from a different account. This reflects
		// EC2 Classic when referring to groups by name
		{
			ownerId: aws.String("user1234"),
			pairs: []*ec2.UserIdGroupPair{
				&ec2.UserIdGroupPair{
					GroupId:   aws.String("sg-12345"),
					GroupName: aws.String("somegroup"), // GroupName is only included in Classic
					UserId:    aws.String("user4321"),
				},
			},
			expected: []*GroupIdentifier{
				&GroupIdentifier{
					GroupId:   aws.String("sg-12345"),
					GroupName: aws.String("user4321/somegroup"),
				},
			},
		},
	}

	for _, c := range cases {
		out := flattenSecurityGroups(c.pairs, c.ownerId)
		if !reflect.DeepEqual(out, c.expected) {
			t.Fatalf("Error matching output and expected: %#v vs %#v", out, c.expected)
		}
	}
}

49.1-testing-acceptance_tests-index


page_title: Plugin Development - Acceptance Testing description: |- Terraform includes a framework for constructing acceptance tests that imitate applying one or more configuration files.

Acceptance Tests

In order to deliver on our promise to be safe and predictable, we need to be able to easily and routinely verify that Terraform Plugins produce the expected outcome. The most common usage of an acceptance test is in Terraform Providers, where each Resource is tested with configuration files and the resulting infrastructure is verified. Terraform includes a framework for constructing acceptance tests that imitate the execution of one or more steps of applying one or more configuration files, allowing multiple scenarios to be tested.

Terraform acceptance tests use real Terraform configurations to exercise the code in real plan, apply, refresh, and destroy life cycles. When run from the root of a Terraform Provider codebase, Terraform’s testing framework compiles the current provider in-memory and executes the provided configuration in developer defined steps, creating infrastructure along the way. At the conclusion of all the steps, Terraform automatically destroys the infrastructure. It’s important to note that during development, it’s possible for Terraform to leave orphaned or “dangling” resources behind, depending on the correctness of the code in development. The testing framework provides means to validate all resources are destroyed, alerting developers if any fail to destroy. It is the developer's responsibility to clean up any dangling resources left over from testing and development.

How Acceptance Tests Work

Provider acceptance tests use a Terraform CLI binary to run real Terraform commands. The goal is to approximate using the provider with Terraform in production as closely as possible.

Terraform Core and Terraform Plugins act as gRPC client and server, implemented using HashiCorp's go-plugin system (see the RPC Plugin Model section of the Terraform Core documentation). When go test is run, the SDK's acceptance test framework starts a plugin server in the same process as the Go test framework. This plugin server runs for the duration of the test case, and each Terraform command (terraform plan, terraform apply, etc.) creates a client that reattaches to this server.

Real-world Terraform usage requires a config file and Terraform working directory on the local filesystem. The framework uses the internal/plugintest package to manage temporary directories and files during test runs. This library is not intended for use directly by provider developers.

While the test framework provides a reasonable simulation of real-world usage, there are some differences, the major one being in the lifecycle of the plugin gRPC server. During normal Terraform operation, the plugin server starts and stops once per graph walk, of which there may be several during one Terraform command. The acceptance test framework, however, maintains one plugin gRPC server for the duration of each test case. In theory, it is possible for providers to carry internal state between operations during tests - but providers would have to go out of their way (and the SDK's public API) to do this.

Test files

Terraform follows many of the Go programming language conventions with regards to testing, with both acceptance tests and unit tests being placed in a file that matches the file under test, with an added _test.go suffix. Here’s an example file structure:

terraform-plugin-example/
├── provider.go
├── provider_test.go
├── example/
│   ├── resource_example_compute.go
│   ├── resource_example_compute_test.go

To create an acceptance test in the example resource_example_compute_test.go file, the function name must begin with TestAccXxx, and have the following signature:

func TestAccXxx(*testing.T)

Requirements and Recommendations

Acceptance tests have the following requirements:

  • Go: The most recent stable version.
  • Terraform CLI: Version 0.12.26 or later.
  • Provider Access: Network or system access to the provider and any resources being tested.
  • Provider Credentials: Authorized credentials to the provider and any resources being tested.
  • TF_ACC Environment Variable: Set to any value. Prevents developers from incurring unintended charges when running other Go tests.

We also recommend the following when running acceptance tests:

  • Separate Account: Use a separate provider account or namespace for acceptance testing. This prevents Terraform from unexpectedly modifying or destroying infrastructure due to code or testing issues.
  • Previous Terraform CLI Installation: Install Terraform CLI either into the operating system PATH or use the TF_ACC_TERRAFORM_PATH environment variable prior to running acceptance tests. Otherwise, the testing framework will download and install the latest Terraform CLI version into a temporary directory for every test invocation. Refer to the Terraform CLI Installation Behaviors section for details.

Each provider may have additional requirements and setup recommendations. Refer to the provider's codebase for more details.

Terraform CLI Installation Behaviors

The testing framework implements the following Terraform CLI discovery and installation behaviors:

  • If the TF_ACC_TERRAFORM_PATH environment variable is set, the framework will use that Terraform CLI binary if it exists and is executable. If the framework cannot find the binary or it is not executable, the framework returns an error unless the TF_ACC_TERRAFORM_VERSION environment variable is also set.
  • If the TF_ACC_TERRAFORM_VERSION environment variable is set, the framework will install and use that Terraform CLI version.
  • If both the TF_ACC_TERRAFORM_PATH and TF_ACC_TERRAFORM_VERSION environment variables are unset, the framework will search for the Terraform CLI binary on the operating system PATH. If the framework cannot find a binary there, it installs the latest available Terraform CLI version.

Refer to the Environment Variables section for more details about behaviors and valid configurations.

Running Acceptance Tests

Ensure that the acceptance testing requirements are met and then use the go test command to run acceptance tests. You can run the acceptance tests on any environment capable of running go test, such as a local workstation command line or a continuous integration runner like GitHub Actions.

~> Note: Acceptance tests typically create and destroy actual infrastructure resources, possibly incurring expenses during or after the test duration.

Command Line Workflow

Run acceptance testing with the command line of any workstation. Use these instructions as the basis for other environments such as continuous integration runners.

The following example will execute all available acceptance tests in a provider codebase:

TF_ACC=1 go test -v ./...

Some provider codebases also implement a Makefile with a testacc target, which will set TF_ACC and other testing flags automatically.

The following is an example Makefile configuration:

testacc:
  TF_ACC=1 go test -v ./...

The Makefile configuration lets developers use the following command to run acceptance tests:

make testacc

GitHub Actions Workflow

If using GitHub, run acceptance testing via GitHub Actions. Other continuous integration runners, while not exhaustively documented, are also supported.

Ensure the GitHub Organization settings for GitHub Actions and GitHub Repository settings for GitHub Actions allows running workflows and allows the actions/checkout, actions/setup-go, and hashicorp/setup-terraform actions.

Create a GitHub Actions workflow file, such as .github/workflows/test.yaml, that runs the unit and acceptance tests, as in the examples below.

Use the matrix strategy for more advanced configuration, such as running acceptance testing against multiple Terraform CLI versions.

The following example workflow runs acceptance testing for the provider using the latest patch versions of Go 1.17 and Terraform CLI 1.1:

name: Terraform Provider Tests

on:
  pull_request:
    paths:
      - '.github/workflows/test.yaml'
      - '**.go'

permissions:
  # Permission for checking out code
  contents: read

jobs:
  acceptance:
    name: Acceptance Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '1.17'
      - uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: '1.1.*'
          terraform_wrapper: false
      - run: go test -v -cover ./...
        env:
          TF_ACC: '1'
  unit:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '1.17'
      - run: go test -v -cover ./...

The following example workflow runs acceptance testing for the provider using the latest patch versions of Go 1.17 and Terraform CLI 0.12 through 1.1:

name: Terraform Provider Tests

on:
  pull_request:
    paths:
      - '.github/workflows/test.yaml'
      - '**.go'

permissions:
  # Permission for checking out code
  contents: read

jobs:
  acceptance:
    name: Acceptance Tests (Terraform ${{ matrix.terraform-version }})
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        terraform-version:
          - '0.12.*'
          - '0.13.*'
          - '0.14.*'
          - '0.15.*'
          - '1.0.*'
          - '1.1.*'
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '1.17'
      - uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: ${{ matrix.terraform-version }}
          terraform_wrapper: false
      - run: go test -v -cover ./...
        env:
          TF_ACC: '1'
  unit:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: '1.17'
      - run: go test -v -cover ./...

Environment Variables

A number of environment variables are available to control aspects of acceptance test execution.

| Environment Variable Name | Default | Description |
| --- | --- | --- |
| TF_ACC | N/A | Set to any value to enable acceptance testing via the helper/resource.ParallelTest() and helper/resource.Test() functions. |
| TF_ACC_LOG_PATH | N/A | Set a path for Terraform logs during testing. Refer to TF_LOG_PATH_MASK to configure individual log files per test. |
| TF_ACC_PROVIDER_HOST | registry.terraform.io | Set the hostname of the provider under test, such as example.com in the example.com/myorg/myprovider provider source address. This is only required if any TestStep.Config specifies a provider source address, such as in the terraform configuration block required_providers attribute. |
| TF_ACC_PROVIDER_NAMESPACE | hashicorp | Set the namespace of the provider under test, such as myorg in the registry.terraform.io/myorg/myprovider provider source address. This is only required if any TestStep.Config specifies a provider source address, such as in the terraform configuration block required_providers attribute. |
| TF_ACC_STATE_LINEAGE | N/A | Set to 1 to enable state lineage debug logs, which are normally suppressed during acceptance testing. |
| TF_ACC_TEMP_DIR | Operating system specific via os.TempDir() | Set a temporary directory used for testing files and installing Terraform CLI, if installation is required. |
| TF_ACC_TERRAFORM_PATH | N/A | Set the path to a Terraform CLI binary on the local filesystem to be used during testing. It must be executable. If not found and TF_ACC_TERRAFORM_VERSION is not set, an error is returned. |
| TF_ACC_TERRAFORM_VERSION | N/A | Set the exact version of Terraform CLI to automatically install into TF_ACC_TEMP_DIR. For example, 1.1.6 or v1.0.11. |
| TF_LOG_PATH_MASK | N/A | Set a file path containing the string %s, which is replaced with the test name, to write a separate log file per test. Refer to TF_ACC_LOG_PATH to configure a single log file for all tests. |

Troubleshooting

This section lists common errors encountered during testing.

Unrecognized remote plugin message

terraform failed: exit status 1

        stderr:

        Error: Failed to instantiate provider "random" to obtain schema: Unrecognized remote plugin message: --- FAIL: TestAccResourceID (4.28s)

        This usually means that the plugin is either invalid or simply
        needs to be recompiled to support the latest protocol.

This error indicates that the provider server could not connect to Terraform Core. Verify that the output of terraform version is v0.12.26 or above.

Next Steps

Terraform relies heavily on acceptance tests to ensure we keep our promise of helping users safely and predictably create, change, and improve infrastructure. In the next section we detail how to create "Test Cases", individual acceptance tests using Terraform's testing framework, in order to build and verify real infrastructure. Proceed to Test Cases.


49.2-testing-acceptance_tests-sweepers


page_title: 'Plugin Development - Acceptance Testing: Sweepers' description: >- Acceptance tests provision and verify real infrastructure with Terraform's testing framework. Sweepers clean up leftover infrastructure.

Sweepers

Acceptance tests in Terraform provision and verify real infrastructure using Terraform's testing framework. Ideally all infrastructure created is then destroyed within the lifecycle of a test, however the reality is that there are several situations that can arise where resources created during a test are “leaked”. Leaked test resources are resources created by Terraform during a test, but Terraform either failed to destroy them as part of the test, or the test falsely reported all resources were destroyed after completing the test. Common causes are intermittent errors or failures in vendor APIs, or developer error in the resource code or test.

To address the possibility of leaked resources, Terraform provides a mechanism called sweepers to clean up leftover infrastructure. We will add a file to our folder structure that will invoke the sweeper helper.

terraform-plugin-example/
├── provider.go
├── provider_test.go
├── example/
│   ├── example_sweeper_test.go
│   ├── resource_example_compute.go
│   ├── resource_example_compute_test.go

example_sweeper_test.go

package example

import (
  "testing"

  "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

func TestMain(m *testing.M) {
  resource.TestMain(m)
}

// sharedClientForRegion returns a common provider client configured for the specified region
func sharedClientForRegion(region string) (interface{}, error) {
  ...
  return client, nil
}

resource.TestMain is responsible for parsing the special test flags and invoking the sweepers. Sweepers should be added within the acceptance test file of a resource.

resource_example_compute_test.go

package example

import (
  "fmt"
  "log"
  "strings"
  "testing"

  "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

func init() {
  resource.AddTestSweepers("example_compute", &resource.Sweeper{
    Name: "example_compute",
    F: func(region string) error {
      client, err := sharedClientForRegion(region)
      if err != nil {
        return fmt.Errorf("Error getting client: %s", err)
      }
      conn := client.(*ExampleClient)

      instances, err := conn.DescribeComputeInstances()
      if err != nil {
        return fmt.Errorf("Error getting instances: %s", err)
      }
      for _, instance := range instances {
        if strings.HasPrefix(instance.Name, "test-acc") {
          err := conn.DestroyInstance(instance.ID)

          if err != nil {
            log.Printf("Error destroying %s during sweep: %s", instance.Name, err)
          }
        }
      }
      return nil
    },
  })
}

This example demonstrates adding a sweeper. It is important to note that the string passed to resource.AddTestSweepers is added to a map, so the name must be unique. Also note that there needs to be a way of identifying resources created by Terraform during acceptance tests; a common practice is to prefix all resource names created during acceptance tests with "test-acc" or something similar.

For more complex leaks, sweepers can also specify a list of sweepers that need to be run prior to the one being defined.

resource_example_compute_disk_test.go

package example

import (
  "testing"

  "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
)

func init() {
  resource.AddTestSweepers("example_compute_disk", &resource.Sweeper{
    Name: "example_compute_disk",
    Dependencies: []string{"example_compute"},
    ...
  })
}

The sweepers can be invoked with the common make target sweep:

$ make sweep
WARNING: This will destroy infrastructure. Use only in development accounts.
go test ...
...

49.2-testing-acceptance_tests-testcase


page_title: 'Plugin Development - Acceptance Testing: TestCase' description: |- Acceptance tests are expressed in terms of Test Cases. Each Test Case creates a set of resources then verifies the new infrastructure.

Acceptance Tests: TestCases

Acceptance tests are expressed in terms of Test Cases, each using one or more Terraform configurations designed to create a set of resources under test, and then verify the actual infrastructure created. Terraform's resource package offers a method Test(), accepting two parameters and acting as the entry point to Terraform's acceptance test framework. The first parameter is the standard *testing.T struct from Golang's Testing package, and the second is TestCase, a Go struct that developers use to set up the acceptance tests.

Here’s an example acceptance test. Here the Provider is named Example, and the Resource under test is Widget. The parts of this test are explained below the example.

package example

// example.Widget represents a concrete Go type that represents an API resource
func TestAccExampleWidget_basic(t *testing.T) {
  var widgetBefore, widgetAfter example.Widget
  rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleResourceDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccExampleResource(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetBefore),
        ),
      },
      {
        Config: testAccExampleResource_removedPolicy(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetAfter),
        ),
      },
    },
  })
}

Creating Acceptance Tests Functions

Terraform acceptance tests are declared with the naming pattern TestAccXxx and with the standard Go test function signature of func TestAccXxx(*testing.T). Using the above test as an example:

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  // ...
}

Inside this function we invoke resource.Test() with the *testing.T input and a new testcase object:

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  resource.Test(t, resource.TestCase{
    // ...
  })
}

The majority of acceptance tests will only invoke resource.Test() and exit. If at any point this method encounters an error, either in executing the provided Terraform configurations or subsequent developer defined checks, Test() will invoke the t.Error method of Go’s standard testing framework and the test will fail. A failed test will not halt or otherwise interrupt any other tests currently running.

TestCase Reference API

TestCase offers several fields for developers to use to customize and validate each test, defined below. The source for TestCase can be viewed on godoc.org.

IsUnitTest

Type: bool

Default: false

Required: no

IsUnitTest allows a test to run regardless of the TF_ACC environment variable. This should be used with care, and only for fast tests on local resources (e.g., remote state with a local backend), but it can be used to increase confidence in the correct operation of Terraform without waiting for a full acceptance test run.
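
Example usage (a minimal sketch; the test name and the elided TestCase fields are illustrative):

// File: example/widget_test.go
package example

func TestExampleWidget_localValidation(t *testing.T) {
  resource.Test(t, resource.TestCase{
    IsUnitTest: true,
    // ...
  })
}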

PreCheck

Type: function

Default: nil

Required: no

PreCheck, if non-nil, will be called before any test steps are executed. It is commonly used to verify that required values exist for testing, such as environment variables containing test keys that are used to configure the Provider or Resource under test.

Example usage:

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    // ...
  })
}


// testAccPreCheck validates the necessary test API keys exist
// in the testing environment
func testAccPreCheck(t *testing.T) {
  if v := os.Getenv("EXAMPLE_KEY"); v == "" {
    t.Fatal("EXAMPLE_KEY must be set for acceptance tests")
  }
  if v := os.Getenv("EXAMPLE_SECRET"); v == "" {
    t.Fatal("EXAMPLE_SECRET must be set for acceptance tests")
  }
}

Providers

Type: map[string]*schema.Provider

Required: Yes

Providers is a map of *schema.Provider values with string keys, representing the Providers that will be under test. Only the Providers included in this map will be loaded during the test, so any Provider included in a configuration file for testing must be represented in this map or the test will fail during initialization.

This map is most commonly constructed once in a common init() function of the Provider's main test file, and includes an object of the current Provider type.

Example usage: (note the different files widget_test.go and provider_test.go)

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    // ...
  })
}

// File: example/provider_test.go
package example

var testAccProviders map[string]*schema.Provider
var testAccProvider *schema.Provider

func init() {
  testAccProvider = Provider()
  testAccProviders = map[string]*schema.Provider{
    "example": testAccProvider,
  }
}

CheckDestroy

Type: TestCheckFunc

Default: nil

Required: no

CheckDestroy is called after all test steps have been run and Terraform has run destroy on the remaining state. This allows developers to ensure any resource created is truly destroyed. This method receives the last known Terraform state as input, and commonly uses infrastructure SDKs to query APIs directly to verify the expected objects are no longer found, and should return an error if any resources remain.

Example usage:

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleResourceDestroy,
    // ...
  })
}

// testAccCheckExampleResourceDestroy verifies the Widget
// has been destroyed
func testAccCheckExampleResourceDestroy(s *terraform.State) error {
  // retrieve the connection established in Provider configuration
  conn := testAccProvider.Meta().(*ExampleClient)

  // loop through the resources in state, verifying each widget
  // is destroyed
  for _, rs := range s.RootModule().Resources {
    if rs.Type != "example_widget" {
      continue
    }

    // Retrieve our widget by referencing its state ID for API lookup
    request := &example.DescribeWidgets{
      IDs: []string{rs.Primary.ID},
    }

    response, err := conn.DescribeWidgets(request)
    if err == nil {
      if len(response.Widgets) > 0 && *response.Widgets[0].ID == rs.Primary.ID {
        return fmt.Errorf("Widget (%s) still exists.", rs.Primary.ID)
      }

      // This widget is destroyed; check the remaining resources.
      continue
    }

    // If the error is equivalent to 404 not found, the widget is destroyed.
    // Otherwise return the error
    if !strings.Contains(err.Error(), "Widget not found") {
      return err
    }
  }

  return nil
}

Steps

Type: []TestStep

Required: yes

TestStep is a single apply sequence of a test, done within the context of a state. Multiple TestSteps can be sequenced in a Test to allow testing potentially complex update logic and usage. Basic tests typically contain one to two steps, to verify the resource can be created and subsequently updated, depending on the properties of the resource. In general, simple create/destroy tests will only need one step.

TestSteps are covered in detail in the next section, TestSteps.

Example usage:

// File: example/widget_test.go
package example

func TestAccExampleWidget_basic(t *testing.T) {
  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleResourceDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccExampleResource(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetBefore),
        ),
      },
      {
        Config: testAccExampleResource_removedPolicy(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetAfter),
        ),
      },
    },
  })
}

Next Steps

TestCases are used to verify the features of a given part of a plugin. Each case should represent a scenario of normal usage of the plugin, from simple creation to creating, adding, and removing specific properties. In the next section, TestSteps, we'll detail the Steps portion of TestCase and see how to create these scenarios by iterating on Terraform configurations.


49.2-testing-acceptance_tests-teststep


page_title: 'Plugin Development - Acceptance Testing: TestStep' description: |- TestSteps represent the application of an actual Terraform configuration file to a given state.

Acceptance Tests: TestSteps

TestSteps represent the application of an actual Terraform configuration file to a given state. Each step requires a configuration as input and provides developers several means of validating the behavior of the specific resource under test.

Test Modes

Terraform’s test framework facilitates two distinct modes of acceptance tests, Lifecycle and Import.

Lifecycle mode is the most common mode, and is used for testing plugins by providing one or more configuration files with the same logic as would be used when running terraform apply.

Import mode is used for testing resource functionality to import existing infrastructure into a Terraform statefile, using the same logic as would be used when running terraform import.

An acceptance test’s mode is implicitly determined by the fields provided in the TestStep definition. The applicable fields are defined in the TestStep Reference API.

Steps

Steps is a field within TestCase, the struct used to construct acceptance tests. Each step represents a full terraform apply of a given configuration, followed by zero or more checks (defined later) to verify the application. Each Step is applied in order, and requires its own configuration and optional check functions.

Below is a code example of a lifecycle test that provides two TestStep structs:

package example

// example.Widget represents a concrete Go type that represents an API resource
func TestAccExampleWidget_basic(t *testing.T) {
  var widgetBefore, widgetAfter example.Widget
  rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)

  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleResourceDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccExampleResource(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetBefore),
        ),
      },
      {
        Config: testAccExampleResource_removedPolicy(rName),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleResourceExists("example_widget.foo", &widgetAfter),
        ),
      },
    },
  })
}

In the above example, each TestStep invokes a function to retrieve its desired configuration based on a randomized name; however, an in-line string or constant string would work as well, so long as it contains valid Terraform configuration for the plugin or resource under test. This pattern of first applying and checking a basic configuration, followed by applying a modified configuration with updated or additional checks, is a common way to test update functionality.
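
For reference, such a configuration function is typically a small fmt.Sprintf wrapper over an HCL string; a sketch (the function name matches the example above, but the resource body is illustrative):

// testAccExampleResource returns a minimal widget configuration,
// parameterized by a randomized name to avoid collisions between tests.
func testAccExampleResource(name string) string {
  return fmt.Sprintf(`
resource "example_widget" "foo" {
  name   = %[1]q
  active = true
}
`, name)
}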

Check Functions

After the configuration for a TestStep is applied, Terraform's testing framework provides developers an opportunity to check the results by providing a "Check" function. While it is possible to supply only a single function, it is recommended that you use multiple functions to validate specific information about the results of the terraform apply run in each TestStep. The Check attribute of TestStep is singular, so in order to include multiple checks developers should use either ComposeTestCheckFunc or ComposeAggregateTestCheckFunc (both defined below) to group multiple check functions:

ComposeTestCheckFunc

ComposeTestCheckFunc lets you compose multiple TestCheckFunc functions into a single check. When testing your provider, this lets you decompose your checks into smaller pieces, with individual methods for checking specific attributes. Each check is run in the order provided, and on failure the entire TestCase is stopped and Terraform attempts to destroy any resources created.

Example:

Steps: []resource.TestStep{
  {
    Config: testAccExampleResource(rName),
    Check: resource.ComposeTestCheckFunc(
      // if testAccCheckExampleResourceExists fails to find the resource,
      // the parent TestStep and TestCase fail
      testAccCheckExampleResourceExists("example_widget.foo", &widgetBefore),
      resource.TestCheckResourceAttr("example_widget.foo", "size", "expected size"),
    ),
  },
},

ComposeAggregateTestCheckFunc

ComposeAggregateTestCheckFunc lets you compose multiple TestCheckFunc functions into a single check. Its purpose and usage are identical to ComposeTestCheckFunc; however, each check is run in order even if a previous check failed, collecting the errors returned from any checks and returning a single aggregate error. The entire TestCase is still stopped, and Terraform attempts to destroy any resources created.

Example:

Steps: []resource.TestStep{
  {
    Config: testAccExampleResource(rName),
    Check: resource.ComposeAggregateTestCheckFunc(
      // if testAccCheckExampleResourceExists fails to find the resource,
      // the following TestCheckResourceAttr is still run, with any errors aggregated
      testAccCheckExampleResourceExists("example_widget.foo", &widgetBefore),
      resource.TestCheckResourceAttr("example_widget.foo", "active", "true"),
    ),
  },
},

Builtin check functions

Terraform has several TestCheckFunc functions built in for developers to use for common checks, such as verifying the status and value of a specific attribute in the resulting state. Developers are encouraged to use as many as reasonable to verify the behavior of the plugin/resource, and should combine them with the above mentioned ComposeTestCheckFunc or ComposeAggregateTestCheckFunc functions.

Most builtin functions accept name, key, and/or value fields, derived from the typical Terraform configuration stanzas:

resource "example_widget" "foo" {
  active = true
}

Here the name represents the resource name in state (example_widget.foo), the key represents the attribute to check (active), and value represents the desired value to check against (true). In this case, an equality check would be:

resource.TestCheckResourceAttr("example_widget.foo", "active", "true"),

The full list of functions can be seen in the helper/resource package. Names for these begin with TestCheck... and TestMatch.... The most common checks for non-TypeSet attributes are below.

| Function | Purpose |
| --- | --- |
| TestCheckResourceAttr(name string, key string, value string) | Value equality check |
| TestMatchResourceAttr(name string, key string, regex *regexp.Regexp) | Value regular expression check |
| TestCheckResourceAttrPair(nameFirst string, keyFirst string, nameSecond string, keySecond string) | Value equality across two attributes (usually in different resources) |
| TestCheckResourceAttrSet(name string, key string) | Passes if any value was set |
| TestCheckNoResourceAttr(name string, key string) | Passes if no value was set |
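
As an illustration, the following sketch combines several of these checks in one TestStep. The example_project resource, its id pairing, and the name pattern are hypothetical, used only to show the shape of each call; TestMatchResourceAttr requires importing the regexp package:

Check: resource.ComposeAggregateTestCheckFunc(
  // id is computed by the remote API, so only assert that some value was set
  resource.TestCheckResourceAttrSet("example_widget.foo", "id"),
  // the widget should reference the id of the (hypothetical) example_project
  // resource defined in the same configuration
  resource.TestCheckResourceAttrPair("example_widget.foo", "project", "example_project.bar", "id"),
  // a computed name should match an expected pattern
  resource.TestMatchResourceAttr("example_widget.foo", "name", regexp.MustCompile(`^widget-`)),
),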

For TypeSet attributes, there are some additional functions that accept a * placeholder in attribute keys for indexing into the set.

| Function | Purpose |
| --- | --- |
| TestCheckTypeSetElemAttr(name string, key string, value string) | Value is contained in the set |
| TestCheckTypeSetElemAttrPair(nameFirst string, keyFirst string, nameSecond string, keySecond string) | Value from another attribute is contained in the set (usually in different resources) |
| TestCheckTypeSetElemNestedAttrs(name string, key string, values map[string]string) | Map of values is contained in the set (usually checking multiple attributes of a block) |
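
Because TypeSet elements are stored at non-deterministic indexes in state, these functions match elements via the * placeholder rather than a fixed index. A brief sketch, assuming a hypothetical tags set attribute on example_widget:

resource "example_widget" "foo" {
  tags = ["alpha", "beta"]
}

Check: resource.ComposeTestCheckFunc(
  // the set contains two elements, stored at unknown indexes
  resource.TestCheckResourceAttr("example_widget.foo", "tags.#", "2"),
  // use the * placeholder rather than a fixed index to match each element
  resource.TestCheckTypeSetElemAttr("example_widget.foo", "tags.*", "alpha"),
  resource.TestCheckTypeSetElemAttr("example_widget.foo", "tags.*", "beta"),
),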

All of these functions also accept the below syntax in attribute keys to enable additional behaviors.

| Syntax | Purpose | Example |
| --- | --- | --- |
| .{NUMBER} | List index | TestCheckResourceAttr("example_widget.foo", "some_block.0", "first value") |
| .{KEY} | Map key | TestCheckResourceAttr("example_widget.foo", "some_map.some_key", "map value") |
| .# | Number of elements in list or set | TestCheckResourceAttr("example_widget.foo", "some_list.#", "2") |
| .% | Number of keys in map | TestCheckResourceAttr("example_widget.foo", "some_map.%", "2") |
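
For example, a (hypothetical) configuration containing a list and a map can be verified by combining these syntaxes with the equality check:

resource "example_widget" "foo" {
  some_list = ["first value", "second value"]

  some_map = {
    some_key = "map value"
  }
}

Check: resource.ComposeTestCheckFunc(
  // two elements in the list, addressed by index
  resource.TestCheckResourceAttr("example_widget.foo", "some_list.#", "2"),
  resource.TestCheckResourceAttr("example_widget.foo", "some_list.0", "first value"),
  // one key in the map, addressed by key name
  resource.TestCheckResourceAttr("example_widget.foo", "some_map.%", "1"),
  resource.TestCheckResourceAttr("example_widget.foo", "some_map.some_key", "map value"),
),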

Custom check functions

The Check field of TestStep accepts any function of type TestCheckFunc. Developers are free to write their own check functions to create customized validation for their plugin. Any function matching the TestCheckFunc signature of func(*terraform.State) error can be used individually, or combined with other TestCheckFunc functions using one of the compose functions above.
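
As a minimal sketch, the following hypothetical check (testAccCheckWidgetCount is not part of the SDK) asserts that a fixed number of example_widget resources exist; it only inspects Terraform state, without calling any remote API:

// testAccCheckWidgetCount is a hypothetical custom check that counts the
// example_widget resources in the root module's state and fails if the
// count does not match the expected value
func testAccCheckWidgetCount(expected int) resource.TestCheckFunc {
  return func(s *terraform.State) error {
    count := 0
    for _, rs := range s.RootModule().Resources {
      if rs.Type == "example_widget" {
        count++
      }
    }
    if count != expected {
      return fmt.Errorf("expected %d example_widget resources, found %d", expected, count)
    }
    return nil
  }
}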

It's common to write custom TestCheckFunc functions that use a service SDK directly to verify the identity and properties of resources the test created. These functions can retrieve information via the SDK and provide the results to other TestCheckFunc methods. The example below uses ComposeTestCheckFunc to group a set of TestCheckFunc functions together. The first function, testAccCheckExampleWidgetExists, uses the Example service SDK directly to query for the ID of the widget we have in state. Once found, the result is stored in the widget struct declared at the beginning of the test function. The next check function, testAccCheckExampleWidgetAttributes, receives the updated widget and checks its attributes. The final check, TestCheckResourceAttr, verifies that the same value is stored in state.

func TestAccExampleWidget_basic(t *testing.T) {
  var widget example.WidgetDescription

  resource.Test(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckExampleWidgetDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccExampleWidgetConfig,
        Check: resource.ComposeTestCheckFunc(
          testAccCheckExampleWidgetExists("example_widget.bar", &widget),
          testAccCheckExampleWidgetAttributes(&widget),
          resource.TestCheckResourceAttr("example_widget.bar", "active", "true"),
        ),
      },
    },
  })
}

// testAccCheckExampleWidgetAttributes verifies attributes are set correctly by
// Terraform
func testAccCheckExampleWidgetAttributes(widget *example.WidgetDescription) resource.TestCheckFunc {
  return func(s *terraform.State) error {
    if *widget.Active != true {
      return fmt.Errorf("widget is not active")
    }

    return nil
  }
}

// testAccCheckExampleWidgetExists uses the Example SDK directly to retrieve
// the Widget description, and stores it in the provided
// *example.WidgetDescription
func testAccCheckExampleWidgetExists(resourceName string, widget *example.WidgetDescription) resource.TestCheckFunc {
  return func(s *terraform.State) error {
    // retrieve the resource by name from state
    rs, ok := s.RootModule().Resources[resourceName]
    if !ok {
      return fmt.Errorf("Not found: %s", resourceName)
    }

    if rs.Primary.ID == "" {
      return fmt.Errorf("Widget ID is not set")
    }

    // retrieve the client from the test provider
    client := testAccProvider.Meta().(*ExampleClient)

    response, err := client.DescribeWidgets(&example.DescribeWidgetsInput{
      WidgetIDs: []string{rs.Primary.ID},
    })

    if err != nil {
      return err
    }

    // we expect only a single widget by this ID. If we find zero, or many,
    // then we consider this an error
    if len(response.WidgetDescriptions) != 1 ||
      *response.WidgetDescriptions[0].WidgetID != rs.Primary.ID {
      return fmt.Errorf("Widget not found")
    }

    // store the resulting widget in the *example.WidgetDescription pointer
    *widget = *response.WidgetDescriptions[0]
    return nil
  }
}

Next Steps

Acceptance Testing is an essential approach to validating the implementation of a Terraform Provider. Because tests provision resources against actual APIs, failed or interrupted runs can leave behind real infrastructure that costs money between tests. The reasons for these leaks can vary; regardless, Terraform provides a mechanism known as Sweepers to help keep the testing account clean.
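
As a preview, sweepers are registered with resource.AddTestSweepers and run through a TestMain function via the -sweep flag. Below is a minimal sketch; the sharedClientForRegion helper and the client's DestroyAllWidgets method are hypothetical stand-ins for provider-specific cleanup logic:

func TestMain(m *testing.M) {
  // resource.TestMain wires up the -sweep family of flags and runs
  // registered sweepers before falling back to the normal test run
  resource.TestMain(m)
}

func init() {
  resource.AddTestSweepers("example_widget", &resource.Sweeper{
    Name: "example_widget",
    F: func(region string) error {
      // hypothetical helper returning a configured *ExampleClient
      client, err := sharedClientForRegion(region)
      if err != nil {
        return err
      }
      // hypothetical call that finds and deletes widgets left behind
      // by previous test runs
      return client.DestroyAllWidgets()
    },
  })
}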

