Building Custom Providers with the new Terraform Plugin Framework

Extend Terraform to easily support custom APIs in your environment.

Sean Kane
SuperOrbital Engineer
Flipping bits on the internet since 1992.

Published on March 01, 2023


Introduction

Have you ever wished that you could extend Terraform so that it could support the APIs that you have developed inside your organization? Or maybe you heavily rely on an unusual public API that is not already supported by Terraform.

In this article, we will explore the new Terraform Plugin Framework and how it can be used to build custom Terraform providers that support any API that you need.

The Terraform Plugin Framework is written in Go and is designed to allow the developer to focus on the interface between the provider and the API, while letting the Framework provide all the core features and deal with Terraform-related implementation details. If you are lucky enough to already have a Go client for your API, making a custom Terraform provider is especially straightforward.

Why Build a Custom Terraform Provider

Terraform’s primary job is to make it significantly easier to manage API objects, even when they have complex dependencies on other objects in the environment. A very common use case for Terraform is helping manage dynamic and complex cloud environments.

There are several reasons that a team or organization might want to build a custom provider for Terraform, including:

  • Developing a public provider for the API that you provide to your customers.
  • Creating a private provider for an internal API.
  • Sponsoring a provider for a public API that is either unsupported or poorly-supported by any existing providers.

Let’s take a moment to look at each of these examples.

Public providers for public APIs

Maybe the easiest way to justify the investment in building a custom provider is the case where your company provides an API to your customers, and those customers rely heavily on Terraform to help them build and maintain their complex infrastructure. Under these circumstances, it makes a lot of sense to provide them with a robust and well-tested provider: it adds value for your customers, increases satisfaction, and helps them make better use of all the features that your API provides.

For example, the DNS provider NS1 maintains a public Terraform provider that makes it easy for their customers to manage the NS1 components of their infrastructure.

terraform {
  required_providers {
    ns1 = {
      source = "ns1-terraform/ns1"
    }
  }
}

provider "ns1" {
  apikey = var.ns1_apikey
}

resource "ns1_record" "inventory-api" {
  ttl    = 60
  zone   = "example.com"
  domain = "inventory-api.example.com"
  type   = "CNAME"

  meta = {
    note = "inventory api load balancer"
  }

  answers {
    answer = aws_lb.inventory.dns_name
  }
}

Public providers for public 3rd Party APIs

If you happen to use a 3rd-party API that doesn’t have a Terraform provider, there is often nothing preventing you from creating one to meet whatever needs you have. The ecosystem already contains many open source, community-supported Terraform providers that aim to provide at least basic support for APIs like these.

At the time of this write-up, one example of this is the Slack API. There are currently a few community providers available that cover varying amounts of the Slack API, and they all provide at least basic support for automating Slack resource management.

terraform {
  required_providers {
    slack = {
      source  = "pablovarela/slack"
    }
  }
}

provider "slack" {
  token = var.slack_token
}

resource "slack_conversation" "example" {
  name              = "example-channel"
  topic             = "The topic for the channel"
  is_private        = false
}

It is worth stating clearly that writing a custom Terraform provider does not require you to implement the whole API. A robust, customer-facing provider should aim for coverage that is as complete as possible, but there are plenty of situations where it is helpful simply to have access to a few core components of the API. Using a standard iterative development process, you can expand this support as you need to, while making the provider immediately useful in your current workflow.

Private providers for internal APIs

Terraform is not only for publicly accessible APIs, however. It is also an incredible tool for bridging the gaps that exist between cloud-native systems and internal corporate data. By building a custom Terraform provider that knows how to talk to a unique internal API, like one that contains data about every organizational team in the company, you enable Terraform to use this data to trigger all sorts of actions across the infrastructure. For example, you could imagine modifying cloud permissions as team members change, updating ownership tags with current team names (even through re-orgs), and so on.

We can’t provide a concrete example here, but if we had this fictional Team service, we could use it to help ensure that every cloud object we create is tagged with the current contact information for the owning team, even as the team members and organizational structure change over time.

terraform {
  required_providers {
    aws = {
     source  = "hashicorp/aws"
    }
    team = {
      source  = "example/team"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

provider "team" {
  token = var.team_token
}

data "team" "platform" {
  id = "T13921_platform"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-team-bucket"
  acl    = "private"

  tags = {
    owner             = data.team.platform.name
    emergency_contact = data.team.platform.emergency_contact
  }
}

Getting Started

HashiCorp expects all provider repositories to be named terraform-provider-${API}, so it is important that we name our project appropriately.

For this example, we created a very simple API service called the Inventory service, which allows simple in-memory objects, called items, to be created and managed. At the time of this write-up, each item is made up of a generated unique ID, a mandatory name, and an optional tag.

{
  "id":1000,
  "name":"2022 Mustang Shelby GT500",
  "tag":"USD:79,420"
}

By design, this API is very simple, but it provides the basic operations that we would normally expect, and it can easily be run anywhere. Building a custom Terraform provider for the Inventory service API allows us to discuss how a custom provider can be built without being too distracted by the details of the API that we are interacting with.

Below, we are going to talk through many of the important steps and concepts required to create a custom Terraform provider using the Terraform Plugin Framework. You can find the complete inventory service provider source code on GitHub.

Requirements

Writing the Provider

The first thing we need to do is create a new repo from the hashicorp/terraform-provider-scaffolding-framework template repo and name the new git repo terraform-provider-inventory. Once you have the new repo created, clone it onto your local system.

Next, we need to modify the go.mod file and update the module name, which also defines where our project will eventually be available online. In the module line below, and in the rest of the source code examples, you will typically want to replace myuser with your GitHub user or organization name (e.g. superorbital, in our case).

module github.com/myuser/terraform-provider-inventory

go 1.18

Preparing the Environment

In main.go we will want to update the import block, replacing "github.com/hashicorp/terraform-provider-scaffolding-framework/internal/provider", so that the block now looks like this:

import (
  "context"
  "flag"
  "log"

  "github.com/hashicorp/terraform-plugin-framework/providerserver"
  "github.com/myuser/terraform-provider-inventory/internal/provider"
)

Then, a little further down, you will want to update the line that reads Address: "registry.terraform.io/hashicorp/scaffolding", so that it reflects the correct address for your new provider.

  opts := providerserver.ServeOpts{
    Address: "registry.terraform.io/myuser/inventory",
    Debug:   debug,
  }

When you are developing a Terraform provider, it is often very helpful to tell terraform to use your local copy of the provider, instead of trying to download a published provider from the Terraform Registry.

To do this we need to create a file in our home directory called .terraformrc that contains a provider_installation section that looks something like this:

provider_installation {
  dev_overrides {
    "myuser/inventory" = "/usr/local/go/bin/"
  }
  direct {}
}

The exact directory that you need to specify will depend on your environment, but you can run go env and look at the values for GOPATH and GOROOT to get an idea of where binaries will be installed. In many cases, it will be in ${GOPATH}/bin.
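For example, you can ask go env for just those two values (the paths shown here are placeholders; your output will differ):

$ go env GOPATH GOROOT
/home/myuser/go
/usr/local/go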

You will normally only want the dev_overrides section enabled while you are actively testing your local provider. If you leave it enabled all the time, local terraform runs will never use the official provider. Because of this, terraform will always print a warning like the following whenever the setting is enabled:

│ Warning: Provider development overrides are in effect
│
│ The following provider development overrides are set in the CLI
│ configuration:
│  - myuser/inventory in /usr/local/go/bin/
│
│ The behavior may therefore not match any released version of the provider and
│ applying changes may cause the state to become incompatible with published
│ releases.

Just to make sure that things compile and that the .terraformrc is set up correctly, let’s build and install the provider in its current state.

$ go install .

Assuming that there were no errors, we should be able to find the provider binary in the bin directory used by go install and referenced in the .terraformrc file.

$ ls $GOPATH/bin/terraform-provider-inventory
/usr/local/go/bin/terraform-provider-inventory

If you find it somewhere else, make sure to fix your ${HOME}/.terraformrc file so that it points to the correct directory.

Also, note that if you run the provider directly you will get a message that looks like this:

This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically

This is expected, since this is a plugin and not a standard CLI tool.

Using the Inventory Service

This is a good time to spin up a local copy of the Inventory service. You can do this a number of ways. Pick whichever one works best for you and your environment.

  • Download a compiled binary, rename it to inventory-service, ensure that it has executable permissions, and then run it in another terminal session.
$ mv ./inventory-service_darwin_amd64 ./inventory-service
$ chmod u+rwx ./inventory-service
$ inventory-service -port 8080
  • Run the container image with Docker.
$ docker container run --name inventory --rm -d -p 8080:8080 superorbital/inventory-service
  • Download the Docker Compose file and bring the service up with Docker Compose.
$ wget https://raw.githubusercontent.com/superorbital/inventory-service/main/docker-compose.yaml
$ docker compose up -d

Once you have a local copy running you can run some basic tests using curl or something similar, as seen below:

  • Confirm that the initial list is empty.
$ curl -X GET 127.0.0.1:8080/items
[]
  • Add a new item.
$ curl -H 'Content-Type: application/json' -X POST -d '{"name":"1908 Harley-Davidson", "tag":"USD:935,000"}' 127.0.0.1:8080/items
{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}
  • Read an existing item.
$ curl -X GET 127.0.0.1:8080/items/1000
{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}
  • List all of the items.
$ curl -X GET 127.0.0.1:8080/items
[{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}]
  • Modify an existing item.
$ curl -H 'Content-Type: application/json' -X PUT -d '{"name":"1908 Harley-Davidson", "tag":"USD:975,000"}' 127.0.0.1:8080/items/1000
[{"id":1000,"name":"1908 Harley-Davidson", "tag":"USD:975,000"}]
  • Delete an existing item.
$ curl -H 'Content-Type: application/json' -X DELETE 127.0.0.1:8080/items/1000
  • Confirm that the list is empty again.
$ curl -X GET 127.0.0.1:8080/items
[]

We are going to utilize the Terraform Plugin Framework and the Inventory service API client to enable our new Terraform provider to perform all of these operations, so that we can use it to manage items within our API.

Provider: Connecting to the API

At this point we are ready to start connecting our provider to our API.

Go ahead and navigate to ./internal/provider/ and remove the example files that are currently there.

We will start by creating the provider.go file, whose primary purpose is to instantiate the provider and set up a client that can interface with the API.

Full Source Code: provider.go
package provider

import (
 "context"
 "os"

 "github.com/superorbital/inventory-service/client"

 "github.com/hashicorp/terraform-plugin-framework/datasource"
 "github.com/hashicorp/terraform-plugin-framework/path"
 "github.com/hashicorp/terraform-plugin-framework/provider"
 "github.com/hashicorp/terraform-plugin-framework/provider/schema"
 "github.com/hashicorp/terraform-plugin-framework/resource"
 "github.com/hashicorp/terraform-plugin-framework/types"
 "github.com/hashicorp/terraform-plugin-log/tflog"
)

// Ensure the implementation satisfies the expected interfaces.
var (
 _ provider.Provider = &inventoryProvider{}
)

// New is a helper function to simplify provider server and testing implementation.
func New(version string) func() provider.Provider {
 return func() provider.Provider {
  return &inventoryProvider{
   version: version,
  }
 }
}

// inventoryProvider is the provider implementation.
type inventoryProvider struct {
 // version is set to the provider version on release, "dev" when the
 // provider is built and ran locally, and "test" when running acceptance
 // testing.
 version string
}

// inventoryProviderModel maps provider schema data to a Go type.
type inventoryProviderModel struct {
 Host types.String `tfsdk:"host"`
 Port types.String `tfsdk:"port"`
}

// Metadata returns the provider type name.
func (p *inventoryProvider) Metadata(_ context.Context, _ provider.MetadataRequest, resp *provider.MetadataResponse) {
 resp.TypeName = "inventory"
 resp.Version = p.version
}

// Schema defines the provider-level schema for configuration data.
func (p *inventoryProvider) Schema(_ context.Context, _ provider.SchemaRequest, resp *provider.SchemaResponse) {
 resp.Schema = schema.Schema{
  Attributes: map[string]schema.Attribute{
   "host": schema.StringAttribute{
    Optional:    true,
    Description: "The hostname or IP address for the inventory service endpoint. May also be provided via the INVENTORY_HOST environment variable.",
   },
   "port": schema.StringAttribute{
    Optional:    true,
    Description: "The port to connect to. May also be provided via the INVENTORY_PORT environment variable.",
   },
  },
  Blocks:      map[string]schema.Block{},
  Description: "Interface with the Inventory service API.",
 }
}

// Configure prepares an Inventory API client for data sources and resources.
//
//gocyclo:ignore
func (p *inventoryProvider) Configure(ctx context.Context, req provider.ConfigureRequest, resp *provider.ConfigureResponse) {
 tflog.Info(ctx, "Configuring Inventory client")

 // Retrieve provider data from configuration
 var config inventoryProviderModel
 diags := req.Config.Get(ctx, &config)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 // If practitioner provided a configuration value for any of the
 // attributes, it must be a known value.

 if config.Host.IsUnknown() {
  resp.Diagnostics.AddAttributeError(
   path.Root("host"),
   "Unknown Inventory service Host",
   "The provider cannot create the Inventory API client as there is an unknown configuration value for the Inventory API host. "+
    "Either target apply the source of the value first, set the value statically in the configuration, or use the INVENTORY_HOST environment variable.",
  )
 }

 if config.Port.IsUnknown() {
  resp.Diagnostics.AddAttributeError(
   path.Root("port"),
   "Unknown Inventory service Port",
   "The provider cannot create the Inventory API client as there is an unknown configuration value for the Inventory API port. "+
    "Either target apply the source of the value first, set the value statically in the configuration, or use the INVENTORY_PORT environment variable.",
  )
 }

 if resp.Diagnostics.HasError() {
  return
 }

 // Default values to environment variables, but override
 // with Terraform configuration value if set.

 host := os.Getenv("INVENTORY_HOST")
 port := os.Getenv("INVENTORY_PORT")

 if !config.Host.IsNull() {
  host = config.Host.ValueString()
 }

 if !config.Port.IsNull() {
  port = config.Port.ValueString()
 }

 // If any of the expected configurations are missing, return
 // errors with provider-specific guidance.

 if host == "" {
  resp.Diagnostics.AddAttributeWarning(
   path.Root("host"),
   "Missing Inventory API Host (using default value: 127.0.0.1)",
   "The provider is using a default value as there is a missing or empty value for the Inventory API host. "+
    "Set the host value in the configuration or use the INVENTORY_HOST environment variable. "+
    "If either is already set, ensure the value is not empty.",
  )
  host = "127.0.0.1"
 }

 if port == "" {
  resp.Diagnostics.AddAttributeWarning(
   path.Root("port"),
   "Missing Inventory API port (using default value: 8080)",
   "The provider is using a default value as there is a missing or empty value for the Inventory API host. "+
    "Set the host value in the configuration or use the INVENTORY_PORT environment variable. "+
    "If either is already set, ensure the value is not empty.",
  )
  port = "8080"
 }

 if resp.Diagnostics.HasError() {
  return
 }

 tflog.Debug(ctx, "Creating Inventory client")

 // Instantiate the client that we will use to talk to the Inventory server
 serverURL := "http://" + host + ":" + port + "/"
 api, err := client.NewClient(serverURL)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Inventory API Client",
   "An unexpected error occurred when creating the Inventory API client. "+
    "If the error is not clear, please contact the provider developers.\n\n"+
    "Inventory Client Error: "+err.Error(),
  )
  return
 }
 // Test that we have some basic connectivity
 _, err = api.FindItemById(ctx, int64(1))
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Inventory API Client",
   "An unexpected error occurred when creating the Inventory API client. "+
    "If the error is not clear, please contact the provider developers.\n\n"+
    "Inventory Client Error: "+err.Error(),
  )
  return
 }

 // Make the Inventory client available during DataSource and Resource
 // type Configure methods.
 resp.DataSourceData = api
 resp.ResourceData = api

 tflog.Info(ctx, "Configured Inventory client", map[string]any{"success": true})
}

// DataSources defines the data sources implemented in the provider.
func (p *inventoryProvider) DataSources(_ context.Context) []func() datasource.DataSource {
 //return []func() datasource.DataSource{
 // NewItemDataSource,
 //}
 return nil
}

// Resources defines the resources implemented in the provider.
func (p *inventoryProvider) Resources(_ context.Context) []func() resource.Resource {
 //return []func() resource.Resource{
 // NewItemResource,
 //}
 return nil
}

The import keyword is used to pull in all the libraries that we will need from the standard Go library, the HashiCorp terraform-plugin-framework, and the Inventory service client.

In the Metadata function we start by setting resp.TypeName to inventory and resp.Version to the provider version number. TypeName will usually match the name of the provider.

func (p *inventoryProvider) Metadata(_ context.Context, _ provider.MetadataRequest, resp *provider.MetadataResponse) {
 resp.TypeName = "inventory"
 resp.Version = p.version
}

Provider: Schema

The inventoryProviderModel struct and the Schema function are used to start defining what arguments we plan to support in the HCL provider block for this plugin.

Our struct is defined like this:

type inventoryProviderModel struct {
 Host types.String `tfsdk:"host"`
 Port types.String `tfsdk:"port"`
}

and the Schema function looks like this:

func (p *inventoryProvider) Schema(_ context.Context, _ provider.SchemaRequest, resp *provider.SchemaResponse) {
 resp.Schema = schema.Schema{
  Attributes: map[string]schema.Attribute{
   "host": schema.StringAttribute{
    Optional:    true,
    Description: "The hostname or IP address for the inventory service endpoint. May also be provided via the INVENTORY_HOST environment variable.",
   },
   "port": schema.StringAttribute{
    Optional:    true,
    Description: "The port to connect to. May also be provided via the INVENTORY_PORT environment variable.",
   },
  },
  Blocks:      map[string]schema.Block{},
  Description: "Interface with the Inventory service API.",
 }
}

The resp.Schema value tells Terraform that our provider will support optional host and port arguments, while the struct gives us an easy place to store these values.
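With this schema in place, a practitioner can configure the provider explicitly in HCL, or leave the block empty and rely on the INVENTORY_HOST and INVENTORY_PORT environment variables instead. For example:

provider "inventory" {
  host = "127.0.0.1"
  port = "8080"
}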

Provider: Configuration

The Configure function is the core of the provider implementation. It assembles the provider’s configuration by reading in any existing provider block in the HCL files, checking for any relevant environment variables, and setting default values where necessary. It then creates a new client for the Inventory service and quickly tests the connection to ensure that we have a valid configuration. This code makes use of the Inventory service client library that ships with the service.

provider.go func: Configure()
func (p *inventoryProvider) Configure(ctx context.Context, req provider.ConfigureRequest, resp *provider.ConfigureResponse) {
 tflog.Info(ctx, "Configuring Inventory client")

 // Retrieve provider data from configuration
 var config inventoryProviderModel
 diags := req.Config.Get(ctx, &config)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 // If practitioner provided a configuration value for any of the
 // attributes, it must be a known value.

 if config.Host.IsUnknown() {
  resp.Diagnostics.AddAttributeError(
   path.Root("host"),
   "Unknown Inventory service Host",
   "The provider cannot create the Inventory API client as there is an unknown configuration value for the Inventory API host. "+
    "Either target apply the source of the value first, set the value statically in the configuration, or use the INVENTORY_HOST environment variable.",
  )
 }

 if config.Port.IsUnknown() {
  resp.Diagnostics.AddAttributeError(
   path.Root("port"),
   "Unknown Inventory service Port",
   "The provider cannot create the Inventory API client as there is an unknown configuration value for the Inventory API port. "+
    "Either target apply the source of the value first, set the value statically in the configuration, or use the INVENTORY_PORT environment variable.",
  )
 }

 if resp.Diagnostics.HasError() {
  return
 }

 // Default values to environment variables, but override
 // with Terraform configuration value if set.

 host := os.Getenv("INVENTORY_HOST")
 port := os.Getenv("INVENTORY_PORT")

 if !config.Host.IsNull() {
  host = config.Host.ValueString()
 }

 if !config.Port.IsNull() {
  port = config.Port.ValueString()
 }

 // If any of the expected configurations are missing, return
 // errors with provider-specific guidance.

 if host == "" {
  resp.Diagnostics.AddAttributeWarning(
   path.Root("host"),
   "Missing Inventory API Host (using default value: 127.0.0.1)",
   "The provider is using a default value as there is a missing or empty value for the Inventory API host. "+
    "Set the host value in the configuration or use the INVENTORY_HOST environment variable. "+
    "If either is already set, ensure the value is not empty.",
  )
  host = "127.0.0.1"
 }

 if port == "" {
  resp.Diagnostics.AddAttributeWarning(
   path.Root("port"),
   "Missing Inventory API port (using default value: 8080)",
   "The provider is using a default value as there is a missing or empty value for the Inventory API host. "+
    "Set the host value in the configuration or use the INVENTORY_PORT environment variable. "+
    "If either is already set, ensure the value is not empty.",
  )
  port = "8080"
 }

 if resp.Diagnostics.HasError() {
  return
 }

 tflog.Debug(ctx, "Creating Inventory client")

 // Instantiate the client that we will use to talk to the Inventory server
 serverURL := "http://" + host + ":" + port + "/"
 api, err := client.NewClient(serverURL)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Inventory API Client",
   "An unexpected error occurred when creating the Inventory API client. "+
    "If the error is not clear, please contact the provider developers.\n\n"+
    "Inventory Client Error: "+err.Error(),
  )
  return
 }
 // Test that we have some basic connectivity
 _, err = api.FindItemById(ctx, int64(1))
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Inventory API Client",
   "An unexpected error occurred when creating the Inventory API client. "+
    "If the error is not clear, please contact the provider developers.\n\n"+
    "Inventory Client Error: "+err.Error(),
  )
  return
 }

 // Make the Inventory client available during DataSource and Resource
 // type Configure methods.
 resp.DataSourceData = api
 resp.ResourceData   = api

 tflog.Info(ctx, "Configured Inventory client", map[string]any{"success": true})
}

Once we have a client that is properly configured and ready to go, we can make it available to our future data sources and resources by setting resp.DataSourceData and resp.ResourceData to the configured client.

 api, err := client.NewClient(serverURL)
 …
 resp.DataSourceData = api
 resp.ResourceData   = api

Testing the Provider

Let’s quickly add some additional code that will become the basis for all of our acceptance tests, which we will use to ensure that the provider code is working the way we expect it to.

Full Source Code: provider_test.go
package provider

import (
 "github.com/hashicorp/terraform-plugin-framework/providerserver"
 "github.com/hashicorp/terraform-plugin-go/tfprotov6"
)

const (
 // providerConfig is a shared configuration to combine with the actual
 // test configuration so the Inventory client is properly configured.
 providerConfig = `terraform {
  required_providers {
    inventory = {
      source = "myuser/inventory"
    }
  }
}

# Configure the connection details for the Inventory service
provider "inventory" {
}
`
)

var (
 // testAccProtoV6ProviderFactories are used to instantiate a provider during
 // acceptance testing. The factory function will be invoked for every Terraform
 // CLI command executed to create a provider server to which the CLI can
 // reattach.
 testAccProtoV6ProviderFactories = map[string]func() (tfprotov6.ProviderServer, error){
  "inventory": providerserver.NewProtocol6WithError(New("test")()),
 }
)

This test file primarily tells Go how to instantiate a provider for acceptance testing and then defines providerConfig, which will be used by many of our tests to configure the provider. The constant simply contains some standard HCL that matches something a user might write when using this provider.

 providerConfig = `terraform {
  required_providers {
    inventory = {
      source = "myuser/inventory"
    }
  }
}

# Configure the connection details for the Inventory service

provider "inventory" {
}
`

You could set the host and port arguments in the provider block, but since the provider sets reasonable defaults, we do not need to do this unless we want to create multiple tests that specifically check all the various ways that a user might configure this provider. For now, though, let’s just use the defaults.
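If you do eventually want tests that exercise an explicit configuration, one option (just a sketch; the providerConfigExplicit name is our own and is not part of the scaffolding) is to define a second configuration constant alongside providerConfig:

// providerConfigExplicit is a hypothetical variant of providerConfig that
// pins the connection details instead of relying on the provider defaults.
const providerConfigExplicit = `terraform {
  required_providers {
    inventory = {
      source = "myuser/inventory"
    }
  }
}

provider "inventory" {
  host = "127.0.0.1"
  port = "8080"
}
`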

If we try to build and install the provider now, we will get an error stating that no required module provides package github.com/superorbital/inventory-service/client. So, let’s update go.mod and then try again.

$ go mod tidy
$ go install .

In a second terminal window, go ahead and create a temporary directory somewhere on your system:

$ mkdir -p ~/tmp/terraform-provider-inventory-test
$ cd ~/tmp/terraform-provider-inventory-test

In that directory, go ahead and create a file called provider_test.tf with the following contents:

terraform {
  required_providers {
    inventory = {
      source = "myuser/inventory"
    }
  }
}

# Configure the connection details for the Inventory service
provider "inventory" {
  host = "127.0.0.1"
  port = "8080"
}

If you run terraform apply in this directory you should see something very similar to this:

$ terraform apply
╷
│ Warning: Provider development overrides are in effect
│
│ The following provider development overrides are set in the CLI configuration:
│  - myuser/inventory in /usr/local/go/bin
│
│ The behavior may therefore not match any released version of the provider and applying changes may cause the state to
│ become incompatible with published releases.
╵

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Things appear to be running, but at this point the provider isn’t really doing much of anything.

To really start testing things out, we need to add a data source or resource for the provider to manage. Since data sources in Terraform represent read-only objects, they are the easiest thing to start with.

Data Source: Item

If we create a new object in the API:

$ curl -H 'Content-Type: application/json' -X POST -d '{"name":"1908 Harley-Davidson", "tag":"USD:935,000"}' 127.0.0.1:8080/items
{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}

And then add the following block to provider_test.tf, which references the correct ID for the new object:

# Read in an existing Inventory item
data "inventory_item" "example" {
 id = "1000"
}

And finally run terraform apply again, we will get an error because we have not implemented an item data source yet.

$ terraform apply
…
╷
│ Error: Invalid data source
│
│   on provider.tf line 16, in data "inventory_item" "example":
│   16: data "inventory_item" "example" {
│
│ The provider myuser/inventory does not support data source "inventory_item".

So, let’s do that next.

To create a data source that is capable of reading an existing item in the Inventory service API, we need to start by creating a file in ./internal/provider called item_data_source.go.

For this project we already have access to a Go-based client for the Inventory service API, which we will start making heavy use of now. However, even if you did not have access to a ready-made client, there are multiple options to consider. If the API in question utilizes OpenAPI (formerly Swagger), you might be able to leverage code generation projects like the OpenAPI codegen for Go; you could write a new Go client for your API from scratch; or you could design the provider to directly construct and send whatever HTTP requests the API requires.
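To make the "from scratch" option a little more concrete, here is a rough sketch of what a minimal hand-written client for one endpoint of this API might look like. It is only an illustration of the approach; it is not the client used in the rest of this article (which returns raw HTTP responses), and the FindItemByID method name here is our own.

package client

import (
 "context"
 "encoding/json"
 "fmt"
 "net/http"
)

// Item mirrors the JSON representation used by the Inventory service.
type Item struct {
 ID   int64   `json:"id"`
 Name string  `json:"name"`
 Tag  *string `json:"tag,omitempty"`
}

// Client is a minimal, hand-written API client for the Inventory service.
type Client struct {
 baseURL    string
 httpClient *http.Client
}

// NewClient returns a client for the given base URL, for example "http://127.0.0.1:8080/".
func NewClient(baseURL string) *Client {
 return &Client{baseURL: baseURL, httpClient: http.DefaultClient}
}

// FindItemByID fetches a single item by its numeric ID and decodes the response body.
func (c *Client) FindItemByID(ctx context.Context, id int64) (*Item, error) {
 url := fmt.Sprintf("%sitems/%d", c.baseURL, id)
 req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
 if err != nil {
  return nil, err
 }
 resp, err := c.httpClient.Do(req)
 if err != nil {
  return nil, err
 }
 defer resp.Body.Close()
 if resp.StatusCode != http.StatusOK {
  return nil, fmt.Errorf("unexpected status fetching item %d: %s", id, resp.Status)
 }
 var item Item
 if err := json.NewDecoder(resp.Body).Decode(&item); err != nil {
  return nil, err
 }
 return &item, nil
}

With or without a hand-rolled client, the data source implementation itself looks the same; below is the full source, which uses the existing client.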

Full Source Code: item_data_source.go
package provider

import (
 "context"
 "encoding/json"

 "github.com/superorbital/inventory-service/client"

 "github.com/hashicorp/terraform-plugin-framework/datasource"
 "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
 "github.com/hashicorp/terraform-plugin-framework/types"
 "github.com/hashicorp/terraform-plugin-log/tflog"
)

// Ensure the implementation satisfies the expected interfaces.
var (
 _ datasource.DataSource              = &itemDataSource{}
 _ datasource.DataSourceWithConfigure = &itemDataSource{}
)

// NewItemDataSource is a helper function to simplify the provider implementation.
func NewItemDataSource() datasource.DataSource {
 return &itemDataSource{}
}

// itemDataSource is the data source implementation.
type itemDataSource struct {
 client *client.Client
}

// itemDataSourceModel maps the data source schema data.
type itemDataSourceModel struct {
 ID   types.Int64  `tfsdk:"id"`
 Name types.String `tfsdk:"name"`
 Tag  types.String `tfsdk:"tag"`
}

// Configure adds the provider configured client to the data source.
func (d *itemDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, _ *datasource.ConfigureResponse) {
 if req.ProviderData == nil {
  return
 }

 client, ok := req.ProviderData.(*client.Client)
 if !ok {
  tflog.Error(ctx, "Unable to prepare client")
  return
 }
 d.client = client

}

// Metadata returns the data source type name.
func (d *itemDataSource) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) {
 resp.TypeName = req.ProviderTypeName + "_item"
}

// Schema defines the schema for the data source.
func (d *itemDataSource) Schema(_ context.Context, _ datasource.SchemaRequest, resp *datasource.SchemaResponse) {
 resp.Schema = schema.Schema{
  Description: "Fetch an item.",
  Attributes: map[string]schema.Attribute{
   "id": schema.Int64Attribute{
    Description: "Identifier for this inventory item.",
    Required:    true,
   },
   "name": schema.StringAttribute{
    Description: "The name for this inventory item.",
    Computed:    true,
   },
   "tag": schema.StringAttribute{
    Description: "The tag for this inventory item.",
    Computed:    true,
   },
  },
 }
}

// Read refreshes the Terraform state with the latest data.
func (d *itemDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
 tflog.Debug(ctx, "Preparing to read item data source")
 var state itemDataSourceModel

 resp.Diagnostics.Append(req.Config.Get(ctx, &state)...)

 itemResponse, err := d.client.FindItemById(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Read Item",
   err.Error(),
  )
  return
 }

 var newItem client.Item
 if itemResponse.StatusCode != 200 {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 state = itemDataSourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set state
 resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
 tflog.Debug(ctx, "Finished reading item data source", map[string]any{"success": true})
}

The itemDataSourceModel type maps each field from the item object’s JSON representation to a field in the Go struct.

type itemDataSourceModel struct {
 ID   types.Int64  `tfsdk:"id"`
 Name types.String `tfsdk:"name"`
 Tag  types.String `tfsdk:"tag"`
}

and the Schema function defines what the HCL representation of the item data source must conform to.

// Schema defines the schema for the data source.
func (d *itemDataSource) Schema(_ context.Context, _ datasource.SchemaRequest, resp *datasource.SchemaResponse) {
 resp.Schema = schema.Schema{
  Description: "Fetch an item.",
  Attributes: map[string]schema.Attribute{
   "id": schema.Int64Attribute{
    Description: "Identifier for this inventory item.",
    Required:    true,
   },
   "name": schema.StringAttribute{
    Description: "The name for this inventory item.",
    Computed:    true,
   },
   "tag": schema.StringAttribute{
    Description: "The tag for this inventory item.",
    Computed:    true,
   },
  },
 }
}

In the above schema, id is the only required field, and everything else is marked as computed. This means that when a user defines an inventory_item data source, they must provide an id and nothing else. Since the ID is the only field that is guaranteed to be unique, we use this to look up the object in question, and then populate the name and tag fields with whatever we get back from the API.
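Once the data source has been read, those computed attributes can be referenced elsewhere in the configuration like any other value. For example, this hypothetical output block (not part of our test configuration) would expose the item's name:

output "item_name" {
  value = data.inventory_item.example.name
}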

The Configure function adds the provider configured client to the data source.

func (d *itemDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, _ *datasource.ConfigureResponse) {
 if req.ProviderData == nil {
  return
 }

 client, ok := req.ProviderData.(*client.Client)
 if !ok {
  tflog.Error(ctx, "Unable to prepare client")
  return
 }
 d.client = client

}

The Metadata function defines the resp.TypeName for the data source, which in this case will evaluate to inventory_item and can be referenced in HCL as a data "inventory_item" block.

func (d *itemDataSource) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) {
 resp.TypeName = req.ProviderTypeName + "_item"
}

Data Source: Read

The Read function implements all the logic required to read the correct item object from the Inventory service API and write the results into the Terraform state. In this case, we read the configured id, attempt to look up the item in the API, report any errors, map the results into a new itemDataSourceModel struct, and write that struct back into the Terraform state.

item_data_source.go func: Read()
func (d *itemDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
 tflog.Debug(ctx, "Preparing to read item data source")
 var state itemDataSourceModel

 resp.Diagnostics.Append(req.Config.Get(ctx, &state)...)

 itemResponse, err := d.client.FindItemById(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Read Item",
   err.Error(),
  )
  return
 }

 var newItem client.Item
 if itemResponse.StatusCode != 200 {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 state = itemDataSourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set state
 resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
 tflog.Debug(ctx, "Finished reading item data source", map[string]any{"success": true})
}

To make the provider aware of this new data source, we need to go back into provider.go and edit the DataSources function so that it looks like this.

func (p *inventoryProvider) DataSources(_ context.Context) []func() datasource.DataSource {
 return []func() datasource.DataSource{
  NewItemDataSource,
 }
}

Testing the Data Source

Now, we can go ahead and create our acceptance tests for the new item data source.

Full Source Code: item_data_source_test.go
package provider

import (
  "fmt"
  "testing"

  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAccItemDataSource(t *testing.T) {
  resource.Test(t, resource.TestCase{
    ProtoV6ProviderFactories: testAccProtoV6ProviderFactories,
    Steps: []resource.TestStep{
      {
        /* 
           The Inventory service always gives the first `item` created
           the `id` number 1000, so that is why we default to testing
           for that specific `id`.
        */
        Config: providerConfig + `
data "inventory_item" "test" {
 id = 1000
}
`,
        Check: resource.ComposeAggregateTestCheckFunc(
          // Verify placeholder id attribute
          resource.TestCheckResourceAttrSet("data.inventory_item.test", "id"),
        ),
      },
    },
  })
}

Since we do not have a way to create an item via our Terraform provider yet, we are going to simply test for the existence of the one that we already created a little earlier via curl.

resource.Test(t, resource.TestCase{
    ProtoV6ProviderFactories: testAccProtoV6ProviderFactories,
    Steps: []resource.TestStep{
      {
        /* 
           The Inventory service always gives the first `item` created
           the `id` number 1000, so that is why we default to testing
           for that specific `id`.
        */
        Config: providerConfig + `
data "inventory_item" "test" {
 id = 1000
}
`,
        Check: resource.ComposeAggregateTestCheckFunc(
          // Verify placeholder id attribute
          resource.TestCheckResourceAttrSet("data.inventory_item.test", "id"),
        ),
      },
    },
  })

In the test above we create a provider instance via the factory and then feed it some HCL, which is a combination of the HCL defined in providerConfig and the additional HCL defined in this test. For the time being, we simply verify that the id attribute is set after we get the results back. We will improve on this test after we have created the item resource for our provider.
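If you want slightly stronger assertions right away, the Check block could also compare the name and tag attributes against the item that we created earlier with curl. This is just a sketch, and it assumes that item still exists in the running service:

        Check: resource.ComposeAggregateTestCheckFunc(
          // Verify the id attribute is populated
          resource.TestCheckResourceAttrSet("data.inventory_item.test", "id"),
          // Verify the values returned by the API (assumes the item created via curl)
          resource.TestCheckResourceAttr("data.inventory_item.test", "name", "1908 Harley-Davidson"),
          resource.TestCheckResourceAttr("data.inventory_item.test", "tag", "USD:935,000"),
        ),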

At this point, we can go ahead and rebuild the provider:

$ go mod tidy
$ go install .

WARNING: It is important that the Inventory service is running at this point and that it has at least one object defined in it with ID 1000; if it does not, you will get errors from the acceptance tests and the terraform commands.
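If you deleted that item during the earlier curl walkthrough, you can recreate it the same way (remember that the service only assigns ID 1000 to the first item it creates, so you may need to restart the service first):

$ curl -H 'Content-Type: application/json' -X POST -d '{"name":"1908 Harley-Davidson", "tag":"USD:935,000"}' 127.0.0.1:8080/items
{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}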

Let’s try out the acceptance tests. If the provider can connect to the API and read the data source, then you should see output similar to this:

$ TF_ACC=1 go test ./... -v  -timeout 120m
?    github.com/myuser/terraform-provider-inventory [no test files]
=== RUN   TestAccItemDataSource
--- PASS: TestAccItemDataSource (1.44s)
PASS
ok   github.com/myuser/terraform-provider-inventory/internal/provider 1.457s

If that worked, let’s go ahead and try to run terraform apply again in the directory that contains provider_test.tf. Make sure that it still has the data "inventory_item" "example" block defined.

If all goes well, you should see something very close to this:

$ terraform apply

│ Warning: Provider development overrides are in effect
│
│ The following provider development overrides are set in the CLI configuration:
│  - myuser/inventory in /usr/local/go/bin
│
│ The behavior may therefore not match any released version of the provider and applying changes may cause the state to
│ become incompatible with published releases.
╵
data.inventory_item.example: Reading...
data.inventory_item.example: Read complete after 0s [name=1908 Harley-Davidson]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

As a last check of the item data source code, we can confirm that the state file and API agree on the data that the item with ID 1000 contains.

$ terraform state show data.inventory_item.example
# data.inventory_item.example:
data "inventory_item" "example" {
    id   = 1000
    name = "1908 Harley-Davidson"
    tag  = "USD:935,000"
}

$ curl -f -X GET 127.0.0.1:8080/items/1000
{"id":1000,"name":"1908 Harley-Davidson","tag":"USD:935,000"}

Resource: Item

The last major code addition to our provider will be an item resource, which will allow us to create, read, update, and delete items in the Inventory service API.

To create a new item, we will need to add the following block to provider_test.tf.

# Create a new Inventory item
resource "inventory_item" "example" {
  name = "Jones Extreme Sour Cherry Warhead Soda"
  tag = "USD:2.99"
}

Since we haven’t actually implemented this resource yet, running terraform apply will result in an error.

$ terraform apply
…
│ Error: Invalid resource type
│
│   on provider.tf line 22, in resource "inventory_item" "example":
│   22: resource "inventory_item" "example" {
│
│ The provider myuser/inventory does not support resource type "inventory_item".
│
│ Did you intend to use the data source "inventory_item"? If so, declare this using a "data" block instead of a "resource"
│ block.

So, let’s go ahead and get that resource implemented. The first thing that we will need to do is create item_resource.go in the ./internal/provider/ directory.

Full Source Code: item_resource.go
package provider

import (
 "context"
 "encoding/json"
 "net/http"
 "strconv"

 "github.com/superorbital/inventory-service/client"

 "github.com/hashicorp/terraform-plugin-framework/path"
 "github.com/hashicorp/terraform-plugin-framework/resource"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema/int64planmodifier"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
 "github.com/hashicorp/terraform-plugin-framework/types"
 "github.com/hashicorp/terraform-plugin-log/tflog"
)

// Ensure the implementation satisfies the expected interfaces.
var (
 _ resource.Resource                = &itemResource{}
 _ resource.ResourceWithConfigure   = &itemResource{}
 _ resource.ResourceWithImportState = &itemResource{}
)

// NewItemResource is a helper function to simplify the provider implementation.
func NewItemResource() resource.Resource {
 return &itemResource{}
}

// itemResource is the resource implementation.
type itemResource struct {
 client *client.Client
}

// itemResourceModel maps the resource schema data.
type itemResourceModel struct {
 ID   types.Int64  `tfsdk:"id"`
 Name types.String `tfsdk:"name"`
 Tag  types.String `tfsdk:"tag"`
}

// Configure adds the provider configured client to the resource.
func (r *itemResource) Configure(ctx context.Context, req resource.ConfigureRequest, _ *resource.ConfigureResponse) {
 if req.ProviderData == nil {
  return
 }

 client, ok := req.ProviderData.(*client.Client)
 if !ok {
  tflog.Error(ctx, "Unable to prepare client")
  return
 }
 r.client = client

}

// Metadata returns the resource type name.
func (r *itemResource) Metadata(_ context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
 resp.TypeName = req.ProviderTypeName + "_item"
}

// Schema defines the schema for the resource.
func (r *itemResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) {
 resp.Schema = schema.Schema{
  Description: "Manage an item.",
  Attributes: map[string]schema.Attribute{
   "id": schema.Int64Attribute{
    Description: "Identifier for this inventory item.",
    Computed:    true,
    PlanModifiers: []planmodifier.Int64{
     int64planmodifier.UseStateForUnknown(),
    },
   },
   "name": schema.StringAttribute{
    Description: "The name for this inventory item.",
    Required:    true,
   },
   "tag": schema.StringAttribute{
    Description: "The tag for this inventory item.",
    Optional:    true,
   },
  },
 }
}

func (r *itemResource) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
 // Retrieve import ID and save to id attribute
 // If our ID was a string then we could do this
 // resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp)

 id, err := strconv.ParseInt(req.ID, 10, 64)

 if err != nil {
  resp.Diagnostics.AddError(
   "Error importing item",
   "Could not import item, unexpected error (ID should be an integer): "+err.Error(),
  )
  return
 }

 resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("id"), id)...)
}

// Create a new resource.
func (r *itemResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
 tflog.Debug(ctx, "Preparing to create item resource")
 // Retrieve values from plan
 var plan itemResourceModel
 diags := req.Plan.Get(ctx, &plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 name := plan.Name.ValueString()
 tag := plan.Tag.ValueString()

 item := client.NewItem{
  Name: name,
  Tag:  &tag,
 }

 params := client.AddItemJSONRequestBody(item)

 // Create new item

 itemResponse, err := r.client.AddItem(ctx, params)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Item",
   err.Error(),
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 plan.ID = types.Int64Value(newItem.Id)
 plan.Name = types.StringValue(newItem.Name)
 plan.Tag = types.StringValue(*newItem.Tag)

 // Set state to fully populated data
 diags = resp.State.Set(ctx, plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Created item resource", map[string]any{"success": true})
}

// Read resource information.
func (r *itemResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
 tflog.Debug(ctx, "Preparing to read item resource")
 // Get current state
 var state itemResourceModel
 diags := req.State.Get(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 itemResponse, err := r.client.FindItemById(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Read Item",
   err.Error(),
  )
  return
 }

 // Treat HTTP 404 Not Found status as a signal to remove/recreate resource
 if itemResponse.StatusCode == http.StatusNotFound {
  resp.State.RemoveResource(ctx)
  return
 }

 if itemResponse.StatusCode != http.StatusOK {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 state = itemResourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set refreshed state
 diags = resp.State.Set(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Finished reading item resource", map[string]any{"success": true})
}

func (r *itemResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
 tflog.Debug(ctx, "Preparing to update item resource")
 // Retrieve values from plan
 var plan itemResourceModel
 diags := req.Plan.Get(ctx, &plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 name := plan.Name.ValueString()
 tag := plan.Tag.ValueString()

 item := client.NewItem{
  Name: name,
  Tag:  &tag,
 }

 // update item
 itemResponse, err := r.client.UpdateItem(ctx, plan.ID.ValueInt64(), item)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Update Item",
   err.Error(),
  )
  return
 }

 if itemResponse.StatusCode != http.StatusOK {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Overwrite items with refreshed state
 plan = itemResourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set refreshed state
 diags = resp.State.Set(ctx, plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Updated item resource", map[string]any{"success": true})
}

func (r *itemResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
 tflog.Debug(ctx, "Preparing to delete item resource")
 // Retrieve values from state
 var state itemResourceModel
 diags := req.State.Get(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 // delete item
 _, err := r.client.DeleteItem(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Delete Item",
   err.Error(),
  )
  return
 }
 tflog.Debug(ctx, "Deleted item resource", map[string]any{"success": true})
}

Very similar to what we saw with the item data source, the itemResourceModel type maps each field from the item object’s JSON representation to a field in the Go struct.

type itemResourceModel struct {
 ID   types.Int64  `tfsdk:"id"`
 Name types.String `tfsdk:"name"`
 Tag  types.String `tfsdk:"tag"`
}

and the Schema function defines what the HCL representation of the item resource must conform to.

func (r *itemResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) {
 resp.Schema = schema.Schema{
  Description: "Manage an item.",
  Attributes: map[string]schema.Attribute{
   "id": schema.Int64Attribute{
    Description: "Identifier for this inventory item.",
    Computed:    true,
    PlanModifiers: []planmodifier.Int64{
     int64planmodifier.UseStateForUnknown(),
    },
   },
   "name": schema.StringAttribute{
    Description: "The name for this inventory item.",
    Required:    true,
   },
   "tag": schema.StringAttribute{
    Description: "The tag for this inventory item.",
    Optional:    true,
   },
  },
 }
}

Even though we are working with the same type of object as the item data source, the Schema here is different: instead of needing the user to pass in the id of an existing item, we expect them to pass in everything required to define a new item.

When creating an item in the Inventory service API, the name key is required input, but the tag key is optional input, and the id value will be computed by the API, so we need to mirror all of that information in the Schema definition.

The PlanModifiers entry that we see defined in the Schema for the id key is needed because, although the id is computed, we also know that it will never change. It tells Terraform that it should set the id in the plan to the value that is already in the state file. This is primarily useful for update operations, where it prevents the Terraform plan from reporting that the id value will be known after apply when we already know that no change will occur.

The Configure function adds the provider configured client to the resource.

func (r *itemResource) Configure(ctx context.Context, req resource.ConfigureRequest, _ *resource.ConfigureResponse) {
 if req.ProviderData == nil {
  return
 }

 client, ok := req.ProviderData.(*client.Client)
 if !ok {
  tflog.Error(ctx, "Unable to prepare client")
  return
 }
 r.client = client

}

The Metadata function defines the resp.TypeName for the resource, which in this case will evaluate to inventory_item and can be referenced in HCL as a resource "inventory_item" block.

func (r *itemResource) Metadata(_ context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
 resp.TypeName = req.ProviderTypeName + "_item"
}

The primary logic for the item resource is implemented in the five functions listed below:

  • Read
  • Create
  • Update
  • Delete
  • ImportState

As you are developing a resource for the first time, you will often want to implement these functions in the order listed here, and simply have the remaining functions set up to do nothing until you are ready to implement each of them, as sketched below.
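For example, while Read and Create are still being built out, a temporary Update stub might look something like this. This is only an illustrative placeholder, not code from the finished provider:

// Update is a temporary no-op stub. It satisfies the resource.Resource
// interface while the real implementation is still being written.
func (r *itemResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
 tflog.Debug(ctx, "Update is not implemented yet for the item resource")
}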

Resource: Read

The Read function for a resource implements all the logic required to read an item from the Inventory service API and write the results into the Terraform state file. In this case, we grab the current Terraform state, attempt to look up the item via its id, provide special handling for HTTP 404 errors (since this often means that the object was deleted out-of-band), report any other errors, and then map the results into a new itemResourceModel struct and write them back into the Terraform state file. The workflow is very similar to what we do for a data source, but it is not exactly the same.

item_resource.go func: Read()
func (r *itemResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
 tflog.Debug(ctx, "Preparing to read item resource")
 // Get current state
 var state itemResourceModel
 diags := req.State.Get(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 itemResponse, err := r.client.FindItemById(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Read Item",
   err.Error(),
  )
  return
 }

 // Treat HTTP 404 Not Found status as a signal to remove/recreate resource
 if itemResponse.StatusCode == http.StatusNotFound {
  resp.State.RemoveResource(ctx)
  return
 }

 if itemResponse.StatusCode != http.StatusOK {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 state = itemResourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set refreshed state
 diags = resp.State.Set(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Finished reading item resource", map[string]any{"success": true})
}

Resource: Create

The Create function’s sole purpose is to create new items in the API. In this function we will take the plan for the new item, fill in a NewItem struct with the values we got from the plan, make a call to the API to create the item, handle any errors, parse the response, and then write the response values (including the newly generated id) back into the Terraform state file.

item_resource.go func: Create()
func (r *itemResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
 tflog.Debug(ctx, "Preparing to create item resource")
 // Retrieve values from plan
 var plan itemResourceModel
 diags := req.Plan.Get(ctx, &plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 name := plan.Name.ValueString()
 tag := plan.Tag.ValueString()

 item := client.NewItem{
  Name: name,
  Tag:  &tag,
 }

 params := client.AddItemJSONRequestBody(item)

 // Create new item

 itemResponse, err := r.client.AddItem(ctx, params)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Create Item",
   err.Error(),
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Map response body to model
 plan.ID = types.Int64Value(newItem.Id)
 plan.Name = types.StringValue(newItem.Name)
 plan.Tag = types.StringValue(*newItem.Tag)

 // Set state to fully populated data
 diags = resp.State.Set(ctx, plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Created item resource", map[string]any{"success": true})
}

Resource: Update

The Update function is used when we have an existing item in the state file and have generated a plan that will make a change to it. In this function we take the plan and assemble a NewItem from it. We use the NewItem struct here because the API client expects us to pass in the target id as a separate argument when we make the call to r.client.UpdateItem(). Once we have attempted to update the item, we handle any errors, parse the API response, and update the state with the values that we received back.

item_resource.go func: Update()
func (r *itemResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
 tflog.Debug(ctx, "Preparing to update item resource")
 // Retrieve values from plan
 var plan itemResourceModel
 diags := req.Plan.Get(ctx, &plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 name := plan.Name.ValueString()
 tag := plan.Tag.ValueString()

 item := client.NewItem{
  Name: name,
  Tag:  &tag,
 }

 // update item
 itemResponse, err := r.client.UpdateItem(ctx, plan.ID.ValueInt64(), item)
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Update Item",
   err.Error(),
  )
  return
 }

 if itemResponse.StatusCode != http.StatusOK {
  resp.Diagnostics.AddError(
   "Unexpected HTTP error code received for Item",
   itemResponse.Status,
  )
  return
 }

 var newItem client.Item
 if err := json.NewDecoder(itemResponse.Body).Decode(&newItem); err != nil {
  resp.Diagnostics.AddError(
   "Invalid format received for Item",
   err.Error(),
  )
  return
 }

 // Overwrite items with refreshed state
 plan = itemResourceModel{
  ID:   types.Int64Value(newItem.Id),
  Name: types.StringValue(newItem.Name),
  Tag:  types.StringValue(*newItem.Tag),
 }

 // Set refreshed state
 diags = resp.State.Set(ctx, plan)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }
 tflog.Debug(ctx, "Updated item resource", map[string]any{"success": true})
}

Resource: Delete

The Delete function is called when a plan determines that an item needs to be deleted and the Read function has not already detected (via an HTTP 404) that the object was deleted out of band. In this function we read the current state and make a call to the Inventory service to delete the item associated with the id in that state. If this function generates no errors, the framework automatically removes the item from the state.

item_resource.go func: Delete()
func (r *itemResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
 tflog.Debug(ctx, "Preparing to delete item resource")
 // Retrieve values from state
 var state itemResourceModel
 diags := req.State.Get(ctx, &state)
 resp.Diagnostics.Append(diags...)
 if resp.Diagnostics.HasError() {
  return
 }

 // delete item
 _, err := r.client.DeleteItem(ctx, state.ID.ValueInt64())
 if err != nil {
  resp.Diagnostics.AddError(
   "Unable to Delete Item",
   err.Error(),
  )
  return
 }
 tflog.Debug(ctx, "Deleted item resource", map[string]any{"success": true})
}

Resource: ImportState

The last important function in a resource is ImportState. It is used when someone wants to run the terraform import command to adopt an existing item from the API into their Terraform state file. This function simply converts the id supplied on the command line into the format that the Inventory service API expects, so that the framework can read the existing object and save it into the Terraform state file.

item_resource.go func: ImportState()
func (r *itemResource) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
 // Retrieve import ID and save to id attribute
 // If our ID was a string then we could do this
 // resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp)

 id, err := strconv.ParseInt(req.ID, 10, 64)

 if err != nil {
  resp.Diagnostics.AddError(
   "Error importing item",
   "Could not import item, unexpected error (ID should be an integer): "+err.Error(),
  )
  return
 }

 resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("id"), id)...)
}
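With this in place, an existing item can be adopted into the Terraform state using the terraform import command. For example, assuming your configuration contains a resource "inventory_item" "example" block and the API already has an item with id 1005 (a hypothetical id), the import would look like this:

$ terraform import inventory_item.example 1005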

Testing the Resource

We can now create acceptance tests for the item resource and then try using it to create and manage an item.

Full Source Code: item_resource_test.go
package provider

import (
 "testing"

 "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAccItemResource(t *testing.T) {
 resource.Test(t, resource.TestCase{
  ProtoV6ProviderFactories: testAccProtoV6ProviderFactories,
  Steps: []resource.TestStep{
   // Create and Read testing
   {
    Config: providerConfig + `
resource "inventory_item" "test" {
    name = "Jones Extreme Sour Cherry Warhead Soda"
    tag = "USD:2.99"
}
`,
    Check: resource.ComposeAggregateTestCheckFunc(
     resource.TestCheckResourceAttr("inventory_item.test", "name", "Jones Extreme Sour Cherry Warhead Soda"),
     resource.TestCheckResourceAttr("inventory_item.test", "tag", "USD:2.99"),
     // Verify dynamic values have any value set in the state.
     resource.TestCheckResourceAttrSet("inventory_item.test", "id"),
    ),
   },
   // ImportState testing
   {
    ResourceName:      "inventory_item.test",
    ImportState:       true,
    ImportStateVerify: true,
   },
   // Update and Read testing
   {
    Config: providerConfig + `
resource "inventory_item" "test" {
    name = "1928 de Havilland DH-60GM"
    tag  = "USD:110,781"
}
`,
    Check: resource.ComposeAggregateTestCheckFunc(
     resource.TestCheckResourceAttr("inventory_item.test", "name", "1928 de Havilland DH-60GM"),
     resource.TestCheckResourceAttr("inventory_item.test", "tag", "USD:110,781"),
     // Verify dynamic values have any value set in the state.
     resource.TestCheckResourceAttrSet("inventory_item.test", "id"),
    ),
   },
   // Delete testing automatically occurs in TestCase
  },
 })
}

This set of acceptance tests will create a new item in the Inventory service API from the following HCL code:

resource "inventory_item" "test" {
    name = "Jones Extreme Sour Cherry Warhead Soda"
    tag = "USD:2.99"
}

Verify the results:

     resource.TestCheckResourceAttr("inventory_item.test", "name", "Jones Extreme Sour Cherry Warhead Soda"),
     resource.TestCheckResourceAttr("inventory_item.test", "tag", "USD:2.99"),
     // Verify dynamic values have any value set in the state.
     resource.TestCheckResourceAttrSet("inventory_item.test", "id"),

Then update the item using the following HCL code:

resource "inventory_item" "test" {
    name = "1928 de Havilland DH-60GM"
    tag  = "USD:110,781"
}

Ensure that the results match what we expect:

     resource.TestCheckResourceAttr("inventory_item.test", "name", "1928 de Havilland DH-60GM"),
     resource.TestCheckResourceAttr("inventory_item.test", "tag", "USD:110,781"),
     // Verify dynamic values have any value set in the state.
     resource.TestCheckResourceAttrSet("inventory_item.test", "id"),

And then, finally, the test case automatically tests deleting the item.

We can also go back and update the acceptance tests for the item data source. These changes will make the test more self-sufficient and remove the need to pre-populate any data in the API for the tests to run successfully.

Full Source Code (updated): item_data_source_test.go
package provider

import (
  "fmt"
  "testing"

  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAccItemDataSource(t *testing.T) {
  resource.Test(t, resource.TestCase{
    ProtoV6ProviderFactories: testAccProtoV6ProviderFactories,
    Steps: []resource.TestStep{
      {
      Config: providerConfig + `
resource "inventory_item" "test" {
  name = "2022 Mustang Shelby GT500"
  tag = "USD:79,420"
}

data "inventory_item" "test" {
 id = inventory_item.test.id
}
`,
      Check: resource.ComposeAggregateTestCheckFunc(
        // Verify the item to ensure all attributes are set
        resource.TestCheckResourceAttr("data.inventory_item.test", "name", "2022 Mustang Shelby GT500"),
        resource.TestCheckResourceAttr("data.inventory_item.test", "tag", "USD:79,420"),
        // Verify placeholder id attribute
        resource.TestCheckResourceAttrSet("data.inventory_item.test", "id"),
      ),
      },
    },
  })
}

The changes that we need to make to item_data_source_test.go can be seen here:

            for that specific `id`.
         */
         Config: providerConfig + `
+resource "inventory_item" "test" {
+  name = "2022 Mustang Shelby GT500"
+  tag = "USD:79,420"
+}
+
 data "inventory_item" "test" {
- id = 1000
+ id = inventory_item.test.id
 }
 `,
         Check: resource.ComposeAggregateTestCheckFunc(
+          // Verify the item to ensure all attributes are set
+          resource.TestCheckResourceAttr("data.inventory_item.test", "name", "2022 Mustang Shelby GT500"),
+          resource.TestCheckResourceAttr("data.inventory_item.test", "tag", "USD:79,420"),
           // Verify placeholder id attribute
           resource.TestCheckResourceAttrSet("data.inventory_item.test", "id"),
         ),

This allows the data source test to create a new item, read that same item back in as a data source, and ensure that we get all the values that we expect.

Just like we had to do to enable the data source in the provider, we also need to make the provider aware of this new resource. So let’s go back into provider.go and edit the Resources function so that it looks like this.

func (p *inventoryProvider) Resources(_ context.Context) []func() resource.Resource {
 return []func() resource.Resource{
  NewItemResource,
 }
}

At this point, we can go ahead and rebuild the provider:

$ go mod tidy
$ go install .

WARNING: It is important that the Inventory service is running at this point; if it is not, you will get connection errors from the acceptance tests and terraform commands.

Let’s try out the acceptance tests. If the provider can connect to the API, the resource tests should exercise the full create, import, update, and delete lifecycle and pass.
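The acceptance tests are run with the TF_ACC environment variable set; for example:

$ TF_ACC=1 go test ./... -v -timeout 120m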

NOTE: There currently appears to be a bug in the framework that manifests when running acceptance tests (via TF_ACC=1 go test ./... -v -timeout 120m) against a resource defined in an unpublished Terraform provider. The issue appears to go away once the provider is published, and it has been reported.

Let’s go ahead and try to run terraform apply again in the directory that contains provider_test.tf. Edit the file to match this:

terraform {
  required_providers {
    inventory = {
      source = "myuser/inventory"
    }
  }
}

# Configure the connection details for the Inventory service
provider "inventory" {
  host = "127.0.0.1"
  port = "8080"
}

# Create new Inventory item
resource "inventory_item" "example" {
  name = "Jones Extreme Sour Cherry Warhead Soda"
  tag = "USD:2.99"
}

Then go ahead and apply it. If all goes well, you should be prompted to apply the changes, and once you confirm them you should see something very close to this:

$ terraform apply
╷
│ Warning: Provider development overrides are in effect
…
data.inventory_item.example: Reading...
data.inventory_item.example: Read complete after 0s [name=1908 Harley-Davidson]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # inventory_item.example will be created
  + resource "inventory_item" "example" {
      + id   = (known after apply)
      + name = "Jones Extreme Sour Cherry Warhead Soda"
      + tag  = "USD:2.99"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
…
  Enter a value: yes

inventory_item.example: Creating...
inventory_item.example: Creation complete after 0s [name=Jones Extreme Sour Cherry Warhead Soda]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

As a last check of the item resource code, we can confirm that the state file and API agree on the data that the new item contains.

NOTE: In the curl command below you will need to use the id reported by the terraform state show command.

$ terraform state show inventory_item.example
# inventory_item.example:
resource "inventory_item" "example" {
    id   = 1005
    name = "Jones Extreme Sour Cherry Warhead Soda"
    tag  = "USD:2.99"
}

$ curl -f -X GET 127.0.0.1:8080/items/1005
{"id":1005,"name":"Jones Extreme Sour Cherry Warhead Soda","tag":"USD:2.99"}

TIP: Since we are done, you may want to edit your ${HOME}/.terraformrc file now and comment out the dev_overrides block inside the provider_installation block.
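For reference, a commented-out dev_overrides block would look something like this minimal sketch (the override path shown is just an example of where go install places the binary):

provider_installation {
  # dev_overrides {
  #   "myuser/inventory" = "/home/myuser/go/bin"
  # }
  direct {}
}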

Conclusion

Congratulations! You should now have a working Terraform provider for the custom Inventory service API. The new Terraform Plugin Framework and this general workflow make it easy to create new Terraform providers for whatever API you might need to automate. The framework handles most of the Terraform implementation details and allows you to focus on reading and writing data to the target API, mapping the results into Terraform, and developing useful test cases.

One interesting aspect of a Terraform provider is that there is no requirement to support the whole API, especially when you are building something for internal use. If you only need to access a few endpoints from the API, then those are really the only things that you need to implement initially. If you decide later on that you could benefit from supporting additional endpoints, then you can add those as you go along.

At this point you are ready to distribute your provider internally, publish it to the public Terraform Registry, or even run your own private Terraform registry and host it there.

SuperOrbital consists of a small team of distributed systems experts focused on helping you deliver ambitious projects. We provide the judgment and expertise you need to execute with confidence on critical, high-risk, high-profile assignments. Additionally, our training workshops are lovingly crafted in-house and delivered in person by our team of senior cloud engineers.

Reach out today and see how we can help you succeed with your most ambitious engineering projects.

