Standardisation, efficiency and developer experience sit front and centre for us when helping customers to operate predictably and innovate safely. And it's the same when running our own platform. Service catalogs - curated collections of reusable, versioned and supported components that meet organisational standards - are a step in the right direction.
In this series of posts, I'll show you how we operate a service catalog for Terraform modules - infrastructure as code components - on GitHub.
A module is a container for multiple resources that are used together. You can use modules to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture, rather than directly in terms of physical objects.
https://developer.hashicorp.com/terraform/language/modules/develop
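To illustrate what the documentation means by an abstraction, here's a minimal sketch of calling a module rather than declaring resources directly. The module name, path and inputs here are hypothetical, for illustration only:

```hcl
# A hypothetical calling configuration: the consumer describes the
# architecture ("a network") rather than the individual resources
# (virtual network, subnets, route tables) the module creates.
module "network" {
  source = "./modules/network"

  environment = "production"
  location    = "uksouth"
}
```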
At Frontier, we typically use Terraform modules to deploy and manage cloud infrastructure on Microsoft Azure, Amazon Web Services (AWS) and Google Cloud Platform (GCP). The modules we build let us do useful things like:
Each module includes one or more resources and requires a common set of input variables - like zone, environment, location and identifier - as well as some resource-specific ones.
Here's an example - a module that deploys an Azure Kubernetes Service cluster:
# variables.tf

variable "environment" {
  type = string
}

variable "identifier" {
  type = string
}

variable "location" {
  type = string
}

variable "tags" {
  type    = map(string)
  default = {}
}

variable "vm_size" {
  type    = string
  default = "Standard_B4ms"
}

variable "zone" {
  type = string
}

...
# locals.tf

locals {
  kubernetes_version = "1.30.1"

  tags = {
    Environment   = var.environment
    Location      = var.location
    ModuleName    = "kubernetes-cluster"
    ModuleVersion = "1.0.9"
    Zone          = var.zone
  }
}
# resources.tf

resource "azurerm_kubernetes_cluster" "main" {
  name                 = "k8s-${var.zone}-${var.environment}-${var.location}-${var.identifier}"
  location             = var.location
  resource_group_name  = var.resource_group_name
  azure_policy_enabled = true
  kubernetes_version   = local.kubernetes_version
  ...

  azure_active_directory_role_based_access_control {
    managed                = true
    admin_group_object_ids = var.admin_group_object_ids
    azure_rbac_enabled     = true
  }
  ...

  tags = merge(var.tags, local.tags)
}

...
Full code here.
The Kubernetes cluster this module deploys:
When engineers or developers use this module to deploy an Azure Kubernetes Service cluster to our platform, they're probably going to get to production faster, safer and cheaper than if they'd written Terraform from scratch, because the module meets organisational standards "out of the box".
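For illustration, a consumer's call to the module might look something like this sketch. The source path, zone value and admin group ID are assumptions, not our actual repository layout:

```hcl
module "kubernetes_cluster" {
  source = "./modules/kubernetes-cluster" # hypothetical local path

  # The common input variables every module requires.
  zone        = "core"
  environment = "prod"
  location    = "uksouth"
  identifier  = "001"

  # Resource-specific inputs.
  resource_group_name    = azurerm_resource_group.main.name
  admin_group_object_ids = ["00000000-0000-0000-0000-000000000000"]

  # Consumer tags are merged with the module's own tags.
  tags = {
    CostCentre = "platform"
  }
}
```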
It's worth calling out that Terraform documentation says:
We do not recommend writing modules that are just thin wrappers around single other resource types. If you have trouble finding a name for your module that isn't the same as the main resource type inside it, that may be a sign that your module is not creating any new abstraction and so the module is adding unnecessary complexity. Just use the resource type directly in the calling module instead.
https://developer.hashicorp.com/terraform/language/modules/develop#when-to-write-a-module
I understand the intent of this recommendation, and I agree that if a module is used as a simple wrapper - passing inputs straight through - then it's probably adding unnecessary complexity. In our example, however - despite only defining a single resource - the module reduces cognitive load, improves quality and consistency, and shortens release cycles. That's not unnecessary complexity; that's a step towards a golden path.
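For contrast, here's a sketch of the kind of thin wrapper the documentation warns against - it just forwards its inputs to a single resource and adds no new abstraction. The resource and variables are illustrative:

```hcl
# An anti-pattern: a "resource_group" module that merely passes its
# inputs straight through to the azurerm_resource_group resource,
# adding a layer of indirection but no standards or abstraction.
variable "name" {
  type = string
}

variable "location" {
  type = string
}

resource "azurerm_resource_group" "main" {
  name     = var.name
  location = var.location
}
```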
In the next post, I'll talk about how we use GitHub to store our Terraform modules, and how we solved the problem of independently versioning modules in a single repository with a tool called Vertag.
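As a taste of what that looks like, a module in a shared repository can be pinned to a specific version with a git ref in its source address. The repository URL and tag scheme below are placeholders, not our actual setup:

```hcl
module "kubernetes_cluster" {
  # "//" selects a subdirectory of the repository, and "?ref=" pins the
  # module to a git tag; per-module tags like this are one way to
  # version modules independently within a single repository.
  source = "git::https://github.com/example-org/terraform-modules.git//kubernetes-cluster?ref=kubernetes-cluster/1.0.9"

  ...
}
```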