Setting Up Yandex Cloud Provider With Terraform and Terragrunt
Before you start
The configuration examples in this guide use Terraform >= 1.9.7, Terragrunt >= 0.67.16, and Yandex Cloud Provider >= 0.129.0. A few things to check when using newer versions:
- Yandex Cloud Provider is actively maintained – as of early 2026, it has reached v0.136+. The generate block approach described here is compatible with all minor versions above 0.129.0, but it's worth pinning a specific version range in production to avoid unexpected breaking changes on terraform init.
- The Terraform mirror at terraform-mirror.yandexcloud.net remains the recommended installation path for the Yandex Cloud provider in environments where the HashiCorp registry is unavailable. If you're using .terraform.lock.hcl, run terraform providers lock -net-mirror=https://terraform-mirror.yandexcloud.net after updating the version constraint.
- OpenTofu users: the same mirror URL and provider_installation block structure applies – see the dedicated mirror configuration guide for OpenTofu-specific nuances.
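For reference, a minimal sketch of the CLI configuration that routes provider downloads through the Yandex mirror – this goes in ~/.terraformrc (Linux/macOS) or %APPDATA%\terraform.rc (Windows), and the include/exclude patterns here follow the commonly documented mirror setup:

provider_installation {
  network_mirror {
    url     = "https://terraform-mirror.yandexcloud.net/"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}

With this in place, terraform init fetches all registry.terraform.io providers from the mirror and falls back to direct installation for everything else.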
And now let's begin.
Intro
Here's a practical guide on how to manage Terraform Yandex Cloud provider configurations for different regions using Terragrunt – covering everything from basic setup to multi-region deployments with Managed Kubernetes.
☑️ What you'll need
- Terraform >= 1.9.7
- Terragrunt >= 0.67.16
- Yandex Cloud Provider >= 0.129.0
Setup steps
Let's look at how to use Terragrunt to dynamically create provider configs for Yandex Cloud. I'll break this down into digestible pieces:
1. Basic provider setup
First, we'll set up the base Yandex Cloud Terraform provider config in the root terragrunt.hcl. This generates a versions.tf file for each module and locks the yandex-cloud/yandex provider source.
locals {
  tf_providers = {
    yandex = ">= 0.129.0"
  }
}

generate "providers_versions" {
  path      = "versions.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_version = ">= 1.9.7"
  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = "${local.tf_providers.yandex}"
    }
  }
}
EOF
}

2. Region settings
The Yandex Cloud provider defaults to the RU region. For regions like the newly created KZ region (kz.yandexcloud.net), additional endpoints must be specified explicitly – otherwise terraform init will fail to resolve the correct API surface. We can specify them at the project level, for example in env.hcl, and generate providers.tf dynamically for each module:
locals {
  cloud_id    = "SOME_ID"
  folder_id   = "SOME_ID"
  sa_key_file = "${get_repo_root()}/key.json"

  endpoint         = "api.yandexcloud.kz:443" # Region-Specific
  storage_endpoint = "storage.yandexcloud.kz" # Region-Specific
}
generate "providers_configs" {
  path      = "providers.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "yandex" {
  service_account_key_file = "${local.sa_key_file}"
  cloud_id                 = "${local.cloud_id}"
  folder_id                = "${local.folder_id}"
  endpoint                 = "${local.endpoint}"
  storage_endpoint         = "${local.storage_endpoint}"
}
EOF
}

3. Additional providers
If you're working with Managed Kubernetes on Yandex Cloud alongside Kubectl and Helm in Terraform, you'll need these additional provider configs. The key challenge here is that Kubernetes provider configuration requires cluster outputs that don't exist yet at plan time – mock_outputs handle this by providing placeholder values for init, validate and plan commands. To wire everything together, pass cluster_id from a Terragrunt dependency into the called module:
dependencies {
  paths = ["path/to/your/mks"]
}

dependency "mks" {
  config_path = "path/to/your/mks"

  mock_outputs_allowed_terraform_commands = ["init", "validate", "plan", "destroy"]
  mock_outputs_merge_strategy_with_state  = "shallow"
  mock_outputs = {
    cluster_id = "cluster_id"
  }
}
terraform {
  source = "path/to/your/module"
}

inputs = {
  cluster_id = dependency.mks.outputs.cluster_id
  . . .
  <OTHER_INPUTS>
  . . .
}

Then use data resources in the module to configure providers:
variable "cluster_id" {
  type        = string
  default     = null
  description = "Managed Kubernetes Service cluster ID"
}

data "yandex_kubernetes_cluster" "this" {
  cluster_id = var.cluster_id
}

data "yandex_client_config" "this" {}

provider "kubernetes" {
  host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}

provider "helm" {
  kubernetes {
    host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
    cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
    token                  = data.yandex_client_config.this.iam_token
  }
}

provider "kubectl" {
  host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}

Notes
▫️ Using Terragrunt for configuration management:
Terragrunt simplifies configuration management for multiple environments by dynamically generating provider configurations via the generate block in the .hcl files. This setup allows for easy handling of multi-region deployments from a single configuration source.
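As a sketch of how a module picks those settings up, a child terragrunt.hcl can include the root configuration and read the region-specific locals from env.hcl. The file layout and the bare find_in_parent_folders() call (valid in Terragrunt 0.67.x) are assumptions, not fixed conventions:

# terragrunt.hcl in a module directory (illustrative)
include "root" {
  path = find_in_parent_folders()
}

locals {
  # Reads the nearest env.hcl so region-specific settings live in one place
  env = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

inputs = {
  folder_id = local.env.locals.folder_id
}

Switching a module between the RU and KZ regions then comes down to which env.hcl sits above it in the directory tree.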
▫️ Setup JSON key for Terragrunt:
To access Yandex Cloud resources, place the JSON key for the service account in the root directory of your project. Don't forget to add it to .gitignore. Alternatively, you can use a static access key.
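One way to create that key is with the yc CLI (this assumes the CLI is installed and a service account exists – the name terraform-sa is illustrative):

# Create an authorized key for the service account and write it to key.json
yc iam key create \
  --service-account-name terraform-sa \
  --output key.json

# Keep the key out of version control
echo "key.json" >> .gitignore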
▫️ Configuring the module:
Remember that even if you don't maintain the Terraform module yourself, you can almost always override its provider configuration with a generate block when calling the module from Terragrunt.
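For instance, a sketch of overriding a third-party module's provider settings from the calling terragrunt.hcl – the file name and endpoint value here are illustrative, not part of the module:

# Overrides the provider config shipped with the module at init time
generate "provider_override" {
  path      = "providers_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "yandex" {
  endpoint = "api.yandexcloud.kz:443"
}
EOF
}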
Conclusion
This setup gives you a clean way to manage Terraform configs across different Yandex Cloud regions. It handles authentication properly and works well whether you're just using basic cloud resources or diving into Kubernetes and Helm deployments.
