Security

General security

Terrakube security is based on organizations and groups.

All Dex connectors that implement the groups claim can be used inside Terrakube.

An organization can have one or multiple groups, and each group can have a different kind of access to manage the following options:

  • Module

    • Manage terraform modules inside an organization

  • VCS

    • Manage private connections to different VCS providers like Github, Bitbucket, Azure DevOps and Gitlab, and handle SSH keys

  • Template

    • Manage the custom flows written in Terrakube Configuration Language when running any job inside the platform

  • Workspaces

    • Manage the terraform workspaces to run remote terraform operations.

  • Providers

    • Manage the terraform providers available inside the platform

Adding a group to an organization grants read access to the content inside the organization, but to be able to manage any option like modules, workspaces, templates, providers or VCS, a Terrakube administrator will need to grant that access.

Administrator group

There is one special group inside Terrakube called TERRAKUBE_ADMIN. This is the only group that can create organizations and grant teams access to manage different organization features. You can also customize the group name if you want to use a different one, depending on which Dex connector you are using when running Terrakube.

Storage backend

Docker Compose + Traefik

WIP

Special thanks to SolomonHD, who created the Traefik setup example for Terrakube.

Original Reference:

https://gist.github.com/SolomonHD/b55be40146b7a53b8f26fe244f5be52e

PostgreSQL

By default the helm chart deploys a PostgreSQL database in your terrakube namespace, but if you want to customize that you can change the default deployment values.

To use PostgreSQL with your Terrakube deployment, create a terrakube.yaml file with the following content:

## Terrakube API properties
api:
  defaultDatabase: false
  loadSampleData: false
  properties:
    databaseType: "POSTGRESQL"
    databaseHostname: "server_name.postgres.database.com"
    databaseName: "database_name"
    databaseUser: "database_user"
    databasePassword: "database_password"
    databaseSslMode: "disable"
    databasePort: "5432"

loadSampleData will add some organizations, workspaces and modules to your database by default, if needed.

Now you can install Terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

PostgreSQL SSL mode can be enabled by adding the databaseSslMode parameter. By default the value is "disable", but it accepts the following values: disable, allow, prefer, require, verify-ca, verify-full. This feature is supported from Terrakube 2.15.0. Reference: https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGProperty.html#SSL_MODE
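For example, to require SSL on the database connection, the api properties shown above could be adjusted as follows (a minimal sketch; the hostname and credentials are placeholders you must replace with your own):

```yaml
## Terrakube API properties (sketch)
api:
  properties:
    databaseType: "POSTGRESQL"
    databaseHostname: "server_name.postgres.database.com"  # placeholder
    databaseName: "database_name"                          # placeholder
    databaseUser: "database_user"                          # placeholder
    databasePassword: "database_password"                  # placeholder
    # one of: disable, allow, prefer, require, verify-ca, verify-full
    databaseSslMode: "require"
    databasePort: "5432"
```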

Deployment

The easiest way to deploy Terrakube is using our Helm Chart; to learn more, please review the following sections.

You can deploy Terrakube using different authentication providers; for more information check the authentication section.

Amazon Cloud Storage

This guide assumes that you are using the minikube deployment, but the storage backend can be used in any real Kubernetes environment.

The first step is to create an S3 bucket with private access.

Create S3 bucket tutorial

Once the S3 bucket is created you will need the following:

  • access key

  • secret key

  • bucket name

  • region

Now that you have all the information, create a terrakube.yaml for your Terrakube deployment with the following content:

## Terrakube Storage
storage:
  defaultStorage: false
  aws:
    accessKey: "rqerqw"
    secretKey: "sadfasfdq"
    bucketName: "qerqw"
    region: "us-east-1"

Now you can install Terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

Helm Chart

You can deploy Terrakube to any Kubernetes cluster using the helm chart available in the following repository:

https://github.com/AzBuilder/terrakube-helm-chart

To use the repository do the following:

helm repo add terrakube-repo https://AzBuilder.github.io/terrakube-helm-chart
helm repo update

Google Cloud Storage

This guide assumes that you are using the minikube deployment, but the storage backend can be used in any real Kubernetes environment.

The first step is to create a Google storage bucket with private access.

Create storage bucket tutorial

Once the Google storage bucket is created you will need the following:

  • project id

  • bucket name

  • JSON GCP credentials file with access to the storage bucket

Now that you have all the information, create a terrakube.yaml for your Terrakube deployment with the following content:

## Terrakube Storage
storage:
  defaultStorage: false
  gcp:
    projectId: "sample project"
    bucketName: "sampledata"
    credentials: |
      {
        "type": "service_account",
        "project_id": "XXXXXXX",
        ......
      }

Now you can install Terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

H2

To use H2 with your Terrakube deployment, create a terrakube.yaml file with the following content:

## Terrakube API properties
api:
  defaultDatabase: false
  loadSampleData: true
  properties:
    databaseType: "H2"
    databaseHostname: ""
    databaseName: ""
    databaseUser: ""
    databasePassword: ""

The H2 database is just for testing; each time the API pod is restarted a new database will be created.

loadSampleData will add some organizations, workspaces and modules to your database by default; keep databaseHostname, databaseName, databaseUser and databasePassword empty.

Now you can install Terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

Ingress Configuration

The default ingress configuration for Terrakube can be customized using a terrakube.yaml like the following:

dex:
  config:
    issuer: http://terrakube-api.minikube.net/dex ## CHANGE THIS DOMAIN
    #.....
    #PUT ALL THE REST OF THE DEX CONFIG AND UPDATE THE URL WITH THE NEW DOMAIN
    #.....

## Ingress properties
ingress:
  useTls: false
  ui:
    enabled: true
    domain: "terrakube-ui.minikube.net" ## CHANGE THIS DOMAIN
    path: "/(.*)"
    pathType: "Prefix" 
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
  api:
    enabled: true
    domain: "terrakube-api.minikube.net" ## CHANGE THIS DOMAIN
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  registry:
    enabled: true
    domain: "terrakube-reg.minikube.net" ## Change to your customer domain
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  dex:
    enabled: true
    path: "/dex/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"

Update the custom domains and the other ingress configuration depending on your Kubernetes environment.

The above example uses a simple nginx ingress.

Now you can install Terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

Minio (S3 compatible)

This guide assumes that you are using the minikube deployment, but the storage backend can be used in any real Kubernetes environment.

The first step is to deploy a Minio instance inside minikube in the terrakube namespace.

MINIO helm deployment link

Create the file minio-setup.yaml that we can use to create the default user and bucket:

auth:
  rootUser: "admin"
  rootPassword: "superadmin"
defaultBuckets: "terrakube"
helm install --values minio-setup.yaml miniostorage bitnami/minio -n terrakube

Once the minio storage is installed, let's get the service name:

kubectl get svc -o wide -n terrakube

The service name for the minio storage should be "miniostorage"

Once minio is installed with a bucket you will need to get the following:

  • access key

  • secret key

  • bucket name

  • endpoint (http://miniostorage:9000)

Now that you have all the information, create a terrakube.yaml for your Terrakube deployment with the following content:

## Terrakube Storage
storage:
  defaultStorage: false
  minio:
    accessKey: "admin"
    secretKey: "superadmin"
    bucketName: "terrakube"
    endpoint: "http://miniostorage:9000"

Now you can install terrakube using the command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

Docker Compose

Local DNS entries

Update the /etc/hosts file adding the following entries:

127.0.0.1 terrakube-api
127.0.0.1 terrakube-ui
127.0.0.1 terrakube-executor
127.0.0.1 terrakube-dex
127.0.0.1 terrakube-registry

Running Terrakube Locally

git clone https://github.com/AzBuilder/terrakube.git
cd terrakube/docker-compose
docker-compose up -d

Terrakube will be available at the following URL:

  • http://terrakube-ui:3000

    • Username: [email protected]

    • Password: admin

Docker Compose with Open Telemetry Setup

You can also test Terrakube using the Open Telemetry example setup to see how the components work internally and check some internal logs.

git clone https://github.com/AzBuilder/terrakube.git
cd terrakube/telemetry-compose
docker-compose up -d

The setup uses jaegertracing/all-in-one.

Jaeger UI will be available at http://localhost:16686/

For more information, check the Open Telemetry setup documentation.

Architecture

This is the high-level architecture of the platform.

Component descriptions:

  • Terrakube API:

    • Expose a JSON:API or GraphQL API providing endpoints to handle:

      • Organizations.

      • Workspace API with the following support:

        • Terraform state history

        • Terraform output history

        • Terraform Variables (public/secrets)

        • Environment Variables (public/secrets)

        • Cron-like support to schedule job executions programmatically

      • Job.

        • Custom terraform flows based on Terrakube Configuration Language

      • Modules

      • Providers

      • Teams

      • Templates

  • Terrakube Jobs:

    • Automatic process that checks for any pending job operations in any workspace and triggers a custom logic flow based on the Terrakube Configuration Language

  • Terrakube Executor:

    • Service that executes the Terrakube job operations written in Terrakube Configuration Language and handles the terraform state and outputs using cloud storage providers like Azure Storage Account

  • Terrakube Open Registry:

    • This component exposes an open source private registry, protected by Dex, that you can use to host your company's private terraform modules or providers.

  • Cloud Storage:

    • Cloud storage to handle terraform state, outputs and terraform modules used by terraform CLI

  • RDBMS:

    • The platform can be used with any database supported by the Liquibase project.

  • Security:

    • All authentication and authorization is handled using Dex.

  • Terrakube CLI:

    • Go-based CLI that can communicate with the Terrakube API and execute operations for organizations, workspaces, jobs, templates, modules or providers

  • Terrakube UI:

    • React based frontend to handle all Terrakube Operations.

Terrakube can be installed in any Kubernetes cluster using any cloud provider. We provide a Helm Chart that can be found in the following repository: https://github.com/AzBuilder/terrakube-helm-chart

Introduction

Terrakube is an open source collaboration platform for running remote infrastructure-as-code operations using Terraform or OpenTofu that aims to be a complete replacement for closed source tools like Terraform Enterprise, Scalr or Env0.

The platform can be easily installed in any Kubernetes cluster, you can easily customize it using other open source tools already available for terraform (for example: terratag, infracost, terrascan, etc.) and you can integrate it with different authentication providers like Azure Active Directory, Google Cloud Identity, Github, Gitlab or any other provider supported by Dex.

Terrakube provides different features like the following:

  • Terraform module protocol:

    • This allows Terrakube to expose an open source terraform registry that is protected and private by default, using different Dex connectors for authentication with different providers.

  • Terraform provider protocol:

    • This allows Terrakube to expose an open source terraform registry where you can create your own private terraform providers or create a mirror of the terraform providers available in the public Hashicorp registry.

  • Handle your infrastructure as code using Organizations, like closed source tools such as Terraform Enterprise and Scalr.

  • Easily extended by default, integrating with many open source tools available today for the terraform CLI.

  • Support to run the platform using any RDBMS supported by Liquibase (mysql, postgres, oracle, sql server, etc.).

  • Plugin support to extend functionality using different cloud providers and open source tools via the Terrakube Configuration Language (TCL).

  • Easily extend the platform using common technologies like React, Java, Spring, Elide, Liquibase, Groovy, Bash and Go.

  • Native integration with the most popular VCS like:

    • Github.

    • Azure DevOps

    • Bitbucket

    • Gitlab

  • Support for SSH key (RSA and ED25519)

  • Handle Personal Access Token

  • Extend and create custom flows when running terraform remote operations, with custom logic like:

    • Approvals

    • Budget reviews

    • Security reviews

    • Custom logic that you can build with groovy and bash scripts

Organizations

Organizations are privately shared spaces for teams to collaborate on infrastructure. They are basically logical containers that you can use to model your company hierarchy, organize the workspaces and the modules in the private registry, and assign fine-grained permissions to your teams.

In this section:

Creating an Organization
Global Variables
Team Management
API Tokens
Templates
Tags

Overview

The workspace overview page provides a summary of the current workspace state.

Managing Workspace Tags

You can manage tags for the workspace in the overview page. The right panel lets you search existing tags or type a new tag name to add or remove tags from the workspace.

Docker Images

Images for Terrakube components can be found in the following links:

Component
Link

Default Images are based on Ubuntu Jammy (linux/amd64)

The docker images are generated using cloud native buildpacks with the maven plugin, more information can be found in these links:

Alpaquita Linux (Experimental)

From Terrakube 2.17.0 we generate three additional docker images for the API, Registry and Executor components that are based on Alpaquita Linux, which is designed and optimized for Spring Boot applications.

The Alpaquita Linux images for the Executor component do not include Git, so Terraform modules using remote Git sources like the following won't work:

Creating an Organization

Only users that belong to the Terrakube administrator group can create organizations. This group is defined in the Terrakube settings during deployment; for more details see the Administrator group section.

Click the organizations list in the main menu and then click the Create new organization button.

Provide the organization name and description and click the Create organization button.

You will then be redirected to the Organization Settings page where you can define your teams. This step is important so you can assign permissions to users; otherwise you won't be able to create workspaces and modules inside the organization.

In order to start using your organization you must add at least one team. Terrakube will create some by default, but you can add more based on your needs. See Creating a Team and Templates for further information.

Default Templates

Terrakube creates the following default templates:

CLI-Driven Templates

Terrakube uses job templates for all executions. Terrakube automatically creates the following templates, which are used for the terraform remote state backend operations. These templates are created in all organizations.

Terraform Plan/Apply Cli Template

Will be used when you execute terraform apply using the terraform CLI.

Terraform Plan/Destroy Cli Templates

Will be used when you execute terraform destroy using the terraform CLI.

These templates can be updated if needed, but in order for the terraform remote state backend to work properly, the step numbers and the template names should not be changed, so be careful if you delete or modify these templates.
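For reference, a CLI-driven workspace connects terraform to these templates through a cloud block like the following sketch (the hostname, organization and workspace names are placeholders; the same pattern appears in the dynamic credentials example later in this documentation):

```hcl
terraform {
  cloud {
    hostname     = "terrakube-api.mydomain.com" # placeholder Terrakube API hostname
    organization = "my-organization"            # placeholder organization name

    workspaces {
      name = "my-workspace" # placeholder workspace name
    }
  }
}
```

With this configuration, running terraform plan or terraform apply from the CLI triggers the "Terraform Plan/Apply Cli Template", and terraform destroy triggers the "Terraform Plan/Destroy Cli Templates".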

Azure Dynamic Provider Credentials

Requirements

The dynamic provider credential setup in Azure can be done with the Terraform code available in the following link:

The code will also create a sample workspace with all the required environment variables that can be used to test the functionality using the CLI-driven workflow.

Make sure to mount your public and private key to the API container as explained in the documentation.

Make sure the private key is in "pkcs8" format.

Validate that the following Terrakube API endpoints are working:

Set terraform variables using: "variables.auto.tfvars"

To generate the API token, check the API Tokens documentation.

Run terraform apply to create all the federated credential setup in Azure and a sample workspace in Terrakube for testing.

To test, the following terraform code can be used:

Running Example:

When running a job, Terrakube will authenticate to Azure correctly without any credentials stored inside the workspace.

UI Templates

Terrakube allows you to customize the UI for each step inside your templates using standard HTML. So you can render any kind of content extracted from your Job execution in the Terrakube UI.

For example, you can present the costs from Infracost in a friendly way:

Or present a table with the OPA policies:

In order to use UI templates you will need to save the HTML for each template step using the Persistent Context extension. Terrakube expects the UI templates in the following format.

In the above example, the "100" property in the JSON refers to the step number inside your template. In order to save this value from the template you can use the Context extension. For example:

Resource Details

Description

The Resource Details action is designed to show detailed information about a specific resource within the workspace. By using the context of the current state, this action provides a detailed view about the properties and dependencies in a tab within the resource drawer.

Display Criteria

By default this action is enabled and will appear for all resources. If you want to display this action only for certain resources, please check the display criteria documentation.

Usage

  • Navigate to the Workspace Overview or the Visual State and click a resource name.

  • In the Resource Drawer, you will see a new tab Details with the resource attributes and dependencies.

Proxy Configuration

When Terrakube needs to run behind a corporate proxy, the following environment variable can be used in each container:

When using the official helm chart, the environment variable can be set like the following:

Open in Azure Portal

Description

The Open In Azure Portal action is designed to provide quick access to the Azure portal for a specific resource. By using the context of the current state, this action constructs the appropriate URL and presents it as a clickable button.

Display Criteria

By default, this action is disabled; when enabled it will appear for all the azurerm resources. If you want to display this action only for certain resources, please check the display criteria documentation.

Usage

  • Navigate to the Workspace Overview or the Visual State and click a resource name.

  • In the Resource Drawer, click the Open in Azure button.

  • You will be able to see the Azure portal for that resource.

CI/CD Integration

If you need to manage your workspaces and execute jobs from an external CI/CD tool you can use the Terrakube API; however, a simpler approach is to use the existing integrations:

CI/CD tool
Integration

Terrakube API

https://hub.docker.com/r/azbuilder/api-server/tags

Terrakube Registry

https://hub.docker.com/r/azbuilder/open-registry/tags

Terrakube Executor

https://hub.docker.com/r/azbuilder/executor/tags

Terrakube UI

https://hub.docker.com/r/azbuilder/terrakube-ui/tags

docker pull azbuilder/api-server:2.17.0-alpaquita
docker pull azbuilder/open-registry:2.17.0-alpaquita
docker pull azbuilder/executor:2.17.0-alpaquita
module "consul" {
  source = "git@github.com:hashicorp/example.git"
}
https://docs.spring.io/spring-boot/docs/current/reference/html/container-images.html#container-images.buildpacks
https://docs.spring.io/spring-boot/docs/3.1.5/maven-plugin/reference/htmlsingle/#build-image
JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=your.proxy.net -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=XXXXX -Dhttps.proxyHost=your.proxy.net -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=XXXXX"
api:
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Dhttp.proxyHost=your.proxy.net -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=XXXXX -Dhttps.proxyHost=your.proxy.net -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=XXXXX"
    
registry:
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Dhttp.proxyHost=your.proxy.net -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=XXXXX -Dhttps.proxyHost=your.proxy.net -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=XXXXX"

executor:
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Dhttp.proxyHost=your.proxy.net -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=XXXXX -Dhttps.proxyHost=your.proxy.net -Dhttps.proxyPort=8080 -Dhttps.nonProxyHosts=XXXXX"

Github

Github Action

Bitbucket

Bitbucket Pipe

Azure DevOps

Azure DevOps extension is in development


User Authentication (DEX)

To authenticate users, Terrakube implements Dex, so you can authenticate using different providers through Dex connectors like the following:

  • Azure Active Directory

  • Google Cloud Identity

  • Amazon Cognito

  • Github Authentication

  • Gitlab Authentication

  • OIDC

  • LDAP

  • Keycloak

  • etc.

Any Dex connector that implements the "groups" scope should work without any issue.

The Terrakube helm chart uses Dex as a dependency, so you can quickly set it up using any existing Dex configuration.

Make sure to update the Dex configuration when deploying Terrakube in a real Kubernetes environment; by default it uses a very basic OpenLDAP with some sample data. To disable it, update security.useOpenLDAP in your terrakube.yaml.

To customize the Dex setup, just create a simple terrakube.yaml and update the configuration like the following example:

# UPDATE THE DEX CLIENT ID AND SCOPE 
security:
  dexClientId: "example-app"
  dexClientScope: "email openid profile offline_access groups"
  useOpenLDAP: false
  
# UPDATE THE DEX CONFIG
dex:
  config:
    issuer: http://terrakube-api.minikube.net/dex

    storage:
      type: memory
    web:
      http: 0.0.0.0:5556
      allowedOrigins: ['*']
      skipApprovalScreen: true
    oauth2:
      responseTypes: ["code", "token", "id_token"] 

    connectors:
    - type: ldap
      name: OpenLDAP
      id: ldap
      config:
        # The following configurations seem to work with OpenLDAP:
        #
        # 1) Plain LDAP, without TLS:
        host: terrakube-openldap-service:389
        insecureNoSSL: true
        #
        # 2) LDAPS without certificate validation:
        #host: localhost:636
        #insecureNoSSL: false
        #insecureSkipVerify: true
        #
        # 3) LDAPS with certificate validation:
        #host: YOUR-HOSTNAME:636
        #insecureNoSSL: false
        #insecureSkipVerify: false
        #rootCAData: 'CERT'
        # ...where CERT="$( base64 -w 0 your-cert.crt )"

        # This would normally be a read-only user.
        bindDN: cn=admin,dc=example,dc=org
        bindPW: admin

        usernamePrompt: Email Address

        userSearch:
          baseDN: ou=People,dc=example,dc=org
          filter: "(objectClass=person)"
          username: mail
          # "DN" (case sensitive) is a special attribute name. It indicates that
          # this value should be taken from the entity's DN not an attribute on
          # the entity.
          idAttr: DN
          emailAttr: mail
          nameAttr: cn

        groupSearch:
          baseDN: ou=Groups,dc=example,dc=org
          filter: "(objectClass=groupOfNames)"

          userMatchers:
            # A user is a member of a group when their DN matches
            # the value of a "member" attribute on the group entity.
          - userAttr: DN
            groupAttr: member

          # The group name should be the "cn" value.
          nameAttr: cn

    staticClients:
    - id: example-app
      redirectURIs:
      - 'http://terrakube-ui.minikube.net'
      - '/device/callback'
      - 'http://localhost:10000/login'
      - 'http://localhost:10001/login'
      name: 'example-app'
      public: true

Dex configuration examples can be found here.

Import Templates

Templates can become big over time as steps are added to execute additional business logic.

Example before template import:

flow:
- type: "terraformPlan"
  step: 100
  commands:
    - runtime: "GROOVY"
      priority: 100
      before: true
      script: |
        import TerraTag
        new TerraTag().loadTool(
          "$workingDirectory",
          "$bashToolsDirectory",
          "0.1.30")
        "Terratag download completed"
    - runtime: "BASH"
      priority: 200
      before: true
      script: |
        cd $workingDirectory
        terratag -tags="{\"environment_id\": \"development\"}"
- type: "terraformApply"
  step: 300

Terrakube supports keeping the template logic in an external git repository, like the following:

This feature is only available from Terrakube 2.11.0

flow:
  - type: "terraformPlan"
    step: 100
    importComands:
      repository: "https://github.com/AzBuilder/terrakube-extensions"
      folder: "templates/terratag"
      branch: "import-template"
  - type: "terraformApply"
    step: 200

Using import templates becomes simple, and we can reuse the logic across several Terrakube organizations.

The sample template can be found here.

The import commands use a public repository; we will add support for private repositories in the future.

Share Workspace State

Terrakube supports sharing the terraform state between workspaces using the following data block.

data "terraform_remote_state" "remote_creation_time" {
  backend = "remote"
  config = {
    organization = "simple"
    hostname = "8080-azbuilder-terrakube-vg8s9w8fhaj.ws-us102.gitpod.io"
    workspaces = {
      name = "simple_tag1"
    }
  }
}

For example, it can be used in the following way:

data "terraform_remote_state" "remote_creation_time" {
  backend = "remote"
  config = {
    organization = "simple"
    hostname = "8080-azbuilder-terrakube-vg8s9w8fhaj.ws-us102.gitpod.io"
    workspaces = {
      name = "simple_tag1"
    }
  }
}

resource "null_resource" "previous" {}

resource "time_sleep" "wait_30_seconds" {
  depends_on = [null_resource.previous]

  create_duration = data.terraform_remote_state.remote_creation_time.outputs.creation_time
}

resource "null_resource" "next" {
  depends_on = [time_sleep.wait_30_seconds]
}

This feature is supported from Terrakube 2.15.0

Built-in Actions

The following actions are added by default in Terrakube. However, not all actions are enabled by default. Please refer to the documentation below if you are interested in activating these actions.

General

  • Open Documentation: Adds a button to quickly navigate to the Terraform registry documentation for a specific resource.

  • Resource Details: Adds a new panel to visualize the state attributes and dependencies for a resource.

Azure

  • Open in Azure Portal: Adds a button to quickly navigate to the resource in the Azure Portal.

  • Restart Azure VM: Adds a button to restart the Azure VM directly from the workspace overview.

Monitor

  • Azure Monitor: Adds a panel to visualize the metrics for the resource.

AI

  • Open AI: Adds a button to ask questions or get recommendations using OpenAI based on the workspace data.

flow:
- type: "terraformPlan"
  name: "Terraform Plan from Terraform CLI"
  step: 100
- type: "approval"
  name: "Approve Plan from Terraform CLI"
  step: 150
  team: "TERRAFORM_CLI"
- type: "terraformApply"
  name: "Terraform Apply from Terraform CLI"
  step: 200
flow:
- type: "terraformPlanDestroy"
  name: "Terraform Plan Destroy from Terraform CLI"
  step: 100
- type: "approval"
  name: "Approve Plan from Terraform CLI"
  step: 150
  team: "TERRAFORM_CLI"
- type: "terraformApply"
  name: "Terraform Apply from Terraform CLI"
  step: 200
terrakube_token                          = "TERRAKUBE_PERSONAL_ACCESS_TOKEN"
terrakube_api_hostname                   = "TERRAKUBE-API.MYCLUSTER.COM"
terrakube_federated_credentials_audience = "api://AzureADTokenExchange"
terrakube_organization_name              = "simple"
terrakube_workspace_name                 = "dynamic-azure"
terraform {

  cloud {
    organization = "terrakube_organization_name"
    hostname = "terrakube-api.mydomain.com"

    workspaces {
      name = "terrakube_workspace_name"
    }
  }
}

provider "azurerm" {
  features {}
}

 resource "azurerm_resource_group" "example" {
  name     = "randomstring-aejthtyu"
  location = "East US 2"
}

https://github.com/AzBuilder/terrakube/tree/main/dynamic-credential-setup/azure
here
https://terrakube-api.mydomain.com/.well-known/jwks
https://terrakube-api.mydomain.com/.well-known/openid-configuration
here
terrakubeUI :{
   "100" : "<span>Some Content</span>"
}
flow:
  - type: "terraformPlan"
    step: 100
    name: "Plan"
    commands:
      - runtime: "GROOVY"
        priority: 300
        after: true
        script: |
          import Context
          
          def uiTemplate = '{"100":"<span>Simple Text</span>"}'
          new Context("$terrakubeApi", "$terrakubeToken", "$jobId", "$workingDirectory").saveProperty("terrakubeUI", uiTemplate)

          "Save context completed..."
  - type: "terraformApply"
    step: 200
    name: "Apply"
Persistent Context
default templates
Creating a Team
Templates
Administrator group
filtering actions
Workspace Overview
Visual State
display criteria

VCS Providers

Terrakube empowers collaboration between different teams in your organization. To achieve this, you can integrate Terrakube with your version control system (VCS) provider. Although Terrakube can be used with public Git repositories or with private Git repositories using SSH, connecting to your Terraform code through a VCS provider is the preferred method. This allows for a more streamlined and secure workflow, as well as easier management of access control and permissions.

Terrakube supports the following VCS providers:

Webhooks

Terrakube uses webhooks to monitor new commits. This feature is not available for SSH and Azure DevOps.

  • When new commits are added to a branch, Terrakube workspaces based on that branch will automatically initiate a Terraform job. Terrakube will use the "Plan and apply" template by default, but you can specify a different Template during the Workspace creation.

  • When you specify a directory in the Workspace, Terrakube will run the job only if a file in that directory changes.

SSH

Terrakube can use SSH keys to authenticate to your private repositories. The SSH keys can be uploaded inside the settings menu.

To handle SSH keys inside Terrakube, your team should have "Manage VCS settings" access.

Terrakube supports RSA and ED25519 keys.

SSH keys can be used to authenticate to private repositories where you can store modules or workspaces

Once SSH keys are added inside your organization you can use them like the following example:

Modules:

Workspace:

When using SSH keys, make sure to use your repository URL in the SSH format. For GitHub it looks like git@github.com:AzBuilder/terrakube-docker-compose.git
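As an illustration of that format, the mapping from an HTTPS clone URL to its SSH equivalent can be sketched as follows (the helper function is ours, not part of Terrakube):

```javascript
// Illustrative helper: convert an HTTPS clone URL to the SSH format
// expected when authenticating with an SSH key.
function httpsToSshUrl(httpsUrl) {
  const { host, pathname } = new URL(httpsUrl);
  // "https://github.com/Org/repo.git" -> "git@github.com:Org/repo.git"
  return `git@${host}:${pathname.replace(/^\//, "")}`;
}

console.log(httpsToSshUrl("https://github.com/AzBuilder/terrakube-docker-compose.git"));
// git@github.com:AzBuilder/terrakube-docker-compose.git
```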

SSH Key Injection

The selected SSH key will be used to clone the workspace information at runtime when running the job inside Terrakube, but it will also be injected into the job execution so it can be used inside your Terraform code.

For example, if you are using a module with a Git SSH source like the following:

module "test" {
  source = "git@github.com:alfespa17/private-module.git"
}

When running the job, Terraform will internally use the selected SSH key to clone the necessary module dependencies, as shown in the image below:

When using a VCS connection, you can select which SSH key should be injected when downloading modules in the workspace settings:

This allows using modules with the Git SSH source format inside a workspace with a VCS connection, like the following:

module "test" {
  source = "git@github.com:alfespa17/simple-terraform.git"
}

Azure Monitor

Description

The Azure Monitor Metrics action allows users to fetch and visualize metrics from Azure Monitor for a specified resource.

Display Criteria

By default, this Action is disabled; when enabled, it will appear for all azurerm resources. If you want to display this action only for certain resources, please check display criteria.

Setup

This action requires the following variables as Workspace Variables or Global Variables in the Workspace Organization:

  • ARM_CLIENT_ID: The Azure Client ID used for authentication.

  • ARM_CLIENT_SECRET: The Azure Client Secret used for authentication.

  • ARM_TENANT_ID: The Azure Tenant ID associated with your subscription.

  • ARM_SUBSCRIPTION_ID: The Azure Subscription ID where the VM is located.

The Client ID should have at least Monitoring Reader or Reader access on the resource group or subscription.

Usage

  • Navigate to the Workspace Overview or the Visual State and click on a resource name.

  • Click the Monitor tab to view metrics for the selected resource.

  • You can view additional details for the metrics using the tooltip.

  • You can see more details for each chart by navigating through it.

Open Documentation

Description

The Open Documentation action is designed to provide quick access to the Terraform registry documentation for a specific provider and resource type. By using the context of the current state, this action constructs the appropriate URL and presents it as a clickable button.

Display Criteria

By default, this Action is enabled and will appear for all resources. If you want to display this action only for certain resources, please check display criteria.

Usage

  • Navigate to the Workspace Overview or the Visual State and click a resource name.

Workspace Overview
Visual State
  • In the Resource Drawer, Click the Open documentation button

  • You will be able to see the documentation for that resource in the Terraform registry

Restart Azure VM

Description

The Restart VM action is designed to restart a specific virtual machine in Azure. By using the context of the current state, this action fetches an Azure access token and issues a restart command to the specified VM. The action ensures that the VM is restarted successfully and provides feedback on the process.

Display Criteria

By default, this Action is disabled; when enabled, it will appear for all resources of type azurerm_virtual_machine. If you want to display this action only for certain resources, please check display criteria.

Setup

This action requires the following variables as Workspace Variables or Global Variables in the Workspace Organization:

  • ARM_CLIENT_ID: The Azure Client ID used for authentication.

  • ARM_CLIENT_SECRET: The Azure Client Secret used for authentication.

  • ARM_TENANT_ID: The Azure Tenant ID associated with your subscription.

  • ARM_SUBSCRIPTION_ID: The Azure Subscription ID where the VM is located.

The Client ID should have at least Virtual Machine Contributor access on the VM or resource group.

Usage

  • Navigate to the Workspace Overview or the Visual State and click on a resource name.

  • In the Resource Drawer, click the "Restart" button.

  • The VM will be restarted, and a success or error message will be displayed.

Action Types

Action Types define where an action will appear in Terrakube. Here is the list of supported types:

Workspace/Action

The action will appear on the Workspace page, next to the Lock button. Actions are not limited to buttons, but this section is particularly suitable for adding various kinds of buttons. Use this type for actions that involve interaction with the Workspace.

Workspace/ResourceDrawer/Action

The action will appear for each resource when you click on a resource using the Overview Workspace page or the Visual State Diagram. Use this when an action applies to specific resources, such as restarting a VM or any actions that involve interaction with the resource.

Workspace/ResourceDrawer/Tab

The action will appear for each resource when you click on a resource using the Overview Workspace page or the Visual State Diagram. Use this when you want to display additional data for the resource, such as activity history, costs, metrics, and more.

For this type, the Label property will be used as the name of the Tab pane.

Open AI

Security Warning

The current action shares the state with the OpenAI API, so it's not recommended for sensitive workspaces. Future versions will provide a mask method before sending the data.

Description

The Ask Workspace action allows users to interact with OpenAI's GPT-4o model to ask questions and get assistance related to their Terraform Workspace. This action provides a chat interface where users can ask questions about their Workspace's Terraform state and receive helpful responses from the AI.

Display Criteria

By default, this Action is disabled; when enabled, it will appear for all resources. If you want to display this action only for certain resources, please check display criteria.

Setup

This action requires the following variables as Workspace Variables or Global Variables in the Workspace Organization:

  • OPENAI_API_KEY: The API key to authenticate requests to the OpenAI API. Please check this guide on how to get an OpenAI API key.

Usage

  • Navigate to the Workspace and click on the Ask Workspace button.

  • Enter your questions and click the Send button or press the Enter key.

Workspaces

When working with Terraform at the enterprise level, you need to organize your infrastructure into different collections. Terrakube manages infrastructure collections with workspaces. A workspace contains everything Terraform needs to manage a given collection of infrastructure, so you can easily organize all your resources based on your requirements.

For example, you can create a workspace for a dev environment and a different workspace for production. Or you can separate your workspaces based on resource types, creating one workspace for all your SQL databases and another workspace for all your VMs.

You can create unlimited workspaces inside each Terrakube Organization. In this section:

Terrakube CLI

The CLI is not compatible with Terrakube 2.X; it needs to be updated, and there is an open issue for this feature.

terrakube cli is Terrakube on the command line. It brings organizations, workspaces and other Terrakube concepts to the terminal.

In this section:

Keycloak

Requirements

  • A working Keycloak server with a configured realm.

Steps for configuring Terrakube with Keycloak

For configuring Terrakube with Keycloak using Dex the following steps are involved:

  • Keycloak client creation and configuration for Terrakube.

  • Configure Terrakube so it works with Keycloak.

  • Testing the configuration.

Keycloak client creation

Log in to Keycloak with admin credentials and select Configure > Clients > Create. Define the ID as TerrakubeClient and select openid-connect as its protocol. For Root URL use Terrakube's GUI URL. Then click Save.

A form for finishing the Terrakube client configuration will be displayed. These are the fields that must be filled in:

  • Name: in this example it has the value of TerrakubeClient.

  • Client Protocol: it must be set to openid-connect.

  • Access Type: set it to confidential.

  • Root URL: Terrakube's UI URL.

  • Valid Redirect URIs: set it to *

Then click Save.

Notice that, since we set Access Type to confidential, we have an extra tab titled Credentials. Click the Credentials tab and copy the Secret value. It will be used later when we configure the Terrakube's connector.

Depending on your configuration, Terrakube might expect different client scopes, such as openid, profile, email, groups, etc. You can see if they are assigned to TerrakubeClient by clicking on the Client Scopes tab (in TerrakubeClient).

If they are not assigned, you can assign them by selecting the scopes and clicking on the Add selected button.

If some scope does not exist, you must create it before assigning it to the client. You do this by clicking on Client Scopes, then click on the Create button. This will lead you to a form where you can create the new scope. Then, you can assign it to the client.

Terrakube configuration

We have to configure a Dex connector to use with Keycloak. Add a new connector in Dex's configuration, so it looks like this:

This is the simplest configuration we can use. Some notes about these fields:

  • type: must be oidc (OpenID Connect).

  • name: this is the string shown in the connectors list in Terrakube GUI.

  • issuer: it refers to the Keycloak server. It has the form [http|https]://<KEYCLOAK_SERVER>/auth/realms/<REALM_NAME>

  • clientID: refers to the Client ID configured in Keycloak. They must match.

  • clientSecret: must be the secret from the Credentials tab in Keycloak.

  • redirectURI: has the form [http|https]://<TERRAKUBE_API>/dex/callback. Notice this is the Terrakube API URL and not the UI URL.

  • insecureEnableGroups: this is required to enable groups claims. This way groups defined in Keycloak are brought by Terrakube's Dex connector.

{% hint style="info" %} If your users do not have a name set (First Name field in Keycloak), you must tell oidc which attribute to use as the user's name. You can do this by setting userNameKey:

{% endhint %}

Testing Terrakube authentication

When we click on Terrakube's login button we are given the choice to select the connector we want to use:

Click on Log in with TerrakubeClient. You will be redirected to a login form in Keycloak:

After login, you are redirected back to Terrakube and a dialog asking you to grant access is shown.

Click on Grant Access. That is the last step. If everything went right, you should now be logged in to Terrakube.

Template Scheduling in Jobs

When we deploy some infrastructure, we can schedule another template to execute at a specific time, for example to destroy the infrastructure or resize it.

This example shows the basic logic to accomplish this, but it does not create a real VM; it just runs some scripts that show the injected Terraform variables when running the job.

Example

Define the sizes as global Terraform variables, for example SIZE_SMALL and SIZE_BIG. These Terraform variables will be injected dynamically into our future templates "schedule1" and "schedule2".

We define a job template that creates the infrastructure using the small size and creates two schedules at the end of the template.

Create Virtual Machine (Example)

This will create our "virtual machine" using the small size; afterwards it will set two additional schedules to run every 5 and 10 minutes to resize it.

We will have 3 templates: one to create the virtual machine and two to resize it.

Once the workspace is executed and the infrastructure is created, two automatic schedules will be defined.

Different jobs will be executed.

Resize Template (schedule1)

Inject the new size using global terraform variables

Resize Template (schedule2)

Inject the new size using global terraform variables

Azure DevOps

For using repositories from Azure Devops with Terrakube workspaces and modules you will need to follow these steps:

Manage VCS Providers permission is required to perform this action, please check Team Management for more info.

Navigate to the desired organization and click the Settings button, then on the left menu select VCS Providers

If you prefer, you can add a new VCS Provider directly from the Create workspace or Create Module screen.

Click the Azure Devops button

In the next screen click the link to register a new OAuth Application in Azure Devops

In the Azure Devops page, complete the required fields and click Create application

Field
Description

You can complete the fields using the information suggested by terrakube in the VCS provider screen

In the next screen, copy the App ID and Client Secret

Go back to Terrakube to enter the information you copied from the previous step. Then, click the Connect and Continue button.

You will see an Azure Devops window, click the Accept button to complete the connection

Finally, if the connection was established successfully, you will be redirected to the VCS provider’s page in your organization. You should see the connection status with the date and the user that created the connection.

And now, you will be able to use the connection in your workspaces and modules:

GCP Dynamic Provider Credentials

Requirements

The dynamic provider credential setup in GCP can be done with the Terraform code available in the following link:

The code will also create a sample workspace with all the required environment variables that can be used to test the functionality using the CLI-driven workflow.

Make sure to mount your public and private key to the API container as explained

Make sure the private key is in "pkcs8" format

Validate that the following Terrakube API endpoints are working:

Set terraform variables using: "variables.auto.tfvars"

To generate the API token check

Run terraform apply to create the federated credential setup in GCP and a sample workspace in Terrakube for testing

To test, the following Terraform code can be used:

Running Example

Action Context

The context object contains data related to the Workspace that you can use inside the action. The available properties depend on the action type.

Tip

Inside your action you can use console.log(context) to quickly explore the available properties for the context

context.workspace

Contains the workspace information in the Terrakube API format. See the API docs for more details on the properties available.

Examples

  • context.workspace.id: Id of the workspace

  • context.workspace.attributes.name: Name of the workspace

  • context.workspace.attributes.terraformVersion: Terraform version

Available for action types

  • workspace/action

  • workspace/resourcedrawer/action

  • workspace/resourcedrawer/tab
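As a quick sketch, an action could read these properties like this (the context object below is a hand-built stand-in shaped like the fields described above, not a real API response):

```javascript
// Hand-built stand-in for the context object an action receives.
const context = {
  workspace: {
    id: "d9b58bd3-f3fc-4056-a026-1163297e80a8",
    attributes: { name: "sample-workspace", terraformVersion: "1.7.0" },
  },
};

// Inside a real action you can run console.log(context) to explore it.
const { id, attributes } = context.workspace;
const summary = `Workspace ${attributes.name} (${id}) uses Terraform ${attributes.terraformVersion}`;
console.log(summary);
```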

context.settings

Contains the settings that you configured in the display criteria.

Examples

  • context.settings.Repository: the value of repository that you set for the setting. Example:

Available for action types

  • workspace/action

  • workspace/resourcedrawer/action

  • workspace/resourcedrawer/tab

context.state

For workspace/action this property contains the full Terraform or OpenTofu state. For workspace/resourcedrawer/action and workspace/resourcedrawer/tab it contains only the state section for that resource.

Available for action types

  • workspace/action

  • workspace/resourcedrawer/action

  • workspace/resourcedrawer/tab

context.apiUrl

Contains the Terrakube API URL. Useful if you want to use the Action Proxy or execute a call to the Terrakube API.

Available for action types

  • workspace/action

  • workspace/resourcedrawer/action

  • workspace/resourcedrawer/tab

Github
Github Enterprise
GitLab
Gitlab EE and CE
Bitbucket
Azure DevOps
SSH
Getting started
Installation
Commands
Creating Workspaces
Terraform State
Variables
Workspace scheduler
      connectors: 
        - type: oidc
          id: TerrakubeClient
          name: TerrakubeClient
          config:
            issuer: "[http|https]://<KEYCLOAK_SERVER>/auth/realms/<MY_REALM>"
            clientID: "TerrakubeClient"
            clientSecret: "<TerrakubeClient's secret>"
            redirectURI: "[http|https]://<TERRAKUBE-API URL>/dex/callback"
            insecureEnableGroups: true
config:
  .....
  userNameKey: email
flow:
  - type: "customScripts"
    step: 100
    inputsTerraform:
      SIZE: $SIZE_SMALL
    commands:
      - runtime: "BASH"
        priority: 100
        before: true
        script: |
          echo "SIZE : $SIZE"
  - type: "scheduleTemplates"
    step: 200
    templates:
      - name: "schedule1"
        schedule: "0 0/5 * ? * * *"
      - name: "schedule2"
        schedule: "0 0/10 * ? * * *"
flow:
  - type: "customScripts"
    step: 100
    inputsTerraform:
      SIZE: $SIZE_SMALL
    commands:
      - runtime: "BASH"
        priority: 100
        before: true
        script: |
          echo "SIZE : $SIZE"
flow:
  - type: "customScripts"
    step: 100
    inputsTerraform:
      SIZE: $SIZE_BIG
    commands:
      - runtime: "BASH"
        priority: 100
        before: true
        script: |
          echo "SIZE : $SIZE"

Company Name

Your company name.

Application name

The name of your application or you can use the Organization name

Application website

Your application or website url

Callback URL

Copy the Callback URL from the Terrakube URL

Authorized scopes (checkboxes)

Only the following should be checked: Code (read) Code (status)

Team Management
Create workspace
register a new OAuth Application
terrakube_token = "TERRAKUBE_PERSONAL_ACCESS_TOKEN"
terrakube_hostname = "terrakube-api.mydomain.com"
terrakube_organization_name = "simple"
terrakube_workspace_name = "dynamic-workspace"
gcp_project_id = "my-gcp-project"
terraform {

  cloud {
    organization = "terrakube_organization_name"
    hostname = "terrakube-api.mydomain.com"

    workspaces {
      name = "terrakube_workspace_name"
    }
  }
}

provider "google" {
  project     = "my-gcp-project"
  region      = "us-central1"
  zone        = "us-central1-c"
}

resource "google_storage_bucket" "auto-expire" {
  name          = "mysuperbukcetname"
  location      = "US"
  force_destroy = true

  public_access_prevention = "enforced"
}
https://github.com/AzBuilder/terrakube/tree/main/dynamic-credential-setup/gcp
here
https://terrakube-api.mydomain.com/.well-known/jwks
https://terrakube-api.mydomain.com/.well-known/openid-configuration
here
Action Type
docs
Action Proxy

Github

For using repositories from GitHub.com with Terrakube workspaces and modules you will need to follow these steps:

Manage VCS Providers permission is required to perform this action, please check Team Management for more info.

Navigate to the desired organization and click the Settings button, then on the left menu select VCS Providers

If you prefer, you can add a new VCS Provider directly from the Create workspace or Create Module screen.

Click the Github button and then click the Github Cloud option.

In the next screen click the link to register a new OAuth Application in Github

In the Github page, complete the required fields and click Register application

Field
Description

Application Name

Your application name, for example you can use your organization name.

Homepage URL

The URL for your application or website.

Application Description

Any description of your choice

Authorization Callback URL

Copy the callback url from the Terrakube UI

You can complete the fields using the information suggested by terrakube in the VCS provider screen

Next, generate a new client secret

Copy the Client Id and Client Secret from Github and go back to Terrakube to complete the required information. Then, click the Connect and Continue button

You will see the Github page, click the Authorize button to complete the connection

Finally, if the connection was established successfully, you will be redirected to the VCS provider’s page in your organization. You should see the connection status with the date and the user that created the connection.

And now, you will be able to use the connection in your workspaces and modules:

Bitbucket

Work in progress

For using repositories from Bitbucket.com with Terrakube workspaces and modules you will need to follow these steps:

Manage VCS Providers permission is required to perform this action, please check Team Management for more info.

Navigate to the desired organization and click the Settings button, then on the left menu select VCS Providers

If you prefer, you can add a new VCS Provider directly from the Create workspace or Create Module screen.

Click the Bitbucket button and then click the Bitbucket Cloud option

Open Bitbucket Cloud and log in as whichever account you want Terrakube to use and click the settings button in your workspace

Click the OAuth consumers menu and then click the Add consumer button

Complete the required fields and click Save button

Field
Description

Name

Your application name, for example you can use your organization name.

Description

Any description of your choice

Redirect URI

Copy the Redirect URI from the Terrakube UI

This is a private consumer (checkbox)

Checked

Permissions (checkboxes)

The following should be checked:

Account: Write

Repositories: Admin

Pull requests: Write

Webhooks: Read and write

You can complete the fields using the information suggested by terrakube in the VCS provider screen

In the next screen, copy the Key and Secret

Go back to Terrakube to enter the information you copied from the previous step. Then, click the Connect and Continue button.

You will see a Bitbucket window, click the Grant Access button to complete the connection

Finally, if the connection was established successfully, you will be redirected to the VCS provider’s page in your organization. You should see the connection status with the date and the user that created the connection.

And now, you will be able to use the connection in your workspaces and modules:

Tags

You can use tags to categorize, sort, and even filter workspaces based on the tags.

You can manage tags for the organization in the Settings panel. This section lets you create and remove tags. Alternatively, you can add existing tags or create new ones in the Workspace overview page.

Creating a Tag

Once you are in the desired organization, click the Settings button, then in the left menu select the Tags option and click the Add tag button

In the popup, provide the required values. Use the below table as reference:

Field
Description

Name

Unique tag name

Finally click the Save tag button and the tag will be created

You will see the new tag in the list. And now you can use the tag in the workspaces within the organization

Edit a Tag

Click the Edit button next to the tag you want to edit.

Change the fields you need and click the Save tag button

Delete a Tag

Click the Delete button next to the tag you want to delete, and then click the Yes button to confirm the deletion. Please take into consideration that the deletion is irreversible and the tag will be removed from all the workspaces using it.

Open Telemetry

Terrakube components support Open Telemetry by default to enable effective observability.

To enable telemetry inside the Terrakube components please add the following environment variable:

Terrakube API, Registry and Executor support this setup from version 2.12.0.

UI support will be added in the future.

Once the Open Telemetry agent is enabled, we can use other environment variables to set up monitoring for our application. For example, to enable Jaeger we could add the following additional environment variables:

Now we can go to the Jaeger UI to see if everything is working as expected.

There are several different configuration options, for example:

Jaeger exporter

The Jaeger exporter. This exporter uses gRPC for its communications protocol.

System property
Environment variable
Description

Zipkin exporter

The Zipkin exporter. It sends JSON to a specified HTTP URL.

System property
Environment variable
Description

Prometheus exporter

The Prometheus exporter.

System property
Environment variable
Description

For more information please check the OpenTelemetry documentation.

Open Telemetry Example

One small example to show how to use open telemetry with docker compose can be found in the following URL:

Dynamic Provider Credentials

This feature is available from version 2.21.0

Generate Public and Private Key.

To use Dynamic Provider Credentials we need to generate a public and private key pair that will be used to generate and validate the federated tokens. We can use the following commands:

You need to make sure the private key starts with "-----BEGIN PRIVATE KEY-----"; if not, the following command can be used to transform the private key to the correct format:

The public and private key need to be mounted inside the container, and their paths should be specified in the following environment variables:

  • DynamicCredentialPublicKeyPath

  • DynamicCredentialPrivateKeyPath

Public Endpoints Requirements

To use Dynamic Provider Credentials the following public endpoints were added. These endpoints need to be accessible by your different cloud providers.
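Both endpoints are derived from the API hostname; a small sketch of how the URLs are formed (the hostname below is an example):

```javascript
// Build the two public endpoints Terrakube exposes for dynamic
// provider credentials, relative to the API hostname.
function wellKnownEndpoints(apiHostname) {
  return {
    openidConfiguration: `https://${apiHostname}/.well-known/openid-configuration`,
    jwks: `https://${apiHostname}/.well-known/jwks`,
  };
}

console.log(wellKnownEndpoints("terrakube-api.mydomain.com"));
```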

Terrakube Environment Variables:

The following environment variables can be used to customize the dynamic credentials configuration:

  • DynamicCredentialId = This will be the kid in the JWKS endpoint (Default value: 03446895-220d-47e1-9564-4eeaa3691b42)

  • DynamicCredentialTtl = The TTL for the federated token generated internally in Terrakube (Default: 30)

  • DynamicCredentialPublicKeyPath = The path to the public key to validate the federated tokens

  • DynamicCredentialPrivateKeyPath = The path to the private key to generate the federated tokens

Token structure

Terrakube will generate a JWT token internally; this token will be used to authenticate to your cloud provider.
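The payload of such a token is plain base64url-encoded JSON, so its claims can be inspected without verifying the signature (never skip verification when actually trusting a token). A sketch using a hand-crafted token:

```javascript
// Decode the payload (second segment) of a JWT without verification,
// just to inspect its claims.
function decodeJwtPayload(token) {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// Hand-crafted example token with a Terrakube-style subject claim.
const body = {
  sub: "organization:TERRAKUBE_ORG_NAME:workspace:TERRAKUBE_WORKSPACE_NAME",
  aud: "api://AzureADTokenExchange",
};
const exampleToken = [
  "e30", // base64url of "{}" standing in for the header
  Buffer.from(JSON.stringify(body)).toString("base64url"),
  "sig", // placeholder signature
].join(".");

console.log(decodeJwtPayload(exampleToken).sub);
// organization:TERRAKUBE_ORG_NAME:workspace:TERRAKUBE_WORKSPACE_NAME
```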

The token structure looks like the following for Azure

The token structure looks like the following for GCP

The token structure looks like the following for AWS

Bitbucket

This example shows how easy it is to integrate Terrakube jobs into Bitbucket Pipelines.

YAML Definition

Add the following snippet to the script section of your bitbucket-pipelines.yml file:

Variables

Variable
Usage

(*) = required variable.

Examples

Basic example:

Advanced example:

For more information about this pipe please refer to the following repository.

Self-Hosted Agents

This feature is supported from version 2.20.0 and helm chart version 3.16.0

Terrakube allows one or multiple agents to run jobs; you can have as many agents as you want for a single organization.

To use this feature you could deploy a single executor component using the following values:

The above values assume that we have deployed Terrakube using the domain "minikube.net" inside a namespace called "terrakube".

Now that we have our values.yaml we can use the following helm command:

Now we have a single executor component ready to accept jobs, or we could change the number of replicas to have multiple executors, like a pool of agents:

openssl genrsa -out private_temp.pem 2048
openssl rsa -in private_temp.pem -outform PEM -pubout -out public.pem
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in private_temp.pem -out private.pem
GET https://TERRAKUBE.MYSUPERDOMAIN.COM/.well-known/openid-configuration
{
  "issuer": "https://TERRAKUBE.MYSUPERDOMAIN.COM",
  "jwks_uri": "https://TERRAKUBE.MYSUPERDOMAIN.COM/.well-known/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "claims_supported": [
    "sub",
    "aud",
    "exp",
    "iat",
    "iss",
    "jti",
    "terrakube_workspace_id",
    "terrakube_organization_id",
    "terrakube_job_id"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "scopes_supported": [
    "openid"
  ],
  "subject_types_supported": [
    "public"
  ]
}
GET https://TERRAKUBE.MYSUPERDOMAIN.COM/.well-known/jwks
{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "n": "ALEzGE4Rn2WhxOhIuXAzq7e-WvRLCJfoqrHMUXtpt6gefNmWGo9trbea84KyeKdvzE9wBwWxnz_U5d_utmLLztVA2FLdDfnndh7pF4Fp7hB-lhaT1hV2EsiFsc9oefCYmkzXmHylfNQOuqNlRA_2Xu5pHovrF79WW01hWSjhGTkpj6pxFG4t7Tl54SWnJ83CvGDAKuoO9c1M1iTKikB3ENMK8WfU-wZJ4oLTAfhSydqZxZuGRhiwPGsEQOpRynyHJ54XWZHmFdsWs_eGRsfs1iTPbiQSBZbaEwz36HF4QdqFzzLGd67sTtZku_YEsUbJW8cbK6nOFEdR0BSTtSV-lPk=",
      "e": "AQAB",
      "kid": "03446895-220d-47e1-9564-4eeaa3691b42",
      "alg": "RS256"
    }
  ]
}
JWT HEADER
{
  "kid": "12345",
  "alg": "RS256"
}
JWT BODY
{
  "sub": "organization:TERRAKUBE_ORG_NAME:workspace:TERRAKUBE_WORKSPACE_NAME",
  "aud": "api://AzureADTokenExchange",
  "jti": "12345678",
  "terrakube_workspace_id": "1",
  "terrakube_organization_id": "2",
  "terrakube_job_id": "3",
  "iat": 1713397293,
  "iss": "https://terrakube-api.example.com",
  "exp": 1713397353
}
SIGNATURE
JWT HEADER
{
  "kid": "03446895-220d-47e1-9564-4eeaa3691b42",
  "alg": "RS256"
}
JWT BODY
{
  "sub": "organization:TERRAKUBE_ORG_NAME:workspace:TERRAKUBE_WORKSPACE_NAME",
  "aud": "https://iam.googleapis.com/projects/xxxxx/locations/global/workloadIdentityPools/xxxxx/providers/xxxxx",
  "jti": "d4432299-5dad-4b1e-9756-544639e84cec",
  "terrakube_workspace_id": "d9b58bd3-f3fc-4056-a026-1163297e80a8",
  "terrakube_organization_id": "8abe206b-29a8-4ed8-8a3b-30237e295659",
  "terrakube_job_id": "1",
  "iat": 1713915600,
  "iss": "https://terrakube-api.example.com",
  "exp": 1713917400
}
SIGNATURE
JWT HEADER
{
  "kid": "03446895-220d-47e1-9564-4eeaa3691b42",
  "alg": "RS256"
}
JWT BODY
{
  "sub": "organization:TERRAKUBE_ORG_NAME:workspace:TERRAKUBE_WORKSPACE_NAME",
  "aud": "aws.workload.identity",
  "jti": "d4432299-5dad-4b1e-9756-544639e84cec",
  "terrakube_workspace_id": "d9b58bd3-f3fc-4056-a026-1163297e80a8",
  "terrakube_organization_id": "8abe206b-29a8-4ed8-8a3b-30237e295659",
  "terrakube_job_id": "1",
  "iat": 1713915600,
  "iss": "https://terrakube-api.example.com",
  "exp": 1713917400
}
SIGNATURE
script:
  - pipe: azbuilder/terrakube-pipe:1.0.0
    variables:
      LOGIN_ENDPOINT: "<string>" #optional Default: https://login.microsoftonline.com
      TERRAKUBE_TENANT_ID: "<string>"
      TERRAKUBE_APPLICATION_ID: "<string>"
      TERRAKUBE_APPLICATION_SECRET: "<string>"
      TERRAKUBE_APPLICATION_SCOPE: "<string>" #optional Default: api://Terrakube/.default
      TERRAKUBE_ORGANIZATION: "<string>"
      TERRAKUBE_WORKSPACE: "<string>"
      TERRAKUBE_TEMPLATE: "<string>"
      TERRAKUBE_ENDPOINT: "<string>"
      DEBUG: "<boolean>" # Optional Default: false

LOGIN_ENDPOINT

Default value: https://login.microsoftonline.com

TERRAKUBE_TENANT_ID (*)

Azure AD Application tenant ID

TERRAKUBE_APPLICATION_ID (*)

Azure AD Application (client) ID

TERRAKUBE_APPLICATION_SECRET (*)

Azure AD Application client secret

TERRAKUBE_APPLICATION_SCOPE

Default value: api://Terrakube/.default

TERRAKUBE_ORGANIZATION (*)

Terrakube organization name

TERRAKUBE_WORKSPACE (*)

Terrakube workspace name

TERRAKUBE_TEMPLATE (*)

Terrakube template name

TERRAKUBE_ENDPOINT (*)

Terrakube API endpoint

script:
  - pipe: docker://azbuilder/terrakube-pipe:1.0.0
    variables:
      TERRAKUBE_TENANT_ID: "36857254-c824-409f-96f5-d3f2de37b016"
      TERRAKUBE_APPLICATION_ID: "36857254-c824-409f-96f5-d3f2de37b016"
      TERRAKUBE_APPLICATION_SECRET: "SuperSecret"
      TERRAKUBE_ORGANIZATION: "terrakube"
      TERRAKUBE_WORKSPACE: "bitbucket"
      TERRAKUBE_TEMPLATE: "vulnerability-snyk"
      TERRAKUBE_ENDPOINT: "https://terrakube.internal/service"
script:
  - pipe: docker://azbuilder/terrakube-pipe:1.0.0
    variables:
      LOGIN_ENDPOINT: "https://login.microsoftonline.com"
      TERRAKUBE_TENANT_ID: "36857254-c824-409f-96f5-d3f2de37b016"
      TERRAKUBE_APPLICATION_ID: "36857254-c824-409f-96f5-d3f2de37b016"
      TERRAKUBE_APPLICATION_SECRET: "SuperSecret"
      TERRAKUBE_APPLICATION_SCOPE: "api://TerrakubeApp/.default"
      TERRAKUBE_ORGANIZATION: "terrakube"
      TERRAKUBE_WORKSPACE: "bitbucket"
      TERRAKUBE_TEMPLATE: "vulnerability-snyk"
      TERRAKUBE_ENDPOINT: "https://terrakube.internal/service"
      DEBUG: "true"

Azure Storage Account

This guide assumes that you are using the minikube deployment, but the storage backend can be used in any real Kubernetes environment.

The first step will be to create one Azure storage account with the following containers:

  • content

  • registry

  • tfoutput

  • tfstate

Create Azure Storage Account tutorial link
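
The same containers can also be created from the Azure CLI. A sketch that only prints the az commands to run (mytfstorage is a placeholder account name):

```shell
# Terrakube needs these four containers; print one create command per container
STORAGE_ACCOUNT=mytfstorage   # placeholder, use your storage account name
CMDS=$(for c in content registry tfoutput tfstate; do
  echo "az storage container create --name $c --account-name $STORAGE_ACCOUNT"
done)
echo "$CMDS"
```
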

Once the storage account is created, you will have to get its "Access Key".

Now you have all the information needed to create a terrakube.yaml file for the Terrakube deployment with the following content:

## Terrakube Storage
storage:
  defaultStorage: false
  azure:
    storageAccountName: "<<STORAGE ACCOUNT NAME>>"
    storageAccountResourceGroup: "<< RESOURCE GROUP >>"
    storageAccountAccessKey: "<< STORAGE ACCOUNT ACCESS KEY >>"

Now you can install Terrakube using the following command:

helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube

OTEL_JAVAAGENT_ENABLED=true
OTEL_TRACES_EXPORTER=jaeger
OTEL_EXPORTER_JAEGER_ENDPOINT=http://jaeger-all-in-one:14250
OTEL_SERVICE_NAME=TERRAKUBE-API

otel.traces.exporter=jaeger

OTEL_TRACES_EXPORTER=jaeger

Select the Jaeger exporter

otel.exporter.jaeger.endpoint

OTEL_EXPORTER_JAEGER_ENDPOINT

The Jaeger gRPC endpoint to connect to. Default is http://localhost:14250.

otel.exporter.jaeger.timeout

OTEL_EXPORTER_JAEGER_TIMEOUT

The maximum waiting time, in milliseconds, allowed to send each batch. Default is 10000.

otel.traces.exporter=zipkin

OTEL_TRACES_EXPORTER=zipkin

Select the Zipkin exporter

otel.exporter.zipkin.endpoint

OTEL_EXPORTER_ZIPKIN_ENDPOINT

The Zipkin endpoint to connect to. Default is http://localhost:9411/api/v2/spans. Currently only HTTP is supported.

otel.metrics.exporter=prometheus

OTEL_METRICS_EXPORTER=prometheus

Select the Prometheus exporter

otel.exporter.prometheus.port

OTEL_EXPORTER_PROMETHEUS_PORT

The local port used to bind the prometheus metric server. Default is 9464.

otel.exporter.prometheus.host

OTEL_EXPORTER_PROMETHEUS_HOST

The local address used to bind the prometheus metric server. Default is 0.0.0.0.
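
Put together, a Zipkin-plus-Prometheus setup for the API container would look like this (the endpoint host and port are illustrative values, not defaults you must use):

```shell
# Traces go to Zipkin, metrics are scraped by Prometheus on :9464
export OTEL_JAVAAGENT_ENABLED=true
export OTEL_TRACES_EXPORTER=zipkin
export OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://zipkin:9411/api/v2/spans
export OTEL_METRICS_EXPORTER=prometheus
export OTEL_EXPORTER_PROMETHEUS_PORT=9464
env | grep '^OTEL_' | sort
```
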

Open Telemetry
Jaeger
Zipkin
Zipkin format
Prometheus
the official open telemetry documentation
# Executor should be enabled, but we need to customize the apiServiceUrl
executor:
  enabled: true
  apiServiceUrl: "http://terrakube-api-service.terrakube:8080" ## The API is in another namespace called "terrakube"

# We need to disable the default openLdap but we need to provide the internal secret
# so the executor can authenticate with the API and the Registry
security:
  useOpenLDAP: false
  internalSecret: "AxxPdgpCi72f8WhMXCTGhtfMRp6AuBfj"

# We need to disable dex in this deployment
dex:
  enabled: false

# We need to disable the default MinIO storage and set some custom values.
# This example deploys as if using an external MinIO
# (other storage backends could be used too)
storage:
  defaultStorage: false
  minio:
    accessKey: "admin"
    secretKey: "superadmin"
    bucketName: "terrakube"
    endpoint: "http://terrakube-minio.terrakube:9000" ## MINIO is in another namespace called "terrakube"

# We need to disable API, the default redis and default postgresql database
# But we need to provide some properties like the redis connection
api:
  enabled: false
  defaultRedis: false
  defaultDatabase: false
  properties:
    redisHostname: "terrakube-redis-master.terrakube" ## REDIS is in another namespace called "terrakube"
    redisPassword: "7p9iWVeRV4S944"

# We need to disable registry deployment
registry:
  enabled: false

# We need to disable ui deployment
ui:
  enabled: false

# We need to disable the ingress configuration
# but we need to specify the api and registry URL 
ingress:
  useTls: false
  includeTlsHosts: false
  ui:
    enabled: false
  api:
    enabled: false
    domain: "terrakube-api.minikube.net"
  registry:
    enabled: false
    domain: "terrakube-reg.minikube.net"
  dex:
    enabled: false
helm install --debug --values ./your-values.yaml terrakube terrakube-repo/terrakube -n self-hosted-executor

Custom CA Certs

Adding certificates at runtime

The Terrakube components (API, registry, and executor) use buildpacks to create their Docker images.

When using buildpacks, to add a custom CA certificate at runtime you need to do the following:

Provide the following environment variable to the container:

SERVICE_BINDING_ROOT: /mnt/platform/bindings

Inside that path there is a folder called "ca-certificates"

cnb@terrakube-api-678cb68d5b-ns5gt:/mnt/platform/bindings$ ls
ca-certificates

We need to mount some information to that path

/mnt/platform/bindings/ca-certificates

Inside this folder we should put our custom PEM CA certs and one additional file called type

cnb@terrakube-api-678cb68d5b-ns5gt:/mnt/platform/bindings/ca-certificates$ ls
terrakubeDemo1.pem  terrakubeDemo2.pem  type

The content of the file type is just the text "ca-certificates"

cnb@terrakube-api-678cb68d5b-ns5gt:/mnt/platform/bindings/ca-certificates$ cat type
ca-certificates
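
The layout is easy to reproduce locally before wiring it into a pod. A sketch using /tmp as a stand-in for the real mount point (the cert name is a placeholder):

```shell
# Recreate the binding layout: <SERVICE_BINDING_ROOT>/ca-certificates/{*.pem,type}
BINDING=/tmp/platform/bindings/ca-certificates   # in the pod: /mnt/platform/bindings/ca-certificates
mkdir -p "$BINDING"
printf 'ca-certificates' > "$BINDING/type"
# copy your real PEM files next to it, e.g.: cp terrakubeDemo1.pem "$BINDING/"
ls "$BINDING"
cat "$BINDING/type"
```
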

Finally, your Helm terrakube.yaml should look something like this, because we are mounting our CA certs and the file called type at the path /mnt/platform/bindings/ca-certificates:

## Terrakube Security
security:
  caCerts:
    terrakubeDemo1.pem: |
      -----BEGIN CERTIFICATE-----
      MIIDZTCCA.........

      -----END CERTIFICATE-----
    terrakubeDemo2.pem: |
      -----BEGIN CERTIFICATE-----
      MIIDZTCCA.....
      

      -----END CERTIFICATE-----

## API properties
api:
  version: "2.11.2"
  env:
  - name: SERVICE_BINDING_ROOT
    value: /mnt/platform/bindings
  volumes:
    - name: ca-certs
      secret:
        secretName: terrakube-ca-secrets
        items:
        - key: "terrakubeDemo1.pem"
          path: "terrakubeDemo1.pem"
        - key: "terrakubeDemo2.pem"
          path: "terrakubeDemo2.pem"
        - key: "type"
          path: "type"
  volumeMounts:
  - name: ca-certs
    mountPath: /mnt/platform/bindings/ca-certificates
    readOnly: true

When mounting the volume with the CA secrets, don't forget to add the key "type"; the content of that file is already defined inside the Helm chart.

Checking the Terrakube component logs, the two additional CA certs are added to the system truststore:

Added 2 additional CA certificate(s) to system truststore
Setting Active Processor Count to 2
Calculating JVM memory based on 5791152K available memory
For more information on this calculation, see https://paketo.io/docs/reference/java-reference/#memory-calculator
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx5033128K -XX:MaxMetaspaceSize=246023K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 5791152K, Thread Count: 250, Loaded Class Count: 41022, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 126 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=2 -XX:MaxDirectMemorySize=10M -Xmx5033128K -XX:MaxMetaspaceSize=246023K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.8)

Additional information about buildpacks can be found in these links:

  • https://blog.dahanne.net/2021/02/06/customizing-cloud-native-buildpacks-practical-examples/#Add_certificates_binding_at_runtime:~:text=ca%2Dcertificates-,Add%20certificates%20binding%20at%20runtime,-If%20your%20image

  • https://github.com/paketo-buildpacks/ca-certificates

  • https://github.com/paketo-buildpacks/spring-boot

Adding certificate at build time

Terrakube also allows adding the certs when building the application. To use this option, run the following:

git clone https://github.com/AzBuilder/terrakube
cd terrakube
git checkout <<TERRAKUBE-VERSION>>
mv EXAMPLE.pem bindings/ca-certificates

# This script should be run from the root folder
./scripts/build/terrakubeBuild.sh

The certs will be added at runtime, as shown in the following image.

Gitlab EE and CE

To use Terrakube’s VCS features with a self-hosted GitLab Enterprise Edition (EE) or GitLab Community Edition (CE), follow these instructions. For GitLab.com, see the separate instructions.

Manage VCS Providers permission is required to perform this action, please check Team Management for more info.

  • Go to the settings of the organization you want to configure. Select VCS Providers on the left menu and click the Add VCS provider button in the top right corner.

If you prefer, you can add a new VCS Provider directly from the Create workspace or Create Module screen.

  • Click the Gitlab button and then click the GitLab Enterprise option.

  • On the next screen, enter the following information about your GitLab instance:

  • HTTP URL: https://<GITLAB INSTANCE HOSTNAME>

  • API URL: https://<GITLAB INSTANCE HOSTNAME>/api/v4

  • Use a different browser tab to access your GitLab instance and sign in with the account that you want Terrakube to use. You should use a service user account for your organization, but you can also use a personal account.

Note: To connect Terrakube, you need an account with admin (master) access to any shared repositories of Terraform configurations. This is because creating webhooks requires admin permissions. Make sure you create the application as a user-owned application, not as an administrative application. Terrakube needs user access to repositories to create webhooks and ingress configurations.

If you have GitLab CE or EE 10.6 or higher, you might also need to turn on Allow requests to the local network from hooks and services. You can find this option in the Admin area under Settings, in the “Outbound requests” section (/admin/application_settings/network). For more details, see the GitLab documentation.

  • Navigate to the GitLab "User Settings > Applications" page, located at https://<GITLAB INSTANCE HOSTNAME>/profile/applications

  • In the GitLab page, complete the required fields and click Save application

  • Name: Your application name; for example, you can use your organization name.

  • Redirect URI: Copy the Redirect URI from the Terrakube UI.

  • Scopes: Only api should be checked.

You can complete the fields using the information suggested by Terrakube in the VCS provider screen.

  • On the next screen, copy the Application ID and Secret

  • Go back to Terrakube to enter the information you copied from the previous step. Then, click the Connect and Continue button.

You will be redirected to GitLab. Click the Authorize button to complete the connection.

Finally, if the connection was established successfully, you will be redirected to the VCS provider’s page in your organization. You should see the connection status with the date and the user that created the connection.

And now, you will be able to use the connection in your workspaces and modules:

Actions

Only users that belong to the Terrakube administrator group can create and edit actions. This group is defined in the Terrakube settings during deployment; for more details see Administrator group

Terrakube actions allow you to extend the UI in Workspaces, functioning similarly to plugins. These actions enable you to add new functions to your Workspace, transforming it into a hub where you can monitor and manage your infrastructure and gain valuable insights. With Terrakube actions, you can quickly observe your infrastructure and apply necessary actions.

Key Benefits of Terrakube Actions:

  • Extend Functionality: Customize your Workspace with additional functions specific to your needs.

  • Monitor Infrastructure: Gain real-time insights into your infrastructure with various metrics and monitoring tools.

  • Manage Resources: Directly manage and interact with your resources from within the Workspace.

  • Flexibility: Quickly disable an action if it is not useful for your organization or set conditions for actions to appear, such as if a resource is hosted in GCP or if the workspace is Dev.

  • Cloud-Agnostic Hub: Manage multi-cloud infrastructure from a single point, providing observability and actionability without needing to navigate individual cloud portals.

Examples of Terrakube Actions:

  • Manage Infrastructure: Restart your Azure VM or add an action to quickly navigate to the resource in the Azure portal directly from Terrakube.

  • Monitor Metrics: Track and display resource metrics for performance monitoring using tools like Prometheus, Azure Monitor or AWS Cloudwatch.

  • Integrate with Tools: Seamlessly integrate with tools like Infracost to display resource and Workspace costs.

  • AI Integration: Use AI to diagnose issues in your Workspaces. By integrating with models like OpenAI, Claude, and Gemini, you can quickly resolve errors in your infrastructure.

  • Project and Incident Management: Integrate with Jira, GitHub Issues, and ServiceNow to identify Workspaces with pending work items or reported errors.

  • Documentation: Link your Confluence documentation or wiki pages in your preferred tool, allowing quick access to documents related to your Workspace infrastructure.

Built-in Actions

Terrakube provides several built-in actions that you can start using; some are enabled by default. Please refer to the documentation for detailed information on each action, including usage and setup instructions.

Developing Actions

If you are interested in creating your own actions, you only need knowledge of JavaScript and React, as an action is essentially a React component. For more details please check our quick start guide, or visit our GitHub Terrakube Actions repository to see some examples, contribute new actions or suggest improvements.

Creating an Action

After you develop your action, create it in Terrakube as follows:

  1. Navigate to the settings in any Terrakube organization.

  2. Click the Add Action button.

  3. Provide the required information as detailed in the table below:

Field Name
Description

ID

A unique ID for your action. Example: terrakube.demo-action or myorg.my-action

Name

A unique name for your action.

Type

Defines the section where this action will appear. Refer to Action Types to see the specific area where the action will be rendered.

Label

For tabs actions, this will be displayed as the tab name. For action buttons, it should be displayed as the button name.

Category

This helps to organize the actions based on their function. Example: General, Azure, Cost, Monitor.

Version

Must follow semantic versioning (e.g., 1.0.0).

Description

A brief description of the action.

Display Criteria

Defines the conditions under which the action should appear based on the context. For more details, check the documentation.

Action

A JavaScript function equivalent to a React component. It receives a context object with data related to the current workspace. See our guide to start creating actions

Active

Enables or disables an action.

  4. Click the Save button.

Publishing Private Modules

Manage Modules permission is required to perform this action, please check Team Management for more info.

Click Registry in the main menu and then click the Publish module button

Select an existing version control provider or click Connect to a different VCS to configure a new one. See VCS Providers for more details.

Provide the git repository URL and click the Continue button.

In the next screen, configure the required fields and click the Publish Module button.

The module will be published inside the specified organization. On the details page, you can view available versions, read documentation, and copy a usage example.

Releasing New Versions of a Module

To release a new version of a module, push a new release tag to its VCS repository. The registry automatically imports the new version.
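
The release flow boils down to tagging. A sketch in a throwaway repository (in practice you tag your module repository; the user name and email below are placeholders):

```shell
# A semver tag is what the registry picks up as the module version
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "module code"
git tag 1.1.0        # becomes version 1.1.0 in the registry
TAGS=$(git tag --list)
echo "$TAGS"
```

In your real repository you would follow this with git push origin 1.1.0 so the VCS provider sees the new tag.
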

Deleting Modules

On the Module details page, click the Delete Module button and then click the Yes button to confirm.

User Management

You can manage Terrakube users with different authentication providers such as the following:

Github Actions

Integrating Terrakube with GitHub Actions is easy, and you can manage your workspaces from GitHub.

The Git repository will represent a Terrakube Organization, and each folder inside the repository will be a new workspace.

There is an example available in the following

Configuration

Add the following snippet to the script section of your github actions file:

Pull Request example

To run a terraform plan using Terrakube templates use the following example:

File: .github/workflows/pull_request.yml

A new workspace will be created for each folder with the file "terrakube.json". For each PR, only new folders or folders that have been updated will be evaluated inside Terrakube.

Push Main branch

To run a terraform apply using Terrakube templates use the following example:

File: .github/workflows/push_main.yml

Terrakube Variables

To define Terrakube variables to connect to cloud providers, it is recommended to use Global Variables inside the organization using the UI. Terraform variables can be defined inside a terraform.tfvars file in each folder, or inside the Terrakube UI after the workspace creation.

GitHub Action Inputs

Variable
Usage

(*) = required variable.

Terraform Version Configuration

Create a file called "terrakube.json" and include the terraform version that will be used for the job execution

Build GitHub Action

To build the GitHub action on your local machine, use the following:

SQL Azure

To use a SQL Azure with your Terrakube deployment create a terrakube.yaml file with the following content:

loadSampleData: this will add some organizations, workspaces, and modules by default in your database if needed

Now you can install terrakube using the command.

Custom Terraform CLI Builds

Terrakube supports custom Terraform CLI builds and custom CLI mirrors; the only requirement is to expose an endpoint with the following structure:

Support is available from version 2.12.0

Example Endpoint:

  • https://eov1ys4sxa1bfy9.m.pipedream.net/

This is useful when you have network restrictions that do not allow fetching the release information in your private Kubernetes cluster.

To support custom Terraform CLI releases when using the Helm chart, use the following:

MySQL

To use a MySQL with your Terrakube deployment create a terrakube.yaml file with the following content:

loadSampleData: this will add some organizations, workspaces, and modules by default in your database

Now you can install terrakube using the command.

Developing Actions

In this section:

Azure Active Directory
Google Cloud Identity
Amazon Cognito
Github
https://github.com/AzBuilder/docs/blob/main/getting-started/user-management/keycloak.md
## Terrakube API properties
api:
  defaultDatabase: false
  loadSampleData: false
  properties:
    databaseType: "SQL_AZURE"
    databaseHostname: "server_name.database.windows.net"
    databaseName: "database_name"
    databaseUser: "database_user"
    databasePassword: "database_password"
    databaseSchema: "dbo"
helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube
## Terrakube API properties
api:
  defaultDatabase: false
  loadSampleData: false
  properties:
    databaseType: "MYSQL"
    databaseHostname: "server_name.database.mysql.com"
    databaseName: "database_name"
    databaseUser: "database_user"
    databasePassword: "database_password"
helm install --values terrakube.yaml terrakube terrakube-repo/terrakube -n terrakube
name: Terrakube Plan

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  build:
    runs-on: ubuntu-latest
    name: "Running Terrakube Plan"
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - uses: AzBuilder/terrakube-action-github@main
        with:
          TERRAKUBE_TOKEN:  ${{ secrets.TERRAKUBE_PAT }} 
          TERRAKUBE_TEMPLATE: "Terraform-Plan"
          TERRAKUBE_ENDPOINT: ${{ secrets.TERRAKUBE_ENDPOINT }}  
          TERRAKUBE_BRANCH: ${{ github.head_ref }}
          TERRAKUBE_ORGANIZATION: "terrakube_organization_name"
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SHOW_OUTPUT: true
name: Terrakube Apply

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    name: "Running Terrakube Apply"
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - uses: AzBuilder/terrakube-action-github@main
        with:
          TERRAKUBE_TOKEN:  ${{ secrets.TERRAKUBE_PAT }} 
          TERRAKUBE_TEMPLATE: "Terraform-Plan/Apply"
          TERRAKUBE_ENDPOINT: ${{ secrets.TERRAKUBE_GITPOD }}  
          TERRAKUBE_BRANCH: "main"
          TERRAKUBE_ORGANIZATION: "terrakube_organization_name"
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SHOW_OUTPUT: false

TERRAKUBE_TOKEN (*)

Terrakube Personal Access Token

TERRAKUBE_TEMPLATE (*)

Terrakube template name

TERRAKUBE_ENDPOINT (*)

Terrakube API endpoint

TERRAKUBE_ORGANIZATION (*)

Terrakube organization name

TERRAKUBE_BRANCH (*)

Github Branch when running a job

TERRAKUBE_SSH_KEY_NAME (*)

SSH key name defined in the Terrakube organization to connect to private repositories

GITHUB_TOKEN (*)

Github Token

SHOW_OUTPUT (*)

Show Terrakube logs inside PR comments

{
	"terraform": "1.2.9"
}
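
Since each folder becomes its own workspace, each folder carries its own version pin. A sketch that scaffolds one folder ("networking" is a placeholder name):

```shell
# One terrakube.json per workspace folder pins that workspace's Terraform version
mkdir -p networking
printf '{\n  "terraform": "1.2.9"\n}\n' > networking/terrakube.json
cat networking/terrakube.json
```
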
git clone https://github.com/AzBuilder/terrakube-action-github.git
yarn install
yarn build
{
   "name":"terraform",
   "versions":{
      "1.3.9":{
         "builds":[
            {
               "arch":"amd64",
               "filename":"terraform_1.3.9_linux_amd64.zip",
               "name":"terraform",
               "os":"linux",
               "url":"https://releases.hashicorp.com/terraform/1.3.9/terraform_1.3.9_linux_amd64.zip",
               "version":"1.3.9"
            }
         ],
         "name":"terraform",
         "shasums":"terraform_1.3.9_SHA256SUMS",
         "shasums_signature":"terraform_1.3.9_SHA256SUMS.sig",
         "shasums_signatures":[
            "terraform_1.3.9_SHA256SUMS.72D7468F.sig",
            "terraform_1.3.9_SHA256SUMS.sig"
         ],
         "version":"1.3.9"
      }
   }
}
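
A quick way to sanity-check a mirror response with plain POSIX tools: save the endpoint output and pull out the linux/amd64 download URL (index.json below is a trimmed copy of the example above):

```shell
# Saved copy of the mirror's index response (trimmed to one build)
cat > index.json <<'EOF'
{"name":"terraform","versions":{"1.3.9":{"builds":[{"arch":"amd64","os":"linux","url":"https://releases.hashicorp.com/terraform/1.3.9/terraform_1.3.9_linux_amd64.zip"}]}}}
EOF
# Extract the first "url" value without needing jq
URL=$(grep -o '"url":"[^"]*"' index.json | head -n 1 | cut -d '"' -f 4)
echo "$URL"
```
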
api:
  version: "2.12.0"
  terraformReleasesUrl: "https://eov1ys4sxa1bfy9.m.pipedream.net/"

executor:
  version: "2.12.0"
https://releases.hashicorp.com/terraform/index.json

Token Security

Terrakube uses two secrets internally: one to sign the personal access tokens and one internal token for inter-component communication. When using the Helm deployment, you can change these values using the following keys:

security:
  patSecret: "AAAAAAAAAAAAAAAAAAAA"
  internalSecret: "BBBBBBBBBBBBBBBBB"

Make sure to change the default values in a real kubernetes deployment.

Each secret should be 32 characters long and must be a base64-compatible string.
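
A simple way to generate compliant values (hex output only uses 0-9a-f, which satisfies the base64-compatible requirement; openssl is assumed to be available):

```shell
# 16 random bytes -> 32 hex characters, safe for patSecret/internalSecret
PAT_SECRET=$(openssl rand -hex 16)
echo "$PAT_SECRET"
echo "length: ${#PAT_SECRET}"
```
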

API-driven Workflow

In progress

Database Backend

WIP

Quick start
Display Criteria
Action Types
Action Context
Action Proxy

Getting Started

We use Gitpod to develop the platform. You can have a complete development environment to test all the components and features with one click using the following button.

Gitpod is free for 50 hours every month, so you don't have to worry about costs when testing the application.

Running Terrakube in Gitpod is like running the platform in any Kubernetes distribution in Azure, AWS, or GCP: you will be able to use all the available features with a single click, without installing any software on your computer, and the environment should be ready in only 1 or 2 minutes.

The Terrakube API uses an in-memory database; it is wiped and preloaded with default information on every startup.

Gitpod Environment Information.

A file called GITPOD.md contains the information for the entire workspace.

The configuration is generated dynamically using the following bash script. This is executed by Gitpod in the environment startup process.

./scripts/setupDevelopmentEnvironment.sh

If for some reason we need to execute the script again, it needs to be executed from the /workspace/terrakube folder.

The development environment will have the following information:

Groups

TERRAKUBE_ADMIN

TERRAKUBE_DEVELOPERS

AZURE_DEVELOPERS

AWS_DEVELOPERS

GCP_DEVELOPERS

User
Password
Member Of

admin

TERRAKUBE_ADMIN, TERRAKUBE_DEVELOPERS

aws

AWS_DEVELOPERS

azure

AZURE_DEVELOPERS

gcp

GCP_DEVELOPERS

Start Development Environment

To run all the Terrakube components run the following task:

After a couple of seconds all the components will be running; you can start/stop/restart any component as needed.

Login Development Environment

After the application startup is complete, we can log in to the development environment. To get the URL, execute the following command:

gp url 3000

The Terrakube login will look like this example:

Dex Authentication.

The Gitpod environment uses an OpenLDAP that is connected to Dex using the LDAP connector. The OpenLDAP configuration (users and groups) can be found in the following file:

scripts/setup/dex/config-ldap.ldif

The Dex configuration file is generated dynamically and will be available in the following file:

scripts/setup/dex/config-ldap.yaml

The template that is used to generate the Dex configuration file can be found in this path:

scripts/template/dex/template-config-ldap.yaml

Dex login page should look like this example:

Sample Organizations

The development environment is preloaded with 4 organizations (azure, gcp, aws, and simple). This information is loaded during API startup, and the scripts that load it can be found in the following XML files:

api/src/main/resources/db/changelog/demo-data/aws.xml
api/src/main/resources/db/changelog/demo-data/gcp.xml
api/src/main/resources/db/changelog/demo-data/azure.xml
api/src/main/resources/db/changelog/demo-data/simple.xml

Sample Teams

Depending on the selected organization and user, you will see different information available.

Sample Modules

Each organization is preloaded with some modules that can be found on GitHub, like the following:

Templates

Each organization is preloaded with the following templates to run Terrakube jobs:

For more information on how to use templates, please refer to the following repository:

Workspaces

All the organizations have different empty workspaces. The Simple organization can be used for testing, because its workspace uses a basic Terraform file with the following resources:

  • null_resource

  • time_sleep

Other workspaces can be created, but they will be deleted on each restart unless they are added to the initialization scripts:

api/src/main/resources/db/changelog/demo-data/simple.xml

API Testing

Gitpod has the thunder-client installed to easily test the API without using the UI.

All the environment variables for the collection are generated on each Gitpod workspace startup. The environment variables template can be found in the following path:

scripts/template/thunder-tests/thunderEnvironment.json

To authenticate the Thunder Client, you can use Dex device code authentication with the following request.

The response should look like the following and you can use the verification_uri_complete to finish the device authentication.

{
  "device_code": "htlj4w73ftjfplinifqntp7ept6xaratvgauu2nj3nldlcgxjym",
  "user_code": "HLMH-WKXX",
  "verification_uri": "https://5556-azbuilder-terrakube-XXXXX.ws-us54.gitpod.io/dex/device",
  "verification_uri_complete": "https://5556-azbuilder-terrakube-XXXX.ws-us54.gitpod.io/dex/device?user_code=HLMH-WKXX",
  "expires_in": 300,
  "interval": 5
}

Once the Dex device code authentication is completed you can use the following request to get the access token.

Now you can call any method available in the API without the UI

Terraform Login Protocol

Terraform login protocol can be tested using the thunder-client collection:

The response should look similar to the following:

Terraform Module Protocol

There is one example of the Terraform module protocol inside the Thunder Client collection that can be used for testing purposes by executing the following two requests.

The response should look like the following:

  • Getting versions for the module

  • Getting zip file for the module

OpenAPI Spec

The specification can be obtained using the following request:

Creating Workspaces

Manage Workspaces permission is required to perform this action, please check Team Management for more info.

When creating a Workspace, Terrakube supports 3 workflow types, and based on the selected workflow you will need to provide some parameters. Please refer to each workflow section for more details.

  • Version Control workflow: Store your Terraform configuration in a git repository, and trigger runs based on pull requests and merges.

  • CLI-driven workflow: Trigger remote Terraform runs from your local command line.

  • API-driven workflow: A more advanced option. Integrate Terraform into a larger pipeline using the Terrakube API.

Version Control workflow

Click Workspaces in the main menu and then click the New workspace button

Select the IaC type; currently Terraform and OpenTofu are supported.

Choose the Version control workflow

Select an existing version control provider or click Connect to a different VCS to configure a new one. See VCS Providers for more details.

Provide the git repository URL and click the Continue button.

If you want to connect to a private Git repo using SSH keys, you will need to provide the URL in SSH format, for example git@github.com:jcanizalez/terraform-sample-repository.git. For more information see SSH

Configure the workspace settings.

Field
Description

Workspace Name

The name of your workspace must be unique; it is used in tools, routing, and the UI. Dashes, underscores, and alphanumeric characters are permitted.

VCS branch

A comma-separated list of branches from which jobs are allowed to be kicked off via the VCS webhook. This is not used for the CLI workflow.

Terraform Working Directory

Default workspace directory. Use / for the root folder

Terraform Version

The version of Terraform to use for this workspace. See Custom Terraform CLI Builds if you want to restrict the versions available in your Terrakube instance.

Once you fill the settings click the Create Workspace button.

When using a VCS workflow you can select which branches, folder, and action (aka the Default template (VCS push)) the runs will be triggered from when a git push happens.

The VCS branches field accepts a comma-separated list of branches; the first in the list is used as the default for runs kicked off from the UI. The branches are compared as prefixes against the branch included in the payload sent from the VCS provider.
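The prefix comparison described above can be sketched in shell. This is an illustration only; the branch names and the matching logic below are assumptions meant to mirror the documented behavior, not Terrakube's actual implementation.

```shell
#!/bin/sh
# Sketch: a workspace configured with the branch list "main,release/"
# matches any pushed branch whose name starts with one of those entries.
allowed="main,release/"
pushed="release/v1.2"
match="no"
IFS=','
for branch in $allowed; do
  case "$pushed" in
    "$branch"*) match="yes" ;;
  esac
done
unset IFS
echo "$match"   # prints "yes": "release/v1.2" starts with "release/"
```

Because the comparison is a prefix match, an entry like `release/` effectively allows every branch under that naming convention.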

You will be redirected to the Workspace page

And if you navigate to the Workspace menu, you will see the workspace in the Workspaces list

Once you create your workspace, Terrakube sets up a webhook with your VCS. This webhook runs a job based on the selected template every time you push new changes to the configured workspace branches. However, this feature does not yet work with the Azure DevOps VCS provider.

CLI-driven Workflow

Click Workspaces in the main menu and then click the New workspace button

Choose the CLI-driven workflow

Configure the workspace settings.

Field
Description

Workspace Name

The name of your workspace must be unique; it is used in tools, routing, and the UI. Dashes, underscores, and alphanumeric characters are permitted.

Terraform Version

The version of Terraform to use for this workspace. See Custom Terraform CLI Builds if you want to restrict the versions available in your Terrakube instance.

Once you fill the settings click the Create Workspace button.

You will be redirected to the Workspace page.

The overview page for CLI-driven workspaces shows the steps to connect to the workspace using the Terraform CLI. For more details see CLI-driven Workflow.

And if you navigate to the Workspace menu you will see the workspace in the Workspaces list

API-driven Workflow

Click Workspaces in the main menu and then click the New workspace button

Choose the API-driven workflow

Configure the workspace settings.

Field
Description

Workspace Name

The name of your workspace must be unique; it is used in tools, routing, and the UI. Dashes, underscores, and alphanumeric characters are permitted.

Terraform Version

The version of Terraform to use for this workspace. See Custom Terraform CLI Builds if you want to restrict the versions available in your Terrakube instance.

Once you fill the settings click the Create Workspace button.

You will be redirected to the Workspace page.

For more details on how to use the Terrakube API, see API-driven Workflow.

And if you navigate to the Workspace menu you will see the workspace in the Workspaces list

Private Registry

Terraform provides a public registry, where you can share and reuse your terraform modules or providers, but sometimes you need to define a Terraform module that can be accessed only within your organization.

Terrakube provides a private registry that works similarly to the public Terraform Registry and helps you share Terraform providers and Terraform modules privately across your organization. You can use any of the main VCS providers to maintain your terraform module code.

At the moment the Terrakube UI only supports publishing private modules, but you can publish providers using the Terrakube API.

In this section:

Filter global variables in jobs

In some special cases it is necessary to filter the global variables used inside the job execution. If "inputsEnv" or "inputsTerraform" is not defined, all global variables will be imported into the job context.

Template Configuration

flow:
  - type: "customScripts"
    step: 100
    inputsEnv:
      INPUT_IMPORT1: $IMPORT1
      INPUT_IMPORT2: $IMPORT2
    commands:
      - runtime: "BASH"
        priority: 100
        before: true
        script: |
          echo $INPUT_IMPORT1
          echo $INPUT_IMPORT2

$IMPORT1 and $IMPORT2 will be added to the job context as the environment variables INPUT_IMPORT1 and INPUT_IMPORT2.
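The effect of the mapping can be sketched in shell. The variable names and values below are made up for illustration; the point is that only the globals listed under inputsEnv become visible to the step, under the names given in the template.

```shell
#!/bin/sh
# Global variables defined at the organization level (sample values):
IMPORT1="value-1"
IMPORT2="value-2"
IMPORT3="never-imported"   # not listed in inputsEnv, so not exposed to the step

# What the template's inputsEnv mapping effectively does for the step:
INPUT_IMPORT1="$IMPORT1"
INPUT_IMPORT2="$IMPORT2"

echo "$INPUT_IMPORT1 $INPUT_IMPORT2"   # prints "value-1 value-2"
```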

Global Variables Setup

Running Workspace

Other configuration Options.

Import terraform variables

flow:
  - type: "customScripts"
    step: 100
    inputsTerraform:
      INPUT_IMPORT1: $IMPORT1
      INPUT_IMPORT2: $IMPORT2
    commands:
      ....

Importing variables when importing a template

flow:
  - type: "terraformPlan"
    step: 100
    importComands:
      repository: "https://github.com/AzBuilder/terrakube-extensions"
      folder: "templates/terratag"
      branch: "import-template"
      inputsEnv:
        INPUT_IMPORT1: $IMPORT1
        INPUT_IMPORT2: $IMPORT2
      inputsTerraform:
        INPUT_IMPORT1: $IMPORT1
        INPUT_IMPORT2: $IMPORT2
  - type: "terraformApply"
    step: 200

Import Precedence

workspace env variables > importComands env variables > flow env variables

workspace terraform variables > importComands terraform variables > flow terraform variables
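The precedence chain can be sketched as a shell fallback. This is a hedged illustration (names and values are invented): the workspace-level value wins when present, otherwise the importComands value, otherwise the flow-level value.

```shell
#!/bin/sh
# Here the workspace does not define the variable, so the
# importComands value takes precedence over the flow value.
workspace_value=""
import_value="from-import"
flow_value="from-flow"

effective="${workspace_value:-${import_value:-$flow_value}}"
echo "$effective"   # prints "from-import"
```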

Azure Active Directory

Azure authentication with the Dex connector requires Terrakube >= 2.6.0 and Helm chart >= 2.0.0

Requirements

  • Azure Active Directory

  • Azure Storage Account

For this example let's imagine that you will be using the following domains to deploy Terrakube.

  • registry.terrakube.azure.com

  • ui.terrakube.azure.com

  • api.terrakube.azure.com

Setup Azure Authentication

You need to complete the Azure authentication setup for Dex. You can find more information in the Dex documentation.

You need to go to your Azure Active Directory and create a new application.

After the application is created you need to add the redirect URL.

You will also need to add the permission Directory.Read.All and ask an Azure administrator to approve it.

Now you can create the Dex configuration; you will use this config later when deploying the Helm chart.

The first step is to clone the repository.

Replace <<CHANGE_THIS>> with the real values, create the values.yaml file and run the helm install

Run the installation

For any questions or feedback, please open an issue in our repository.

Getting started

Terrakube CLI is Terrakube on the command line. It brings organizations, workspaces, and other Terrakube concepts to the terminal.

Installation

You can find installation instructions here

Getting help

When you are working with the Terrakube CLI you will probably need some help with the supported commands and flags. You can use this documentation for that; however, a better option is to use the CLI itself with the --help flag or the -h shorthand. The embedded help provides more detail for each command and some examples you can use as reference.

Authentication

Run terrakube login to authenticate with your account. But first you need to set some environment variables.

You can also pass these values using flags; however, it is recommended to use environment variables in case you need to authenticate several times or if you are running an automated script.

Open your terminal and execute the following commands to set up your Terrakube environment and get authenticated. You can get the server, path, scheme, tenant ID, and client ID values during the Terrakube server deployment.

You will receive a code; open the URL in a web browser and provide your account details. See the code below as reference.

Create your Organization

Organizations are privately shared spaces for teams to collaborate on infrastructure.

  • You can check the organizations you have access to using terrakube organization list

  • If you don't have an organization created yet, you can create a new one.

The result for the above command should be something like this

For more commands and options for organizations, see the full documentation for terrakube organization.

Working with Teams

Once you create your organization you will probably want to define its teams and permissions; you can use the team command for that.

In the previous command we are creating a new team inside our new organization, and in this case we are providing permissions to manage workspaces, modules, and providers. In the name flag we are using an Azure AD group, so all the team members are defined inside the AD group. For more details about teams and permissions, see the Security page.

Create a Workspace

After granting permissions to our teams, we can create a workspace.

And define some variables for the created workspace

If you want to avoid entering the organization or workspace in each command you can use environment variables, so the previous commands can be simplified as follows.

Run a Job

Now that you have a workspace with all the variables defined, you can execute a job. A job is basically a remote terraform apply, plan, or destroy. So if you want to run apply, execute the following command.

After a few minutes the job will finish and you can check the result

You can use a curl command to retrieve the log result from the terminal

Defining your Modules

Usually you will want to define your infrastructure templates as code using Terraform; for this you can use modules so others can reuse them.

Simplifying the commands

To simplify the commands when working with the CLI, you can use shorthands, aliases, and some environment variables. See the following sections for more details.

Use the --help flag to get details about the available shorthands and aliases for each command.

Using shorthands

Using aliases

Using environment variables

AWS Dynamic Provider Credentials

Requirements

The dynamic provider credential setup in AWS can be done with the Terraform code available in the following link:

The code will also create a sample workspace with all the required environment variables that can be used to test the functionality using the CLI-driven workflow.

Make sure to mount your public and private key to the API container as explained here.

Make sure the private key is in PKCS#8 ("pkcs8") format.
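If your key is in the traditional PEM format, it can be converted with openssl. The sketch below generates a sample RSA key for illustration and converts it to PKCS#8 (the file names are arbitrary); for a real setup you would run only the conversion step against your existing key.

```shell
#!/bin/sh
# Generate a sample RSA key (illustration only; use your real key instead).
openssl genrsa -out private.pem 2048 2>/dev/null

# Convert the key to unencrypted PKCS#8 format.
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
  -in private.pem -out private_pkcs8.pem

# A PKCS#8 key starts with "-----BEGIN PRIVATE KEY-----".
head -n 1 private_pkcs8.pem
```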

Validate that the following Terrakube API endpoints are working:

Set terraform variables using: "variables.auto.tfvars"

To generate the API token, check here.

Run terraform apply to create all the federated credential setup in AWS and a sample workspace in Terrakube for testing.

The following Terraform code can be used for testing:

GitLab

To use repositories from GitLab.com with Terrakube workspaces and modules, follow these steps:

Manage VCS Providers permission is required to perform this action; please check Team Management for more info.

Navigate to the desired organization and click the Settings button, then on the left menu select VCS Providers

If you prefer, you can add a new VCS provider directly from the Create Workspace or Create Module screen.

Click the Gitlab button

In the next screen, click the link to register a new OAuth application in GitLab

On Gitlab, complete the required fields and click Save application button

Field
Description

You can complete the fields using the information suggested by Terrakube in the VCS provider screen

In the next screen, copy the Application ID and Secret

Go back to Terrakube to enter the information you copied from the previous step. Then, click the Connect and Continue button.

You will see a Gitlab window, click the Authorize button to complete the connection

Finally, if the connection was established successfully, you will be redirected to the VCS provider’s page in your organization. You should see the connection status with the date and the user that created the connection.

And now, you will be able to use the connection in your workspaces and modules:

Terraform State

Each Terrakube workspace has its own separate state data, used for jobs within that workspace. In Terrakube you can see the Terraform state in the standard JSON format provided by the Terraform CLI, or you can see a Visual State, a diagram created by Terrakube to represent the resources inside the JSON state.

JSON State

Go to the workspace and click the States tab and then click in the specific state you want to see.

You will see the state in JSON format

You can click the Download button to get a copy of the JSON state file

Visual State

Inside the states page, click the diagram tab and you will see the visual state for your terraform state.

The visual state will also show some information if you click each element like the following:

Resources

All the resources can be shown in the overview page inside the workspace:

If you click a resource you can check the information about it, like the following:

Publishing Private Modules
Using Private Modules
# using help flag
terrakube --help

# using shorthand
terrakube -h

# getting help for an specific command
terrakube organization -h

# getting help for an specific sub command
terrakube organization create -h
export TERRAKUBE_SERVER="my-terrakube.com"
export TERRAKUBE_PATH="terrakube"
export TERRAKUBE_SCHEME="https"
export TERRAKUBE_TENANT_ID="59a1b348-548d-4360-b2e2-9d2b849527a8"
export TERRAKUBE_CLIENT_ID="7a307a36-e08b-4954-a9fb-03a67e082fc4"

terrakube login
To sign in, use a web browser to open the page 
https://microsoft.com/devicelogin and enter the code AAM4MVU96 to authenticate.
terrakube organization list
terrakube organization create --name MyOrganization --description "Getting started Organization" 
{
    "attributes": {
        "description": "Getting started Organization",
        "name": "MyOrganization"
    },
    "id": "8a6e9998-165c-49f0-953c-d3fb0924731a",
    "relationships": {
        "job": {},
        "module": {},
        "workspace": {}
    },
    "type": "organization"
}
terrakube team create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name AZB_USER --manage-workspace=true --manage-module=true --manage-provider=true
terrakube workspace create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name MyWorkspace --source https://github.com/AzBuilder/terraform-sample-repository.git --branch master --terraform-version 0.15.0
terrakube workspace variable create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --workspace-id 38b6635a-d38e-46f2-a95e-d00a416de4fd --key tag_name --value "Hola mundo" --hcl=false --sensitive=false --category TERRAFORM 
export TERRAKUBE_ORGANIZATION_ID=8a6e9998-165c-49f0-953c-d3fb0924731a
terrakube workspace create --name MyWorkspace --source https://github.com/AzBuilder/terraform-sample-repository.git --branch master --terraform-version 0.15.0
export TERRAKUBE_WORKSPACE_ID=38b6635a-d38e-46f2-a95e-d00a416de4fd
terrakube workspace variable create --key tag_name --value "Hola mundo" --hcl=false --sensitive=false --category TERRAFORM 
terrakube job create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --workspace-id 38b6635a-d38e-46f2-a95e-d00a416de4fd  --command apply 
terrakube job list --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a
terrakube module create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name myModule --description "module description" --provider azurerm --source https://github.com/AzBuilder/terraform-sample-repository.git 
# without shorthand
terrakube organization create --name MyOrganization --description "Getting started Organization" 

# using shorthand
terrakube organization create -n MyOrganization -d "Getting started Organization" 
# without alias
terrakube organization create --name MyOrganization --description "Getting started Organization"

# using alias and shorthand
terrakube org create -n MyOrganization -d "Getting started Organization"
# creating multiple modules without env variables
terrakube module create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name myModule --description "module description" --provider azurerm --source https://github.com/AzBuilder/terraform-sample-repository.git 
terrakube module create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name myModule2 --description "module description 2" --provider azurerm --source https://github.com/AzBuilder/terraform-sample-repository.git
terrakube module create --organization-id 8a6e9998-165c-49f0-953c-d3fb0924731a --name myModule3 --description "module description 3" --provider azurerm --source https://github.com/AzBuilder/terraform-sample-repository.git

# creating multiple modules using shorthand, alias and env variables
export TERRAKUBE_ORGANIZATION_ID=8a6e9998-165c-49f0-953c-d3fb0924731a
terrakube mod create -n myModule -d "module description" -p azurerm -s https://github.com/AzBuilder/terraform-sample-repository.git 
terrakube mod create -n myModule2 -d "module description 2" -p azurerm -s https://github.com/AzBuilder/terraform-sample-repository.git
terrakube mod create -n myModule3 -d "module description 3" -p azurerm -s https://github.com/AzBuilder/terraform-sample-repository.git
## Dex
dex:
  config:
    issuer: https://api.terrakube.azure.com/dex
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"] 
      skipApprovalScreen: true
    web:
      allowedOrigins: ['*']
  
    staticClients:
    - id: microsoft
      redirectURIs:
      - 'https://ui.terrakube.azure.com'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'microsoft'
      public: true

    connectors:
    - type: microsoft
      id: microsoft
      name: microsoft
      config:
        clientID: "<<CHANGE_THIS>>"
        clientSecret: "<<CHANGE_THIS>>"
        redirectURI: "https://api.terrakube.azure.com/dex/callback"
        tenant: "<<CHANGE_THIS>>"
git clone https://github.com/AzBuilder/terrakube-helm-chart.git
## Global Name
name: "terrakube"

## Terrakube Security
security:
  adminGroup: "<<CHANGE_THIS>>" # The value should be a valid Azure AD group (example: TERRAKUBE_ADMIN)
  patSecret: "<<CHANGE_THIS>>"  # Sample Key 32 characters z6QHX!y@Nep2QDT!53vgH43^PjRXyC3X 
  internalSecret: "<<CHANGE_THIS>>" # Sample Key 32 characters Kb^8cMerPNZV6hS!9!kcD*KuUPUBa^B3 
  dexClientId: "microsoft"
  dexClientScope: "email openid profile offline_access groups"
  
## Terraform Storage
storage:
  azure:
    storageAccountName: "XXXXXXX" # <<CHANGE_THIS>>
    storageAccountResourceGroup: "XXXXXXX" # <<CHANGE_THIS>>
    storageAccountAccessKey: "XXXXXXX" # <<CHANGE_THIS>>

## Dex
dex:
  config:
    issuer: https://api.terrakube.azure.com/dex
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"] 
      skipApprovalScreen: true
    web:
      allowedOrigins: ['*']
  
    staticClients:
    - id: microsoft
      redirectURIs:
      - 'https://ui.terrakube.azure.com'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'microsoft'
      public: true

    connectors:
    - type: microsoft
      id: microsoft
      name: microsoft
      config:
        clientID: "<<CHANGE_THIS>>"
        clientSecret: "<<CHANGE_THIS>>"
        redirectURI: "https://api.terrakube.azure.com/dex/callback"
        tenant: "<<CHANGE_THIS>>"

## API properties
api:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    databaseType: "H2"

## Executor properties
executor:
  enabled: true  
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    toolsRepository: "https://github.com/AzBuilder/terrakube-extensions"
    toolsBranch: "main"

## Registry properties
registry:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## UI Properties
ui:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## Ingress properties
ingress:
  useTls: true
  ui:
    enabled: true
    domain: "ui.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix" 
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      cert-manager.io/cluster-issuer: letsencrypt
  api:
    enabled: true
    domain: "api.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  registry:
    enabled: true
    domain: "registry.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  dex:
    enabled: true
    path: "/dex/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
helm install --debug --values ./values.yaml terrakube ./terrakube-helm-chart/ -n terrakube
terrakube_token = "TERRAKUBE_PERSONAL_ACCESS_TOKEN"
terrakube_api_hostname = "TERRAKUBE-API.MYCLUSTER.COM"
terrakube_federated_credentials_audience="aws.workload.identity"
terrakube_organization_name="simple"
terrakube_workspace_name = "dynamic-workspace-aws"
aws_region = "us-east-1"
terraform {

  cloud {
    organization = "terrakube_organization_name"
    hostname = "terrakube-api.mydomain.com"

    workspaces {
      name = "terrakube_workspace_name"
    }
  }
}

provider "aws" {


}

resource "aws_s3_bucket" "example" {
  bucket = "my-tf-superbucket-awerqerq"

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}
https://github.com/AzBuilder/terrakube/tree/main/dynamic-credential-setup/aws
https://terrakube-api.mydomain.com/.well-known/jwks
https://terrakube-api.mydomain.com/.well-known/openid-configuration

Name

Your application name, for example you can use your organization name.

Redirect URI

Copy the Redirect URI from the UI

Scopes

Only api should be checked


Templates

Templates allow you to customize the job workflow inside each workspace. You can define multiple templates inside each organization. Templates are written in YAML, and you can specify each step to be executed once you use the template in a workspace. Let's see an example of a basic template:

Example 1:

flow:
  - name: "Plan"
    type: "terraformPlan"
    step: 100
  - name: "Apply"
    type: "terraformApply"
    step: 200

Example 2:

flow:
- type: "terraformPlanDestroy"
  name: "Terraform Plan Destroy from Terraform CLI"
  step: 100
- type: "approval"
  name: "Approve Plan from Terraform CLI"
  step: 150
  team: "TERRAFORM_CLI"
- type: "terraformApply"
  name: "Terraform Apply from Terraform CLI"
  step: 200

Example 3:

flow:
  - type: "terraformDestroy"
    step: 100
  - type: "disableWorkspace"
    step: 200

As you can see, inside the flow section you can define any step required; in this case only 2 steps are defined. Using the type property you can specify the kind of step to be performed; in the example above we are using some default types like terraformPlan and terraformApply. However, the power of Terrakube is that you can extend your workflow to execute additional tasks using Bash or Groovy code. Let's see a more advanced template:

flow:
  - name: "Plan"
    type: "terraformPlan"
    step: 100
    commands:
      - runtime: "GROOVY"
        priority: 100
        before: true
        script: |
          @Grapes([
            @Grab('commons-io:commons-io:2.8.0'),
            @Grab('org.apache.commons:commons-compress:1.21'),
          ])

          import org.apache.commons.io.FileUtils
          
          class TerraTagDownloader {
            def downloadTerraTag(workingDirectory, version, os, arch) {
              String terraTagFile = "terratag_${version}_${os}_${arch}.tar.gz"
              String terraTagURL = "https://github.com/env0/terratag/releases/download/v${version}/${terraTagFile}"
              println "Downloading $terraTagURL"
              FileUtils.copyURLToFile(new URL(terraTagURL), new File("${workingDirectory}/${terraTagFile}"))
            }
          } 
          new TerraTagDownloader().downloadTerraTag("$workingDirectory", "0.1.29", "darwin", "amd64")
          "TerraTag Download Completed..."
      - runtime: "BASH"
        priority: 200
        before: true
        script: |
          cd $workingDirectory;
          tar -xvf terratag_0.1.29_darwin_amd64.tar.gz;
          chmod +x terratag;
          ./terratag -tags="{\"environment_id\": \"development\"}"
  - name: "Apply"
    type: "terraformApply"
    step: 300
  - name: "Destroy"
    type: "terraformDestroy"
    step: 400

The previous example uses Terratag to add the environment_id tag to all the resources in the workspace. To accomplish that, the template defines some additional commands during the Terraform plan: a Groovy class is used to download the Terratag binary, and a Bash script executes the terratag CLI to append the required tags.

Using templates you can define all the business logic you need to execute in your workspace; basically, you can integrate Terrakube with any external tool. For example, you can use Infracost to calculate the cost of the resources to be created in your workspace, or verify some policies using Open Policy Agent.

Terrakube extensions can be stored inside a Git repository that you can configure when starting the platform. This is an example repository that you can fork or customize to create your own extensions: https://github.com/AzBuilder/terrakube-extensions

Some tools are very common, and you don't want to repeat the same code every time; maybe another engineer has already created the code and the logic to integrate with a specific tool. For these cases, it is possible to reuse some logic inside the template using Terrakube extensions. Basically, if someone else has developed an integration with an external third-party service, you don't need to implement it again; you can use the extension. Let's see what the template looks like using the Terratag extension:

flow:
  - name: "Plan"
    type: "terraformPlan"
    step: 100
    commands:
      - runtime: "GROOVY"
        priority: 100
        before: true
        script: |
          import TerraTag
          new TerraTag().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.1.30")
          "Terratag download completed"

As you can see, the above script is more compact as it reuses some existing logic. If you want to simplify and reuse templates in your organization, please check Import Templates.

Using Templates you can even customize the UI for each job, see UI templates for more details.

To see the complete list of Terrakube extensions, see the GitHub repo. This repo is always growing, so it's possible the tool you need is already there. In this repo you can also find more template examples for things like approvals, variable injection, and so on.

Persistent Context

If you need to store information generated on the steps defined in your template, you can use persistent context.

Creating a Template

Manage Templates permission is required to perform this action; please check Team Management for more info

Once you are in the desired organization, click the Settings button, then in the left menu select the Templates option and click the Add Template button

You will see a list of predefined templates that you can use as a quick start, or if you prefer, you can click Blank Template to start from scratch

In the next screen you can define your template and when you are ready click the Continue button.

Finally, you can assign a name and description to your template and click the Create Template button.

Now your template is ready and you can start using it in any workspace within your organization.

By default Terrakube doesn't create any templates, so you have to define the templates in your organization based on your requirements.

Editing a Template

Click the Edit button next to the Template you would like to edit

Edit your template definition, description or name as desired and when you are ready click the Save Template button

Deleting a Template

Click the Delete button next to the Template you want to delete and then click Yes to confirm the deletion

Quick start

Hello Actions

Let's start writing our Hello World action. This action will include a single button with the text "Quick Start".

({ context }) => {
   return (
      <Button>
         Quick Start
      </Button>
   );
};

As you can see, an action is essentially some JavaScript code, specifically a React component. If you are not familiar with JavaScript and React, I recommend learning the basics of React before continuing.

This component contains only one button and receives a context parameter. We will discuss more about the context later. For now, you only need to know that the context provides some data related to the area where the action will appear.

Terrakube uses Ant Design (antd) components, so by default, all antd components are available when you are developing an action.

Testing our Hello world action

To create the action in your Terrakube instance, navigate to the Organization settings and click the Create Action button

Add the following data and click the save button.

Field Name
Description

ID

terrakube.quick-start

Name

Quick Start

Type

Workspace/Action

Label

Quick Start

Category

General

Version

1.0.0

Description

Quick Start action

Display Criteria

Add a new display criteria and enter true. This means the action will always appear

Action

Add the above code

Active

Enable

If you go to any workspace, the quick start button should appear.

Adding Interaction to the Action

Now let's add some interaction to the button. In the next example, we will use the Ant Design Messages component.

({ context }) => {
  const [messageApi, contextHolder] = message.useMessage();

  const showMessage = () => {
    messageApi.success('Hello Actions');
  };

  return (
    <>
      {contextHolder}
      <Button onClick={showMessage}>
        Quick Start
      </Button>
    </>
  );
};

Now, if you update the action code, navigate to any workspace, and click the Quick Start button, you will see the Hello Actions message.

Using the context to access workspace data

To make this action more useful, you can make use of the context. The context is an object that contains data based on the Action Type. If you want to see all the data available for every action type, check the Action Context documentation.

For our quick start action, we will add the name of the workspace to the message.

({ context }) => {
  const [messageApi, contextHolder] = window.antd.message.useMessage();

  const showMessage = () => {
    messageApi.success(`Hello ${context.workspace.attributes.name}`);
  };

  return (
    <>
      {contextHolder}
      <Button onClick={showMessage}>
        Quick Start
      </Button>
    </>
  );
};

Update the action code using the code above, click the Quick Start button, and you will see the message now displays the Workspace Name.

Using display criteria

If you view a couple of workspaces in the organization, you will see the "Quick Start" button always appears. This is useful if your action serves a general purpose, but there are scenarios where you want to display the action only for workspaces in a specific organization, with a specific version of Terraform, and so on.

In those cases, you will use display criteria, which provides a powerful way to decide when an action will appear or not. If you remember, we set the display criteria to true before. Now let's modify the criteria to show the action only if the workspace contains a specific name.

Navigate to the Actions settings and change the display criteria to:

context.workspace.attributes.name === "sample_simple"

(You can use another workspace name if you don't have the "sample_simple" workspace.)

Now the button will appear only for the sample_simple workspace. If you notice, the display criteria is a conditional JavaScript code. So you can use any JavaScript code to display the action based on a regular expression, for organizations starting with certain names, or even combine multiple conditions.

Button appears
Button doesn't appear
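Since a display criteria is plain JavaScript, you can try expressions like these outside Terrakube by evaluating them against a mock context. The workspace name and the exact context shape below are illustrative assumptions:

```javascript
// Mock of the context object passed to display criteria (shape assumed)
const context = {
  workspace: { attributes: { name: "prod-networking", terraformVersion: "1.5.0" } },
};

// Regular expression criteria: show only for workspaces whose name starts with "prod-"
const matchesProd = /^prod-/.test(context.workspace.attributes.name);

// Combined criteria: prod workspaces on a specific Terraform minor version
const showAction =
  matchesProd && context.workspace.attributes.terraformVersion.startsWith("1.5");

console.log(showAction); // true for this sample context
```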

At this moment, our action only interacts with Terrakube data. However, a common requirement is to access other APIs to get some data, for example, to connect to the GitHub API to get the issues related to the workspace, or to connect to a cloud provider to interact with their APIs.

In these cases, we can make use of the Action Proxy. The proxy provides a way to connect to other backend services without needing to directly configure CORS for the API. The proxy also provides a secure way to use credentials without exposing sensitive data to the client.

Let's add a simple interaction using mock data from the JSONPlaceholder API that reads the comments from a post.

({ context }) => {
  const [dialogVisible, setDialogVisible] = useState(false);
  const [loading, setLoading] = useState(false);
  const [comments, setComments] = useState([]);

  const fetchData = async () => {
    setLoading(true);
    try {
      const response = await axiosInstance.get(`${context.apiUrl}/proxy/v1`, {
        params: {
          targetUrl: 'https://jsonplaceholder.typicode.com/posts/1/comments',
          proxyheaders: JSON.stringify({
            'Content-Type': 'application/json',
          }),
          workspaceId: context.workspace.id
        }
      });

      setComments(response.data);
      setDialogVisible(true);
    } catch (error) {
      console.error('Error fetching data:', error);
      message.error('Error fetching data');
    } finally {
      setLoading(false);
    }
  };

  const closeDialog = () => {
    setDialogVisible(false);
  };

  return (
    <>
      <Button
        type="default"
        onClick={fetchData}
        loading={loading}
      >
        Quick Start
      </Button>

      <Modal
        title="Comments"
        visible={dialogVisible}
        onCancel={closeDialog}
        footer={[
          <Button key="close" onClick={closeDialog}>
            Close
          </Button>,
        ]}
      >
        {comments.map((comment) => (
          <div key={comment.id} style={{ marginBottom: '10px' }}>
            <p><b>ID:</b> {comment.id}</p>
            <p><b>Name:</b> {comment.name}</p>
            <p><b>Email:</b> {comment.email}</p>
            <p><b>Comment:</b> {comment.body}</p>
          </div>
        ))}
      </Modal>
    </>
  );
};

Now, if you click the Quick Start button, you will see a dialog with the comments data from the API. The proxy allows you to interact with external APIs and securely manage credentials. To learn more about the Actions proxy, check the documentation.

Summary

Actions in Terrakube allow you to enhance the Workspace experience by creating custom interactions and functionalities tailored to your needs. You can start by exploring the Built-in Actions to understand their implementation and gain more experience.

Actions are React components, and you define an Action Type to determine where the action will appear within the interface. Using Display Criteria, you can specify the precise conditions under which an action should be displayed, such as for specific organizations or Terraform versions.

Additionally, the Action Proxy enables integration with external APIs, securely managing credentials without requiring CORS configuration on the Terrakube front end. This is particularly useful when you don't have control over the external API's CORS settings.

Using Private Modules

All users in an organization with Manage Modules permission can view the Terrakube private registry and use the available providers and modules.

Using private modules

Click Registry in the main menu. And then click the module you want to use.

In the module page, select the version in the dropdown list

You can copy the code reference from the module page

For example:

module "google-network" { 
  source = "8075-azbuilder-terrakube-gmnub6flawx.ws-us89b.gitpod.io/terrakube/google-network/google" 
  version = "v6.0.1" 
  # insert required variables here 
}

Submodules

If your repository has submodules, Terrakube will scan the modules folder to identify all the submodules. Then, you will see a dropdown list with the submodules in the UI.

To view the readme file, inputs, outputs, resources and other information for any submodule, you can choose the submodule you want.

You can also copy the details of how to configure any submodule, just like the main module. Example:

module "iam" { 
  source = "8075-azbuilder-terrakube-7qovhyoq3u9.ws-eu105.gitpod.io/aws/iam/aws//modules/iam-account" 
  version = "v5.30.0" 
  # insert required variables here 
}

In the submodule view, you can switch between the different submodules.

Or you can go back to the main module with the Back button at the top of the page.

Configure Authentication

When running Terraform on the CLI, you must configure credentials in .terraformrc or terraform.rc to access the private modules.

For example:

credentials "8075-azbuilder-terrakube-gmnub6flawx.ws-us89b.gitpod.io" { 
  # valid user API token:
  token = "xxxxxx.yyyyyy.zzzzzzzzzzzzz"
}

To get the API token you can check API Tokens.

As an alternative, you can run the terraform login command to obtain and save a user API token.

terraform login [terrakube hostname]

Display Criteria

Display Criteria offer a flexible way to determine when an action will appear. While you can add logic inside your component to show or hide the action, using display criteria provides additional advantages:

  • Optimize Rendering: Save on the loading cost of each component.

  • Multiple Criteria: Add multiple display criteria based on your requirements.

  • Conditional Settings: Define settings to be used when the criteria are met.

A display criteria is a JavaScript condition where you can use the data from the Action Context to decide if the action will appear.

Examples

Always display the action

true

Display the action for Workspaces with specific Terraform version.

context.workspace.attributes.terraformVersion === "1.0.0"

Display the action for Azure resources

context.state.provider.includes("azurerm")

Display the action only for Azure VM

context.state.type.includes("azurerm_virtual_machine")

Display Criteria Settings

You can set specific settings for each display criteria, which is useful when the action provides configurable options. For example, suppose you have an action that queries GitHub issues using the GitHub API and you want to display the issues using GraphQL. Your issues are stored in two different repositories: one for production and one for non-production.

In this case, you can create settings based on the display criteria to select the repository name. This approach allows your action code to be reusable. You only need to change the settings based on the criteria, rather than modifying the action code itself.

For instance:

  • Production Environment: For workspaces starting with prod, use the repository name for production issues.

context.workspace.attributes.name.startsWith("prod")
  • Non-Production Environment: Use the repository name for non-production issues

context.workspace.attributes.name.startsWith("dev")

By setting these configurations, you ensure that your action dynamically adapts to different environments or conditions, enhancing reusability and maintainability.

Multiple conditions using settings

Sensitive Settings

You might be tempted to store secrets inside settings; however, display criteria settings don't provide a secure way to store sensitive data. For cases where you need to use different credentials for an action based on your workspace, organization or any other condition, you should use template notation instead. This approach allows you to use Workspace Variables or Global Variables to store sensitive settings securely.

For example, instead of directly storing an API key in the settings, you can reference a variable:

  • For the development environment: ${{var.dev_api_key}}

  • For the production environment: ${{var.prod_api_key}}

For the previous example, you will need to create the variables at the workspace level or use global variables with the names dev_api_key and prod_api_key.

For more details about using settings in your action and template variables check Action Proxy.
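To make the ${{var.variable_name}} notation concrete, the sketch below mimics the substitution with a simple string replacement. This is only an illustration; the variable name, value, and replacement logic are assumptions, not the proxy's actual implementation:

```javascript
// Illustrative only: replace ${{var.NAME}} placeholders with values from a variable map
function injectVariables(template, variables) {
  return template.replace(/\$\{\{var\.([A-Za-z0-9_]+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

// Hypothetical header using a workspace or global variable
const header = "Bearer ${{var.dev_api_key}}";
const result = injectVariables(header, { dev_api_key: "abc123" });
console.log(result); // "Bearer abc123"
```

In Terrakube, this substitution happens on the server side, which is what keeps the sensitive values out of the client.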

Multiple Display Actions

You can define multiple display criteria for your actions. In this case, the first criteria to be met will be applied, and the corresponding settings will be provided via the context.
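First-match selection over multiple display criteria can be sketched like this (the condition functions and settings shapes are hypothetical, not Terrakube internals):

```javascript
// Each display criteria pairs a JavaScript condition with optional settings
const criteria = [
  {
    condition: (ctx) => ctx.workspace.attributes.name.startsWith("prod"),
    settings: { repo: "issues-prod" },
  },
  { condition: () => true, settings: { repo: "issues-dev" } },
];

// The first criteria whose condition is met supplies the settings via the context
function resolveSettings(ctx) {
  const match = criteria.find((c) => c.condition(ctx));
  return match ? match.settings : null;
}

const ctx = { workspace: { attributes: { name: "prod-networking" } } };
console.log(resolveSettings(ctx)); // { repo: "issues-prod" }
```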

Workspace scheduler

Terrakube allows you to schedule jobs inside workspaces, letting you execute a specific template at a specific time.

To create a workspace schedule, click the "schedule" tab and then click "Add schedule" as shown in the following image.

This is useful, for example, when you want to destroy your infrastructure at the end of every week to save some money with your cloud provider and recreate it every Monday morning.

Example.

Let's destroy the workspace every Friday at 6:30 pm

Cron expression will be 30 18 * * 5

Let's create the workspace every Monday at 5:30 am

Cron expression will be 30 5 * * 1
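As a reminder, the five fields of a cron expression are minute, hour, day of month, month, and day of week. A small helper (illustrative only, not part of Terrakube) makes the expressions above easy to read:

```javascript
// Split a standard five-field cron expression into labeled parts
function describeCron(expression) {
  const [minute, hour, dayOfMonth, month, dayOfWeek] = expression.trim().split(/\s+/);
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}

// "30 18 * * 5" -> minute 30, hour 18, every Friday (day of week 5)
console.log(describeCron("30 18 * * 5"));
// "30 5 * * 1" -> minute 30, hour 5, every Monday (day of week 1)
console.log(describeCron("30 5 * * 1"));
```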

Cost Estimation

Terrakube can be integrated with Infracost using the Terrakube extension to validate the Terraform plan and estimate the monthly cost of the resources. Below you can find an example of how this can be achieved.

Template Definition

The first step is to create a Terrakube template that calculates the estimated cost of the resources using Infracost and validates the total monthly cost, allowing the deployment only if the price is below 100 USD.

In a high level this template will do the following:

  • Run the terraform plan for the Terrakube workspace

  • After the terraform plan completes successfully, it will import Infracost inside our job using the Infracost extension.

  • Once Infracost has been imported successfully inside our Terrakube Job, you can use it to generate the cost estimation and apply some business rules with a simple BASH script. Based on the result, the Terrakube Job can continue or be cancelled if the monthly cost is above 100 USD.

You can use the Terrakube UI to set up the new template and use it across the Terrakube organization.

Terraform resources

Now that you have defined the Terrakube template, you can define a workspace to test it. For example, you can use the following to create some resources in Azure.

This will deploy an Azure Service Plan which costs 73 USD per month.

Let's create a workspace with this information in Terrakube to test the template and run the deployment.

You will have to define the correct credentials to deploy the resource, set up the environment variables, and include the Infracost key.

Azure credentials and the Infracost key can be defined using Global Variables inside the organization so you don't have to define them in every workspace. For more information check Global Variables.

Now we can start the Terrakube job using the template which include the cost validation.

After a couple of seconds you should be able to see the terraform plan information and the monthly cost validation.

You can also visit the infracost dashboard to see the cost detail.

Now let's update the terraform resource and use a more expensive one.

This will deploy an Azure Service Plan which costs 146 USD per month.

Now you can run the workspace again, and the deployment will fail because it does not comply with the maximum cost of 100 USD per month that you defined in the Terrakube template.

This is how you can integrate Terrakube with Infracost to validate the cost of the resources that you will be deploying, and even add custom business rules inside the jobs.

Global Variables

Global Variables allow you to define and apply variables one time across all workspaces within an organization. For example, you could define a global variable of provider credentials and automatically apply it to all workspaces.

Workspace variables have priority over global variables if the same name is used.
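Conceptually, this precedence behaves like a map merge where workspace entries override global ones. The variable names and values below are made up for illustration:

```javascript
// Global variables apply to every workspace in the organization
const globalVariables = { region: "us-east-1", owner: "platform-team" };

// Workspace variables with the same name take priority
const workspaceVariables = { region: "eu-west-1" };

// Later spread entries win, so the workspace value overrides the global one
const effective = { ...globalVariables, ...workspaceVariables };
console.log(effective); // { region: "eu-west-1", owner: "platform-team" }
```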

Creating a Global Variable

Only users that belong to the Terrakube administrator group can create global variables. This group is defined in the Terrakube settings during deployment; for more details see Administrator group.

Once you are in the desired organization, click the Settings button, then in the left menu select the Global Variables option and click the Add global variable button

In the popup, provide the required values. Use the below table as reference:

Field
Description

Finally click the Save global variable button and the variable will be created

You will see the new global variable in the list, and the variable will now be injected into all the workspaces within the organization.

Edit a Global Variable

Click the Edit button next to the global variable you want to edit.

Change the fields you need and click the Save global variable button

For security reasons, you can't change the Sensitive field. If you want to change a global variable to sensitive, you must delete the existing variable and create a new one.

Delete a Global Variable

Click the Delete button next to the global variable you want to delete, and then click the Yes button to confirm the deletion. Please take into consideration that the deletion is irreversible.

Minikube

The following will install Terrakube using HTTP; a few features like the Terraform registry and the Terraform remote state won't be available because they require HTTPS. To install with HTTPS support locally with minikube, check this.

Terrakube can be installed in minikube as a sandbox environment for testing. To use it, please follow these steps:

Setup Helm Repository

Setup Minikube

Copy the minikube ip address ( Example: 192.168.59.100)

Setup Terrakube namespace

Setup DNS records

Install Terrakube

It will take a few minutes for all the pods to be available; the status can be checked using:

kubectl get pods -n terrakube

If you see the message "Snippet directives are disabled by the Ingress administrator", please update the ingress-nginx-controller ConfigMap in the ingress-nginx namespace by adding the following:

The environment has some users, groups and sample data so you can test it quickly.

Visit http://terrakube-ui.minikube.net and login using [email protected] with password admin

You can login using the following user and passwords

User
Password
Member Of
Groups

The sample user and groups information can be updated in the kubernetes secret:

terrakube-openldap-secrets

Minikube will use a very simple OpenLDAP; make sure to change this when using Terrakube in a real Kubernetes environment by setting the option security.useOpenLDAP=false in your helm deployment.

Select the "simple" organization and the "sample_simple" workspace and run a job.

For more advanced configuration options to install Terrakube visit: https://github.com/AzBuilder/terrakube-helm-chart

Github Enterprise

To use Terrakube’s VCS features with a self-hosted GitHub Enterprise, follow these instructions. For GitHub.com, see the separate instructions.

The Manage VCS Providers permission is required to perform this action; please check Team Management for more info.

  • Go to the settings of the organization you want to configure. Select the VCS Providers option in the left menu and click the Add VCS provider button in the top right corner.

If you prefer, you can add a new VCS Provider directly from the Create workspace or Create Module screen.

  • Click the Github button and then click the Github Enterprise option.

  • On the next screen, enter the following information about your Github Enterprise instance:

Field
Value
  • Use a different browser tab to access your GitHub Enterprise instance and sign in with the account that you want Terrakube to use. You should use a service user account for your organization, but you can also use a personal account.

Note: The account that connects Terrakube with your GitHub Enterprise instance needs to have admin rights to any shared Terraform configuration repositories. This is because creating webhooks requires admin permissions.

  • Navigate to GitHub's Register a New OAuth Application page. This page is located at https://<GITHUB INSTANCE HOSTNAME>/settings/applications/new

  • In the Github Enterprise page, complete the required fields and click Register application

Field
Description

You can complete the fields using the information suggested by Terrakube in the VCS provider screen.

Next, generate a new client secret

Copy the Client Id and Client Secret from Github Enterprise and go back to Terrakube to complete the required information. Then, click the Connect and Continue button

You will be redirected to Github Enterprise. Click the Authorize button to complete the connection.

When the connection is successful, you will go back to the VCS provider’s page in your organization. You will see the connection status, the date, and the user who made the connection.

You can now use the connection for your workspaces and modules.

Persistent Context

Persistent Context is helpful when you need to save information from the job execution; for example, it can be used to save the Infracost output or the thread id when using the Slack extension. You can also use it to save any JSON information generated inside Terrakube Jobs.

Using Persistent Contexts with the Terrakube API

To save the information, the Terrakube API exposes the following endpoint:

Job context can only be updated when the status of the Job is running.

To get the context you can use the Terrakube API
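For example, a small client sketch for the context endpoint could look like the following. The URL shape matches the endpoint above; the hostname and Bearer token handling are placeholder assumptions:

```javascript
// Build the persistent context endpoint URL for a given job
const contextUrl = (terrakubeApi, jobId) => `${terrakubeApi}/context/v1/${jobId}`;

// Sketch: save JSON data into the job context (only works while the job is running)
async function saveContext(terrakubeApi, jobId, token, data) {
  const response = await fetch(contextUrl(terrakubeApi, jobId), {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // placeholder: supply a valid API token
    },
    body: JSON.stringify(data),
  });
  return response.json();
}

// Sketch: read the current job context
async function getContext(terrakubeApi, jobId, token) {
  const response = await fetch(contextUrl(terrakubeApi, jobId), {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}
```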

Using Persistent Contexts in Templates

The persistent context can be used through the Context extension from the Terrakube extension repository. It supports saving a JSON file or saving a new property inside the context JSON.

This is an example of a Terrakube template using the persistent job context to save the infracost information.

You can use persistent context to customize the Job UI; see UI templates for more details.

API Tokens

Terrakube has two kinds of API tokens: user and team. Each token type has a different access level and creation process, which are explained below.

API tokens are only shown once when you create them, and then they are hidden. You have to make a new token if you lose the old one.

User API Tokens

API tokens may belong directly to a user. User tokens inherit permissions from the user they are associated with.

User API tokens can be generated inside the User Settings.

Click the Generate an API token button

Add a short description for the token and set the duration

The new token will be shown and you can copy it to start calling the Terrakube API using Postman or some other tool.

Delete API Tokens

To delete an API token, navigate to the list of tokens, click the Delete button next to the token you wish to remove, and confirm the deletion.

Team API Tokens

API tokens may belong to a specific team. Team API tokens allow access to Terrakube without being tied to any specific user.

A team token can only be generated if you are a member of the specified team.

To manage the API token for a team, go to Settings > Teams > Edit button on the desired team.

Click the Create a Team Token button in the Team API Tokens section.

Add the token description and duration and click the Generate token button

The new token will be shown and you can copy it to start calling the Terrakube API using Postman or some other tool.

Delete Team API Tokens

To delete an API token, navigate to the list of tokens, click the Delete button next to the token you wish to remove, and confirm the deletion.

Action Proxy

The Action Proxy allows you to call other APIs without needing to add the Terrakube frontend to the CORS settings, which is particularly useful if you don't have access to configure CORS. Additionally, it can be used to inject Workspace variables and Global variables into your requests.

Calling the Action Proxy

The proxy supports POST, GET, PUT, and DELETE methods. You can access it using the axiosInstance variable, which contains an Axios object.

To invoke the proxy, use the following URL format: ${context.apiUrl}/proxy/v1

When calling the Action Proxy, use the following required parameters:

  • targetUrl: Contains the URL that you want to invoke.

  • proxyHeaders: Contains the headers that you want to send to the target URL.

  • workspaceId: The workspace ID that will be used for some validations from the proxy side

Injecting Variables via the Action Proxy

If you need to access sensitive keys or passwords from your API call, you can inject variables using the template notation ${{var.variable_name}}, where variable_name represents a Workspace variable or a Global variable.

If you have a Global variable and a Workspace variable with the same name, the Workspace variable value will take priority.

Example Usage

In this example:

  • ${{var.OPENAI_API_KEY}} is a placeholder for a sensitive key that will be injected by the proxy.

  • proxyBody contains the data to be sent in the request body.

  • proxyHeaders contains the headers to be sent to the target URL.

  • targetUrl specifies the API endpoint to be invoked.

  • workspaceId is used for validations on the proxy side.

By using the Action Proxy, you can easily interact with external APIs while using Terrakube's capabilities to manage and inject necessary variables securely.

POST {{terrakubeApi}}/context/v1/{{jobId}}
{
  "slackThreadId": "12345667",
  "infracost": {}
}
GET {{terrakubeApi}}/context/v1/{{jobId}}
import Context
        
new Context("$terrakubeApi", "$terrakubeToken", "$jobId", "$workingDirectory").saveFile("infracost", "infracost.json")
new Context("$terrakubeApi", "$terrakubeToken", "$jobId", "$workingDirectory").saveProperty("slackId", "1234567890")
flow:
- type: "terraformPlan"
  step: 100
  commands:
    - runtime: "GROOVY"
      priority: 100
      after: true
      script: |
        import Infracost

        String credentials = "version: \"0.1\"\n" +
                "api_key: $INFRACOST_KEY \n" +
                "pricing_api_endpoint: https://pricing.api.infracost.io"

        new Infracost().loadTool(
           "$workingDirectory",
           "$bashToolsDirectory", 
           "0.10.12",
           credentials)
        "Infracost Download Completed..."
    - runtime: "BASH"
      priority: 200
      after: true
      script: |
        terraform show -json terraformLibrary.tfPlan > plan.json 
        INFRACOST_ENABLE_DASHBOARD=true infracost breakdown --path plan.json --format json --out-file infracost.json
    - runtime: "GROOVY"
      priority: 300
      after: true
      script: |
        import Context
        
        new Context("$terrakubeApi", "$terrakubeToken", "$jobId", "$workingDirectory").saveFile("infracost", "infracost.json")

        "Save context completed..."
({ context }) => {
   const fetchData = async () => {
    try {
      const response = await axiosInstance.get(`${context.apiUrl}/proxy/v1`, {
        params: {
          targetUrl: 'https://jsonplaceholder.typicode.com/posts/1/comments',
          proxyheaders: JSON.stringify({
            'Content-Type': 'application/json',
          }),
          workspaceId: context.workspace.id
        }
      });

      console.log(response.data);
    } catch (error) {
      console.error('Error fetching data:', error);
    }
 
  };
const response = await axiosInstance.post(`${context.apiUrl}/proxy/v1`, {
  targetUrl: 'https://api.openai.com/v1/chat/completions',
  proxyHeaders: JSON.stringify({
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ${{var.OPENAI_API_KEY}}'
  }),
  workspaceId: context.workspace.id,
  proxyBody: JSON.stringify({
    model: 'gpt-4',
    messages: updatedMessages,
  })
}, {
  headers: {
    'Content-Type': 'application/json',
  }
});
flow:
- type: "terraformPlan"
  name: "Terraform Plan with Cost Estimation"
  step: 100
  commands:
    - runtime: "GROOVY"
      priority: 100
      after: true
      script: |
        import Infracost

        String credentials = "version: \"0.1\"\n" +
                "api_key: $INFRACOST_KEY \n" +
                "pricing_api_endpoint: https://pricing.api.infracost.io"

        new Infracost().loadTool(
           "$workingDirectory",
           "$bashToolsDirectory", 
           "0.10.12",
           credentials)
        "Infracost Download Completed..."
    - runtime: "BASH"
      priority: 200
      after: true
      script: |
        terraform show -json terraformLibrary.tfPlan > plan.json;
        INFRACOST_ENABLE_DASHBOARD=true infracost breakdown --path plan.json --format json --out-file infracost.json;
        totalCost=$(jq -r '.totalMonthlyCost' infracost.json);
        urlTotalCost=$(jq -r '.shareUrl' infracost.json);
        echo "Total Monthly Cost: $totalCost USD"
        echo "For more detail information please visit: $urlTotalCost"
        # awk handles decimal costs, which bash integer arithmetic cannot compare
        if awk "BEGIN { exit !($totalCost < 100) }";
        then
          echo "The total cost for the resource is below 100 USD. Deployment is approved";
        else
          echo "The total cost for the resource is above 100 USD, cancelling operation";
          exit 1
        fi;
- type: "terraformApply"
  name: "Terraform Apply"
  step: 200
...
flow:
- type: "terraformPlan"
  name: "Terraform Plan with Cost Estimation"
  step: 100
...
...
    - runtime: "GROOVY"
      priority: 100
      after: true
      script: |
        import Infracost

        String credentials = "version: \"0.1\"\n" +
                "api_key: $INFRACOST_KEY \n" +
                "pricing_api_endpoint: https://pricing.api.infracost.io"

        new Infracost().loadTool(
           "$workingDirectory",
           "$bashToolsDirectory", 
           "0.10.12",
           credentials)
        "Infracost Download Completed..."
...
...
    - runtime: "BASH"
      priority: 200
      after: true
      script: |
        terraform show -json terraformLibrary.tfPlan > plan.json;
        INFRACOST_ENABLE_DASHBOARD=true infracost breakdown --path plan.json --format json --out-file infracost.json;
        totalCost=$(jq -r '.totalMonthlyCost' infracost.json);
        urlTotalCost=$(jq -r '.shareUrl' infracost.json);
        echo "Total Monthly Cost: $totalCost USD"
        echo "For more detail information please visit: $urlTotalCost"
        # awk handles decimal costs, which bash integer arithmetic cannot compare
        if awk "BEGIN { exit !($totalCost < 100) }";
        then
          echo "The total cost for the resource is below 100 USD. Deployment is approved";
        else
          echo "The total cost for the resource is above 100 USD, cancelling operation";
          exit 1
        fi;
...
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.25.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "api-rg-pro"
  location = "West Europe"
}

resource "azurerm_app_service_plan" "example" {
  name                = "api-appserviceplan-pro"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  sku {
    tier = "Standard"
    size = "S1"
  }
}
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.25.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "api-rg-pro"
  location = "East Us 2"
}

resource "azurerm_app_service_plan" "example" {
  name                = "api-appserviceplan-pro"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  sku {
    tier = "Standard"
    size = "S2"
  }
}

Key: Unique variable name

Value: Key value

Category: Could be Terraform Variable or Environment Variable

Description: Free text to document the reason for this global variable

HCL: Parse this field as HashiCorp Configuration Language (HCL). This allows you to interpolate values at runtime.

Sensitive: Sensitive variables are never shown in the UI or API. They may appear in Terraform logs if your configuration is designed to output them.

helm repo add terrakube-repo https://AzBuilder.github.io/terrakube-helm-chart
helm repo update
minikube start
minikube addons enable ingress
minikube addons enable storage-provisioner
minikube ip 
kubectl create namespace terrakube
sudo nano /etc/hosts

192.168.59.100 terrakube-ui.minikube.net
192.168.59.100 terrakube-api.minikube.net
192.168.59.100 terrakube-reg.minikube.net
helm install terrakube terrakube-repo/terrakube -n terrakube
allow-snippet-annotations: "true"

Reference: https://github.com/AzBuilder/terrakube/issues/618#issuecomment-1838980451

User: [email protected] | Password: admin | Member of: TERRAKUBE_ADMIN, TERRAKUBE_DEVELOPERS

User: [email protected] | Password: aws | Member of: AWS_DEVELOPERS

User: [email protected] | Password: gcp | Member of: GCP_DEVELOPERS

User: [email protected] | Password: azure | Member of: AZURE_DEVELOPERS

Available groups: TERRAKUBE_ADMIN, TERRAKUBE_DEVELOPERS, AZURE_DEVELOPERS, AWS_DEVELOPERS, GCP_DEVELOPERS


HTTP URL: https://<GITHUB INSTANCE HOSTNAME>

API URL: https://<GITHUB INSTANCE HOSTNAME>/api/v3

Application Name: Your application name; for example, you can use your organization name.

Homepage URL: The URL for your application or website.

Application Description: Any description you choose.

Authorization Callback URL: Copy the callback URL from the Terrakube UI.


Ephemeral Agents

This feature is supported from version 2.22.0

The following will explain how to run the executor component in "ephemeral" mode.

These environment variables can be used to customize the API component:

  • ExecutorEphemeralNamespace (Default value: "terrakube")

  • ExecutorEphemeralImage (Default value: "azbuilder/executor:2.22.0" )

  • ExecutorEphemeralSecret (Default value: "terrakube-executor-secrets" )

The above basically controls where the job will be created and executed, and mounts the secrets required by the executor component.

Internally, the Executor component will use the following to run in "ephemeral" mode:

  • EphemeralFlagBatch (Default value: "false")

  • EphemeralJobData, which contains all the data that the executor needs to run.

Requirements

To use Ephemeral executors we need to create the following configuration:

Service Account Creation

apiVersion: v1
kind: ServiceAccount
metadata:
  name: terrakube-api-service-account
  namespace: terrakube

Role Creation

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: terrakube
  name: terrakube-api-role
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

Role Binding Creation

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: terrakube-api-role-binding
  namespace: terrakube
subjects:
- kind: ServiceAccount
  name: terrakube-api-service-account
  namespace: terrakube
roleRef:
  kind: Role
  name: terrakube-api-role
  apiGroup: rbac.authorization.k8s.io

Helm Chart Configuration

Once the above configuration is created, we can deploy the Terrakube API as in the following example:

## API properties
api:
  image: "azbuilder/api-server"
  version: "2.22.0"
  serviceAccountName: "terrakube-api-service-account"
  env:
  - name: ExecutorEphemeralNamespace
    value: terrakube
  - name: ExecutorEphemeralImage
    value: azbuilder/executor:2.22.0
  - name: ExecutorEphemeralSecret
    value: terrakube-executor-secrets

Workspace Configuration

Add the environment variable TERRAKUBE_ENABLE_EPHEMERAL_EXECUTOR=1 as shown in the image below

Workspace Execution

Now when a job runs, Terrakube will internally create a Kubernetes Job and execute each step of the job in an "ephemeral" executor.

Internal Kubernetes Job Example:

Plan Running in a pod:

Apply Running in a different pod:

Node Selector

If required, you can specify the node selector configuration for the pod that will be created, using something like the following:

api:
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Dorg.terrakube.executor.ephemeral.nodeSelector.diskType=ssd -Dorg.terrakube.executor.ephemeral.nodeSelector.nodeType=spot"

The above is equivalent to the following Kubernetes YAML:

  nodeSelector:
    diskType: ssd
    nodeType: spot

Adding node selector configuration is available from version 2.23.0

Team Management

In Terrakube you can define user permissions inside your organization using teams.

Creating a Team

Once you are in the desired organization, click the Settings button and then in the left menu select the Teams option.

Click the Create team button

In the popup, provide the team name and the permissions assigned to the team. Use the below table as reference:

Field
Description

Name

Must be a valid group based on the Dex connector you are using to manage users and groups.

For example, if you are using Azure Active Directory, you must use a valid Active Directory group like TERRAKUBE_ADMIN; if you are using GitHub, the format should be MyGithubOrg:TERRAKUBE_ADMIN

Manage Workspaces

Allow members to create and administer all workspaces within the organization

Manage Modules

Allow members to create and administer all modules within the organization

Manage Providers

Allow members to create and administer all providers within the organization

Manage Templates

Allow members to create and administer all templates within the organization

Manage VCS Settings

Allow members to create and administer all VCS settings within the organization

Finally, click the Create team button and the team will be created.

Now all the users in the team will be able to manage the specific resources within the organization based on the permissions you granted.

Edit a Team

Click the Edit button next to the team you want to edit

Change the permissions you need and click the Save team button

Delete a Team

Click the Delete button next to the team you want to delete, and then click the Yes button to confirm the deletion. Please note that the deletion is irreversible.

Variables

Terrakube workspace variables let you customize configurations and store information like static provider credentials. You can also reference these variables inside your Templates.

You can set variables specifically for each workspace, or you can create global variables to reuse the same variables across multiple workspaces.

Workspace-Specific Variables

To view and manage a workspace's variables, go to the workspace and click the Variables tab.

The Variables page appears, showing all workspace-specific variables. This is where you can add, edit, and delete workspace-specific variables.

There are two kinds of variables:

Terraform Variables

These Terraform variables are set using a terraform.tfvars file. To use interpolation or set a non-string value for a variable, click its HCL checkbox.

Add a Terraform Variable

Go to the workspace Variables page and in the Terraform Variables section click the Add variable button.

In the popup, provide the required values. Use the below table as reference:

Field
Description

Finally click the Save variable button and the variable will be created.

Edit a Terraform Variable

Click the Edit button next to the variable you want to edit.

Make any desired changes and click Save variable.

Delete a Terraform Variable

Click the Delete button next to the variable you want to delete.

Click Yes to confirm your action.

Environment Variables

These variables are set in Terraform's shell environment using export.
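As background, Terraform itself also picks up environment variables that use the TF_VAR_ prefix as input variable values; this is standard Terraform behavior, independent of Terrakube. A minimal sketch of that convention (the variable names here are only illustrative):

```python
def terraform_env_vars(environ):
    """Return {variable_name: value} for entries Terraform treats as input variables.

    Terraform reads any environment variable named TF_VAR_<name> as a value
    for the input variable <name>; other variables are left alone.
    """
    prefix = "TF_VAR_"
    return {k[len(prefix):]: v for k, v in environ.items() if k.startswith(prefix)}

# These would set the Terraform variables "region" and "instance_count";
# PATH is ignored because it lacks the TF_VAR_ prefix.
env = {"TF_VAR_region": "eastus2", "TF_VAR_instance_count": "3", "PATH": "/usr/bin"}
print(terraform_env_vars(env))  # {'region': 'eastus2', 'instance_count': '3'}
```

Variables that a provider reads directly (for example cloud credentials) are exported as-is, without the TF_VAR_ prefix.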

Add an Environment Variable

Go to the workspace Variables page and in the Environment Variables section click the Add variable button.

In the popup, provide the required values. Use the below table as reference:

Field
Description

Finally click the Save variable button and the variable will be created.

Edit an Environment Variable

Click the Edit button next to the variable you want to edit.

Make any desired changes and click Save variable.

Delete an Environment Variable

Click the Delete button next to the variable you want to delete.

Click Yes to confirm your action.

Key

Unique variable name

Value

Key value

Description

Free text to document the reason for this variable

HCL

Parse this field as HashiCorp Configuration Language (HCL). This allows you to interpolate values at runtime.

Sensitive

Sensitive variables are never shown in the UI or API. They may appear in Terraform logs if your configuration is designed to output them.



Ephemeral Workspaces

The following will show how easy it is to implement an ephemeral workspace using Terrakube custom schedules and templates with the remote CLI-driven workflow.

The first step is to create a new organization; let's call it "playground".

Once we have the playground organization, we need to add a team with access to create templates like the following:

We will also need a team with access to create/delete a workspace only, like the following:

Team names will depend on your Dex configuration.

Once we have the two teams in our new playground organization, we can proceed to create a new template that we will use to automatically destroy the workspace:

Let's call it "delete-playground"; it will have the following code:

flow:
  - type: "terraformDestroy"
    name: "Destroy Playground"
    step: 100
  - type: "disableWorkspace"
    name: "Delete Workspace"
    step: 200

Now we can update the default template that is used with the remote CLI-driven workflow. This template exists in every organization by default and is called "Terraform-Plan/Apply-Cli". Usually we don't need to update this template, but we will make some small changes to enable ephemeral workspaces in the playground organization.

We need to go to the templates inside the organization settings and edit the template.

We will add a new step to the template; this will schedule a job that uses the "delete-playground" template we created above.

We need to use the following template code:

flow:
- type: "terraformPlan"
  name: "Terraform Plan from Terraform CLI"
  step: 100
- type: "approval"
  name: "Approve Plan from Terraform CLI"
  step: 150
  team: "TERRAFORM_CLI"
- type: "terraformApply"
  name: "Terraform Apply from Terraform CLI"
  step: 200
- type: "scheduleTemplates"
  step: 300
  name: "Setup auto destroy"
  templates:
    - name: "delete-playground"
      schedule: "0 0/5 * ? * * *"

Notice that we just added a new section defining a schedule that will run every five minutes after the Terraform apply is completed.

In this example we will schedule the template every five minutes for testing purposes.

  name: "Setup auto destroy"
  templates:
    - name: "delete-playground"
      schedule: "0 0/5 * ? * * *"

The schedule uses the Quartz cron format; to learn more about it, use this link.
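As a quick reference, a Quartz cron expression has seven space-separated fields, the last one (year) being optional. The sketch below splits the schedule used above into its named fields; the field names follow the Quartz documentation:

```python
# Field order per the Quartz cron documentation; "year" is optional.
FIELDS = ["seconds", "minutes", "hours", "day-of-month", "month", "day-of-week", "year"]

def parse_quartz(expression):
    """Split a Quartz cron expression into a {field: value} dict."""
    parts = expression.split()
    if len(parts) == 6:  # the optional year field was omitted
        parts.append("*")
    return dict(zip(FIELDS, parts))

schedule = parse_quartz("0 0/5 * ? * * *")
# "0/5" in the minutes field means: starting at minute 0, every 5 minutes.
print(schedule["minutes"])       # 0/5
# "?" means "no specific value"; Quartz requires it in either the
# day-of-month or the day-of-week field.
print(schedule["day-of-month"])  # ?
```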

Now we need to define our Terraform code; let's use the following simple example:

terraform {
  cloud {
    organization = "playground"
    hostname = "8080-azbuilder-terrakube-h128dcdc7l1.ws-us105.gitpod.io"

    workspaces {
      tags = ["myplayground", "example"]
    }
  }
}

resource "null_resource" "previous" {}

resource "time_sleep" "wait_5_seconds" {
  depends_on = [null_resource.previous]

  create_duration = "5s"
}

resource "null_resource" "next" {
  depends_on = [time_sleep.wait_5_seconds]
}

output "creation_time" {
    value = time_sleep.wait_5_seconds.create_duration
}

Run terraform login to connect to our Terrakube instance:

terraform login 8080-azbuilder-terrakube-h128dcdc7l1.ws-us105.gitpod.io

Now we can run terraform init to initialize our workspace inside the playground organization; let's use "myplayground" as the name of our new workspace.

Let's run terraform apply and create our resources:

Preparing the remote apply...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-h128dcdc7l1.ws-us105.gitpod.io/app/playground/myplayground/runs/1

Waiting for the plan to start...

***************************************
Running Terraform PLAN
***************************************

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.next will be created
  + resource "null_resource" "next" {
      + id = (known after apply)
    }

  # null_resource.previous will be created
  + resource "null_resource" "previous" {
      + id = (known after apply)
    }

  # time_sleep.wait_5_seconds will be created
  + resource "time_sleep" "wait_5_seconds" {
      + create_duration = "5s"
      + id              = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + creation_time = "5s"

Do you want to perform these actions in workspace "myplayground"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.previous: Creating...
null_resource.previous: Creation complete after 0s [id=7198759863280029870]
time_sleep.wait_5_seconds: Creating...
time_sleep.wait_5_seconds: Creation complete after 5s [id=2023-10-18T16:05:14Z]
null_resource.next: Creating...
null_resource.next: Creation complete after 0s [id=855270182201609076]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

creation_time = "5s"

Our new workspace is created, and if we go to the organization we can see all the information.

The job will be running to create the resources:

We can see that our job is completed and the "Setup auto destroy" step has created a new schedule for our workspace:

We can go to the Schedules tab:

This schedule will run in 5 minutes:

After waiting for 5 minutes, we will see that Terrakube has created a new job automatically.

If we check the details of the new job, we can see that a terraform destroy will be executed:

All of the workspace resources are deleted, and the workspace itself will be deleted automatically after the destroy is completed.

Once the Job is completed, our workspace information is deleted from Terrakube

If we are using AWS, Azure, GCP, or any other Terraform provider that allows injecting credentials using environment variables, we can use "global variables" to define those.

Global variables will be injected automatically to any workspace inside the playground organization.

Drift Detection

Terrakube can be used to detect any infrastructure drift. This can be done using Terrakube extensions (Open Policy and Slack), templates, and schedules; below you can find an example of how this can be achieved.

Open Policy Definition

The first step is to create a small rego policy and add it to your Terrakube extensions repository inside the "policy" folder with the name "plan_information.rego". This is a very simple policy that counts the number of changes in a terraform plan.

package terrakube.plan.information

import input as tfplan
  
created := create_count {
    resources := [resource | resource:= tfplan.resource_changes[_]; resource.change.actions[_] == "create"]
    create_count := count(resources)
}

deleted := delete_count {
    resources := [resource | resource:= tfplan.resource_changes[_]; resource.change.actions[_] == "delete"]
    delete_count := count(resources)
}

updated := updated_count {
    resources := [resource | resource:= tfplan.resource_changes[_]; resource.change.actions[_] == "update"]
    updated_count := count(resources)
}

no_change := no_change_count {
    resources := [resource | resource:= tfplan.resource_changes[_]; resource.change.actions[_] == "no-op"]
    no_change_count := count(resources)
}

The output of this policy will look like this:

{
    "created": 2,
    "deleted": 4,
    "no_change": 2,
    "updated": 5
}
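For illustration, the same counting logic can be expressed outside of rego. The sketch below mirrors the policy in Python, assuming the standard `terraform show -json` plan layout (`resource_changes[].change.actions`); as in the rego version, a resource replacement contributes to both the created and deleted counts:

```python
def plan_information(tfplan):
    """Count resource actions in a `terraform show -json` plan document."""
    counts = {"created": 0, "deleted": 0, "updated": 0, "no_change": 0}
    action_map = {"create": "created", "delete": "deleted",
                  "update": "updated", "no-op": "no_change"}
    for change in tfplan.get("resource_changes", []):
        for action in change["change"]["actions"]:
            if action in action_map:
                counts[action_map[action]] += 1
    return counts

tfplan = {"resource_changes": [
    {"change": {"actions": ["create"]}},
    {"change": {"actions": ["delete", "create"]}},  # replacement: both delete and create
    {"change": {"actions": ["no-op"]}},
]}
print(plan_information(tfplan))
# {'created': 2, 'deleted': 1, 'updated': 0, 'no_change': 1}
```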

The policy should look like this in your extension repository

Terrakube extensions can be stored inside a Git repository that you can configure when starting the platform. This is an example repository that you can fork or customize to create your custom extensions based on your own requirements: https://github.com/AzBuilder/terrakube-extensions

Template Definition

The next step is to create a Terrakube template that validates whether there is a change in our infrastructure.

flow:
  - type: "terraformPlan"
    step: 100
    name: "Running Terraform Plan with Drift Detection and Slack Notification"
    commands:
      - runtime: "GROOVY"
        priority: 100
        after: true
        script: |
          import Opa

          new Opa().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.45.0")
          "Opa Download Completed..."
      - runtime: "BASH"
        priority: 200
        after: true
        script: |
          cd $workingDirectory;
          terraform show -json terraformLibrary.tfPlan > tfplan.json;
          echo "Validating terraform plan information";
          opa exec --decision terrakube/plan/information --bundle .terrakube/toolsRepository/policy/ tfplan.json | jq '.result[0].result' > drift_detection.json;
          cat drift_detection.json;
      - runtime: "GROOVY"
        priority: 300
        after: true
        script: |
          import SlackApp
          import groovy.json.JsonSlurper
          import groovy.json.JsonOutput
          
          File drift_detection = new File("${workingDirectory}/drift_detection.json")
          String drift_detection_content = drift_detection.text

          println drift_detection_content

          def jsonSlurper = new JsonSlurper()
          def body = jsonSlurper.parseText(drift_detection_content)
          def changes =  body.created + body.updated + body.deleted

          if (changes > 0) {
            new SlackApp().sendMessageWithoutAttachment(
              "#general", 
              "Hello team, Terrakube has detected an infrastructure drift, please review the following workspace $workspaceId inside organization $organizationId", 
              "$SLACK_TOKEN", 
              terrakubeOutput);
          } else {
            new SlackApp().sendMessageWithoutAttachment(
              "#general", 
              "Hello team, Terrakube did not detect any infrastructure drift for workspace $workspaceId inside organization $organizationId", 
              "$SLACK_TOKEN", 
              terrakubeOutput);
          }

          "Drift Detection Completed..."

At a high level, this template does the following:

  • Run a terraform plan

flow:
  - type: "terraformPlan"
    step: 100
    name: "Running Terraform Plan with Drift Detection and Slack Notification"
  • Once the terraform plan is completed, the template imports the Open Policy Agent extension into our job workspace

    commands:
      - runtime: "GROOVY"
        priority: 100
        after: true
        script: |
          import Opa

          new Opa().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.45.0")
          "Opa Download Completed..."
  • The next step exports the terraform plan in JSON format and executes the rego policy to validate the number of changes in the terraform plan, creating a file "drift_detection.json" with the result

      - runtime: "BASH"
        priority: 200
        after: true
        script: |
          cd $workingDirectory;
          terraform show -json terraformLibrary.tfPlan > tfplan.json;
          echo "Validating terraform plan information";
          opa exec --decision terrakube/plan/information --bundle .terrakube/toolsRepository/policy/ tfplan.json | jq '.result[0].result' > drift_detection.json;
          cat drift_detection.json;
      
  • Now you can read the "drift_detection.json" file with a simple groovy script to review the number of changes in the terraform plan.

      - runtime: "GROOVY"
        priority: 300
        after: true
        script: |
          import SlackApp
          import groovy.json.JsonSlurper
          import groovy.json.JsonOutput
          
          File drift_detection = new File("${workingDirectory}/drift_detection.json")
          String drift_detection_content = drift_detection.text

          println drift_detection_content

          def jsonSlurper = new JsonSlurper()
          def body = jsonSlurper.parseText(drift_detection_content)
          def changes =  body.created + body.updated + body.deleted
  • Once you have the number of changes, you can add a simple validation that sends a Slack message using the Terrakube extension to notify your teams if an infrastructure drift was detected

          if (changes > 0) {
            new SlackApp().sendMessageWithoutAttachment(
              "#general", 
              "Hello team, Terrakube has detected an infrastructure drift, please review the following workspace $workspaceId inside organization $organizationId", 
              "$SLACK_TOKEN", 
              terrakubeOutput);
          } else {
            new SlackApp().sendMessageWithoutAttachment(
              "#general", 
              "Hello team, Terrakube did not detect any infrastructure drift for workspace $workspaceId inside organization $organizationId", 
              "$SLACK_TOKEN", 
              terrakubeOutput);
          }
SLACK_TOKEN is an environment variable that you can define at the workspace level.
You can also set this up using global variables in your organization.

Template Setup

Now that you have defined the template to detect infrastructure drift, you can add the template to your Terrakube organization using the name "Drift Detection".

The template will look like this:

Now you can use this template in any workspace inside your organization

Workspace Setup

To test the template, we can use the following example for Azure and execute a Terraform apply inside Terrakube. This will create an App Service plan with tier Basic and size B1.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.25.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "api-rg-pro"
  location = "East Us 2"
}

resource "azurerm_app_service_plan" "example" {
  name                = "api-appserviceplan-pro"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  sku {
    tier = "Basic"
    size = "B1"
  }
}

Once our resources are created in Azure, we can run the Drift Detection template.

The template will send a message to a Slack channel with the following:

  • If there is no infrastructure change, you should receive the following message in your Slack channel.

  • If for some reason the resource in Azure is changed (scaled up) and our state in Terrakube does not match Azure, you will see the following message.

Schedule Drift Detection

Now that you have tested the Drift Detection template, you can use it with the Workspace Schedule feature to run this template, for example, every day at 5:30 AM for the workspace.

You will have to go to the "Schedule" option inside your workspace.

Now you can select the Drift Detection template to run at 5:30 AM every day.

Now you should receive the notification in your Slack channel every day at 5:30 AM.

Implementing drift detection in Terrakube is easy: you just need to use extensions and write a small script. This is just one example of how easy it is to extend Terrakube's functionality using extensions; you can quickly create even more complex templates, for example calling a webhook or sending emails using the SendGrid extension.

Minikube + HTTPS

Terrakube can be installed in minikube as a sandbox environment with HTTPS. Using Terrakube with HTTPS allows the Terraform registry and the Terraform remote state backend to be used locally without any issues.

Please follow these instructions:

Requirements:

  • Install mkcert to generate the local certificates.

We will be using the following domains in our test installation:

terrakube-api.minikube.net
terrakube-ui.minikube.net
terrakube-reg.minikube.net

Generate local CA certificate

mkcert -install
Created a new local CA 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊

Local CA certificate path

mkcert -CAROOT
/home/myuser/.local/share/mkcert

Local CA certificate content

cat /home/myuser/.local/share/mkcert/rootCA.pem

Generate certificate for *.minikube.net

This command will generate two files, key.pem and cert.pem, that we will use later.

mkcert -key-file key.pem -cert-file cert.pem minikube.net *.minikube.net

Created a new certificate valid for the following names 📜
 - "minikube.net"
 - "*.minikube.net"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.minikube.net ℹ️

The certificate is at "cert.pem" and the key at "key.pem" ✅

It will expire on 19 January 2026 🗓

Now we have everything necessary to install Terrakube with HTTPS.

Setup Helm Repository

helm repo add terrakube-repo https://AzBuilder.github.io/terrakube-helm-chart
helm repo update

Setup Minikube

minikube start
minikube addons enable ingress
minikube addons enable storage-provisioner
minikube ip 

Copy the minikube IP address (Example: 192.168.59.100)

Create Kubernetes secret with local certificate

kubectl -n kube-system create secret tls mkcert --key key.pem --cert cert.pem

Setup Terrakube namespace

kubectl create namespace terrakube

Setup DNS records

sudo nano /etc/hosts

192.168.59.100 terrakube-ui.minikube.net
192.168.59.100 terrakube-api.minikube.net
192.168.59.100 terrakube-reg.minikube.net

Setup Ingress Certificates

minikube addons configure ingress

-- Enter custom cert(format is "namespace/secret"): kube-system/mkcert
✅  ingress was successfully configured

minikube addons disable ingress

🌑  "The 'ingress' addon is disabled

minikube addons enable ingress

🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

Helm Values

We need to generate a file called values.yaml with the following, using the content of our rootCA.pem from the previous step:

security:
  caCerts:
    rootCA.pem: |
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      -----END CERTIFICATE-----

## API properties
api:
  env:
  - name: SERVICE_BINDING_ROOT
    value: /mnt/platform/bindings
  volumes:
    - name: ca-certs
      secret:
        secretName: terrakube-ca-secrets
        items:
        - key: "rootCA.pem"
          path: "rootCA.pem"
        - key: "type"
          path: "type"
  volumeMounts:
  - name: ca-certs
    mountPath: /mnt/platform/bindings/ca-certificates
    readOnly: true
  properties:
    databaseType: "H2"

executor:
  env:
  - name: SERVICE_BINDING_ROOT
    value: /mnt/platform/bindings
  volumes:
    - name: ca-certs
      secret:
        secretName: terrakube-ca-secrets
        items:
        - key: "rootCA.pem"
          path: "rootCA.pem"
        - key: "type"
          path: "type"
  volumeMounts:
  - name: ca-certs
    mountPath: /mnt/platform/bindings/ca-certificates
    readOnly: true

## Registry properties
registry:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"
  env:
  - name: SERVICE_BINDING_ROOT
    value: /mnt/platform/bindings
  volumes:
    - name: ca-certs
      secret:
        secretName: terrakube-ca-secrets
        items:
        - key: "rootCA.pem"
          path: "rootCA.pem"
        - key: "type"
          path: "type"
  volumeMounts:
  - name: ca-certs
    mountPath: /mnt/platform/bindings/ca-certificates
    readOnly: true

dex:
  config:
    issuer: https://terrakube-api.minikube.net/dex

    storage:
      type: memory
    web:
      http: 0.0.0.0:5556
      allowedOrigins: ['*']
      skipApprovalScreen: true
    oauth2:
      responseTypes: ["code", "token", "id_token"]

    connectors:
    - type: ldap
      name: OpenLDAP
      id: ldap
      config:
        # The following configurations seem to work with OpenLDAP:
        #
        # 1) Plain LDAP, without TLS:
        host: terrakube-openldap-service:1389
        insecureNoSSL: true
        #
        # 2) LDAPS without certificate validation:
        #host: localhost:636
        #insecureNoSSL: false
        #insecureSkipVerify: true
        #
        # 3) LDAPS with certificate validation:
        #host: YOUR-HOSTNAME:636
        #insecureNoSSL: false
        #insecureSkipVerify: false
        #rootCAData: 'CERT'
        # ...where CERT="$( base64 -w 0 your-cert.crt )"

        # This would normally be a read-only user.
        bindDN: cn=admin,dc=example,dc=org
        bindPW: admin

        usernamePrompt: Email Address

        userSearch:
          baseDN: ou=users,dc=example,dc=org
          filter: "(objectClass=person)"
          username: mail
          # "DN" (case sensitive) is a special attribute name. It indicates that
          # this value should be taken from the entity's DN not an attribute on
          # the entity.
          idAttr: DN
          emailAttr: mail
          nameAttr: cn

        groupSearch:
          baseDN: ou=Groups,dc=example,dc=org
          filter: "(objectClass=groupOfNames)"

          userMatchers:
            # A user is a member of a group when their DN matches
            # the value of a "member" attribute on the group entity.
          - userAttr: DN
            groupAttr: member

          # The group name should be the "cn" value.
          nameAttr: cn

    staticClients:
    - id: example-app
      redirectURIs:
      - 'https://terrakube-ui.minikube.net'
      - '/device/callback'
      - 'http://localhost:10000/login'
      - 'http://localhost:10001/login'
      name: 'example-app'
      public: true

## Ingress properties
ingress:
  useTls: true
  includeTlsHosts: true
  ui:
    enabled: true
    domain: "terrakube-ui.minikube.net"
    path: "/"
    pathType: "Prefix"
    tlsSecretName: tls-secret-ui-terrakube
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"
  api:
    enabled: true
    domain: "terrakube-api.minikube.net"
    path: "/"
    pathType: "Prefix"
    tlsSecretName: tls-secret-api-terrakube
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  registry:
    enabled: true
    domain: "terrakube-reg.minikube.net"
    path: "/"
    pathType: "Prefix"
    tlsSecretName: tls-secret-reg-terrakube
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  dex:
    enabled: true
    path: "/dex/"
    pathType: "Prefix"
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"

Install Terrakube with local HTTPS support

helm install terrakube terrakube-repo/terrakube -n terrakube --values values.yaml 

If you see the message "Snippet directives are disabled by the Ingress administrator", update the ingress-nginx-controller ConfigMap in the ingress-nginx namespace by adding the following:

allow-snippet-annotations: "true"

Reference: https://github.com/AzBuilder/terrakube/issues/618#issuecomment-1838980

Using Terrakube

The environment has some users, groups and sample data so you can test it quickly.

Visit https://terrakube-ui.minikube.net and login using [email protected] with password admin

We should be able to use the UI using a valid certificate.

Connect to Terrakube

Using Terraform, we can connect to the Terrakube API and the private registry with the following commands, using the same credentials we used to log in to the UI.

terraform login terrakube-api.minikube.net
terraform login terrakube-reg.minikube.net

Terraform Remote State

Let's create a simple Terraform file.

terraform {
  cloud {
    organization = "simple"
    hostname = "terrakube-api.minikube.net"

    workspaces {
      tags = ["myplayground", "example"]
    }
  }
}

# This resource will destroy (potentially immediately) after null_resource.next
resource "null_resource" "previous" {}

resource "time_sleep" "wait_5_seconds" {
  depends_on = [null_resource.previous]

  create_duration = "5s"
}

# This resource will create (at least) 5 seconds after null_resource.previous
resource "null_resource" "next" {
  depends_on = [time_sleep.wait_5_seconds]
}

Execute Terraform Remotely

Terraform Init:

$ terraform init

Initializing Terraform Cloud...

No workspaces found.
  
  There are no workspaces with the configured tags (example, myplayground)
  in your Terraform Cloud organization. To finish initializing, Terraform needs at
  least one workspace available.
  
  Terraform can create a properly tagged workspace for you now. Please enter a
  name to create a new Terraform Cloud workspace.

  Enter a value: simple


Initializing provider plugins...
- Finding latest version of hashicorp/null...
- Finding latest version of hashicorp/time...
- Installing hashicorp/null v3.2.1...
- Installed hashicorp/null v3.2.1 (signed by HashiCorp)
- Installing hashicorp/time v0.9.1...
- Installed hashicorp/time v0.9.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform apply:

$ terraform apply

Running apply in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://terrakube-api.minikube.net/app/simple/simple/runs/40

Waiting for the plan to start...

***************************************
Running Terraform PLAN
***************************************

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.next will be created
  + resource "null_resource" "next" {
      + id = (known after apply)
    }

  # null_resource.previous will be created
  + resource "null_resource" "previous" {
      + id = (known after apply)
    }

  # time_sleep.wait_5_seconds will be created
  + resource "time_sleep" "wait_5_seconds" {
      + create_duration = "5s"
      + id              = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + creation_time = "5s"

Do you want to perform these actions in workspace "simple"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.previous: Creating...
null_resource.previous: Creation complete after 0s [id=6182221597368620892]
time_sleep.wait_5_seconds: Creating...
time_sleep.wait_5_seconds: Creation complete after 5s [id=2023-10-19T20:25:22Z]
null_resource.next: Creating...
null_resource.next: Creation complete after 0s [id=9093191930998774410]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Checking the UI will show:

Policy Enforcement (OPA)

Terrakube can be integrated with Open Policy Agent using the Terrakube extensions to validate the Terraform plan. Below you can find an example of how this can be achieved; the example is based on this document.

Terraform Policy

The first step is to create some terraform policies inside the policy folder of your Terrakube extensions repository, as you can see in the image below.

Terrakube extensions can be stored inside a Git repository that you can configure when starting the platform. https://github.com/AzBuilder/terrakube-extensions is an example repository that you can fork or customize to create your own extensions.

The following policy computes a score for a Terraform plan that combines

  • The number of deletions of each resource type

  • The number of creations of each resource type

  • The number of modifications of each resource type

The policy authorizes the plan when the score for the plan is below a threshold and there are no changes made to any IAM resources. (For simplicity, the threshold in this tutorial is the same for everyone, but in practice you would vary the threshold depending on the user.)
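The scoring described above can be sketched outside of Rego as well. The following Python snippet mirrors the weights and threshold used in the Rego policy shown later on this page (the function names are illustrative, not part of Terrakube):

```python
# Weights and threshold mirror the Rego policy on this page.
WEIGHTS = {
    "aws_autoscaling_group": {"delete": 100, "create": 10, "modify": 1},
    "aws_instance": {"delete": 10, "create": 1, "modify": 1},
}
BLAST_RADIUS = 30  # acceptable score for automated authorization

def score(changes):
    """changes: list of (resource_type, action) tuples taken from a plan."""
    total = 0
    for resource_type, action_weights in WEIGHTS.items():
        for action, weight in action_weights.items():
            n = sum(1 for t, a in changes if t == resource_type and a == action)
            total += weight * n
    return total

def authorized(changes, touches_iam=False):
    # The plan is authorized when the score is below the threshold
    # and no IAM resources are touched.
    return score(changes) < BLAST_RADIUS and not touches_iam

plan = [("aws_instance", "create"), ("aws_autoscaling_group", "create")]
print(score(plan), authorized(plan))  # prints: 11 True
```

Note how a single auto-scaling group deletion (weight 100) is enough to exceed the threshold on its own.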

policy/terraform.rego

Template Definition

Once you have defined one or more policies inside your Terrakube extensions repository, you can write a custom template that makes use of the Open Policy Agent using the Terrakube Configuration Language, as in the following example:

At a high level this template will do the following:

  • Run the terraform plan for the Terrakube workspace

  • After the terraform plan completes successfully, it will import the Open Policy Agent into the job using the Opa Groovy extension.

  • Once the Open Policy Agent has been imported into the Terrakube job, you can use it to validate the terraform policies defined inside the Terrakube extensions repository, which is imported into the workspace job, using a simple BASH script. Based on the result of the policy you can define whether the job flow can continue or not.

You can add this template inside your organization settings using a custom template and following these steps.

  • Create a new template

  • Select a blank template.

  • Add the template body.

  • Save the template

Terrakube Workspace Code

Now that your terraform policy is complete and you have defined the Terrakube template that makes use of the Open Policy Agent, you can start testing it with some sample workspaces.

Example 1:

Create a Terrakube workspace with a terraform file that includes an auto-scaling group and a server on AWS. (You will need to define the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to access your AWS account and run the terraform plan.)

When you run the Terrakube job for this workspace you can see that the terraform policy is valid for our workspace.

Example 2:

Create a Terrakube workspace whose terraform plan creates enough resources to exceed the blast radius permitted by the terraform policy. (You will need to define the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to access your AWS account and run the terraform plan.)

When you run the Terrakube job for this particular workspace you will see that the terraform policy is invalid and the job flow will fail.

This is how you can use Terrakube extensions and Open Policy Agent to validate your terraform resources inside your organization before deploying them to the cloud.

Terrakube templates are very flexible and can be used to create more complex scenarios. In a real-world scenario, if the policy is valid you can proceed to deploy your infrastructure, as in the following template, which adds an additional step to run the terraform apply.

Github

Requirements

To run Terrakube in Docker Desktop you will need the following:

  • Create a Github Organization

  • Create Github Teams and add some members

  • Create a team called TERRAKUBE_ADMIN and add the members

  • Install Docker Desktop

  • Nginx Ingress for Docker Desktop

  • Azure Storage Account/AWS S3 Bucket/GCP Storage Bucket

To get more information about the Dex configuration for Github you can check this link

Setup

  • Create a new Github OAuth application with the authorization callback URL "http://host.docker.internal/dex/callback" as in this link

  • Copy the values for Client Id and Client Secret

  • Update the HOSTS file adding the following

YAML Example

Replace <<CHANGE_THIS>> with the real values, create the values.yaml file, and run the helm install

Run the installation

For any question please open an issue in our helm chart repository

# Linux Path /etc/hosts
# Windows Path c:\Windows\System32\Drivers\etc\hosts


127.0.0.1 ui.terrakube.docker.com
127.0.0.1 registry.terrakube.docker.com
## Global Name
name: "terrakube"

## Terrakube Security
security:
  adminGroup: "<<CHANGE_THIS>>" # This should be your Github team the format is OrganizationName:TeamName (Example: MyOrg:TERRAKUBE_ADMIN)
  patSecret: "<<CHANGE_THIS>>"  # Sample Key 32 characters z6QHX!y@Nep2QDT!53vgH43^PjRXyC3X 
  internalSecret: "<<CHANGE_THIS>>" # Sample Key 32 characters Kb^8cMerPNZV6hS!9!kcD*KuUPUBa^B3 
  dexClientId: "github"
  dexClientScope: "email openid profile offline_access groups"
  dexIssuerUri: "http://host.docker.internal/dex" # Change for your real domain

## Terraform Storage
storage:
  # SELECT THE TYPE OF STORAGE THAT YOU WANT TO USE AND REPLACE THE VALUES
  
  #azure:
  #  storageAccountName: "<<CHANGE_THIS>>"
  #  storageAccountResourceGroup: "<<CHANGE_THIS>>"
  #  storageAccountAccessKey: "<<CHANGE_THIS>>"
  #aws:
  #  accessKey: "<<CHANGE_THIS>>"
  #  secretKey: "<<CHANGE_THIS>>"
  #  bucketName: "<<CHANGE_THIS>>"
  #  region: "<<CHANGE_THIS>>"
  #gcp:
  #  projectId: "<<CHANGE_THIS>>"
  #  bucketName: "<<CHANGE_THIS>>"
  #  credentials: |
  #    ## GCP JSON CREDENTIALS for service account with access to read/write to the storage bucket
  #    {
  #      "type": "service_account",
  #      "project_id": "",
  #      "private_key_id": "",
  #      "private_key": "",
  #      "client_email": "",
  #      "client_id": "",
  #      "auth_uri": "",
  #      "token_uri": "",
  #      "auth_provider_x509_cert_url": "",
  #      "client_x509_cert_url": ""
  #    } 

## Dex
dex:
  enabled: true
  version: "v2.32.0"
  replicaCount: "1"
  serviceType: "ClusterIP"
  resources:
    limits:
      cpu: 512m
      memory: 256Mi
    requests:
      cpu: 256m
      memory: 128Mi
  properties:
    config:
      issuer: http://host.docker.internal/dex
      storage:
        type: memory
      oauth2:
        responseTypes: ["code", "token", "id_token"] 
        skipApprovalScreen: true
      web:
        allowedOrigins: ["*"]
  
      staticClients:
      - id: github
        redirectURIs:
        - 'http://ui.terrakube.docker.com'
        - 'http://localhost:10001/login'
        - 'http://localhost:10000/login'
        - '/device/callback'
        name: 'github'
        public: true

      connectors:
      - type: github
        id: github
        name: gitHub
        config:
          clientID: "<<CHANGE_THIS>>" 
          clientSecret: "<<CHANGE_THIS>>"
          redirectURI: "http://host.docker.internal/dex/callback"
          loadAllGroups: true

## API properties
api:
  enabled: true
  version: "2.6.0"
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    databaseType: "H2"

## Executor properties
executor:
  enabled: true
  version: "2.6.0"  
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    toolsRepository: "https://github.com/AzBuilder/terrakube-extensions"
    toolsBranch: "main"

## Registry properties
registry:
  enabled: true
  version: "2.6.0"
  replicaCount: "1"
  serviceType: "ClusterIP"

## UI Properties
ui:
  enabled: true
  version: "2.6.0"
  replicaCount: "1"
  serviceType: "ClusterIP"

## Ingress properties
ingress:
  useTls: false
  ui:
    enabled: true
    domain: "ui.terrakube.docker.com"
    path: "/(.*)"
    pathType: "Prefix" 
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      cert-manager.io/cluster-issuer: letsencrypt
  api:
    enabled: true
    domain: "host.docker.internal"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  registry:
    enabled: true
    domain: "registry.terrakube.docker.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  dex:
    enabled: true
    path: "/dex/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
helm install --debug --values ./values.yaml terrakube ./terrakube-helm-chart/ -n terrakube

Using Providers

The following will explain how to add a provider to Terrakube.

The provider needs to be added using the API; currently there is no support for doing it through the UI. If you want to upload a private provider, you will need to upload the provider binary and related files to a domain that you manage.

Let's add the random provider version 3.0.1 to Terrakube. We can get the information that we need from the following endpoints:

GET https://registry.terraform.io/v1/providers/hashicorp/random/versions
GET https://registry.terraform.io/v1/providers/hashicorp/random/3.0.1/download/linux/amd64
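The download endpoint returns a JSON document containing the fields used in the implementation call further below. A minimal sketch of reading the relevant fields, using an abbreviated sample response (field names follow the Terraform provider registry protocol; real responses also include the shasums URLs and signing key data):

```python
import json

# Abbreviated sample of the registry download response (illustrative only).
sample = """
{
  "os": "linux",
  "arch": "amd64",
  "filename": "terraform-provider-random_3.0.1_linux_amd64.zip",
  "download_url": "https://releases.hashicorp.com/terraform-provider-random/3.0.1/terraform-provider-random_3.0.1_linux_amd64.zip",
  "shasum": "e385e00e7425dda9d30b74ab4ffa4636f4b8eb23918c0b763f0ffab84ece0c5c"
}
"""
info = json.loads(sample)
# These values feed the "implementation" API call shown below.
print(info["os"], info["arch"], info["filename"])
```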

Create the provider inside the organization:

POST https://terrakube-api.minikube.net/api/v1/organization/d9b58bd3-f3fc-4056-a026-1163297e80a8/provider
Content-Type: application/vnd.api+json
Authorization: Bearer (PAT TOKEN)
{
    "data": {
        "type": "provider",
        "attributes": {
          "name": "random",
          "description": "random provider"
        }
    }
}

Create the provider version

POST https://terrakube-api.minikube.net/api/v1/organization/d9b58bd3-f3fc-4056-a026-1163297e80a8/provider/950ce3fd-3f11-46e4-8ef2-34ec3718de3e/version
Content-Type: application/vnd.api+json
Authorization: Bearer (PAT TOKEN)
{
    "data": {
        "type": "version",
        "attributes": {
          "versionNumber": "3.0.1",
          "protocols": "5.0"
        }
    }
}

Create the provider implementation

POST https://terrakube-api.minikube.net/api/v1/organization/d9b58bd3-f3fc-4056-a026-1163297e80a8/provider/950ce3fd-3f11-46e4-8ef2-34ec3718de3e/version/8bc7127a-3215-486a-ac68-45fb8676f5a2/implementation
Content-Type: application/vnd.api+json
Authorization: Bearer (PAT TOKEN)
{
    "data": {
        "type": "implementation",
        "attributes": {
          "os": "linux",
          "arch": "amd64",
          "filename": "terraform-provider-random_3.0.1_linux_amd64.zip",
          "downloadUrl": "https://releases.hashicorp.com/terraform-provider-random/3.0.1/terraform-provider-random_3.0.1_linux_amd64.zip",
          "shasumsUrl": "https://releases.hashicorp.com/terraform-provider-random/3.0.1/terraform-provider-random_3.0.1_SHA256SUMS",
          "shasumsSignatureUrl": "https://releases.hashicorp.com/terraform-provider-random/3.0.1/terraform-provider-random_3.0.1_SHA256SUMS.72D7468F.sig",
          "shasum": "e385e00e7425dda9d30b74ab4ffa4636f4b8eb23918c0b763f0ffab84ece0c5c",
          "keyId": "34365D9472D7468F",
          "asciiArmor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGB9+xkBEACabYZOWKmgZsHTdRDiyPJxhbuUiKX65GUWkyRMJKi/1dviVxOX\nPG6hBPtF48IFnVgxKpIb7G6NjBousAV+CuLlv5yqFKpOZEGC6sBV+Gx8Vu1CICpl\nZm+HpQPcIzwBpN+Ar4l/exCG/f/MZq/oxGgH+TyRF3XcYDjG8dbJCpHO5nQ5Cy9h\nQIp3/Bh09kET6lk+4QlofNgHKVT2epV8iK1cXlbQe2tZtfCUtxk+pxvU0UHXp+AB\n0xc3/gIhjZp/dePmCOyQyGPJbp5bpO4UeAJ6frqhexmNlaw9Z897ltZmRLGq1p4a\nRnWL8FPkBz9SCSKXS8uNyV5oMNVn4G1obCkc106iWuKBTibffYQzq5TG8FYVJKrh\nRwWB6piacEB8hl20IIWSxIM3J9tT7CPSnk5RYYCTRHgA5OOrqZhC7JefudrP8n+M\npxkDgNORDu7GCfAuisrf7dXYjLsxG4tu22DBJJC0c/IpRpXDnOuJN1Q5e/3VUKKW\nmypNumuQpP5lc1ZFG64TRzb1HR6oIdHfbrVQfdiQXpvdcFx+Fl57WuUraXRV6qfb\n4ZmKHX1JEwM/7tu21QE4F1dz0jroLSricZxfaCTHHWNfvGJoZ30/MZUrpSC0IfB3\niQutxbZrwIlTBt+fGLtm3vDtwMFNWM+Rb1lrOxEQd2eijdxhvBOHtlIcswARAQAB\ntERIYXNoaUNvcnAgU2VjdXJpdHkgKGhhc2hpY29ycC5jb20vc2VjdXJpdHkpIDxz\nZWN1cml0eUBoYXNoaWNvcnAuY29tPokCVAQTAQoAPhYhBMh0AR8KtAURDQIQVTQ2\nXZRy10aPBQJgffsZAhsDBQkJZgGABQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJ\nEDQ2XZRy10aPtpcP/0PhJKiHtC1zREpRTrjGizoyk4Sl2SXpBZYhkdrG++abo6zs\nbuaAG7kgWWChVXBo5E20L7dbstFK7OjVs7vAg/OLgO9dPD8n2M19rpqSbbvKYWvp\n0NSgvFTT7lbyDhtPj0/bzpkZEhmvQaDWGBsbDdb2dBHGitCXhGMpdP0BuuPWEix+\nQnUMaPwU51q9GM2guL45Tgks9EKNnpDR6ZdCeWcqo1IDmklloidxT8aKL21UOb8t\ncD+Bg8iPaAr73bW7Jh8TdcV6s6DBFub+xPJEB/0bVPmq3ZHs5B4NItroZ3r+h3ke\nVDoSOSIZLl6JtVooOJ2la9ZuMqxchO3mrXLlXxVCo6cGcSuOmOdQSz4OhQE5zBxx\nLuzA5ASIjASSeNZaRnffLIHmht17BPslgNPtm6ufyOk02P5XXwa69UCjA3RYrA2P\nQNNC+OWZ8qQLnzGldqE4MnRNAxRxV6cFNzv14ooKf7+k686LdZrP/3fQu2p3k5rY\n0xQUXKh1uwMUMtGR867ZBYaxYvwqDrg9XB7xi3N6aNyNQ+r7zI2lt65lzwG1v9hg\nFG2AHrDlBkQi/t3wiTS3JOo/GCT8BjN0nJh0lGaRFtQv2cXOQGVRW8+V/9IpqEJ1\nqQreftdBFWxvH7VJq2mSOXUJyRsoUrjkUuIivaA9Ocdipk2CkP8bpuGz7ZF4uQIN\nBGB9+xkBEACoklYsfvWRCjOwS8TOKBTfl8myuP9V9uBNbyHufzNETbhYeT33Cj0M\nGCNd9GdoaknzBQLbQVSQogA+spqVvQPz1MND18GIdtmr0BXENiZE7SRvu76jNqLp\nKxYALoK2Pc3yK0JGD30HcIIgx+lOofrVPA2dfVPTj1wXvm0rbSGA4Wd4Ng3d2AoR\nG/wZDAQ7sdZi1A9hhfugTFZwfqR3XAYCk+PUeoFrkJ0O7wngaon+6x2GJVedVPOs\n2x/XOR4l9ytFP3o+5ILhV
nsK+ESVD9AQz2fhDEU6RhvzaqtHe+sQccR3oVLoGcat\nma5rbfzH0Fhj0JtkbP7WreQf9udYgXxVJKXLQFQgel34egEGG+NlbGSPG+qHOZtY\n4uWdlDSvmo+1P95P4VG/EBteqyBbDDGDGiMs6lAMg2cULrwOsbxWjsWka8y2IN3z\n1stlIJFvW2kggU+bKnQ+sNQnclq3wzCJjeDBfucR3a5WRojDtGoJP6Fc3luUtS7V\n5TAdOx4dhaMFU9+01OoH8ZdTRiHZ1K7RFeAIslSyd4iA/xkhOhHq89F4ECQf3Bt4\nZhGsXDTaA/VgHmf3AULbrC94O7HNqOvTWzwGiWHLfcxXQsr+ijIEQvh6rHKmJK8R\n9NMHqc3L18eMO6bqrzEHW0Xoiu9W8Yj+WuB3IKdhclT3w0pO4Pj8gQARAQABiQI8\nBBgBCgAmFiEEyHQBHwq0BRENAhBVNDZdlHLXRo8FAmB9+xkCGwwFCQlmAYAACgkQ\nNDZdlHLXRo9ZnA/7BmdpQLeTjEiXEJyW46efxlV1f6THn9U50GWcE9tebxCXgmQf\nu+Uju4hreltx6GDi/zbVVV3HCa0yaJ4JVvA4LBULJVe3ym6tXXSYaOfMdkiK6P1v\nJgfpBQ/b/mWB0yuWTUtWx18BQQwlNEQWcGe8n1lBbYsH9g7QkacRNb8tKUrUbWlQ\nQsU8wuFgly22m+Va1nO2N5C/eE/ZEHyN15jEQ+QwgQgPrK2wThcOMyNMQX/VNEr1\nY3bI2wHfZFjotmek3d7ZfP2VjyDudnmCPQ5xjezWpKbN1kvjO3as2yhcVKfnvQI5\nP5Frj19NgMIGAp7X6pF5Csr4FX/Vw316+AFJd9Ibhfud79HAylvFydpcYbvZpScl\n7zgtgaXMCVtthe3GsG4gO7IdxxEBZ/Fm4NLnmbzCIWOsPMx/FxH06a539xFq/1E2\n1nYFjiKg8a5JFmYU/4mV9MQs4bP/3ip9byi10V+fEIfp5cEEmfNeVeW5E7J8PqG9\nt4rLJ8FR4yJgQUa2gs2SNYsjWQuwS/MJvAv4fDKlkQjQmYRAOp1SszAnyaplvri4\nncmfDsf0r65/sd6S40g5lHH8LIbGxcOIN6kwthSTPWX89r42CbY8GzjTkaeejNKx\nv1aCrO58wAtursO1DiXCvBY7+NdafMRnoHwBk50iPqrVkNA8fv+auRyB2/G5Ag0E\nYH3+JQEQALivllTjMolxUW2OxrXb+a2Pt6vjCBsiJzrUj0Pa63U+lT9jldbCCfgP\nwDpcDuO1O05Q8k1MoYZ6HddjWnqKG7S3eqkV5c3ct3amAXp513QDKZUfIDylOmhU\nqvxjEgvGjdRjz6kECFGYr6Vnj/p6AwWv4/FBRFlrq7cnQgPynbIH4hrWvewp3Tqw\nGVgqm5RRofuAugi8iZQVlAiQZJo88yaztAQ/7VsXBiHTn61ugQ8bKdAsr8w/ZZU5\nHScHLqRolcYg0cKN91c0EbJq9k1LUC//CakPB9mhi5+aUVUGusIM8ECShUEgSTCi\nKQiJUPZ2CFbbPE9L5o9xoPCxjXoX+r7L/WyoCPTeoS3YRUMEnWKvc42Yxz3meRb+\nBmaqgbheNmzOah5nMwPupJYmHrjWPkX7oyyHxLSFw4dtoP2j6Z7GdRXKa2dUYdk2\nx3JYKocrDoPHh3Q0TAZujtpdjFi1BS8pbxYFb3hHmGSdvz7T7KcqP7ChC7k2RAKO\nGiG7QQe4NX3sSMgweYpl4OwvQOn73t5CVWYp/gIBNZGsU3Pto8g27vHeWyH9mKr4\ncSepDhw+/X8FGRNdxNfpLKm7Vc0Sm9Sof8TRFrBTqX+vIQupYHRi5QQCuYaV6OVr\nITeegNK3So4m39d6ajCR9QxRbmjnx9UcnSYYDmIB6fpBuwT0ogNtABEBAAGJBHIE\nGAEKACYCGwIWIQTIdAEfCrQFEQ0CEFU0Nl2UctdGj
wUCYH4bgAUJAeFQ2wJAwXQg\nBBkBCgAdFiEEs2y6kaLAcwxDX8KAsLRBCXaFtnYFAmB9/iUACgkQsLRBCXaFtnYX\nBhAAlxejyFXoQwyGo9U+2g9N6LUb/tNtH29RHYxy4A3/ZUY7d/FMkArmh4+dfjf0\np9MJz98Zkps20kaYP+2YzYmaizO6OA6RIddcEXQDRCPHmLts3097mJ/skx9qLAf6\nrh9J7jWeSqWO6VW6Mlx8j9m7sm3Ae1OsjOx/m7lGZOhY4UYfY627+Jf7WQ5103Qs\nlgQ09es/vhTCx0g34SYEmMW15Tc3eCjQ21b1MeJD/V26npeakV8iCZ1kHZHawPq/\naCCuYEcCeQOOteTWvl7HXaHMhHIx7jjOd8XX9V+UxsGz2WCIxX/j7EEEc7CAxwAN\nnWp9jXeLfxYfjrUB7XQZsGCd4EHHzUyCf7iRJL7OJ3tz5Z+rOlNjSgci+ycHEccL\nYeFAEV+Fz+sj7q4cFAferkr7imY1XEI0Ji5P8p/uRYw/n8uUf7LrLw5TzHmZsTSC\nUaiL4llRzkDC6cVhYfqQWUXDd/r385OkE4oalNNE+n+txNRx92rpvXWZ5qFYfv7E\n95fltvpXc0iOugPMzyof3lwo3Xi4WZKc1CC/jEviKTQhfn3WZukuF5lbz3V1PQfI\nxFsYe9WYQmp25XGgezjXzp89C/OIcYsVB1KJAKihgbYdHyUN4fRCmOszmOUwEAKR\n3k5j4X8V5bk08sA69NVXPn2ofxyk3YYOMYWW8ouObnXoS8QJEDQ2XZRy10aPMpsQ\nAIbwX21erVqUDMPn1uONP6o4NBEq4MwG7d+fT85rc1U0RfeKBwjucAE/iStZDQoM\nZKWvGhFR+uoyg1LrXNKuSPB82unh2bpvj4zEnJsJadiwtShTKDsikhrfFEK3aCK8\nZuhpiu3jxMFDhpFzlxsSwaCcGJqcdwGhWUx0ZAVD2X71UCFoOXPjF9fNnpy80YNp\nflPjj2RnOZbJyBIM0sWIVMd8F44qkTASf8K5Qb47WFN5tSpePq7OCm7s8u+lYZGK\nwR18K7VliundR+5a8XAOyUXOL5UsDaQCK4Lj4lRaeFXunXl3DJ4E+7BKzZhReJL6\nEugV5eaGonA52TWtFdB8p+79wPUeI3KcdPmQ9Ll5Zi/jBemY4bzasmgKzNeMtwWP\nfk6WgrvBwptqohw71HDymGxFUnUP7XYYjic2sVKhv9AevMGycVgwWBiWroDCQ9Ja\nbtKfxHhI2p+g+rcywmBobWJbZsujTNjhtme+kNn1mhJsD3bKPjKQfAxaTskBLb0V\nwgV21891TS1Dq9kdPLwoS4XNpYg2LLB4p9hmeG3fu9+OmqwY5oKXsHiWc43dei9Y\nyxZ1AAUOIaIdPkq+YG/PhlGE4YcQZ4RPpltAr0HfGgZhmXWigbGS+66pUj+Ojysc\nj0K5tCVxVu0fhhFpOlHv0LWaxCbnkgkQH9jfMEJkAWMOuQINBGCAXCYBEADW6RNr\nZVGNXvHVBqSiOWaxl1XOiEoiHPt50Aijt25yXbG+0kHIFSoR+1g6Lh20JTCChgfQ\nkGGjzQvEuG1HTw07YhsvLc0pkjNMfu6gJqFox/ogc53mz69OxXauzUQ/TZ27GDVp\nUBu+EhDKt1s3OtA6Bjz/csop/Um7gT0+ivHyvJ/jGdnPEZv8tNuSE/Uo+hn/Q9hg\n8SbveZzo3C+U4KcabCESEFl8Gq6aRi9vAfa65oxD5jKaIz7cy+pwb0lizqlW7H9t\nQlr3dBfdIcdzgR55hTFC5/XrcwJ6/nHVH/xGskEasnfCQX8RYKMuy0UADJy72TkZ\nbYaCx+XXIcVB8GTOmJVoAhrTSSVLAZspfCnjwnSxisDn3ZzsYrq3cV6sU8b+QlIX\n7VAjurE+5cZiVlaxgCjyhKqlGgmonnReWOBacCgL/UvuwMmMp5TTLmiLXLT7u
xeG\nojEyoCk4sMrqrU1jevHyGlDJH9Taux15GILDwnYFfAvPF9WCid4UZ4Ouwjcaxfys\n3LxNiZIlUsXNKwS3mhiMRL4TRsbs4k4QE+LIMOsauIvcvm8/frydvQ/kUwIhVTH8\n0XGOH909bYtJvY3fudK7ShIwm7ZFTduBJUG473E/Fn3VkhTmBX6+PjOC50HR/Hyb\nwaRCzfDruMe3TAcE/tSP5CUOb9C7+P+hPzQcDwARAQABiQRyBBgBCgAmFiEEyHQB\nHwq0BRENAhBVNDZdlHLXRo8FAmCAXCYCGwIFCQlmAYACQAkQNDZdlHLXRo/BdCAE\nGQEKAB0WIQQ3TsdbSFkTYEqDHMfIIMbVzSerhwUCYIBcJgAKCRDIIMbVzSerh0Xw\nD/9ghnUsoNCu1OulcoJdHboMazJvDt/znttdQSnULBVElgM5zk0Uyv87zFBzuCyQ\nJWL3bWesQ2uFx5fRWEPDEfWVdDrjpQGb1OCCQyz1QlNPV/1M1/xhKGS9EeXrL8Dw\nF6KTGkRwn1yXiP4BGgfeFIQHmJcKXEZ9HkrpNb8mcexkROv4aIPAwn+IaE+NHVtt\nIBnufMXLyfpkWJQtJa9elh9PMLlHHnuvnYLvuAoOkhuvs7fXDMpfFZ01C+QSv1dz\nHm52GSStERQzZ51w4c0rYDneYDniC/sQT1x3dP5Xf6wzO+EhRMabkvoTbMqPsTEP\nxyWr2pNtTBYp7pfQjsHxhJpQF0xjGN9C39z7f3gJG8IJhnPeulUqEZjhRFyVZQ6/\nsiUeq7vu4+dM/JQL+i7KKe7Lp9UMrG6NLMH+ltaoD3+lVm8fdTUxS5MNPoA/I8cK\n1OWTJHkrp7V/XaY7mUtvQn5V1yET5b4bogz4nME6WLiFMd+7x73gB+YJ6MGYNuO8\ne/NFK67MfHbk1/AiPTAJ6s5uHRQIkZcBPG7y5PpfcHpIlwPYCDGYlTajZXblyKrw\nBttVnYKvKsnlysv11glSg0DphGxQJbXzWpvBNyhMNH5dffcfvd3eXJAxnD81GD2z\nZAriMJ4Av2TfeqQ2nxd2ddn0jX4WVHtAvLXfCgLM2Gveho4jD/9sZ6PZz/rEeTvt\nh88t50qPcBa4bb25X0B5FO3TeK2LL3VKLuEp5lgdcHVonrcdqZFobN1CgGJua8TW\nSprIkh+8ATZ/FXQTi01NzLhHXT1IQzSpFaZw0gb2f5ruXwvTPpfXzQrs2omY+7s7\nfkCwGPesvpSXPKn9v8uhUwD7NGW/Dm+jUM+QtC/FqzX7+/Q+OuEPjClUh1cqopCZ\nEvAI3HjnavGrYuU6DgQdjyGT/UDbuwbCXqHxHojVVkISGzCTGpmBcQYQqhcFRedJ\nyJlu6PSXlA7+8Ajh52oiMJ3ez4xSssFgUQAyOB16432tm4erpGmCyakkoRmMUn3p\nwx+QIppxRlsHznhcCQKR3tcblUqH3vq5i4/ZAihusMCa0YrShtxfdSb13oKX+pFr\naZXvxyZlCa5qoQQBV1sowmPL1N2j3dR9TVpdTyCFQSv4KeiExmowtLIjeCppRBEK\neeYHJnlfkyKXPhxTVVO6H+dU4nVu0ASQZ07KiQjbI+zTpPKFLPp3/0sPRJM57r1+\naTS71iR7nZNZ1f8LZV2OvGE6fJVtgJ1J4Nu02K54uuIhU3tg1+7Xt+IqwRc9rbVr\npHH/hFCYBPW2D2dxB+k2pQlg5NI+TpsXj5Zun8kRw5RtVb+dLuiH/xmxArIee8Jq\nZF5q4h4I33PSGDdSvGXn9UMY5Isjpg==\n=7pIB\n-----END PGP PUBLIC KEY BLOCK-----",
          "trustSignature": "5.0",
          "source": "HashiCorp",
          "sourceUrl": "https://www.hashicorp.com/security.html"
        }
    }
}

You can change the URLs in the above parameters to URLs that you control; the Terraform registry URLs are used here for example purposes only.

Now in your terraform code you can have something like this:

terraform {
  required_providers {
    random = {
      source  = "terrakube-reg.minikube.net/simple/random"
      version = "3.0.1"
    }
  }
}

And when running terraform init you will see something like:

user@pop-os:~/git/simple-terraform$ terraform init

Initializing the backend...
Initializing modules...

Initializing provider plugins...
- Finding terrakube-reg.minikube.net/simple/random versions matching "3.0.1"...
- Finding latest version of hashicorp/null...
- Finding latest version of hashicorp/time...
- Finding latest version of hashicorp/random...
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)
- Installing hashicorp/time v0.12.0...
- Installed hashicorp/time v0.12.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing terrakube-reg.minikube.net/simple/random v3.0.1...
- Installed terrakube-reg.minikube.net/simple/random v3.0.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

package terraform.analysis

import input as tfplan

########################
# Parameters for Policy
########################

# acceptable score for automated authorization
blast_radius := 30

# weights assigned for each operation on each resource-type
weights := {
    "aws_autoscaling_group": {"delete": 100, "create": 10, "modify": 1},
    "aws_instance": {"delete": 10, "create": 1, "modify": 1}
}

# Consider exactly these resource types in calculations
resource_types := {"aws_autoscaling_group", "aws_instance", "aws_iam", "aws_launch_configuration"}

#########
# Policy
#########

# Authorization holds if score for the plan is acceptable and no changes are made to IAM
default authz := false
authz {
    score < blast_radius
    not touches_iam
}

# Compute the score for a Terraform plan as the weighted sum of deletions, creations, modifications
score := s {
    all := [ x |
            some resource_type
            crud := weights[resource_type];
            del := crud["delete"] * num_deletes[resource_type];
            new := crud["create"] * num_creates[resource_type];
            mod := crud["modify"] * num_modifies[resource_type];
            x := del + new + mod
    ]
    s := sum(all)
}

# Whether there is any change to IAM
touches_iam {
    all := resources["aws_iam"]
    count(all) > 0
}

####################
# Terraform Library
####################

# list of all resources of a given type
resources[resource_type] := all {
    some resource_type
    resource_types[resource_type]
    all := [name |
        name:= tfplan.resource_changes[_]
        name.type == resource_type
    ]
}

# number of creations of resources of a given type
num_creates[resource_type] := num {
    some resource_type
    resource_types[resource_type]
    all := resources[resource_type]
    creates := [res |  res:= all[_]; res.change.actions[_] == "create"]
    num := count(creates)
}


# number of deletions of resources of a given type
num_deletes[resource_type] := num {
    some resource_type
    resource_types[resource_type]
    all := resources[resource_type]
    deletions := [res |  res:= all[_]; res.change.actions[_] == "delete"]
    num := count(deletions)
}

# number of modifications to resources of a given type
num_modifies[resource_type] := num {
    some resource_type
    resource_types[resource_type]
    all := resources[resource_type]
    modifies := [res |  res:= all[_]; res.change.actions[_] == "update"]
    num := count(modifies)
}
flow:
  - type: "terraformPlan"
    step: 100
    name: "Terraform Plan with OPA Check"
    commands:
      - runtime: "GROOVY"
        priority: 100
        after: true
        script: |
          import Opa

          new Opa().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.44.0")
          "Opa Download Completed..."
      - runtime: "BASH"
        priority: 200
        after: true
        script: |
          cd $workingDirectory;
          terraform show -json terraformLibrary.tfPlan > tfplan.json;
          echo "Checkint Open Policy Agent Policy";
          opaCheck=$(opa exec --decision terraform/analysis/authz --bundle .terrakube/toolsRepository/policy/ tfplan.json | jq '.result[0].result');

          if [ "$opaCheck" == "true" ]; then
            echo "Policy is valid"
            exit 0
          else
            echo "Policy is invalid"
            exit 1
          fi
...
- type: "terraformPlan"
    step: 100
    name: "Terraform Plan with OPA Check"
...
...
    commands:
      - runtime: "GROOVY"
        priority: 100
        after: true
        script: |
          import Opa

          new Opa().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.44.0")
          "Opa Download Completed..."
...
...
      - runtime: "BASH"
        priority: 200
        after: true
        script: |
          cd $workingDirectory;
          
          terraform show -json terraformLibrary.tfPlan > tfplan.json;
          echo "Checkint Open Policy Agent Policy";
          opaCheck=$(opa exec --decision terraform/analysis/authz --bundle .terrakube/toolsRepository/policy/ tfplan.json | jq '.result[0].result');

          if [ "$opaCheck" == "true" ]; then
            echo "Policy is valid"
            exit 0
          else
            echo "Policy is invalid"
            exit 1
          fi
...
provider "aws" {
    region = "us-west-1"
}
resource "aws_instance" "web" {
  instance_type = "t2.micro"
  ami = "ami-09b4b74c"
}
resource "aws_autoscaling_group" "my_asg" {
  availability_zones        = ["us-west-1a"]
  name                      = "my_asg"
  max_size                  = 5
  min_size                  = 1
  health_check_grace_period = 300
  health_check_type         = "ELB"
  desired_capacity          = 4
  force_delete              = true
  launch_configuration      = "my_web_config"
}
resource "aws_launch_configuration" "my_web_config" {
    name = "my_web_config"
    image_id = "ami-09b4b74c"
    instance_type = "t2.micro"
}
provider "aws" {
    region = "us-west-1"
}
resource "aws_instance" "web" {
  instance_type = "t2.micro"
  ami = "ami-09b4b74c"
}
resource "aws_autoscaling_group" "my_asg" {
  availability_zones        = ["us-west-1a"]
  name                      = "my_asg"
  max_size                  = 5
  min_size                  = 1
  health_check_grace_period = 300
  health_check_type         = "ELB"
  desired_capacity          = 4
  force_delete              = true
  launch_configuration      = "my_web_config"
}
resource "aws_launch_configuration" "my_web_config" {
    name = "my_web_config"
    image_id = "ami-09b4b74c"
    instance_type = "t2.micro"
}
resource "aws_autoscaling_group" "my_asg2" {
  availability_zones        = ["us-west-2a"]
  name                      = "my_asg2"
  max_size                  = 6
  min_size                  = 1
  health_check_grace_period = 300
  health_check_type         = "ELB"
  desired_capacity          = 4
  force_delete              = true
  launch_configuration      = "my_web_config"
}
resource "aws_autoscaling_group" "my_asg3" {
  availability_zones        = ["us-west-2b"]
  name                      = "my_asg3"
  max_size                  = 7
  min_size                  = 1
  health_check_grace_period = 300
  health_check_type         = "ELB"
  desired_capacity          = 4
  force_delete              = true
  launch_configuration      = "my_web_config"
}
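Under the same weights, the extra auto-scaling groups in this plan push the score past the threshold (a quick sanity check of the arithmetic):

```python
# 3 aws_autoscaling_group creates (weight 10 each) + 1 aws_instance create (weight 1).
score = 3 * 10 + 1 * 1
print(score, score < 30)  # prints: 31 False -> authz is denied
```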
flow:
  - type: "terraformPlan"
    step: 100
    name: "Terraform Plan with OPA Check"
    commands:
      - runtime: "GROOVY"
        priority: 100
        after: true
        script: |
          import Opa

          new Opa().loadTool(
            "$workingDirectory",
            "$bashToolsDirectory",
            "0.44.0")
          "Opa Download Completed..."
      - runtime: "BASH"
        priority: 200
        after: true
        script: |
          cd $workingDirectory;
          terraform show -json terraformLibrary.tfPlan > tfplan.json;
          echo "Checkint Open Policy Agent Policy";
          opaCheck=$(opa exec --decision terraform/analysis/authz --bundle .terrakube/toolsRepository/policy/ tfplan.json | jq '.result[0].result');

          if [ "$opaCheck" == "true" ]; then
            echo "Policy is valid"
            exit 0
          else
            echo "Policy is invalid"
            exit 1
          fi
  - type: "terraformApply"
    step: 300
 

CLI-driven Workflow

With the CLI integration, you can use Terrakube’s collaboration features in the Terraform CLI workflow that you are already familiar with. This gives you the best of both worlds as a developer who uses the Terraform CLI, and it also works well with your existing CI/CD pipelines.

You can initiate runs with the usual terraform plan and terraform apply commands and then monitor the run progress from your terminal. These runs execute remotely in Terrakube.

Terrakube creates default templates for the CLI-driven workflow when you create a new organization. If you delete those templates you won't be able to run this workflow; check the documentation for more information.

At the moment all the terraform operations are executed remotely in Terrakube.

Configuration

To use the CLI-driven workflow, the first step is to set up our project to use the terraform remote backend or the cloud block, as in the following example:

  • Hostname

    • This should be the Terrakube API

  • Organization

    • This should be the Terrakube organization where the workspace will be created.

  • Workspace

    • The name of the workspace to be created, or of an existing workspace created in the UI, through the API, or via the CLI-driven workflow.

The cloud block is available in Terraform v1.1 and later. Previous versions can use the remote backend to configure the CLI workflow and migrate state. Using tags in the cloud block is not yet supported.
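The settings above map onto the cloud block like this (the hostname, organization, and workspace name are illustrative; replace them with your own):

```hcl
terraform {
  cloud {
    hostname     = "terrakube-api.minikube.net" # Terrakube API
    organization = "simple"                     # Terrakube organization

    workspaces {
      name = "simple" # workspace to create or reuse
    }
  }
}
```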

Terraform Login

Once you have added the Terraform remote state configuration, you will need to authenticate to Terrakube by running terraform login with the Terrakube API hostname:

The output will look something like the following:

Terraform Init

After the authentication is completed, you can run the following command to initialize the workspace:

The output will look like this:

If the initialization is successful, you should see something like this:

Implicit Workspace Creation

Terrakube will create a new workspace with the name you specify in the cloud block if it doesn’t already exist in your organization. You will see a message about this when you run terraform init.

Terraform Plan

Once the Terraform initialization is completed, you should be able to run a terraform plan remotely using the following:

You can use a .tfvars file in your working directory to upload the variables to the workspace automatically.
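As a sketch, a terraform.tfvars file like the following (the variable names here are hypothetical and must match variables declared in your configuration) would have its values uploaded as workspace variables:

```hcl
# terraform.tfvars - example values; these are uploaded to the Terrakube workspace
region         = "us-east-1"
instance_count = 2
```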

If you go to the Terrakube UI, you will be able to see the workspace plan execution.

Terraform Apply

You can also execute a terraform apply and it will run inside Terrakube remotely.

The terraform apply can only be approved using the Terraform CLI. You won't be able to approve the job inside the UI.

Terraform Destroy

Executing the terraform destroy is also possible.

Inside the UI the terraform destroy operation will look like this.

The terraform destroy can only be approved using the Terraform CLI. You won't be able to approve the job inside the UI.

Amazon Cognito

AWS Cognito authentication with the Dex connector requires Terrakube >= 2.6.0 and Helm Chart >= 2.0.0

Requirements

  • AWS Cognito

  • AWS S3 Bucket

For this example, let's imagine that you will be using the following domains to deploy Terrakube.

  • registry.terrakube.aws.com

  • ui.terrakube.aws.com

  • api.terrakube.aws.com

Setup AWS Cognito Authentication

You need to complete the AWS Cognito authentication setup for Dex using the OIDC connector. You can find more information in this

You need to create a new Cognito user pool

You can keep the default values

Add the domain name to Cognito

Once the user pool is created you will need to create a new application.

Update the application configuration and update the redirect URL configuration.

Now you can create the Dex configuration; you will use this config later when deploying the Helm chart.

The first step is to clone the repository.

Replace <<CHANGE_THIS>> with the real values, create the values.yaml file, and run the helm install.

Run the installation

For any questions or feedback, please open an issue in our

terraform {
  cloud {
    organization = "simple"
    hostname = "terrakube-api.example.com"

    workspaces {
      tags = ["development", "networking"]
    }
  }
}

resource "random_string" "random" {
  length           = 16
  special          = true
  override_special = "/@£$"
}
terraform login 8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io
terraform login 8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io
Terraform will request an API token for 8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io using OAuth.

This will work only if you are able to use a web browser on this computer to
complete a login process. If not, you must obtain an API token by another
means and configure it in the CLI configuration manually.

If login is successful, Terraform will store the token in plain text in
the following file for use by subsequent commands:
    C:\Users\XXXXXX\AppData\Roaming\terraform.d\credentials.tfrc.json

Do you want to proceed?
  Only 'yes' will be accepted to confirm.

  Enter a value: yes

Terraform must now open a web browser to the login page for 8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io.

If a browser does not open this automatically, open the following URL to proceed:
    https://5556-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io/dex/auth?scope=openid+profile+email+offline_access+groups&client_id=example-app&code_challenge=YYe059U9HyeMy1-RmuEzSajPJYGdcY4IkNr0pZz4LZ8&code_challenge_method=S256&redirect_uri=http%3A%2F%2Flocalhost%3A10000%2Flogin&response_type=code&state=f8b6da79-35f3-2b92-d8d8-179cda386f91

Terraform will now wait for the host to signal that login was successful.


---------------------------------------------------------------------------------

Success! Terraform has obtained and saved an API token.

The new API token will be used for any future Terraform command that must make
authenticated requests to 8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io.
terraform init
terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Using previously-installed hashicorp/random v3.4.3

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io/app/simple/workspace1/runs/1

Waiting for the plan to start...

Terrakube Remote Plan Execution



Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # random_string.random will be created
  + resource "random_string" "random" {
      + id               = (known after apply)
      + length           = 16
      + lower            = true
      + min_lower        = 0
      + min_numeric      = 0
      + min_special      = 0
      + min_upper        = 0
      + number           = true
      + numeric          = true
      + override_special = "/@£$"
      + result           = (known after apply)
      + special          = true
      + upper            = true
    }

Plan: 1 to add, 0 to change, 0 to destroy.
terraform apply
Running apply in the remote backend. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io/app/simple/workspace1/runs/2

Waiting for the plan to start...

Terrakube Remote Plan Execution



Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # random_string.random will be created
  + resource "random_string" "random" {
      + id               = (known after apply)
      + length           = 16
      + lower            = true
      + min_lower        = 0
      + min_numeric      = 0
      + min_special      = 0
      + min_upper        = 0
      + number           = true
      + numeric          = true
      + override_special = "/@£$"
      + result           = (known after apply)
      + special          = true
      + upper            = true
    }

Plan: 1 to add, 0 to change, 0 to destroy.


Do you want to perform these actions in workspace "workspace1"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

random_string.random: Creating...
random_string.random: Creation complete after 0s [id=QvsEmtO7WoeJJcAJ]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform destroy
Running apply in the remote backend. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io/app/simple/workspace1/runs/3

Waiting for the plan to start...

Terrakube Remote Plan Execution


random_string.random: Refreshing state... [id=QvsEmtO7WoeJJcAJ]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # random_string.random will be destroyed
  - resource "random_string" "random" {
      - id               = "QvsEmtO7WoeJJcAJ" -> null
      - length           = 16 -> null
      - lower            = true -> null
      - min_lower        = 0 -> null
      - min_numeric      = 0 -> null
      - min_special      = 0 -> null
      - min_upper        = 0 -> null
      - number           = true -> null
      - numeric          = true -> null
      - override_special = "/@£$" -> null
      - result           = "QvsEmtO7WoeJJcAJ" -> null
      - special          = true -> null
      - upper            = true -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.


Do you really want to destroy all resources in workspace "workspace1"?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

random_string.random: Destroying... [id=QvsEmtO7WoeJJcAJ]
random_string.random: Destruction complete after 0s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.
terraform {
  backend "remote" {
    hostname = "terrakube-api.example.com"
    organization = "simple"
    workspaces {
      name = "workspace1"
    }
  }
}

resource "random_string" "random" {
  length           = 16
  special          = true
  override_special = "/@£$"
}
terraform {
  cloud {
    organization = "simple"
    hostname = "terrakube-api.example.com"
    workspaces {
      name = "samplecloud"
    }
  }
}

resource "random_string" "random" {
  length           = 16
  special          = true
  override_special = "/@£$"
}
## Dex
dex:
  config:
    issuer: https://terrakube-api.yourdomain.com/dex #<<CHANGE_THIS>>
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"]
      skipApprovalScreen: true
    web:
      allowedOrigins: ["*"]

    staticClients:
    - id: cognito
      redirectURIs:
      - 'https://ui.yourdomain.com' #<<CHANGE_THIS>>
      - 'http://localhost:3000'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'cognito'
      public: true

    connectors:
    - type: oidc
      id: cognito
      name: cognito
      config:
        issuer: "https://cognito-idp.XXXXX.amazonaws.com/XXXXXXX" #<<CHANGE_THIS>>
        clientID: "XXXX" #<<CHANGE_THIS>>
        clientSecret: "XXXXX" #<<CHANGE_THIS>>
        redirectURI: "https://terrakube-api.yourdomain.com/dex/callback" #<<CHANGE_THIS>>
        scopes:
          - openid
          - email
          - profile
        insecureSkipEmailVerified: true
        insecureEnableGroups: true
        userNameKey: "cognito:username"
        claimMapping:
          groups: "cognito:groups"
git clone https://github.com/AzBuilder/terrakube-helm-chart.git
## Global Name
name: "terrakube"

## Terrakube Security
security:
  adminGroup: "<<CHANGE_THIS>>" # The value should be a valid group (example: TERRAKUBE_ADMIN)
  patSecret: "<<CHANGE_THIS>>"  # Sample Key 32 characters z6QHX!y@Nep2QDT!53vgH43^PjRXyC3X 
  internalSecret: "<<CHANGE_THIS>>" # Sample Key 32 characters Kb^8cMerPNZV6hS!9!kcD*KuUPUBa^B3 
  dexClientId: "cognito"
  dexClientScope: "email openid profile offline_access groups"
  
## Terraform Storage
storage:
  aws:
    accessKey: "XXXXX" #<<CHANGE_THIS>>
    secretKey: "XXXXX" #<<CHANGE_THIS>>
    bucketName: "XXXXX" #<<CHANGE_THIS>>
    region: "XXXXX" #<<CHANGE_THIS>>

## Dex
dex:
  config:
    issuer: https://terrakube-api.yourdomain.com/dex #<<CHANGE_THIS>>
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"]
      skipApprovalScreen: true
    web:
      allowedOrigins: ["*"]

    staticClients:
    - id: cognito
      redirectURIs:
      - 'https://ui.yourdomain.com' #<<CHANGE_THIS>>
      - 'http://localhost:3000'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'cognito'
      public: true

    connectors:
    - type: oidc
      id: cognito
      name: cognito
      config:
        issuer: "https://cognito-idp.XXXXX.amazonaws.com/XXXXXXX" #<<CHANGE_THIS>>
        clientID: "XXXX" #<<CHANGE_THIS>>
        clientSecret: "XXXXX" #<<CHANGE_THIS>>
        redirectURI: "https://terrakube-api.yourdomain.com/dex/callback" #<<CHANGE_THIS>>
        scopes:
          - openid
          - email
          - profile
        insecureSkipEmailVerified: true
        insecureEnableGroups: true
        userNameKey: "cognito:username"
        claimMapping:
          groups: "cognito:groups"

## API properties
api:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    databaseType: "H2"

## Executor properties
executor:
  enabled: true 
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    toolsRepository: "https://github.com/AzBuilder/terrakube-extensions"
    toolsBranch: "main"

## Registry properties
registry:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## UI Properties
ui:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## Ingress properties
ingress:
  useTls: true
  ui:
    enabled: true
    domain: "ui.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix" 
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      cert-manager.io/cluster-issuer: letsencrypt
  api:
    enabled: true
    domain: "api.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  registry:
    enabled: true
    domain: "registry.terrakube.azure.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  dex:
    enabled: true
    path: "/dex/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
helm install --debug --values ./values.yaml terrakube ./terrakube-helm-chart/ -n terrakube

Google Cloud Identity

Google Identity authentication with the Dex connector requires Terrakube >= 2.6.0 and Helm Chart >= 2.0.0

Requirements

  • Google Cloud Identity

Google Storage Bucket

For this example, let's imagine that you will be using the following domains to deploy Terrakube.

  • registry.terrakube.gcp.com

  • ui.terrakube.gcp.com

  • api.terrakube.gcp.com

Setup Google Authentication

You need to complete the Google authentication setup for Dex. You can find more information in this

You need to go to your GCP project and create a new OAuth application. You can follow these steps: first select "APIs & Services => Credentials"

Once inside the "Credentials" page, you will have to create a new OAuth Client

The OAuth application should look like this with the redirect URL ""

For Google authentication we need to get the GCP groups, so you need to complete .

Include the Domain Wide Delegation inside the admin console for the OAuth application

Using the following permission ""

You can now generate the JSON credentials file for your application; you will use this file later in the Helm chart.

Now you can create the Dex configuration; you will use this config later when deploying the Helm chart.

The first step is to clone the repository.

Replace <<CHANGE_THIS>> with the real values, create the values.yaml file, and run the helm install.

Run the installation

For any questions or feedback, please open an issue in our

## Dex
dex:
  enabled: true
  config:
    issuer: https://api.terrakube.gcp.com/dex #<<CHANGE_THIS>>
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"]
    web:
      allowedOrigins: ["*"]

    staticClients:
    - id: google
      redirectURIs:
      - 'https://ui.terrakube.gcp.com' #<<CHANGE_THIS>>
      - 'http://localhost:3000'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'google'
      public: true

    connectors:
    - type: google
      id: google
      name: google
      config:
        clientID: "<<CHANGE_THIS>>"
        clientSecret: "<<CHANGE_THIS>>"
        redirectURI: "https://api.terrakube.gcp.com/dex/callback"
        serviceAccountFilePath: "/etc/gcp/secret/gcp-credentials" # GCP CREDENTIAL FILE WILL BE IN THIS PATH
        adminEmail: "<<CHANGE_THIS>>"
git clone https://github.com/AzBuilder/terrakube-helm-chart.git
## Global Name
name: "terrakube"

## Terrakube Security
security:
  adminGroup: "<<CHANGE_THIS>>" # The value should be a gcp group (format: [email protected] example: [email protected])
  patSecret: "<<CHANGE_THIS>>"  # Sample Key 32 characters z6QHX!y@Nep2QDT!53vgH43^PjRXyC3X 
  internalSecret: "<<CHANGE_THIS>>" # Sample Key 32 characters Kb^8cMerPNZV6hS!9!kcD*KuUPUBa^B3 
  dexClientId: "google"
  dexClientScope: "email openid profile offline_access groups"
  gcpCredentials: |
    ## GCP JSON CREDENTIALS for service account with API Scope https://www.googleapis.com/auth/admin.directory.group.readonly
    {
      "type": "service_account",
      "project_id": "",
      "private_key_id": "",
      "private_key": "",
      "client_email": "",
      "client_id": "",
      "auth_uri": "",
      "token_uri": "",
      "auth_provider_x509_cert_url": "",
      "client_x509_cert_url": ""
    } 


## Terraform Storage
storage:
  gcp:
    projectId: "<<CHANGE_THIS>>"
    bucketName: "<<CHANGE_THIS>>"
    credentials: |
      ## GCP JSON CREDENTIALS for service account with access to write to the storage bucket
      {
        "type": "service_account",
        "project_id": "",
        "private_key_id": "",
        "private_key": "",
        "client_email": "",
        "client_id": "",
        "auth_uri": "",
        "token_uri": "",
        "auth_provider_x509_cert_url": "",
        "client_x509_cert_url": ""
      } 

## Dex
dex:
  enabled: true
  config:
    issuer: https://api.terrakube.gcp.com/dex #<<CHANGE_THIS>>
    storage:
      type: memory
    oauth2:
      responseTypes: ["code", "token", "id_token"]
    web:
      allowedOrigins: ["*"]

    staticClients:
    - id: google
      redirectURIs:
      - 'https://ui.terrakube.gcp.com' #<<CHANGE_THIS>>
      - 'http://localhost:3000'
      - 'http://localhost:10001/login'
      - 'http://localhost:10000/login'
      - '/device/callback'
      name: 'google'
      public: true

    connectors:
    - type: google
      id: google
      name: google
      config:
        clientID: "<<CHANGE_THIS>>"
        clientSecret: "<<CHANGE_THIS>>"
        redirectURI: "https://api.terrakube.gcp.com/dex/callback"
        serviceAccountFilePath: "/etc/gcp/secret/gcp-credentials" # GCP CREDENTIAL FILE WILL BE IN THIS PATH
        adminEmail: "<<CHANGE_THIS>>" 

## API properties
api:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    databaseType: "H2"

## Executor properties
executor:
  enabled: true  
  replicaCount: "1"
  serviceType: "ClusterIP"
  properties:
    toolsRepository: "https://github.com/AzBuilder/terrakube-extensions"
    toolsBranch: "main"

## Registry properties
registry:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## UI Properties
ui:
  enabled: true
  replicaCount: "1"
  serviceType: "ClusterIP"

## Ingress properties
ingress:
  useTls: true
  ui:
    enabled: true
    domain: "terrakube-ui.yourdomain.com"
    path: "/(.*)"
    pathType: "Prefix" 
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      cert-manager.io/cluster-issuer: letsencrypt
  api:
    enabled: true
    domain: "terrakube-api.yourdomain.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  registry:
    enabled: true
    domain: "terrakube-reg.yourdomain.com"
    path: "/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
  dex:
    enabled: true
    path: "/dex/(.*)"
    pathType: "Prefix"
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
      cert-manager.io/cluster-issuer: letsencrypt
helm install --debug --values ./values.yaml terrakube ./terrakube-helm-chart/ -n terrakube

Updates

May 2024 (2.21.0)

Welcome to the 2.21.0 release from Terrakube! As we reach the midpoint of the year, we’re excited to introduce several new features and enhancements that improve the overall user experience and functionality. Let’s dive into the key highlights:

Terrakube Actions

Terrakube actions allow you to extend the UI in Workspaces, functioning similarly to plugins. These actions enable you to add new functions to your Workspace, transforming it into a hub where you can monitor and manage your infrastructure and gain valuable insights. With Terrakube actions, you can quickly observe your infrastructure and apply necessary actions.

In the demo below, you can see how to use a couple of the new actions to monitor metrics and restart an Azure VM.


Additionally, we are providing an action to interact with OpenAI directly from the workspace. Check out the Terrakube OpenAI Action for more details.


Check our documentation for more details on how to use and start creating Terrakube actions.

Authenticate Providers with Dynamic Credentials

Terrakube's dynamic provider credentials feature allows you to establish a secure trust relationship between Terraform/OpenTofu and your cloud provider. By using unique, short-lived credentials for each job, this feature significantly reduces the risk of compromised credentials. We are excited to announce support for dynamic credentials for the following cloud providers:

  • AWS

  • Google Cloud Platform

  • Azure

This enhancement ensures that your infrastructure management is both secure and efficient, minimizing potential security vulnerabilities while maintaining seamless integration with your cloud environments.

Support for --auto-approve Flag

We are pleased to announce the addition of support for the --auto-approve flag in the CLI-driven workflow. This feature allows for the automatic approval of Terraform and OpenTofu apply operations.

user@pop-os:~/git/simple-terraform$ terraform apply --auto-approve

Running apply in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-jdlq1j61x7k.ws-us110.gitpod.io/app/simple/auto-approve/runs/1

Waiting for the plan to start...

***************************************
Running Terraform PLAN
***************************************

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.next will be created
  + resource "null_resource" "next" {
      + id = (known after apply)
    }

  # time_sleep.wait_30_seconds will be created
  + resource "time_sleep" "wait_30_seconds" {
      + create_duration = (known after apply)
      + id              = (known after apply)
    }

  # module.time_module.random_integer.time will be created
  + resource "random_integer" "time" {
      + id     = (known after apply)
      + max    = 5
      + min    = 1
      + result = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + creation_time = (known after apply)
  + fake_data     = {
      + data     = "Hello World"
      + resource = {
          + resource1 = "fake"
        }
    }

module.time_module.random_integer.time: Creating...
module.time_module.random_integer.time: Creation complete after 0s [id=5]
time_sleep.wait_30_seconds: Creating...
time_sleep.wait_30_seconds: Creation complete after 5s [id=2024-04-16T22:25:15Z]
null_resource.next: Creating...
null_resource.next: Creation complete after 0s [id=7860036023219461249]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:

creation_time = "5s"
fake_data = {
  "data" = "Hello World"
  "resource" = {
    "resource1" = "fake"
  }
}

UI Enhancements

We have made several enhancements to the UI, focusing on improving the overall user experience:

  • Buttons: Basic rounded corner adjustments.

  • Global Shadow Optimization: Adjusted from three layers of shadows to two layers, used in various components.

  • Loading Spinners: Added spinners for loading main pages instead of static loading text.

  • Field Validations: Added validations for fields to improve form submission accuracy.

These updates aim to make the Terrakube interface more intuitive and visually appealing.

Bug fixes

We have made some improvements and fixes based on the feedback from the community and the security analysis. You can find the full list of changes for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.21.0

March 2024 (2.20.0)

Welcome to the 2.20.0 release from Terrakube! In our second update of the year, we're excited to share several key highlights with you:

Workspace Importer

Our new Workspace Importer is designed to simplify your migration from Terraform Cloud or Terraform Enterprise to Terrakube. Providing a user-friendly wizard, this feature streamlines the process of transferring all your workspaces to Terrakube, ensuring that variables, tags, state files, and other components are copied over easily. During its beta phase, this feature has been tested with hundreds of workspaces, receiving positive feedback from the community. If you're considering a migration to Terrakube, we highly recommend reviewing our documentation to see how this feature can save you time and effort.


Enhanced API Token Management

We've enhanced the UI for managing API tokens, making it more intuitive and efficient. Now, revoking tokens can be done directly through the UI with ease.


Organization Default Execution Mode Update

This new feature allows you to set the default execution mode—either local or remote—at the organization level.


Enhanced Support for Monorepos in Private Registry

Responding to community feedback regarding the management of Terraform modules within a monorepo, we've made significant enhancements to our Private Registry. Now, you have the capability to specify the path of the module within the monorepo and set a tag prefix when creating a module.


Agents Support

We've expanded Terrakube's capabilities with the introduction of multiple Terrakube executor agents. Now, you have the flexibility to select the specific agent you wish to use for each workspace. This new feature enables a higher level of execution isolation based on workspaces, offering improved security and customization to suit your project's needs.


Prefix Support for Remote Backend

If you are using the remote backend, this enhancement allows you to perform operations across multiple workspaces under the same prefix, improving workspace management and operation execution.

terraform {
  backend "remote" {
    hostname = "8080-azbuilder-terrakube-7tnuq3gnkgf.ws-us108.gitpod.io"
    organization = "simple"

    workspaces {
      prefix = "my-app-"
    }
  }
}
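With a prefix configured, the Terraform CLI maps local workspace names to remote workspaces with the prefix removed; for example, remote workspaces named my-app-dev and my-app-prod (hypothetical names) would be used like this:

```
terraform workspace list      # shows "dev" and "prod" (the "my-app-" prefix is stripped)
terraform workspace select dev
terraform plan                # runs remotely against the my-app-dev workspace
```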

Breaking change for PostgreSQL users

If you are using PostgreSQL with Helm chart 3.16.0, you will need to add the database port using the following property:

api.properties.databasePort: "5432"
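In the Helm chart values.yaml, the api.properties.databasePort property goes under the existing api.properties block; a minimal sketch (the databaseType value is just an example):

```yaml
api:
  properties:
    databaseType: "POSTGRESQL"
    databasePort: "5432"
```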

More information can be found in this issue

Bug fixes

We have made some improvements and fixes based on the feedback from the community and the security analysis. You can find the full list of changes for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.20.0

January 2024 (2.19.0)

As we step into the new year, we are thrilled to announce the release of Terrakube 2.19.0, a significant update that continues to push the boundaries of Infrastructure as Code (IaC) management. Here are some key highlights of this release:

OpenTofu Now Available on Terrakube

With OpenTofu now in General Availability (GA), we've integrated it as an alternative within Terrakube. Now, when you create workspaces, you have the option to select either Terraform or OpenTofu. Terrakube will then tailor the user experience specifically to the tool you choose. Additionally, you can switch your existing workspaces to OpenTofu via the Workspace settings, allowing for a smooth transition between tools.


In the future we are planning to introduce specific features for Terraform and OpenTofu based on their evolution. We will also introduce support for more IaC tools later.

Enhanced Workspace Overview Page

We've upgraded the Workspace Overview Page to enrich your interaction with your infrastructure. Now, you'll find a detailed count of resources and outputs. Each resource is represented by an icon indicating its type, with current support for AWS and Azure icons. Clicking on a resource name unfolds a detailed view of all its attributes and dependencies, and provides direct access to the resource's documentation.


You can access the attributes and dependencies view from both the Workspace Overview page and the Visual State view.

Webhook Improvements

Now, you have the flexibility to select the default template that will execute for each new push in your Version Control System (VCS) at the workspace level. By default, Terrakube performs Plan and Apply operations, but you can now choose a different template tailored to your specific use case. This enhancement allows for the implementation of customized workflows across different organizations or workspaces.


Additionally, Terrakube now checks for the configured directory in the Workspace and will only run the job when a file changes in the specified directory. This feature is particularly useful for monorepositories, where you want to manage different workspaces from a single repository. This functionality has been implemented for GitLab, GitHub, and Bitbucket.

CLI Driven workflow

When leveraging the CLI-driven workflow, you now have the option to utilize the refresh flag.

Here's an example without the refresh flag:

user@pop-os:~/git/simple-terraform$ terraform plan

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-3it4oxf0vpt.ws-us107.gitpod.io/app/simple/simple-terraform/runs/2

Waiting for the plan to start...

***************************************
Running Terraform PLAN
***************************************
null_resource.previous: Refreshing state... [id=2942672164070633234]
module.time_module.random_integer.time: Refreshing state... [id=4]
time_sleep.wait_30_seconds: Refreshing state... [id=2023-12-28T20:29:16Z]
null_resource.next: Refreshing state... [id=1447023117451433293]
null_resource.next2: Refreshing state... [id=8910164257527828565]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.next3 will be created
  + resource "null_resource" "next3" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Example setting refresh to false:

terraform plan -refresh=false

Running plan in Terraform Cloud. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

To view this run in a browser, visit:
https://8080-azbuilder-terrakube-3it4oxf0vpt.ws-us107.gitpod.io/app/simple/simple-terraform/runs/3

Waiting for the plan to start...

***************************************
Running Terraform PLAN
***************************************

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.next3 will be created
  + resource "null_resource" "next3" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

API Token Revocation

You now have the ability to revoke both Personal and Team API Tokens. Currently, this functionality is exclusively accessible via the API, but it will be extended to the UI in a forthcoming release. For more information, please refer to our documentation.

Bug fixes

We have made some improvements and fixes based on the feedback from the community and the security analysis. You can find the full list of changes for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.19.0

December 2023 (2.18.0)

Greetings! As the year comes to a close, we are excited to present the 2.18.0 release of Terrakube. This version brings features and enhancements that we hope you'll enjoy; some of the key highlights include:

AWS icons in State Diagram

The state diagram now includes all the AWS architecture icons, so you can easily recognize each resource type. This feature will help you visualize your Terraform state in a more intuitive way. We are also working on improving the diagram to group resources by VPC, location, etc., which will give you a closer look at how your architecture is designed based on the Terraform state.

image

Visual State Diagram export option

Following the state diagram enhancements, we have also added the option to save the diagram in different formats. You can export a high-quality image in PNG, SVG, or JPEG format by using the Download button and selecting the preferred format. This feature will help you use the diagram in your documentation or presentations.

image

Self Hosted VCS Providers Support

We have added support for GitHub Enterprise, GitLab Enterprise Edition, and GitLab Community Edition. This means that you can now integrate your workspaces and modules using your self-hosted VCS providers. This is a major milestone in our vision of providing an enterprise-grade experience with Terrakube. In upcoming releases we will add Bitbucket Server and Azure DevOps Server. For more details on how to use this feature, check the following links.

  • Github Enterprise

image
  • GitLab Enterprise and Community Editions

image

Webhooks support for GitLab (Cloud, EE and CE), BitBucket and Github (Cloud and Enterprise)

We have introduced a new feature that will make your workflow more efficient and convenient. Now, Terrakube will automatically trigger a new Plan and Apply job for each new push in the repository that is linked to the workspace. You no longer have to run the jobs manually using the UI or the terraform CLI. This feature will help you keep your infrastructure in sync with your code changes.

image

Bug fixes

We have made some improvements and fixes based on the feedback from the community and the security analysis. You can find the full list of changes for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.18.0

November 2023 (2.17.0)

Welcome to the 2.17.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Submodules view

You can now view the submodules of your Terraform Module in the open registry. When you select a submodule, you will see its details, such as inputs, outputs and resources.

image
image

Workspace Overview

We have enhanced the workspace overview to give you more information at a glance. You can now see the Terraform version, the repository, and the list of resources and outputs for each workspace.

image
image

Team API Tokens

You can now generate team API tokens using the Terrakube UI and specify their expiration time in days/minutes/hours. This gives you more flexibility and control over your team’s access to Terrakube. To learn more, please visit our documentation.

image
image

Terraform provider for Terrakube

You can use the Terrakube provider for Terraform to manage modules, organizations and so on.

provider "terrakube" {
  endpoint = "http://terrakube-api.minikube.net"
  token    = "(PERSONAL ACCESS TOKEN OR TEAM TOKEN)"
}

data "terrakube_organization" "org" {
  name = "simple"
}

data "terrakube_vcs" "vcs" {
  name            = "sample_vcs"
  organization_id = data.terrakube_organization.org.id
}

data "terrakube_ssh" "ssh" {
  name            = "sample_ssh"
  organization_id = data.terrakube_organization.org.id
}

resource "terrakube_team" "team" {
  name             = "TERRAKUBE_SUPER_ADMIN"
  organization_id  = data.terrakube_vcs.vcs.organization_id
  manage_workspace = false
  manage_module    = false
  manage_provider  = true
  manage_vcs       = true
  manage_template  = true
}

resource "terrakube_module" "module1" {
  name            = "module_public_connection"
  organization_id = data.terrakube_ssh.ssh.organization_id
  description     = "module_public_connection"
  provider_name   = "aws"
  source          = "https://github.com/terraform-aws-modules/terraform-aws-vpc.git"
}

Bug fixes

We’ve addressed some issues reported by the community and fixed some vulnerabilities. You can see the full change log for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.17.0

October 2023 (2.16.0)

Welcome to the 2.16.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Improvement in Workspaces List

We have added more filters to workspaces, so you can easily find the one you need. You can now use the tags or description of the workspaces to narrow down your search. Additionally, you can see the tags of each workspace on its card, without having to enter it. This way, you can quickly identify the workspaces that are relevant to you.

image

Team API Tokens

In the previous version, you could only generate API tokens that were associated with a user account. Now, you can also generate API tokens that are linked to a specific team. This gives you more flexibility and control over your API access and permissions. To generate a team API token, you need to use the API for now, but we are working on adding this feature to the UI in the next version. You can find more information on how to use team API tokens in our documentation

POST {{terrakubeApi}}/access-token/v1/teams
Authorization: Bearer (PERSONAL ACCESS TOKEN)
Request Body:
{
  "days": "1",
  "group": "TERRAKUBE_ADMIN",
  "description": "Sample Team Token"
}

Bug fixes and Performance

We’ve addressed some issues reported by the community regarding performance, bugs, and security. You can see the full changelog for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.16.0

August 2023 (2.15.0)

Welcome to the 2.15.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Local Execution Mode

In the previous version, all executions were run remotely in Terrakube. However, starting from this version, you can choose to define the execution mode as either local or remote. By default, all workspaces run remotely. If you want to disable remote execution for a workspace, you can change its execution mode to “Local”. This mode allows you to perform Terraform runs locally using the CLI-driven run workflow.

image

Accessing State from Other Workspaces

It's now possible to use the terraform_remote_state data source to access state between workspaces. For further details, check our documentation.

data "terraform_remote_state" "remote_creation_time" {
  backend = "remote"
  config = {
    organization = "simple"
    hostname = "8080-azbuilder-terrakube-vg8s9w8fhaj.ws-us102.gitpod.io"
    workspaces = {
      name = "simple_tag1"
    }
  }
}

Support Cloud Block with Tags

In our previous version, we introduced support for adding tags to Workspaces. Starting from this version, you can use these tags within the cloud block to apply Terraform operations across multiple workspaces that share the same tags. You can find more details in the CLI-Driven workflow documentation

terraform {
  cloud {
    organization = "simple"
    hostname = "8080-azbuilder-terrakube-md8dylvuzta.ws-us101.gitpod.io"

    workspaces {
      tags = ["networking", "development"]
    }
  }
}

Search modules

We’ve added a search function to the Modules pages, making it easier to find modules.

Bug fixes

We’ve addressed some issues reported by the community, so Terrakube is becoming more stable. You can see the full change log for this version here https://github.com/AzBuilder/terrakube/releases/tag/2.15.0

Thanks for all the community support and contributions. Special thanks to @klinux for implementing the search feature inside Modules.

Jun 2023 (2.14.0)

Welcome to the 2.14.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Terraform cloud block support

This new version lets you use the terraform cloud block to execute remote operations using the CLI driven run workflow.

terraform {
  cloud {
    hostname = "8080-azbuilder-terrakube-q8aleg88vlc.ws-us92.gitpod.io"
    organization = "migrate-org"
    workspaces {
      name = "migrate-state"
    }
  }
}

Enhanced UI error messages for Authorization errors

We have enhanced the UI to handle authorization errors more gracefully.

image

Workspace Tags

We have enabled tagging functionality within workspaces. In an upcoming release, you will be able to use these tags with the cloud block feature. For more details, please check the Terrakube documentation.

image
image

Overview Workspace page

With Terrakube’s new workspace Overview page, you can access all the essential information for each of your workspaces in a simple and convenient panel. Initially we are including the last run and workspace tags. We will add more information in upcoming releases.

image

Improved CLI run workflow logs

From this version onwards, you can view the logs in real time when you use the Terraform CLI remote backend or the cloud block.

image

SSH key injection

Previously, Terrakube only used SSH keys to clone private Git repositories. Now, Terrakube injects the keys into the workspace, so if your Terraform modules use Git as their source, Terraform can access the modules with those keys. Check the documentation for further details.
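For instance, with the keys injected, a configuration that pulls a module over SSH can resolve during the run. The repository URL and ref below are placeholders, not a real module:

```hcl
module "vpc" {
  # Git-over-SSH module source; resolvable during the job because Terrakube
  # injects the workspace's SSH key (URL and ref are illustrative)
  source = "git::ssh://git@github.com/your-org/terraform-aws-vpc.git?ref=v1.0.0"
}
```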

For the full changelog for this version please check https://github.com/AzBuilder/terrakube/releases/tag/2.14.0

May 2023 (2.13.0)

Welcome to the 2.13.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Terraform init -migrate-state support

Now you can easily migrate your Terraform state from other tools like Terraform Enterprise (TFE) or Terraform Cloud to Terrakube. The migration is performed with the Terraform CLI.

terraform login 8080-azbuilder-terrakube-q8aleg88vlc.ws-us92.gitpod.io
terraform init -migrate-state

Check out our documentation for further details about the process.
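As a sketch, the migration typically starts from a configuration that points at Terrakube through Terraform's remote backend; the hostname, organization, and workspace name below are placeholders for your own instance:

```hcl
terraform {
  backend "remote" {
    hostname     = "terrakube.example.com"   # your Terrakube instance
    organization = "my-org"
    workspaces {
      name = "migrated-workspace"
    }
  }
}
```

With this block in place, running terraform login followed by terraform init -migrate-state moves the existing state into Terrakube.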

Improved Log streaming in the UI

Starting with this version, you can see the logs in the Terrakube UI in almost real time, so you can easily follow up on your Terraform changes.

image

We also optimized the code and updated most libraries to their latest versions. For the full changelog for this version, please check https://github.com/AzBuilder/terrakube/releases/tag/2.13.0

Mar 2023 (2.12.0)

Welcome to the 2.12.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Search options in Workspaces

Use the new controls in the workspaces list to filter by status or search by name.

Simplified process to add a new VCS provider

You can now connect to a VCS provider in one step, instead of creating it first and then updating the callback URL.

Open Telemetry

We added OpenTelemetry to the API, Registry, and Executor components for monitoring Terrakube. In future versions, we will also provide Grafana templates for custom metrics like job executions and workspace usage. Check the documentation for further information.

Improvement to Templates

  • In the previous version we introduced importComands; now, to improve reuse, we are adding inputsTerraform and inputsEnv, so you can send parameters to your templates in an easy way. For example:

flow:
  - type: "terraformPlan"
    step: 100
    importComands:
      repository: "https://github.com/AzBuilder/terrakube-extensions"
      folder: "templates/terratag"
      branch: "import-template"
      inputsEnv:
        INPUT_IMPORT1: $IMPORT1
        INPUT_IMPORT2: $IMPORT2
      inputsTerraform:
        INPUT_IMPORT1: $IMPORT1
        INPUT_IMPORT2: $IMPORT2
  - type: "terraformApply"
    step: 200

Check the documentation for more details.

  • And if you want to create a schedule for a template, you can now do it easily with scheduleTemplates.

flow:
  - type: "customScripts"
    step: 100
    inputsTerraform:
      SIZE: $SIZE_SMALL
    commands:
      - runtime: "BASH"
        priority: 100
        before: true
        script: |
          echo "SIZE : $SIZE"
  - type: "scheduleTemplates"
    step: 200
    templates:
      - name: "schedule1"
        schedule: "0 0/5 * ? * * *"
      - name: "schedule2"
        schedule: "0 0/10 * ? * * *"

Check here for more details.
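The schedule strings in the example appear to follow the Quartz cron format, which uses seven fields (seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year) rather than the five fields of classic crontab. A small sketch labeling the fields:

```python
# Label the fields of a Quartz-style cron expression such as "0 0/5 * ? * * *".
# Quartz uses seven fields, starting with seconds and ending with an optional
# year; "?" means "no specific value" and is only valid in the day-of-month
# and day-of-week positions.
FIELDS = ["seconds", "minutes", "hours", "day-of-month", "month", "day-of-week", "year"]

def label_quartz_cron(expr: str) -> dict:
    parts = expr.split()
    if len(parts) not in (6, 7):  # the year field is optional in Quartz
        raise ValueError(f"expected 6 or 7 fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# "0/5" in the minutes position means "every 5 minutes, starting at minute 0"
print(label_quartz_cron("0 0/5 * ? * * *")["minutes"])  # 0/5
```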

Restrict Terraform Versions or Provide your own Terraform build

You can now use a custom Terraform CLI build or restrict the Terraform versions in your organization with Terrakube. Check the documentation for more details.

Terrakube Installation

We improved the Helm chart to make installation easier, and you can now quickly install Terrakube in a sandbox test environment on Minikube. If you prefer, you can also try Terrakube using Docker Compose.

Thanks for the community contributions; special thanks to @Diliz and @jstewart612 for their work on the Helm chart. For the full changelog for this version, please check https://github.com/AzBuilder/terrakube/releases/tag/2.12.0

And if you have any idea or suggestion don't hesitate to let us know.

February 2023 (2.11.0)

Welcome to the 2.11.0 release of Terrakube. There are a couple of updates in this version that we hope you'll like; some of the key highlights include:

Adding Global Variables to Organization Settings

Global Variables allow you to define variables once and apply them across all workspaces within an organization, so you can set your cloud provider credentials and reuse them in every workspace inside the same organization. See more details in our documentation.
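As a sketch of how a workspace consumes such a variable, assume a global Terraform variable named region has been defined at the organization level (the variable name is illustrative):

```hcl
# The workspace configuration declares the variable as a normal input;
# Terrakube supplies the value from the organization-level global variable
# at run time ("region" is an illustrative name, not a required one).
variable "region" {
  type = string
}

provider "aws" {
  region = var.region
}
```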

Adding CLI and API Driven workflows to workspaces

CLI-driven and API-driven workflows are now available. This means you can create a workspace in the UI and then connect to it using the Terraform CLI or the Terrakube API. Or, if you prefer, you can connect to your Terrakube organization locally from the Terraform CLI. Check the CLI-driven workflow documentation for further details.

For CLI or API driven workspaces you will see the steps directly in the overview tab:

And in the list of workspaces you will see a badge so you can easily identify CLI and API driven workspaces

Improved error handling for invalid templates

If your template has a YAML syntax error, it will now be easy to identify. You will see a clearer message about the job execution failure, so you can easily tell whether the error is related to the execution or to the template configuration.

Import scripts inside the templates

If you have scripts that you want to reuse, you can import them using the new importComands option. This improves template reuse, so you can share common steps with the community through your GitHub repos. See the Terrakube docs for more details on how to import scripts.

flow:
  - type: "terraformPlan"
    step: 100
    importComands:
      repository: "https://github.com/AzBuilder/terrakube-extensions"
      folder: "templates/terratag"
      branch: "import-template"
  - type: "terraformApply"
    step: 200

Adding UI templates support in workspaces

UI templates extend the idea of templates to the UI, so you can customize how each step looks in your custom flow inside Terrakube. You can use standard HTML to define how Terrakube presents a specific step.

For example, you can present information in a more user-friendly way when extracting data from an Infracost result. In the UI, if you have a custom template for that step, you will see the structured UI by default, but you can switch the view to the standard terminal output to see the full logs for that step.
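A minimal illustration of the idea, since steps can be rendered with standard HTML (the markup below is hypothetical and does not reflect Terrakube's actual UI template schema or data binding):

```html
<!-- Hypothetical fragment: render a summary card for a step's output -->
<div class="step-summary">
  <h3>Estimated monthly cost</h3>
  <p>Summary parsed from the Infracost output for this plan</p>
</div>
```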

Check some examples:

UI templates allow you to create better UI experiences and bring new ways to customize how you use Terrakube in your organization. In upcoming releases we will create some standard templates that you can reuse for terraform plan and apply, and we will also provide a Terrakube extension that makes it easy to create UI templates.

Improving organization selection on initial login

When you log in to Terrakube for the first time, if you have multiple organizations assigned, you will see a screen to select your organization.

For the full changelog for this version please check https://github.com/AzBuilder/terrakube/releases/tag/2.11.0

And if you have any idea or suggestion don't hesitate to let us know.