Weekly blog part 4 – HPCC Systems Intern

Connecting from the previous post

In the last post, we walked through how to create a kind cluster with a configuration of your choosing. To take the next step in getting familiar with kind and Terraform, this post demonstrates how Terraform can manage Kubernetes resources inside that cluster.

Setup

This post assumes that readers have already installed kind on their machine, along with a basic code editor such as Visual Studio Code.

To install Terraform on your device, please refer to the official Terraform installation page.

For those who have not yet installed Visual Studio Code, installation instructions are available here.

To organize the files easily, create a folder called kind-terraform on your desktop.

Open the folder in Visual Studio Code.

Creating a kind cluster

Before diving into Terraform and Kubernetes, we must ensure that a kind cluster is up and running locally; otherwise, we cannot create any Kubernetes resources. So let’s first create a kind cluster named kind-terraform-tutorial with the following configuration (based on the kind configuration documentation):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes: 
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80

This configuration describes a simple kind cluster with a single control-plane node and the specified port mapping. Let’s save it in a file called config.yaml and create the cluster:

kind create cluster --name="kind-terraform-tutorial" --config config.yaml
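
Once the command finishes, kind registers a kubectl context named kind-<cluster name>, so a quick way to confirm that the new cluster is reachable is:

kubectl cluster-info --context kind-kind-terraform-tutorial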

Start coding in Terraform

To access and modify the kind cluster, we need the Kubernetes provider. Create a new file with touch provider.tf and add a provider block for Kubernetes as follows.

provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "kind-kind-terraform-tutorial"
}

In this provider block, two arguments are set: config_path and config_context. config_path points to the kubeconfig file, which contains the cluster authentication information. config_context specifies which cluster we are interacting with, since a kubeconfig can hold entries for several clusters. Both pieces of information are needed for Terraform to use the Kubernetes provider and manage resources inside the cluster.

In order to connect to and use our kind cluster, Kubernetes keeps a config file in the "~/.kube/" directory. In our case, we are looking for a context called kind-kind-terraform-tutorial. The config file maps each context name to its authentication information, and by setting config_context we tell Terraform which entry to use.
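
If you are not sure what the context is called on your machine, kubectl can list every context stored in the kubeconfig file:

kubectl config get-contexts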

In addition, to declare the providers required by our Terraform project, we also need a terraform block in the provider.tf file, as follows.

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}
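
Optionally, you can also pin the provider to a version range so that a later terraform init does not silently pick up a newer major release. The constraint below is only an example; any recent 2.x release should work for this tutorial.

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0" # example constraint only, not required for this tutorial
    }
  }
}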

Now the basic configuration is done. To initialize the working directory and download the provider plugins, run terraform init from the terminal.

The first deployment

After the configuration, we will deploy a kubernetes_deployment. This deployment, created by Terraform, controls the number of pods, the containerized application's port, and the resource limits and requests.

In Terraform terms, the deployment is a resource block. A resource block declares the type of resource, gives it a name, and specifies the details and features of that resource, for example, the number of replicas in our case.

Create a file called main.tf and write the resource block below.

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "tutorial-deployment"
    labels = {
      App = "tutorial-deployment"
    }
  }
  spec {
    replicas = 3
    selector {
      match_labels = {
        App = "tutorial-deployment"
      }
    }
    template {
      metadata {
        labels = {
          App = "tutorial-deployment"
        }
      }
      spec {
        container {
          image = "nginx:1.7.8"
          name  = "tutorial-deployment"
          port {
            container_port = 80
          }
          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"

            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}

Looking at the definition of this resource block, we may notice that many of the arguments mirror the Kubernetes .yaml manifests we usually write. In fact, we could also write a .yaml file and load its configuration into Terraform.
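
As a rough sketch of that alternative (assuming a hypothetical manifest file named deployment.yaml sits next to the Terraform files, and that you are on a 2.x release of the hashicorp/kubernetes provider), the kubernetes_manifest resource can load a single Kubernetes object straight from YAML. Note that this resource needs the cluster to be reachable already at plan time.

resource "kubernetes_manifest" "nginx_from_yaml" {
  # deployment.yaml is a hypothetical file containing one Kubernetes object
  manifest = yamldecode(file("${path.module}/deployment.yaml"))
}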

The kubernetes_deployment resource block above will create three pods whose spec is given by the template block.
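
If you would like to see what Terraform intends to do before it touches the cluster, terraform plan prints the planned actions without creating anything:

terraform plan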

After you have saved this resource block, finally run the Terraform command terraform apply --auto-approve.

The command will read the declaration of the kubernetes_deployment resource block and create the deployment, which in turn starts 3 pods in the configured Kubernetes cluster, kind-terraform-tutorial.

After a while the terminal will print an apply summary along the lines of Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
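
You can also double-check from the Kubernetes side that the deployment and its pods were actually created:

kubectl get deployments
kubectl get pods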

First deployment done!

This post briefly walked through deploying a Kubernetes resource onto a kind cluster with Terraform. We must note that the cluster needs to be running before we can run terraform apply.

To destroy the resources created in the cluster, run terraform destroy --auto-approve. This command will clean up the resources from the kind cluster.
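
Once the destroy finishes, the deployment should be gone; listing deployments again should report that no resources are found in the default namespace:

kubectl get deployments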

Since this post only scratches the surface, the list of resources below is a wonderful place to get a more concrete understanding of the concepts involved.

Next, we would like to cover concepts related to AWS, starting with AWS VPC. Looking forward to seeing you there! Happy learning!

List of resources

