The previous post covered the practical exercise of deploying a Kubernetes deployment with Terraform. This week, instead of moving further into practical examples, this post goes over the concept of a virtual private cloud (VPC) from one of the prominent cloud providers, Amazon Web Services (AWS).
What is cloud? Cloud vs Bare Metal
Starting from the very first blog post, which explained Docker, Kubernetes, and Helm, the focus so far has been on how efficiently we can manage our applications. When we talk about the cloud, we are talking about where we deploy our applications.
The cloud is a huge data center of servers that provides multiple operating system environments by creating virtual machines. In contrast, an individually owned physical machine running a single operating system is called bare metal.
Both are servers, but unlike bare metal, a cloud server creates virtual machines that emulate and provide different environments. Because of this use of virtual machines, cloud applications are often more scalable than those on bare metal. This scalability is the biggest advantage and a unique characteristic of the cloud.
AWS VPC
Traditionally, we deploy our applications to a data center, a bare-metal physical server. AWS VPC provides a secured private network in which to create the resources our application needs, such as a Kubernetes cluster. When we want full control over our environment, connectivity, and security, we can use AWS VPC to customize and build our environment, infrastructure, and security.
This differs from bare metal: where bare metal provides complete freedom and control over the data center hardware, a VPC is virtual, and is highly scalable and customizable to our needs.
Connecting the dots
So far we have focused on how to manage our resources and applications with tools such as Docker, Kubernetes, and Helm. This post went over where these applications and Kubernetes clusters can live. I hope it provided some context and a more intuitive understanding of containerized applications.
Happy Learning!
Connecting from the previous post
In the last post, we walked through how to create a kind cluster with a configuration of your preference. To take a step further in familiarizing ourselves with kind and Terraform, this post demonstrates how Terraform manages Kubernetes resources.
Setup
This post assumes that readers have already installed kind on their machine, as well as a basic code editor such as Visual Studio Code.
For those who have not yet installed Visual Studio Code, installation instructions are available here.
To keep the files organized, create a folder called kind-terraform on your desktop.
Open the folder in Visual Studio Code.
Creating a kind cluster
Before diving into Terraform and Kubernetes, we must ensure that a kind cluster is up and running locally; otherwise, we cannot create Kubernetes services. So let's first create a kind cluster named kind-terraform-tutorial with the following configuration (referenced from the kind configuration documentation).
This configuration file creates a simple kind cluster with a control plane and a specified port mapping. Let's save this configuration in a file called config.yaml.
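The original configuration is not reproduced here, but a minimal config.yaml matching that description could look like the sketch below (the port numbers are illustrative assumptions):

```yaml
# config.yaml — minimal kind cluster: one control-plane node
# with an illustrative port mapping (host 8080 -> container 80)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 8080
        protocol: TCP
```

The cluster can then be created with kind create cluster --name=kind-terraform-tutorial --config=config.yaml.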
To access and modify the kind cluster, we need Kubernetes as a provider. Create a new file with touch provider.tf and add a provider block for Kubernetes as follows.
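A sketch of such a provider block, assuming the default kubeconfig location and the kind- prefix that kind adds to context names:

```hcl
provider "kubernetes" {
  # Path to the kubeconfig file that kind updates on cluster creation
  config_path    = "~/.kube/config"

  # kind prefixes cluster names with "kind-" in the kubeconfig,
  # so the cluster kind-terraform-tutorial gets this context name
  config_context = "kind-kind-terraform-tutorial"
}
```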
In this provider block, config_path and config_context are present. config_path points to the Kubernetes config file, which contains cluster authentication information. config_context specifies which cluster we are interacting with. These two pieces of information are required for Terraform to use the Kubernetes provider as an orchestration tool, so that we can manage resources inside the cluster.
To connect to and use our kind cluster, Kubernetes keeps a config file in the "~/.kube/" directory. In our case, we are looking for a context called kind-kind-terraform-tutorial. The config file maps each context name to its authentication information, and by setting config_context, we tell Terraform which entry to use.
In addition, to specify the required and minimum configuration of our Terraform project, we also need a terraform block in the provider.tf file, as follows.
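A minimal terraform block along these lines, assuming the official hashicorp/kubernetes provider (the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
  }
}
```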
Now the basic configuration is done. To initialize a working Terraform directory with the necessary configuration, we run the command terraform init from the terminal.
The first deployment
After the configuration, we will deploy a kubernetes_deployment. This deployment, created by Terraform, controls the number of pods, the containerized application's port, and other resource-specific details.
In the Terraform context, the deployment is a resource block. A resource block declares the type of resource, gives it a name, and specifies details about the resource, for example, the number of replicas in our case.
Create a file called main.tf and write the resource block below.
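The original block is not shown here, but a resource block fitting the description, three replicas with a container port, could be sketched as follows (the resource name, labels, and nginx image are illustrative assumptions):

```hcl
resource "kubernetes_deployment" "example" {
  metadata {
    name = "terraform-example"
    labels = {
      app = "example"
    }
  }

  spec {
    # Three pods, matching the replica count described in this post
    replicas = 3

    selector {
      match_labels = {
        app = "example"
      }
    }

    # The pod template: each replica runs one nginx container
    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        container {
          name  = "example"
          image = "nginx:1.25"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}
```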
Looking at the definition of this resource block, we may notice that some arguments and details are very similar to the Kubernetes .yaml files we create. In fact, we could write a .yaml file and load its configuration information instead.
The resource block above will create three pods with the spec specified by the template block.
After you have created this resource block, run the Terraform command terraform apply --auto-approve.
The command will read the declaration of the kubernetes_deployment resource block and create three Kubernetes pods in the configured Kubernetes cluster, kind-terraform-tutorial.
After a while, the terminal will show messages like the ones below:
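The exact output is not reproduced here, but a successful apply typically ends with a summary along these lines (the resource name and timings are illustrative):

```text
kubernetes_deployment.example: Creating...
kubernetes_deployment.example: Creation complete after 8s [id=default/terraform-example]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```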
First deployment done!
This post briefly went through a deployment of a Kubernetes resource onto a kind cluster with Terraform. Note that the cluster needs to be running before we can run terraform apply.
To destroy the resources created in the cluster, run terraform destroy --auto-approve. This command will clean up the resources from the kind cluster.
Since the explanations here stay at the surface level, the list of resources below is a wonderful place to concretely understand the concepts involved in this post.
Next, we would like to cover concepts related to AWS, starting with AWS VPC. Looking forward to seeing you there! Happy learning!
With enough conceptual introduction, let's solidify our understanding by building a simple example. This will be a two-part exercise. First comes the environment setup for tools such as Helm, Kubernetes, and Terraform. Once the environment setup is done, there comes the fun coding part. This post covers creating Kubernetes pods and clusters with kind, and monitoring them with Docker Desktop.
Create a designated folder
To keep this exercise as organized as possible, let's start by creating a folder called First-Terraform on your desktop. This folder will contain the files we create for the walkthroughs. Open up your terminal and go to the folder you just created.
Before we fully begin
This post assumes that you already have the following installed locally on your computer: Terraform, Helm, and kind.
If not, the resources below guide you through the installation of the tools we need.
kind is a tool for running Kubernetes clusters locally. However, Docker Desktop needs to be running in the background. According to the documentation, "kind runs Kubernetes clusters by using Docker containers as 'nodes'," so Docker first needs to be running in order to create a kind cluster.
Once Docker Desktop is running, the command to create a kind cluster is:
kind create cluster
This command creates a single kind cluster with the default name, 'kind'.
Create a tutorial cluster with the following command:
kind create cluster --name="tutorial-1"
The --name argument enables creation of a cluster with a specified name. To check that the node has been correctly created, we can run:
kubectl get nodes
This command shows a list of existing nodes running locally.
Creating kind cluster with .yaml config file
Specifying every detail about the cluster on the command line is a daunting and error-prone task. A YAML configuration file with declarative syntax allows us to create a cluster with the desired characteristics.
Create a .yaml file with touch config.yaml. YAML is a human-readable data serialization language used for writing configuration files (Red Hat).
Open config.yaml with nano config.yaml.
Write the following configuration to create a cluster named tutorial-1 with one control-plane node and two worker nodes.
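The original file is not reproduced here, but a kind configuration matching that description, one control-plane node and two workers, would look roughly like this:

```yaml
# config.yaml — cluster "tutorial-1" with one control-plane
# node and two worker nodes
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: tutorial-1
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

The cluster can then be created from it with kind create cluster --config=config.yaml.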
Other than designating roles to nodes, you can specify internet protocol versions, ports to be exposed and more.
The two worker nodes host pods that run containerized applications, while the control plane manages the worker nodes and monitors the cluster.
What is next?
Now that we have warmed up to creating kind clusters, we would like to go into Terraform: creating clusters, managing them, and observing how we can use Helm, Kubernetes, and Terraform altogether.
Kubernetes and Helm are useful for developing containerized applications. Kubernetes makes it easy to scale software up and down, while Helm is useful for package management and version control. However, as the size grows larger, deployment after updates becomes a hectic, inconsistent task, so Terraform is there to help build the infrastructure with ease.
What is Terraform?
Terraform is an infrastructure as code (IaC) tool that manages the lifecycle of application infrastructure with a declarative language and human-readable configuration files.
Declarative vs Imperative / Procedural language
In computer science, programming languages can be categorized by paradigms, that is, the style of and approach to coding in a particular language. Declarative languages focus on what programmers want, the logical task to be accomplished. In contrast, imperative languages focus on the logical steps involved in accomplishing a task, so the execution of commands needs to be specified in a particular order. Such a sequence of steps can be defined as a reusable block of code known as a function.
Terraform specifies the desired state of the infrastructure without needing to specify how to reach or maintain that state. Hence, Terraform is a declarative language.
How can Terraform, Kubernetes and Helm be connected?
Terraform is a declarative IaC tool that provides consistent management of the infrastructure lifecycle, while Kubernetes and Helm provide services to build and maintain that infrastructure. Terraform interacts with these services through providers, one of the first and most essential things to declare in a Terraform configuration file. By specifying a provider in Terraform code, we communicate with the application programming interface of the particular service provider. In this sense, Kubernetes and Helm provide services to Terraform.
What is next?
Now that we have more clarity on the components and software that make consistent, easy maintenance of the application lifecycle possible, it's time to suit up and code some introductory examples using the services we have learned so far!
HPCC Systems has started focusing on containerizing its platform and applications, moving away from the bare-metal platform. While the bare-metal platform requires assigning process execution to a particular physical machine, a containerized platform and applications simplify system management and provide flexibility on demand. HPCC Systems has started the shift from a bare-metal system to a containerized system with tools such as Docker, Helm charts, and Kubernetes.
What are Docker, Helm chart, and Kubernetes?
Docker
Docker is a software platform that bundles applications into units called containers. By containerizing source code along with everything the application needs, Docker speeds up development, testing, and deployment. Containers can be scaled easily without hectic manual manipulation.
Kubernetes
Kubernetes is a container-orchestration tool. As we create more and more containerized applications, they become harder to scale and maintain. Kubernetes allows us to group containers into pods and develop and deploy them in a consistent manner. Pods then run on a virtual or physical worker machine, called a node, which has all the services necessary to run pods. A Kubernetes cluster is a collection of nodes bundled together to function as a system. This hierarchical grouping eases the difficulties of developing, testing, and maintaining the system. However, version control can be a challenge for a cluster; Helm helps us manage Kubernetes applications better.
Helm
Helm is described as the package manager for Kubernetes applications. Helm eases the installation, upgrade, and uninstallation of Kubernetes applications: instead of manually managing the packages needed for a particular Kubernetes deployment, a Helm chart facilitates consistent and faster package management for deployments to a Kubernetes cluster.
What’s next?
As the size of a cloud application's infrastructure grows, manual deployment becomes a hectic and error-prone task, and Kubernetes does not help in this regard. However, there is an infrastructure management/orchestration tool we can use called Terraform, which we will talk about next!