Kubernetes (K8s) Overview


Kubernetes is an open-source container orchestration tool that streamlines the deployment, scaling, descaling, and load balancing of containers. It was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Written in Go, Kubernetes has a large community and runs across public cloud, hybrid, and on-premises environments.

With Kubernetes, users can manage and deploy groups of containers easily by treating them as a single logical unit. The platform's cluster-based architecture provides automated healing by restarting failed containers and rescheduling them when their hosts become unavailable, ensuring high application availability. Kubernetes works seamlessly with Docker as well as other OCI-compatible container runtimes.

Features of Kubernetes:

  1. Automated Scheduling – Kubernetes provides an advanced scheduler that launches containers on cluster nodes while optimizing resource utilization.

  2. Self-Healing Capabilities – It reschedules, replaces, and restarts containers that die or become unresponsive.

  3. Automated Rollouts and Rollbacks – It rolls out changes to the desired state of a containerized application and rolls them back if something goes wrong.

  4. Horizontal Scaling and Load Balancing – Kubernetes can scale an application up and down as requirements change.
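The horizontal-scaling logic is not magic: the Horizontal Pod Autoscaler, for example, computes its target replica count as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal Python sketch of that formula (an illustration only, not the actual controller code, and simplified to floor at one replica):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Replica count from the Horizontal Pod Autoscaler's scaling formula:
    desired = ceil(current * currentMetric / targetMetric), floored at 1."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# 4 replicas averaging 80% CPU against a 50% target -> scale out to 7.
print(desired_replicas(4, 80, 50))
# Load drops to 20% against the same target -> scale in to 2.
print(desired_replicas(4, 20, 50))
```

In the real autoscaler the floor and ceiling come from the configured minReplicas and maxReplicas rather than a hard-coded 1.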

Docker Swarm is another container orchestration tool, and both aim to solve the same problem of managing and deploying containerized applications across a cluster of machines. However, there are some differences in their approach and features.

Kubernetes supports automatic container scaling, while Docker Swarm does not. With Kubernetes, load-balancing settings can be configured manually, whereas Docker Swarm manages load balancing automatically. Docker Swarm can share a storage volume with any other container, while Kubernetes shares storage volumes only between containers inside the same pod.

Architecture of Kubernetes

Kubernetes follows a client-server architecture: a master runs on one machine and manages Docker containers across worker nodes running on separate Linux machines. A master and the worker nodes it controls constitute a “Kubernetes cluster”. A developer deploys an application in containers with the assistance of the Kubernetes master.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

In an application's infrastructure, the worker node(s) act as hosts for the essential Pods. Meanwhile, the control plane oversees and manages the worker nodes and Pods in the cluster. Typically, in production environments, the control plane operates across various computers, and the cluster consists of multiple nodes to ensure fault-tolerance and high availability.

Kubernetes Master Node / Control Plane Components

The Kubernetes master plays a vital role in managing the entire cluster, coordinating activities and communicating with the worker nodes to ensure the smooth running of both Kubernetes and your application. It serves as the entry point for all administrative tasks. When installing Kubernetes, four primary components of the Kubernetes Master are installed. These components include:

  1. kube-apiserver - This acts as the API server, serving as the entry point for all REST commands that control the cluster. All administrative tasks are performed through the API server within the master node. It is responsible for configuring and validating API objects such as pods, services, replication controllers, and deployments. The API server exposes an API for every operation, which can be accessed using the kubectl tool, a command-line interface for running commands against Kubernetes clusters.

  2. kube-scheduler- This service in the master is responsible for distributing the workload across available nodes, based on each node's utilization. It tracks the available resources on each node and assigns workloads to nodes with available resources. The scheduler is responsible for scheduling pods across available nodes based on the constraints specified in the configuration file.

  3. kube-controller-manager - This daemon runs the cluster's controllers in a continuous loop, collecting information and reporting it to the API server. It regulates the Kubernetes cluster by performing lifecycle functions such as namespace creation and garbage collection of lifecycle events, terminated pods, cascading deletions, and unreachable nodes. Each controller watches the desired state of the cluster and takes corrective steps to drive the current state toward it. Key controllers include the replication controller, endpoints controller, namespace controller, and service account controller.

  4. etcd - A lightweight, distributed key-value store, etcd serves as the central database for the current cluster state and configuration details such as subnets and ConfigMaps. Written in Go, it provides a reliable and consistent way to store and retrieve data within the cluster.
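To make the scheduler's job concrete, here is a toy version of its two-phase cycle: filter out nodes that cannot fit the pod's resource request, then score the survivors and pick the best. The node names and resource fields are invented for the illustration; the real scheduler applies many more filters and scoring plugins.

```python
def schedule(pod_request, nodes):
    """Toy filter-and-score scheduling pass (illustration only)."""
    # Filter: keep nodes with enough free CPU and memory for the request.
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod_request["cpu"] and n["free_mem"] >= pod_request["mem"]
    ]
    if not feasible:
        return None  # no fit: the pod would stay Pending
    # Score: prefer the node with the most free CPU remaining.
    return max(feasible, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-a", "free_cpu": 2.0, "free_mem": 4096},
    {"name": "node-b", "free_cpu": 3.5, "free_mem": 1024},
]
# node-b has more free CPU but not enough memory, so node-a wins.
print(schedule({"cpu": 1.0, "mem": 2048}, nodes))
```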
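The controller pattern described above is essentially a reconciliation loop: observe the current state, compare it with the desired state, and act on the difference. One pass of a toy replication controller, with invented action tuples standing in for real API calls, might look like:

```python
def reconcile(desired_replicas, running_pods):
    """One pass of a toy replication controller: return the corrective
    actions needed to move the observed state toward the desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return [("create", i) for i in range(diff)]               # scale out
    if diff < 0:
        return [("delete", pod) for pod in running_pods[diff:]]   # scale in
    return []  # states already match: nothing to do

print(reconcile(3, ["pod-1"]))            # two create actions
print(reconcile(1, ["pod-1", "pod-2"]))   # one delete action
```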

Kubernetes Worker Node Components

The Kubernetes worker node plays a critical role in managing container networking, communication with the master node, and resource allocation. Here are the key components of the Kubernetes worker node:

  1. Kubelet - This is the primary agent that runs on each worker node and communicates with the master node. It receives pod specifications from the API server and ensures that the containers described by those specifications are running and healthy. If a pod fails, the kubelet attempts to restart it on the same node; if the worker node itself fails, the Kubernetes master detects the failure and schedules the node's pods onto a healthy node.

  2. Kube-Proxy - This core networking component maintains the network configuration and is responsible for managing the distributed network across all nodes, pods, and containers. It exposes the services to the external world and acts as a network proxy and load balancer for a service on a single worker node. Kube-Proxy listens to the API server for each service endpoint creation and deletion, setting up the network route for each service endpoint. It also manages network routing for TCP and UDP packets.
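As a rough sketch of the load-balancing half of that job, the snippet below spreads connections for a service across its backend pod endpoints in round-robin order. The endpoint addresses are invented, and real kube-proxy typically programs iptables or IPVS rules rather than proxying in user space, but the effect is the same.

```python
import itertools

class RoundRobinProxy:
    """Toy service load balancer: hand out backend pod endpoints in turn."""
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # Each new connection goes to the next pod endpoint.
        return next(self._cycle)

proxy = RoundRobinProxy(["10.0.1.5:8080", "10.0.2.7:8080"])
print([proxy.route() for _ in range(4)])  # alternates between the two endpoints
```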

By working together, these components ensure that the containers are running smoothly and the network is configured correctly, ensuring the reliable and efficient operation of your Kubernetes cluster.

Keywords

Pods – In Kubernetes, a pod is the smallest deployable unit and represents a single instance of a running process in a cluster. A pod can contain one or more tightly coupled containers that share the same host and network namespace and can communicate with each other over localhost. Pods act as an abstraction layer that groups and manages the containers, allowing you to treat them as a single entity. By using pods, you can easily deploy multiple dependent containers together and interact with and manage them collectively.
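A minimal Pod manifest makes this concrete. The two containers below (names and image tags are illustrative) run in one pod, share its network namespace, and can reach each other on localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar   # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-sidecar    # can reach the web container at localhost:80
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
```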

Adios 👋