Kubernetes is an open-source platform that is used for deploying, scaling, and managing containerized applications. It is a powerful tool that enables developers to create and manage complex distributed systems with ease.
Here’s a quick overview from Fireship.io.
Concepts
Cluster
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
This document outlines the various components you need to have for a complete and working Kubernetes cluster.
Control Plane Components
The control plane’s components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new Pod when a Deployment’s replicas field is unsatisfied).
kube-apiserver
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.
etcd
Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.
kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
kube-proxy
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface); older runtimes like rkt and frakti have since been deprecated.
Many users still build their images with Docker; since Kubernetes 1.24 removed the dockershim, those images run on a CRI runtime such as containerd rather than on Docker itself.
Getting Started
You can set up a cluster on your own hardware, or you can use a cloud-based service like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Microsoft Azure Kubernetes Service (AKS).
We’ll use minikube, which provides a local Kubernetes cluster we can start with.
Make sure to check out the Kubernetes guide or minikube documentation if you need more details.
Requirements
- 2 CPUs or more
- 2GB of free memory
- 20GB of free disk space
- Internet connection
- Container or virtual machine manager, such as: Docker, QEMU, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation
Installation
We’ll use the latest minikube stable release for Debian in this example.
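The original install snippet is missing here; per the official minikube instructions for Debian, it likely looked something like this:

```shell
# Download the latest stable minikube .deb package and install it
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
```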
Once you have installed minikube, start your cluster:
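The command itself was lost in formatting; it is simply:

```shell
# Boot a local single-node cluster using the default driver
minikube start
```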
If you already have kubectl installed, you can now use it to access your shiny new cluster:
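The missing example is most likely the usual first command from the minikube docs, listing pods across all namespaces:

```shell
# List all pods in every namespace
kubectl get po -A
```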
Alternatively, minikube can download the appropriate version of kubectl for you, which you can invoke like this:
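The lost snippet likely used minikube’s bundled kubectl, which forwards everything after `--` to kubectl:

```shell
# Run kubectl through minikube (downloads kubectl on first use)
minikube kubectl -- get po -A
```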
You can also make your life easier by adding the following to your shell config:
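The shell-config addition referred to here is, per the minikube docs, an alias so that plain `kubectl` uses minikube’s bundled binary:

```shell
# Make `kubectl` invoke minikube's bundled kubectl
alias kubectl="minikube kubectl --"
```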
To set up kubectl to communicate with your Kubernetes cluster, you need to provide the cluster credentials in a configuration file.
This file can be generated using the cloud provider’s console or CLI tools.
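The commands were lost here; based on the surrounding description they likely resembled the following `kubectl config` calls (the `my-cluster` name comes from the text; `my-user`, `my-context`, the server endpoint, and the certificate paths are placeholder assumptions):

```shell
# Register the cluster endpoint and its CA certificate
kubectl config set-cluster my-cluster --server=https://<cluster-endpoint> --certificate-authority=ca.crt

# Add client credentials for a user
kubectl config set-credentials my-user --client-certificate=client.crt --client-key=client.key

# Tie the cluster and user together in a context, then switch to it
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
```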
The above commands set up a new cluster named my-cluster, add user credentials, create a new context, and switch to using the new context.
Create a Deployment
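The original command is missing; creating a deployment imperatively might look like this (the `myapp` name and image are illustrative assumptions):

```shell
# Create a Deployment running one replica of the myapp image
kubectl create deployment myapp --image=myapp
```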
Deploying a Single Container with a Manifest
To create a deployment using a manifest file, you will need to define one in YAML.
This manifest creates a pod with a single container that runs the myapp image on port 80.
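The manifest itself is missing; a minimal sketch matching that description might be (the `myapp` name and image come from the text; everything else is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp
    ports:
    - containerPort: 80
```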
Deploying a Multi-Container Application
To deploy a multi-container application, create a manifest that defines a pod with multiple containers.
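The example is missing here; a minimal sketch of a two-container pod (the names and the busybox sidecar are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: web
    image: myapp
    ports:
    - containerPort: 80
  - name: log-agent
    image: busybox
    command: ["sh", "-c", "tail -f /dev/null"]  # placeholder sidecar
```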
Scaling an Application
To scale an application, specify the desired number of replicas in the Deployment’s replicas field in your YAML file.
Here’s an example deployment file for a Node.js application with 3 replicas:
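The example file is missing; a Deployment along these lines would fit the description (the `node-app` name, image, and port 3000 are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3            # run three identical pods
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: node-app
        ports:
        - containerPort: 3000
```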
Apply
Apply the deployment by running the following command:
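The command was lost in formatting; it would be `kubectl apply` against the manifest (the `deployment.yml` filename is an assumption):

```shell
# Submit the manifest; Kubernetes reconciles the cluster to match it
kubectl apply -f deployment.yml
```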
This command creates the Deployment and its replicas in your cluster.
When you expose the Deployment through a Service, Kubernetes provides a stable IP address and DNS name for accessing your application, and load balances traffic between its replicas.
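That stable address and load balancing come from a Service object; a hypothetical sketch (the `node-app` name, label, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app        # route to pods carrying this label
  ports:
  - port: 80             # Service port
    targetPort: 3000     # container port
```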
Conclusion
Kubernetes is a powerful platform for deploying, scaling, and managing containerized applications.
With its many components and features, it can seem daunting at first, but once you get started, you’ll find that it’s a valuable tool for building and managing complex distributed systems.
By following the examples above and exploring the many resources available online, you can quickly become proficient in using Kubernetes to deploy and manage your applications.
Note
Be aware that products change over time. I do my best to keep up with the latest changes and releases, but the documentation above may become outdated or inaccurate.
Always refer to the latest official documentation and resources.