Kubernetes Tutorial for Beginners: Basics To Advanced

Kubernetes is an open-source tool for automating the deployment, scaling, and management of containerized applications.

It is designed to work with containers of any size and scale, and provides a way to group containers into logical units called “pods”.

Kubernetes also includes tools for tracking, managing, and monitoring all of the containers in a cluster.

In this Kubernetes Tutorial for Beginners, I will discuss the basics of Kubernetes and help you set up your own Kubernetes cluster.

So, let’s get started.

Kubernetes Tutorial for Beginners

What is Kubernetes?

Kubernetes is a system that is specifically designed to manage and orchestrate containerized applications across a cluster of nodes.

It was built to address the disconnect between the way modern, clustered infrastructure is designed and the needs of the applications running on it.

With most cluster technologies, the platform presents the infrastructure uniformly: the unit of work exposed to the user is a service that can be executed by any member node in the cluster, and the user should not have to worry about where a given workload is scheduled.

However, many applications that are designed to scale are composed of smaller cooperating components, some of which must run on the same host and depend on specific networking conditions to communicate properly. Kubernetes resolves this tension by grouping tightly coupled containers into pods, which are always scheduled together, while still spreading work across the cluster.

What Are The Features of Kubernetes?

Kubernetes offers a wide range of features that include:

  • Automated rollouts, scaling, and rollbacks – Kubernetes automatically creates and distributes a specified number of replicas, reschedules them in case of a node failure, and allows instant scaling of replicas on demand or in response to changing conditions such as CPU usage.
  • Service discovery, load balancing, and network ingress – Kubernetes offers a comprehensive networking solution that covers internal service discovery and public exposure of containers.
  • Support for both stateless and stateful applications – Initially focused on stateless containers, Kubernetes now also provides built-in objects to handle stateful applications.
  • Storage management – Persistent storage is abstracted with a consistent interface that works across different providers, whether in the cloud, on a network share, or on a local filesystem.
  • Declarative state management – Kubernetes uses YAML-based manifests to define the desired state of the cluster and automatically transitions the cluster to that state, eliminating the need for manual scripting of changes (a minimal example manifest follows this list).
  • Compatibility across environments – Kubernetes can be used in various environments, from the cloud to edge devices or developer workstations, and various distributions are available to match different use cases.
  • High degree of extensibility – Kubernetes offers a wide range of functionality and can be further extended with custom object types, controllers, and operators to support specific workloads.
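
To make the declarative model concrete, here is a minimal sketch of a Deployment manifest; the file name, labels, and image are arbitrary choices for illustration rather than values Kubernetes requires.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # illustrative image; swap in your own application
          ports:
            - containerPort: 80

$ kubectl apply -f deployment.yaml

Applying the manifest tells Kubernetes the state you want; the control plane then creates, reschedules, or scales pods until the cluster matches it.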

Architecture of Kubernetes & How Does It Work?

Here is the detailed architecture of Kubernetes.

A Kubernetes cluster is made up of two main categories of nodes:

  • Master node
  • Worker node

Architecture of Kubernetes

Master Node

The management of a cluster is handled by the master node, which serves as the primary point of contact for all administrative tasks within the cluster.

Depending on the configuration, a cluster can have one or more master nodes to ensure high availability.

The master node includes several components: the API Server, Controller-manager, Scheduler, and etcd.

  • The API Server is the primary point of contact for all REST commands used to manage and manipulate the cluster.
  • The Controller-manager is a daemon that regulates the cluster and runs its various non-terminating control loops.
  • The Scheduler assigns newly created pods to worker nodes, taking resource utilization across those nodes into account.
  • etcd is a distributed key-value store used for shared configuration and service discovery.
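
On clusters installed with kubeadm, these control-plane components run as pods in the kube-system namespace, so you can inspect them with kubectl; lightweight distributions such as K3s compile them into a single binary, so the listing looks different there.

$ kubectl get pods -n kube-system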

Worker Node

Worker nodes (originally called minions in early Kubernetes releases) are responsible for running scheduled containers, managing networking among them, and communicating with the master node, which allocates resources to them.

The worker node comprises several components, including:

  • Docker container
  • Kubelet
  • Kube-proxy
  • Pods

Docker container: Docker (the container runtime) must be installed and running on each worker node. It pulls the required images and runs the containers that make up the pods scheduled to that node.

Kubelet: Kubelet is responsible for obtaining the configuration of pods from the API server, and ensuring that the specified containers are running and ready.

Kube-proxy: Kube-proxy acts as a network proxy and load balancer for a service on a single worker node.

Pods: A pod consists of one or more containers that are designed to run together on a node. They can be thought of as a logical host for one or more containers.
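
As a rough sketch of the pod concept, the manifest below runs two containers inside a single pod; the names and images are arbitrary. Both containers land on the same node and share the pod's network namespace, so they can reach each other over localhost.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: app
      image: nginx:1.25                      # main application container (illustrative)
    - name: sidecar
      image: busybox:1.36                    # helper container running alongside it
      command: ["sh", "-c", "sleep 3600"]

You would create it with kubectl apply -f pod.yaml and check its status with kubectl get pods.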

How To Install and Set Up Kubernetes?

There are various ways to begin working with Kubernetes, due to the different distributions available.

Setting up a cluster with the official distribution can be complex, so many users opt for a packaged solution such as Minikube, MicroK8s, K3s, or Kind.

For this tutorial, we will use K3s. It is an extremely lightweight distribution of Kubernetes that combines all necessary components into a single binary.

Unlike other options, there are no dependencies to install or heavy virtual machines to run.

Additionally, it comes bundled with the kubectl CLI, which is used to run Kubernetes commands.

Run the following command to install K3s on your machine:

$ curl -sfL https://get.k3s.io | sh -
...
[INFO]  systemd: Starting k3s

K3s automatically downloads the most recent version of Kubernetes and registers it as a system service.
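
On systemd-based distributions, you can confirm the service is up before continuing:

$ sudo systemctl status k3s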

Once installed, you can use the following commands to copy the automatically generated kubectl config file to your .kube directory:

$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER:$USER ~/.kube/config

To make kubectl use this config file, set the KUBECONFIG environment variable:

$ export KUBECONFIG=~/.kube/config

You can add this line to your profile file (such as ~/.profile or ~/.bashrc) to have the change automatically applied every time you log in.
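
For example, assuming a Bash shell:

$ echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc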

Then, you can proceed by running this command:

$ kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
ubuntu22   Ready    control-plane,master   102s   v1.24.4+k3s1

Upon executing the command, you should see a single node appear with the name of your machine’s hostname.

The node status should show as “Ready”, indicating that your Kubernetes cluster is now fully set up and ready to use!
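
To confirm that the cluster can actually schedule workloads, you can run a throwaway test deployment (the name and image here are arbitrary):

$ kubectl create deployment hello --image=nginx
$ kubectl get pods
$ kubectl delete deployment hello

The pod should reach the Running state within a few seconds; the last command removes the test deployment again.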

What Are The Pros & Cons Of Kubernetes?

Pros:

  • Adheres to the principles of immutable infrastructure
  • Easy organization of services with pods
  • Offers a variety of storage options, including on-premises storage, SANs, and public clouds
  • Developed and backed by Google
  • Runs on on-premises bare metal, OpenStack, and public clouds such as Google Cloud, Azure, and AWS
  • Has the largest community among container orchestration tools
  • Helps you avoid vendor lock-in, since it does not rely on vendor-specific APIs or services except where Kubernetes itself provides an abstraction

Cons:

  • Security is not very effective out of the box
  • Kubernetes can be complicated to learn and operate
  • The Kubernetes dashboard is not as useful as it should be

Best Resources For Kubernetes Tutorial for Beginners

Here are some of the best resources to learn more about Kubernetes.

  1. Learn Kubernetes Basics: the official Kubernetes tutorial
  2. Kubernetes Concepts and Architecture: Explains the Kubernetes architecture and important concepts of Kubernetes
  3. Kubernetes Crash Course for Absolute Beginners
  4. YouTube Kubernetes Tutorials
  5. Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers
  6. Kubernetes The Hard Way
  7. Advanced Kubernetes Objects You Need to Know
  8. Build a Kubernetes Operator in six steps
  9. Dynamic Kubernetes Cluster Scaling at Airbnb
  10. 10 Kubernetes Security Risks & Best Practices

Final Thoughts

The use of Kubernetes as a container orchestration tool is rapidly gaining popularity. Many SREs and DevOps engineers already have experience with it.

To further learn about Kubernetes, the best approach is to begin experimenting with it. Take existing projects and try containerizing them and deploying them on a Kubernetes cluster.

If you encounter difficulties or need additional information about a specific Kubernetes object, refer to the Kubernetes documentation.

This concludes the beginner’s guide on Kubernetes. If you have any remaining questions about Kubernetes, please feel free to leave a comment.

Rachel Kylian

Rachel Kylian is an online course expert at AllSeenAlliance. She completed her Master's in language learning and has taken various certification courses in data analytics. Rachel brings more than 8 years of experience in content writing. She has previously worked for various businesses and has completed courses from Datacamp and providers like Thinkific. In her free time, she likes to read about history and languages.
