Containers & Kubernetes

In my previous blog, I explained my journey into the container world and touched on some of the technologies you will encounter when you start with containers. If you haven’t read that yet, I recommend reading it first and then jumping in here. This blog is intended to give you more insight into containers and Kubernetes. The first question that might come to your mind is: how do you create a container? So, let’s first understand how we do this.

How to create a container?

The first step is to write a Dockerfile, which is essentially a script containing an ordered set of instructions used to build the image. A Dockerfile has a specific format and a defined set of instructions you can use. Once this step is done, the next step is to build your image.
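As an illustration, a minimal Dockerfile for a hypothetical Python web app might look like this (the base image, file names, and start command are all assumptions for the example):

```dockerfile
# Base image the build starts from
FROM python:3.12-slim

# Working directory inside the image
WORKDIR /app

# Copy the dependency list first so this layer is cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Command to run when a container starts from this image
CMD ["python", "app.py"]
```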

To build an image you need a tool like Docker, and you run the “docker build” command. This image is then used to spin up containers by executing the “docker run” command. A simple analogy for the relationship between a Docker image and a container is to consider the image as a class and the container as an object (instance) of that class. Hopefully this gives you a clear idea of the process to follow when creating a container.
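The build-and-run steps above can be sketched as shell commands (the image name “my-app” and port 5000 are made up for the example):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it so it can be referenced later
docker build -t my-app:1.0 .

# Spin up a container from that image, publishing container
# port 5000 on the same host port, detached in the background
docker run -d -p 5000:5000 --name my-app my-app:1.0
```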

Now, let’s move on to Kubernetes, a very popular orchestration tool for containers.

What is a Kubernetes Cluster and Kubernetes Node?

A Kubernetes Node has everything needed to run your containers, including the container runtime and other critical services. A group of these nodes is called a cluster.

There are two tools you will need if you want to run Kubernetes on your local machine:

  • Minikube: Minikube runs a single-node cluster on your local machine. It works on Linux, macOS, and Windows.
  • kubectl: A CLI tool for managing operations on your Kubernetes cluster.
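With both tools installed, starting a local cluster and checking that kubectl can reach it looks roughly like this:

```shell
# Start a local single-node Kubernetes cluster
minikube start

# Verify that kubectl can talk to the cluster
kubectl cluster-info
kubectl get nodes
```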

Kubernetes API Objects

To understand how to use Kubernetes, it is crucial to understand the different Kubernetes API objects we have at our disposal and how to create them. Let’s first get into the basic API objects:


Pods

A Pod is the smallest deployable unit on your cluster. A pod can consist of one or more containers running together. An important property of pods to keep in mind is that they are ephemeral in nature, i.e. the pod object gets deleted when it completes its execution or fails due to some problem.
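A minimal Pod manifest might look like the following (the name and image are placeholders for the example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web            # a pod can list more than one container here
    image: nginx:1.25
    ports:
    - containerPort: 80
```

You would create it with “kubectl apply -f pod.yaml”.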


ReplicaSets

A ReplicaSet is an abstraction layer over pods and ensures that the desired number of pod replicas (which we define in our YAML files) is running at all times. If a pod crashes, a replacement is created to meet the desired state.
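A sketch of a ReplicaSet manifest with the desired count set to 3 (the labels and image are example values):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3            # desired number of pods
  selector:
    matchLabels:
      app: my-app        # manage pods carrying this label
  template:              # pod template used to create replicas
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
```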


Deployments

A Deployment is an abstraction over the ReplicaSets we defined above and allows declarative updates to an application. Suppose we want to update our application to a newer version; we would then need to stop our containers and recreate new ones, which can become tedious if we have many pods to handle. With Deployments we can do this easily. We can even ensure that a certain number of pods is always running while we update the application, so that we have minimal or zero downtime, by following the strategy of creating a new pod with the new version of the application and then deleting the old pod. The update process is also fully recorded and versioned, with options to pause, continue, and roll back to previous versions.
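A Deployment manifest can encode that rolling-update strategy declaratively; here is a sketch (all names and the image tag are examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all desired pods available during the update
      maxSurge: 1         # allow one extra pod while rolling out
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: my-app:1.0  # bump this tag to roll out a new version
```

Changing the image tag and re-applying the manifest triggers a rollout, and “kubectl rollout undo deployment/my-deployment” rolls back to the previous version.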


Services

A Service is an abstract way to expose an application running on a set of Pods as a network service.

Services select Pods based on their labels. When a network request is made to the service, it selects all Pods in the cluster matching the service’s selector, chooses one of them, and forwards the network request to it.

Suppose your backend application and frontend application are running in different pods and you want the frontend pod to talk to the backend. The IP addresses of pods do not remain the same over time, as pods are ephemeral in nature. So, how do we connect them? That is where services come into the picture. After creating a service named “backend-service” on port “5000”, for example, you will be able to reach your backend pods by sending requests to “http://backend-service:5000” from the frontend.
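The “backend-service” from the example above could be defined with a manifest like this (assuming, for the sake of the example, that the backend pods carry the label app: backend):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend        # matches the labels on the backend pods
  ports:
  - port: 5000          # port the service listens on
    targetPort: 5000    # port the backend container listens on
```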


Volumes

Pods are ephemeral in nature, so when a pod gets deleted, all the files inside it are destroyed too. This can be quite unfavourable in situations such as a database server running inside a pod, as all the data can get deleted if the pod crashes or is removed for some reason. Volumes help solve this problem: they allow us to persist data irrespective of the pod’s state. They also allow files to be shared among the containers running inside a pod.
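As a small illustration, the manifest below shares an emptyDir volume between two containers in the same pod (names and images are placeholders). Note that an emptyDir only lives as long as the pod itself; data that must survive pod deletion, as in the database case, needs a persistent volume type instead:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}         # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data   # sees the file written by the other container
```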

ConfigMaps and Secrets

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

If the data you want to store are confidential, use a Secret rather than a ConfigMap.

We are all well aware that while developing an application we use certain config variables such as API tokens, API secrets, port numbers, database passwords, etc., and for this we create a “.env” file where we write all the key-value pairs and access them in our code. But what if we want to share the same set of variables across different containers inside a pod, or even across different pods? That is where ConfigMaps and Secrets come in.
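For example, a ConfigMap holding a couple of variables, and a pod that consumes all of them as environment variables, could look like this (the names and values are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  PORT: "5000"
  LOG_LEVEL: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: my-app:1.0
    envFrom:
    - configMapRef:
        name: app-config   # exposes every key as an environment variable
```

A Secret is consumed the same way, using secretRef instead of configMapRef.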

Hope this blog has helped you learn more about containers. Also, during my training phase at Red Hat I created a sample application, containerized it, and then deployed it onto Kubernetes, which you can check out here. Feel free to drop any comments/suggestions.