Once you start working with Kubernetes, it’s natural to think about how you can run your traditional applications inside a cluster. You might even have to work with Kubernetes for a while before you start thinking about other ways to structure your applications. But why should you restructure your applications in the first place? It’s perfectly possible to run an application inside Kubernetes, just as you would run it on a Linux server. Most applications can be converted into a Docker image without having to change any code, and you can also create a pod inside your cluster based on that image. Now you’re running in Kubernetes!
However, just because something *can* run in a certain environment doesn’t mean it *should*. Kubernetes provides so many capabilities for running applications, like ConfigMaps, deployments, and autoscaling, that it would be a waste of resources not to utilise them. By structuring your application to fit Kubernetes, you’ll not only follow best practices, but you’ll also likely increase your efficiency and decrease your costs.
In this article, you’ll learn more about how to properly run an application inside Kubernetes and why you should consider restructuring your application for Kubernetes. You’ll also see an example of what a Kubernetes application can look like. So let’s get started!
As mentioned, nothing is stopping you from taking a regular application like a monolith, turning it into a Docker image, and running it inside a cluster. What makes Kubernetes applications special, though, isn’t about *forcing* you to change your application. It’s about giving you possibilities that you can take advantage of.
The simplest thing to take advantage of inside Kubernetes is the ability to orchestrate containers. Orchestration is a catchall term for many different things, but it’s primarily about making sure that your application keeps functioning properly. In some cases, this means terminating the application and spinning up a new instance. If your application isn’t ready to handle the SIGTERM signal Kubernetes sends when it wants to terminate a container, you can run into serious issues.
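To make this concrete, here’s a minimal sketch of graceful SIGTERM handling in Go, assuming a plain HTTP server (the 25-second shutdown budget is our own choice, picked to fit inside Kubernetes’ default 30-second termination grace period):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	// Serve in the background so the main goroutine can watch for signals.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Kubernetes sends SIGTERM before it kills the container.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Finish in-flight requests before exiting; after the grace period
	// (30 seconds by default), Kubernetes follows up with SIGKILL.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}
```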
Of course, Kubernetes won’t just kill containers out of nowhere, but there are many use cases in which you may want it to. For example, if you configure horizontal scaling of pods, where the number of pods increases or decreases based on resource usage, then Kubernetes needs to kill some containers when scaling down.
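For illustration, a HorizontalPodAutoscaler that scales a hypothetical Deployment named `web` between two and ten replicas based on CPU usage could look something like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU
```

When average CPU drops back below the target, Kubernetes scales the Deployment down and terminates the surplus containers, which is exactly when graceful SIGTERM handling pays off.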
There are many more reasons why you might want to build your application for Kubernetes, such as its logging infrastructure, networking policies, and volume mounts. Let’s take a look at what it means to build a Kubernetes application.
Once you decide to optimise your applications for running inside Kubernetes, the next obvious step is to figure out what changes you need to make. For this, you need a good understanding of what makes a Kubernetes application. You also need to know the twelve-factor app methodology. This set of rules wasn’t developed specifically for Kubernetes, but following it will get you ninety percent of the way to a fully Kubernetes-optimised application.
In simple terms, a Kubernetes application is one that has been optimised to run inside a Kubernetes cluster. This means it works properly with the standard Kubernetes resources, among them:

- ingress controllers, which route traffic from clients to the correct pods and can also handle load balancing
- persistent volumes, which provide the underlying storage for pods
- deployments, which keep your application running in its declared, desired state
It’s important to clarify that building a Kubernetes application does *not* mean building an application that directly interacts with Kubernetes resources. Rather than building an application that only works inside Kubernetes, you’re building an application that allows Kubernetes to interact with it in the most efficient way. For example, you shouldn’t build an application that requests a PersistentVolume from the Kubernetes API. Instead, build an application that uses the traditional filesystem, so Kubernetes can simply mount the volume for it, as in the sketch below. You’ll see the broader idea again in the example at the end of this article.
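As a hedged sketch (the image and claim names here are hypothetical), the pod spec takes care of the storage while the application just reads and writes an ordinary directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:1.0            # hypothetical application image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app  # the app sees a normal directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data        # hypothetical PersistentVolumeClaim
```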
The twelve-factor app methodology is seen by many engineers as the holy grail when it comes to developing applications that will run inside containers. If you follow its recommendations, you’ll have an application that runs as efficiently as possible inside a cloud environment. The factors are too comprehensive to cover fully in this article, so see the twelve-factor website (https://12factor.net) for detailed explanations, or our webinar on the topic.
The twelve factors are as follows:

1. Codebase: one codebase tracked in revision control, many deploys
2. Dependencies: explicitly declare and isolate dependencies
3. Config: store config in the environment
4. Backing services: treat backing services as attached resources
5. Build, release, run: strictly separate build and run stages
6. Processes: execute the app as one or more stateless processes
7. Port binding: export services via port binding
8. Concurrency: scale out via the process model
9. Disposability: maximise robustness with fast startup and graceful shutdown
10. Dev/prod parity: keep development, staging, and production as similar as possible
11. Logs: treat logs as event streams
12. Admin processes: run admin and management tasks as one-off processes
As you can see, following the twelve-factor methodology will give you an application that allows Kubernetes to interact with it most efficiently. One of the most relevant factors for a Kubernetes application is the third one listed above: “store config in the environment.”
In Kubernetes, the ConfigMap resource is a map of key-value configuration that you can attach to a container. To understand why this is relevant, think about an application that has its configuration stored inside the code. How do you change environments? You have to change the code and redeploy it. If, instead, you store your configuration in the environment, you can simply attach one or more ConfigMaps and restart the container. You can also deploy the same container image multiple times with different ConfigMaps, giving you separate development and production environments.
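As a sketch (the names and values here are invented for the example), a ConfigMap and a pod that injects its keys as environment variables might look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: db.dev.internal   # example values only
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:1.0            # hypothetical application image
      envFrom:
        - configMapRef:
            name: app-config       # every key becomes an env variable
```

Swapping in a production ConfigMap with different values requires no change to the image at all.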
This principle applies to all the other factors. Exporting services via port binding, for instance, means that Kubernetes can easily attach an Ingress resource and expose it to the public web via an ingress controller.
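For example, an Ingress along these lines (the hostname is a placeholder, and it assumes a Service named `web` already sits in front of the pods) routes public traffic to the port the application binds:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # assumed Service in front of the pods
                port:
                  number: 80       # the port the container binds
```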
Great! You now have a better understanding of the basics, but to see what a proper Kubernetes application looks like, let’s study an example. Something as simple as the `nginx` image demonstrates this well. Try deploying it into your cluster by running the following:
```shell
$ kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml
```
`nginx` exposes the web server via port binding, which means it can be accessed easily by a Kubernetes Ingress resource. You can also access it easily for yourself by running:
```shell
$ kubectl port-forward pods/nginx 8080:80
```
Now you can open http://localhost:8080 and access the nginx service. You can also run `kubectl logs nginx` to see the logs of the pod, since the container streams its logs to `/dev/stdout`, which Kubernetes can easily pick up.
This is a simple example, but it shows you how following the twelve-factor app methodology can help Kubernetes interact with your application in meaningful ways that allow you to have more efficient and powerful deployments.
As you discovered in this article, the point of building an application for Kubernetes isn’t to make it interact with Kubernetes directly. Instead, it’s about setting your application up for success: if you build it properly, Kubernetes can interact with it without needing any workarounds.
If you need help getting started with Kubernetes and developing your applications, take a look at our Kubernetes management tool, Wayfinder, which enables security best practices while keeping you in control. Check the documentation for details.