DevOps teams have rapidly adopted Kubernetes as the standard way to deploy and scale containers in the cloud. It provides everything you need to configure, launch, and maintain containerized workloads in distributed environments.
Kubernetes is a complex system with many moving parts, however. You need to configure your deployments correctly to get the most value from your infrastructure. This article will share some best practices for maximising performance, closing security holes, and addressing common Kubernetes gotchas.
Deploying services into production in Kubernetes is inherently risky, especially when teams are switching between multiple apps and clusters. A simple config mistake can lead to downtime and a costly business interruption. Following best practices for your Kubernetes configs reduces the chance of mistakes creeping in.
The best practices below have been chosen for one of two reasons: they improve your cluster’s performance and security, or they improve long-term maintainability. Adopting these techniques will harden your cluster, enhance stability, and make it easier for newcomers to pick up where you left off.
The following are some best practices you should use when you make new deployments inside your cluster. This list is non-exhaustive: the Kubernetes documentation suggests other small tweaks that can be useful, and you’ll probably develop standards of your own as your adoption grows. But by following the techniques listed here, you can cover the basics and improve your use of Kubernetes.
Don’t deploy containers as standalone Pod resources. Pods on their own (“naked” pods) can’t be automatically rescheduled. If a node fails, your workload will become inaccessible.
Wrap your pods in a Deployment or ReplicaSet instead. Kubernetes will then guarantee the availability of a specified number of replicas, adding redundancy that survives node outages.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```
This Deployment ensures there will be three replicas of the NGINX container inside your cluster. Using a Deployment is normally preferable to a plain ReplicaSet: Deployments manage an automatically created ReplicaSet and add a rollout strategy for applying changes to your pods.
Services should be started before any resources that need to access them, such as deployments, ReplicaSets, and pods. Create the service before you create the downstream resource.
You can achieve this by creating the objects individually with kubectl:
```bash
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml
```
Alternatively, structure your YAML files so services are positioned at the top, ahead of any other resources:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
# ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-that-needs-example-service
# ...
```
The ordering is important so your pods can reliably access the service using environment variables. Kubernetes automatically injects variables that provide connection details for the services visible to your pods. For the example service shown above, pods in the deployment receive these variables:
```
EXAMPLE_SERVICE_SERVICE_HOST=<host running "example-service">
EXAMPLE_SERVICE_SERVICE_PORT=<port the service is running on>
```
The naming syntax is the service’s `metadata.name`, transformed to upper snake case (`example-service` becomes `EXAMPLE_SERVICE`) and suffixed with `_SERVICE_HOST` or `_SERVICE_PORT`. If the pod starts before the service is created, it won’t receive the environment variables and your workload may exhibit connection errors.
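For instance, a process inside one of the deployment’s pods could use the variables to reach the service; a minimal sketch, assuming the service fronts an HTTP workload:

```bash
# Build the service URL from the injected environment variables
curl "http://${EXAMPLE_SERVICE_SERVICE_HOST}:${EXAMPLE_SERVICE_SERVICE_PORT}/"
```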
Making use of labels helps you to organise and select your Kubernetes objects. They’re key/value pairs that identify the characteristics of particular resources:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-labels
  labels:
    environment: production
    business-unit: contracting-division
```

Note that label values can only contain alphanumeric characters, dashes, underscores, and dots, so a value like "Contracting Division" would be rejected.
Labels are the primary mechanism for selecting objects within other resources:
```yaml
# Deployment includes any Pod with the "app: my-app" label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # ...
```
Labels can be used to select and filter objects with kubectl as well. You can quickly identify all the objects with a particular characteristic:
```bash
$ kubectl get pods --selector='environment=production'
```
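Selectors also support set-based expressions, which are handy when a single value isn’t specific enough:

```bash
# Match pods whose environment label is either production or staging
$ kubectl get pods -l 'environment in (production, staging)'
```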
Kubernetes defines a set of well-known labels for properties like your object’s name, version, and the tool managing it. Adding these labels is recommended even if you don’t need them immediately. They’re commonly used by third-party tools to display identifying data for the items in your cluster.
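A sketch of what this looks like, using the recommended `app.kubernetes.io` label prefix (the values here are illustrative):

```yaml
metadata:
  labels:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/version: "1.2.0"
    app.kubernetes.io/component: web
    app.kubernetes.io/managed-by: kubectl
```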
Add liveness and readiness probes to your pods so Kubernetes can detect unhealthy containers and avoid sending them traffic. Without these probes, traffic may reach a pod before it’s fully started up, or keep flowing to one that has silently failed.
Liveness probes implement regular health checks that detect long-lived pods becoming unhealthy. Kubernetes can automatically restart pods after a failed health check, keeping your application accessible. Readiness probes are used to identify when a new pod is ready to handle traffic. Startup probes are a third type that inform Kubernetes when a pod has finished its initialisation routine; liveness and readiness probes don’t run until the startup probe has succeeded.
All three probes are configured similarly. Here’s a simple pod that includes a liveness probe:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-probe-demo
spec:
  containers:
    - name: liveness-probe-demo
      image: k8s.gcr.io/liveness
      args:
        - /server
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 15
        failureThreshold: 1
```
The special `k8s.gcr.io/liveness` image returns a successful response from its `/healthz` endpoint for the first ten seconds of its life. It’ll return a `500` status afterward, failing the HTTP liveness probe and causing the container to restart. Probes can also execute commands, check TCP sockets, or make gRPC health check calls instead of the HTTP request shown here.
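Readiness and startup probes are declared with the same structure under their own keys. A minimal sketch, assuming the same `/healthz` endpoint can serve all three checks:

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  # Allow up to 30 x 10s = 300s for slow initialisation
  failureThreshold: 30
  periodSeconds: 10
```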
Avoid using the `hostPort` and `hostIP` fields on pods wherever possible. If you need to expose a pod, put a service or load balancer in front of it.
Host ports bind pods directly to nodes, reducing scalability and limiting how many pods can be scheduled. Because only one application can bind to each port on a host, every combination of `hostIP`, `hostPort`, and protocol must be unique on each of your nodes.
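As a sketch, a ClusterIP service exposes the pods from the earlier deployment example without binding anything to node ports (the service name here is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  # Route traffic to any pod carrying the app: my-app label
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```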
Pods should only communicate with each other when it’s absolutely necessary. Locking down network communications inside your cluster reduces the attack surface. Network policies define rules that isolate your pods from each other.
Network policies use selectors to target pods. They can restrict communications to specific pods, namespaces, and IP address ranges. Separate policies are created for ingress and egress traffic.
Here’s a simple policy that targets pods with the `service: app-service` label:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-policy
spec:
  podSelector:
    matchLabels:
      service: app-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              service: app-service
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              app: my-app
```
Matching pods will be restricted to receiving traffic from pods that also have the `service: app-service` label, and they’ll only be able to send traffic to pods in namespaces carrying the `app: my-app` label.
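Because policies are additive, a common companion is a default-deny baseline that isolates every pod in a namespace until you explicitly allow traffic; a minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  # An empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
    - Ingress
```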
Only running images that you’ve inspected is a common-sense safety measure. Vet each new image before you deploy it in your cluster, even if it’s listed in a major registry. Providers like Docker Hub could be compromised by a bad actor who replaces popular images with malicious versions.
Understanding what you’re using is a vital step towards improving your security posture. You can use open source container scanning tools like Anchore and Trivy to spot vulnerabilities in the images you’re deploying. Another defence is to upload known safe images to a private registry, then exclusively use those images in your cluster.
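As an example, Trivy can check an image from the command line before you approve it for your registry (assuming the `trivy` CLI is installed):

```bash
# Scan the image for known vulnerabilities
trivy image nginx:latest
```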
Kubernetes includes an advanced mechanism for rejecting pods that use images from unapproved sources. You can create an ImagePolicyWebhook to validate images using an external webhook server. Your webhook endpoint needs to respond with an error if the image originates from a source outside your own private registry.
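The webhook is enabled through an admission configuration file passed to the API server; a minimal sketch based on the upstream documentation, where the file path is an assumption for your environment:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # Kubeconfig describing how to reach your webhook server (assumed path)
        kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
        # How long to cache allow/deny responses, in seconds
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        # Reject pods if the webhook can't be reached
        defaultAllow: false
```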
Avoid using imperative kubectl commands like `kubectl run`. Write declarative YAML files instead, then add them to your cluster using the `kubectl apply` command. You only need to express the *desired* state of your cluster; Kubernetes works out how to achieve it.
By relying on YAML files for all your objects, you can store and version them alongside your code. If a deployment goes awry, you can easily roll back by restoring the earlier YAML file and reapplying it to your cluster. This model also ensures everyone can access the complete current state of the cluster and see how it’s evolved over time.
Here’s a simple example of creating a pod using an imperative command:
```bash
kubectl run nginx --image=nginx --port=80 --restart=Always
```
While this works, you’ve got no way of knowing which flags were used unless you retrieve the pod’s details from the live cluster.
By encapsulating the pod’s config as a YAML file, you can easily inspect its state and make targeted changes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      name: mycontainer
      ports:
        - containerPort: 80
  restartPolicy: Always
```
Each time you make a change, commit the file to your repository and run `kubectl apply -f pod.yaml` to apply the diff to your cluster.
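`kubectl diff` pairs well with this workflow, letting you preview what an apply would change before you run it:

```bash
# Preview the changes the manifest would make to the live cluster
kubectl diff -f pod.yaml

# Apply the new desired state
kubectl apply -f pod.yaml
```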
Attaching CPU and memory limits to your pods prevents runaway resource usage during traffic spikes. You should always allocate an appropriate amount of resources for the applications in your pods. This will help protect cluster stability.
A resource request informs Kubernetes of the resources a pod is expected to consume. The scheduler uses this information to select a node with enough spare capacity for the pod.
Once scheduled, pods can exceed the request up to the value of their limit for that resource. Limits are strictly enforced; a pod that exceeds a CPU limit will be throttled, while memory limit overages usually result in process termination.
Resource allocations are made using the `resources` property on each container in a pod’s `spec.containers` field:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      resources:
        requests:
          memory: 128Mi
          cpu: 0.5
        limits:
          memory: 512Mi
          cpu: 1.0
      name: mycontainer
      ports:
        - containerPort: 80
```
This pod will be scheduled to a node that can provide at least 0.5 CPU cores and 128 MiB of RAM. Once it’s been scheduled, the pod can use up a whole CPU core and 512 MiB of RAM before it encounters a hard resource cap.
You need a way of monitoring the applications in your cluster. The simplest way to get started is with the kubectl logs command, which automatically surfaces log output from your pods. Logs are usually an invaluable source of debugging information, but your app needs to emit them in a form that Kubernetes can collect.
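For example, logs can be tailed from a single pod or gathered across a set of pods by label (the names here match earlier examples):

```bash
# Stream logs from a single pod
kubectl logs liveness-probe-demo --follow

# Fetch logs from every pod matching a label selector,
# prefixing each line with its pod and container name
kubectl logs -l app=my-app --prefix
```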
Write logs to your container’s standard output and error streams (stdout/stderr) so they’ll show up in the Kubernetes log stream. If you’re writing to log files in the container’s filesystem, you’ll need to set up extra tooling to make the files accessible to your monitoring tools. These files could also be lost after a pod failure if they’re not being stored in a persistent volume.
Relying on the standard streams is the simplest route to good observability. General logging messages can be echoed to stdout using your programming language’s IO mechanism. Error traces and warnings should be directed to stderr so they’re properly highlighted in the logs.
Correctly configuring Kubernetes avoids unexpected downtime, performance issues, and security holes. Adopting a set of best practice standards will ensure all your workloads are held to those same standards, giving individual team members confidence in each other’s deployments.
Kubernetes configuration is ultimately a manual process: it’s up to you to remember and apply these best practices as you write your YAML files. When you want a more controlled approach, our Kubernetes management system Wayfinder can help you configure your clusters and applications using centralised policies. Wayfinder limits what individual teams and users can change, ensuring that everyone follows the standards you select. This streamlines your workload and reduces overall risk for your team.