A cheat sheet is designed to be a helpful reference tool during your journey with Kubernetes. Whether you’re a true beginner or a bit more seasoned, you can find handfuls of Kubernetes and kubectl cheat sheets out in the world, all formatted a bit differently and all with slightly different information.
The kubectl command-line tool provides ways to create and manage Kubernetes objects. This guide is separated into the most common Kubernetes objects, the ones that you’re likely to interact with the most often, and breaks down a list of issues, fixes and useful commands surrounding each one.
Download an at-a-glance PDF to continue referencing these commands as you go, and keep reading for more insight and use-cases for each command.
The smallest object within the Kubernetes ecosystem, a Pod represents a group of one or more containers running together on your cluster.
The most common way to deploy pods into Kubernetes clusters is through .yaml files provided to kubectl. While there are several advantages to deploying workloads this way (one being a single source of truth when coupled with version control systems such as git), there may be scenarios where a pod needs to be created for debugging or testing purposes, without committing to your source of truth. What if you just want to deploy a pod into Kubernetes quickly, without any YAML declaration?
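For comparison, a minimal Pod manifest applied with `kubectl apply -f pod.yaml` might look like the following sketch (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test        # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx:1.25   # illustrative image tag
      ports:
        - containerPort: 80
```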
Kubectl provides the run subcommand to quickly provision a pod:
kubectl run <pod-name> --image=<image-name> --restart=Never
The --restart=Never flag tells kubectl the restart policy of the workload. By default this is set to Always, which means the pod will be restarted rather than deleted. Pods are generally queried using the kubectl get pods command. While this is useful for viewing a list of all pods running in your cluster or a namespace, the command itself doesn’t provide much flexibility without the use of flags. Here are some short commands which will help you interact with pods:
Kubernetes provides the functionality of executing commands within running containers and pods:
For example, if your command requires additional arguments to be passed to it, separate them from the kubectl portion with a double dash as follows:
kubectl exec -ti <pod-name> -- ls -lah
Alternatively, specify a certain container within a pod using -c to identify your container of choice:
kubectl exec -ti <pod-name> -c <container-name> -- <command>
Although there are many ways to discover the health and status of your containers, application logs can be incredibly useful to observe what your workloads are doing in real-time. Here’s a quick command to achieve this:
kubectl logs <pod-name> -c <container-name> --tail=<number> -f
In the example above, -f streams (follows) the log output in real time, and --tail limits output to the most recent number of lines.
Services are abstractions which define a set of pods and make sure that network traffic can be directed to the pods for the workload.
Pods themselves are an isolated workload; by default, Kubernetes denies pod-to-pod connectivity or even external traffic outside of a cluster to your application. That’s where services come in, as an abstract way of exposing your application without modifying the workload, but while still providing load-balancing, DNS names and IP allocations.
Here are a few quick commands to create a service without any YAML declaration:
Similar to creating pods without a YAML manifest in the section above, kubectl provides the expose command to create a service when given a resource name and its container port. You can also define the port number on which you want your service to be exposed:
kubectl expose pod/deployment <name> --port 80 --target-port 8080
The example above expects the target deployment to be listening on a non-privileged port such as 8080 (> 1024); otherwise the pod would need to run privileged (e.g. as the root user).
By default, a Service of type ClusterIP is created when no type is explicitly defined. A Service exposed via ClusterIP is only accessible by workloads running within the Kubernetes cluster. You can then create an Ingress resource to expose this Service externally, or alternatively create a Service of type LoadBalancer directly, which instructs Kubernetes to provision an external load balancer via your cloud provider.
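As a sketch, the YAML equivalent of exposing a workload via a LoadBalancer Service might look like this (names, labels and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  type: LoadBalancer      # omit (or use ClusterIP) for internal-only access
  selector:
    app: my-app           # must match the labels on the target pods
  ports:
    - port: 80            # port the Service listens on
      targetPort: 8080    # port the container listens on
```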
You can specify the Service type using the --type flag, for example:
kubectl expose pod/deployment <name> --port <port> --target-port <target-port> --type=<ClusterIP/LoadBalancer>
The --show-labels flag can be used to find out which Service corresponds to which set of pods, which becomes particularly useful when you’re dealing with multiple services within a namespace:
kubectl get services --show-labels
A deployment is an object that manages a replicated application, making sure to automatically replace any instances that fail or become unresponsive. Deployments help make sure that one (or more) instance of your application is available to serve user requests.
As with pods, kubectl provides a useful subcommand that allows you to create a deployment quickly via the CLI by specifying the deployment name and the image name:
kubectl create deployment <name> --image=<image>
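Roughly speaking, that one-liner is shorthand for a manifest like the following (the name, labels and image are illustrative, and replicas defaults to 1):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment    # ties the Deployment to its pods
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: my-deployment
          image: nginx:1.25 # illustrative image
```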
As mentioned earlier, deployment objects manage replication of pods via ReplicaSets, so within a deployment you can quickly scale a set of identical pods up or down:
kubectl scale deployment <deployment> --replicas=<number>
Note that you can also scale your deployment to 0 replicas. The ReplicaSet will still exist within your cluster, but it will not provision any pods. This is particularly useful for keeping deployment definitions in the cluster, as well as for scaling down non-production workloads out of hours to save on resource costs. On the subject of cost savings, you could also consider automatically scaling your cluster’s nodes in proportion to your workloads’ CPU and memory usage. A good example of this would be using the cluster-autoscaler.
Any update to a deployment, such as a configuration or replica change, triggers a rollout: the deployment’s ReplicaSet updates the pods it controls with the newly defined configuration.
A quick way to find out whether a rollout has been successful is the rollout status subcommand:
kubectl rollout status deployment <deployment>
You can also use rollout restart to trigger a fresh rolling update of your deployment:
kubectl rollout restart deployment <deployment>
To find out the history of rolling updates which have happened to a particular deployment, rollout history can help you out:
kubectl rollout history deployment <deployment>
Lastly, the undo subcommand is helpful when you need to roll back your deployment to a previously deployed revision (or a particular revision):
kubectl rollout undo deploy <deployment> --to-revision=<revision>
Secrets let you store and manage sensitive information such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a container image.
As Kubernetes secrets are generally used for sensitive information such as passwords, or information consumed by pods at runtime, they are often created manually via kubectl. Since declaring secrets via YAML requires you to base64-encode your secret values, the following approach lets you create a secret object via kubectl by passing your secret values as plaintext:
kubectl create secret generic <name> --from-literal=<key>=<value>
Alternatively, you can also pass in a file with plaintext to kubectl:
kubectl create secret generic <name> --from-file=<key>=./file.txt
Base64-decoding values can be a bit of a nuisance when you need to see your secrets in plaintext. The next time you’re put in that situation, try this one-liner to decode a secret key:
kubectl get secret <secret-name> -o jsonpath="{.data.<key-name>}" | base64 --decode
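Under the hood, secret values in `.data` are plain base64, so the decode step can be reproduced with standard shell tools and no cluster at all. A small sketch, with a made-up secret value:

```shell
# Secret values are stored base64-encoded; this mirrors what
# kubectl's jsonpath output followed by base64 --decode returns.
encoded=$(printf '%s' 's3cr3t-password' | base64)

echo "$encoded"                           # czNjcjN0LXBhc3N3b3Jk
printf '%s' "$encoded" | base64 --decode  # s3cr3t-password
```

Note the use of printf rather than echo when encoding: echo appends a trailing newline, which would end up inside the stored secret value.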
Depending on your cluster, nodes are the physical or virtual machines responsible for running your workloads with their computational power and memory.
We can instruct Kubernetes to stop allocating new workloads to a particular node. This is particularly useful when a faulty node is identified, and needs to be isolated for investigation:
kubectl cordon <node-name>
Cordoning can also be undone by using the uncordon command:
kubectl uncordon <node-name>
There may be scenarios where a faulty node has been identified and needs maintenance or decommissioning - what is the safest way to evict all workloads to other nodes before removing the faulty node from service? The drain command safely evicts all of your pods by terminating them gracefully, leaving only node-critical workloads such as networking and logging components:
kubectl drain <node-name> --delete-local-data --ignore-daemonsets
In the majority of cases the --delete-local-data and --ignore-daemonsets flags need to be specified: the former removes pods that make use of a node’s local storage (emptyDir), and the latter ignores any DaemonSet-managed pods currently running on the node. (In newer kubectl versions, --delete-local-data has been renamed --delete-emptydir-data.)
We’ve covered snippets of ways in which popular Kubernetes objects can be queried via kubectl, however some flags and commands can be used globally and are resource agnostic. Here are some examples which will help your productivity in using Kubernetes:
Forgotten whether a resource exists within kubectl, or wondering if there is a short name available for it? api-resources outputs all available API resources that kubectl accepts, displaying their short names and whether they’re namespaced:
kubectl api-resources
Kubectl provides an easy way of querying a Kubernetes object within all namespaces of your cluster with the --all-namespaces flag:
kubectl get <resource> --all-namespaces
One example of using this flag is to list all pods in all namespaces of your cluster, or to query whether there are any pods which aren’t in a "Running" state:
kubectl get pods --all-namespaces | grep -v Running
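To illustrate what that filter does without needing a cluster, here is a local sketch against a made-up pod listing; grep -v drops every line containing "Running":

```shell
# Simulated `kubectl get pods --all-namespaces` output (made up):
pods='NAMESPACE     NAME    READY   STATUS             RESTARTS
default       web-1   1/1     Running            0
default       web-2   0/1     CrashLoopBackOff   4
kube-system   dns-1   1/1     Running            0'

# grep -v removes matching lines, leaving the header and any
# pods that are not in a Running state:
printf '%s\n' "$pods" | grep -v Running
```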
The describe subcommand can be used to get a detailed description of a selected resource, including events and controllers related to it. This is particularly useful for debugging workloads which may have failed, or viewing a more detailed status of a particular resource:
kubectl describe [pod | service | deployment | etc.]
Although describing a resource is useful for knowing its state, events and basic configuration, outputting a resource as YAML or JSON will show its full configuration:
kubectl get <resource> <name> [-o yaml | -o json]
The --dry-run flag (--dry-run=client in recent kubectl versions) is helpful during deployment of an application, to ensure that the Kubernetes resource to be deployed is valid. Combined with the -o yaml flag, it can also be used to generate YAML definitions as templates.
For example, to generate a pod definition as YAML without actually creating the pod:
kubectl run <pod-name> --image=<image-name> --restart=Never -o yaml --dry-run > pod.yaml
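The redirected pod.yaml will look roughly like this, with the placeholders filled in by the values you supplied (exact fields may vary between kubectl versions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: <pod-name>
  name: <pod-name>
spec:
  containers:
    - image: <image-name>
      name: <pod-name>
      resources: {}
  restartPolicy: Never
status: {}
```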
When using the flag with kubectl create, environment variables from within the cluster will also be injected into the YAML output:
kubectl create -f file.yaml -o yaml --dry-run
These commands and short-cuts are a great, quick reference when you’re stuck or need a refresher. Because of its complexities and nuances, there’s a lot of ground to cover with Kubernetes. If you’re just starting out, read through our Guide to Kubernetes.