To understand how the technology industry landed at containers and Kubernetes, we first need to look back to where it all started.
The year is 2016. Amazon AWS is a $12.2 billion business (compared with $45.3 billion in 2020) and still growing every day. It was the year that, for most companies, the adoption of cloud was no longer a matter of IF but WHEN. Legacy private datacenter technologies and processes had left many with an inescapable conclusion: We’re too slow to deliver new services for our business and we’re spending too much money to do it.
Cloud was the way, and thus started the first big wave of cloud adoption.
Cloud promised immediate access to resources and a radically different ‘pay-as-you-go’ model. It looked like a chance for businesses to re-think the way that they delivered technology resources to the business and do it on a timeline (and for a price) that more closely aligned with the value it provided.
In practice, most organisations viewed the move to cloud through the lens of their existing technologies and processes. Services partners reaped huge profits from migrations of VM-based workloads to AWS, Azure and GCP, but the underlying technologies, processes and ways of working hadn’t changed much. It was more a matter of “get to the cloud and everything will be better”.
But it wasn’t.
Legacy processes and skill sets simply moved from one ‘datacenter’ to another. Resources were certainly easier to procure, but that ease came at a cost: surprise bills of unexpected size. Migrated workloads were often over-provisioned, turning the “pay-as-you-go” model of cloud into a double-edged sword. Legacy tooling and processes locked the value of cloud behind a swarm of tickets and manual work, leaving organisations with a lacklustre version of the same systems they had before. In short, the speed and cost promises of cloud remained locked behind a wall of legacy systems, processes and skill sets.
The Gartner Hype Cycle defines a phase called the Trough of Disillusionment: the point at which an organisation’s adoption of a new technology leaves the theoretical, rose-tinted view behind and enters the cold light of reality. The end of the First Wave was that point for many organisations.
The cloud had so much promise … so why wasn’t it working out the way we planned? For a number of reasons.
The first was that many simply looked at Cloud as a change in technology and not a change in Operating Model. The highly programmable, automated nature of Cloud services is in direct conflict with legacy ways of thinking about and working with technology. The assumption of manual configuration of resources (and the correlated ticketing/work assignment) simply doesn’t allow organisations to take full advantage of what cloud can offer. Put simply: Processes needed to evolve along with the technology.
The second was that the underlying technology didn’t really change. Applications (and their many components) were still running in Virtual Machines. Their configurations assumed a level of static configuration that (again) was in direct contradiction with the promise of Cloud: On-Demand, ephemeral resources that are there when you need them and gone when you don’t.
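To make the static-versus-ephemeral contrast concrete: a common first step away from VM-era configuration is reading settings from the environment at startup, so the same build artifact can run unchanged on any short-lived instance. Here's a minimal sketch (the setting names and defaults are illustrative, not from any particular system):

```python
import os

def load_config() -> dict:
    """Read runtime settings from environment variables, so the same
    build can be scheduled onto any ephemeral instance unchanged."""
    return {
        # Hypothetical settings; each environment injects its own values.
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

config = load_config()
print(config["db_port"])  # prints 5432 unless DB_PORT is set in the environment
```

Nothing here is baked into the image or a hand-edited file on a long-lived server; the instance that runs this code can appear and disappear without anyone reconfiguring it.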
The third was a lack of skill-set transition. Because most organisations approached cloud as a ‘like for like’ migration of VM-based resources, the skill sets within the organisation didn’t change much. Sure, team members got trained up on AWS/Azure/GCP, but likely in the context of managing VM-based resources. Because the organisation wasn’t pursuing cloud services outside that mould, Engineers and Architects paid those services less attention. That’s not their fault; they were simply meeting the organisation’s needs at the time.
This gap becomes particularly acute as an organisation approaches the next big challenge: the Second Wave of Cloud Adoption.
Organisations went to the cloud based on a promise of easy access to resources and a simple cost model, but found that their adoption of cloud was limited and failed to deliver on those core promises.
Timeline Update: The year is now 2019, 2020 or 2021. A reflection on their cloud activities so far has caused organisations to realise that what they considered the destination is actually just a transitional step. If they want to fully realise the value of cloud, they have to adapt their technologies, processes and skillsets to more closely align with those of the cloud provider. In short, the second wave of cloud adoption is required.
Primarily, that means the adoption of containerization and Kubernetes. In 2016 (at the beginning of the First Wave) the containerization of applications was an insignificant piece of the overall IT landscape. Docker was just beginning to be a topic of discussion but was poorly understood. What’s more, Kubernetes was an open-source project in its infancy. It promised high degrees of application resilience, scalability and control, but at a cost: complexity.
First, the skills gap. At that time, and still true now, the number of Architects and Engineers who are truly expert with Kubernetes is small. They command hefty salaries and are attracted to cutting-edge organisations with disruptive products. This scarcity has created a large problem for the average, established enterprise: even if these people can be attracted and hired, they are an inherent flight risk. If they aren’t presented with sufficiently ‘interesting’ problems and a commensurate salary, they will simply leave when a better offer comes along (and it will).
Second, processes have to adapt to embrace automation and self-service over manual configuration. Manual configuration work is costly, error-prone and slow. If the value of cloud is to be fully unlocked, organisations must treat cloud resources as the highly programmable resources they are, removing human bottlenecks in favour of tireless automation that doesn’t have moods or get distracted by context switching. Moreover, cloud resources should be directly requestable by the users that need them; through automation, these resources come preconfigured to align with an organisation’s security and operational policies.
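The self-service idea above can be sketched in a few lines: instead of a ticket queue and a human reviewer, a request is checked automatically against organisational policy. This is a toy illustration, not any real tool's API; the policy rules and request shape are invented for the example:

```python
# Hypothetical organisational policy, normally sourced from a policy engine.
ALLOWED_SIZES = {"small", "medium", "large"}
REQUIRED_TAGS = {"team", "cost-centre"}

def validate_request(request: dict) -> list[str]:
    """Return a list of policy violations; an empty list means auto-approve."""
    problems = []
    if request.get("size") not in ALLOWED_SIZES:
        problems.append(f"size {request.get('size')!r} is not permitted")
    missing = REQUIRED_TAGS - set(request.get("tags", {}))
    if missing:
        problems.append(f"missing required tags: {sorted(missing)}")
    return problems

req = {"size": "medium", "tags": {"team": "payments", "cost-centre": "cc-42"}}
print(validate_request(req))  # prints [] — the request passes policy checks
```

The point is the shape of the process: policy is encoded once, every request is checked the same way, and approval takes milliseconds rather than days.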
Third, applications must be refactored to take full advantage of Kubernetes’ flexibility and control. Rigid architectures must be re-imagined (and technologies re-applied) to be dynamically provisioned, dynamically configured and ephemeral. Application state must be externalised from application components to allow Kubernetes to dynamically schedule/reschedule components without the worry of session state or volumes.
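Externalising state can be illustrated with a small sketch. Here a plain dictionary stands in for an external session store (in a real deployment this would be something like Redis); the handlers themselves hold no state, so Kubernetes can kill, reschedule, or scale replicas freely. The function names and store shape are invented for the example:

```python
import uuid

# Stand-in for an external session store (e.g. Redis in a real deployment).
# The key point: no session state lives inside the handler process itself.
session_store: dict[str, dict] = {}

def handle_login(user: str) -> str:
    """Write session data to the external store and return only a token,
    so any replica can serve the user's next request."""
    token = str(uuid.uuid4())
    session_store[token] = {"user": user}
    return token

def handle_request(token: str) -> str:
    """Any pod can look the session up; nothing is pinned to one instance."""
    session = session_store.get(token)
    return session["user"] if session else "anonymous"

token = handle_login("alice")
print(handle_request(token))  # prints "alice"
```

Because the handlers are stateless, losing a pod loses nothing: the next request simply lands on another replica that reads the same store.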
Effectively, containers and Kubernetes hold the key to unlocking the value of cloud but the inherent complexity of the technologies (and the required changes) present barriers to operationalisation that are not easily overcome. Specifically, overcoming the skills gap represents a major challenge for most organisations.
While re-architecting and refactoring applications can be challenging, and altering the processes within an organisation can introduce barriers to adoption, they pale in comparison to the challenge of overcoming the skills gap. Simply put, the first two challenges can be dealt with relatively easily; the skills gap (attract/recruit/retain) can’t. This is the fundamental design decision behind Appvia’s flagship product, Wayfinder, which takes the complexity out of creating and maintaining Kubernetes clusters.
We’d love to share more of our perspective on this journey with you. Join our Slack community or start your free 30-day trial of Wayfinder.