
6 Serverless Frameworks on Kubernetes You Need to Know

Serverless remains a hot topic: it removes the frustration of managing hardware and lets you focus on individual code functions. Here are six serverless frameworks running on Kubernetes that you need to know.

Category: Kubernetes
Time to read: 5 minutes
Published: October 14, 2025

Serverless computing continues to revolutionise how developers build and deploy applications, removing infrastructure management concerns and allowing teams to focus on writing code. When combined with Kubernetes, serverless architectures offer unprecedented flexibility, scalability, and control.

As we move through 2025, the serverless-on-Kubernetes landscape has matured significantly. From Knative's dominance as a CNCF project to OpenFaaS's enterprise adoption, each framework offers unique features tailored to different use cases and organisational needs.

In this blog, we'll explore the leading serverless frameworks running on Kubernetes, their architectures, and their suitability for different scenarios. Whether you're a developer looking to enhance productivity or an enterprise seeking efficient cloud-native solutions, understanding these frameworks is essential for navigating the evolving landscape of serverless computing.

If you need help with Kubernetes and implementing serverless functions, book a quick demo with our team of cloud experts.

Why Serverless on Kubernetes?

Before diving into specific frameworks, it's worth understanding why organisations are increasingly choosing to run serverless workloads on Kubernetes:

  • Reduced vendor lock-in: Open-source frameworks can run on any Kubernetes distribution, providing flexibility across cloud providers and on-premises environments
  • Language flexibility: Unlike managed serverless platforms, Kubernetes-based solutions support functions written in any language, packaged as container images
  • Enhanced observability: Deep integration with Kubernetes-native tools like Prometheus and Grafana provides comprehensive insights into function performance and resource usage
  • Ecosystem integration: Leverage the full Kubernetes ecosystem including service meshes, security controls, and GitOps workflows
  • Cost optimisation: Automatic scaling, including scale-to-zero capabilities, ensures you only pay for resources when functions are actively processing requests

1. Knative

Status: Production-ready, CNCF project with strong industry backing

Overview

Knative has emerged as the leading serverless platform for Kubernetes in 2025. Initially developed by Google and now a Cloud Native Computing Foundation (CNCF) project, Knative benefits from contributions by over 50 companies including IBM, Red Hat, and VMware.

Knative provides a comprehensive set of components for building and operating modern, source-centric, serverless applications. It standardises best practices from successful Kubernetes-based frameworks and can run anywhere—on-premises, in the cloud, or within third-party data centres.

Architecture

Knative consists of two main components:

  • Serving: Handles deploying, scaling, and managing serverless applications. It offers features like traffic splitting, gradual rollouts, automatic scaling based on request concurrency, and scale-to-zero capabilities (see the manifest sketch below)
  • Eventing: Provides a declarative model for event-driven applications, enabling developers to consume and produce events from various sources without being tied to a specific messaging system
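
To make the Serving model concrete, here is a minimal sketch of a Knative Service manifest. It assumes the serving.knative.dev/v1 API and uses a placeholder container image; applying it with kubectl gives you a routable, autoscaled endpoint.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello:latest  # placeholder image of any HTTP server
          ports:
            - containerPort: 8080              # the port your app listens on
          env:
            - name: TARGET
              value: "Kubernetes"
```

From this single resource Knative creates the underlying Configuration, Revision, and Route; Eventing resources such as Brokers and Triggers are declared separately.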

Key Features

  • Scale-to-zero with rapid cold-start performance
  • Concurrency-based autoscaling with fine-grained control
  • Advanced traffic management including blue-green deployments and canary releases (see the traffic example below)
  • Flexible networking layer—supports Istio, Kourier, Contour, and other ingress options (no longer requires Istio as a hard dependency)
  • Rich eventing framework with support for multiple event sources
  • Integration with build tools for source-to-URL deployments
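
As a hedged illustration of the traffic-management and autoscaling features above, the sketch below splits traffic between an existing revision and the latest one, and sets a concurrency target via an annotation. Revision names, percentages, and the image are illustrative.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "50"  # desired concurrent requests per pod
    spec:
      containers:
        - image: ghcr.io/example/hello:v2     # placeholder image for the new revision
  traffic:
    - revisionName: hello-00001               # existing revision keeps most traffic
      percent: 90
    - latestRevision: true                    # canary: route 10% to the newest revision
      percent: 10
```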

Pros

  • Strong community backing and governance as a CNCF project
  • Production-proven at scale across numerous enterprises
  • Comprehensive feature set for both serving and eventing
  • Excellent documentation and ecosystem support
  • Multiple managed offerings available (Google Cloud Run, IBM Cloud Code Engine)

Cons

  • More complex to set up compared to simpler frameworks
  • Resource overhead—requires careful cluster sizing for optimal performance
  • Steeper learning curve for teams new to Kubernetes

Best For

Enterprises seeking a production-ready, vendor-neutral serverless platform with comprehensive features and strong community support. Ideal for organisations already invested in the Kubernetes ecosystem and those requiring advanced traffic management and eventing capabilities.

2. OpenFaaS

Status: Mature, actively maintained with commercial support available

Overview

OpenFaaS (Open Function as a Service) is an independent open-source project founded by Alex Ellis. It has evolved into a mature platform with a strong community and commercial backing through OpenFaaS Pro. The project emphasises simplicity, developer experience, and first-class metrics support.

OpenFaaS stands out for its straightforward approach to serverless on Kubernetes, making it particularly appealing to teams wanting to adopt serverless without the complexity of some alternatives.

Architecture

OpenFaaS uses a modular architecture consisting of:

  • Gateway: The API gateway acts as the entry point for function invocations and system management
  • Provider: Abstracts the underlying orchestrator; faas-netes is the Kubernetes provider (Docker Swarm support has since been deprecated)
  • Watchdog: A process supervisor that wraps your functions and handles HTTP requests
  • Function Store: A repository of ready-to-deploy functions

Key Features

  • Auto-scaling with Prometheus metrics integration
  • Support for any language via custom templates or containers
  • Built-in UI for function management
  • CLI (faas-cli) for a streamlined development workflow (a sample stack.yml follows this list)
  • Async function invocation with automatic retries via JetStream
  • Fan-out patterns for parallel execution at scale
  • Multi-tenancy support through network policies and resource limits
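
To give a feel for the developer workflow, here is a minimal stack.yml of the kind faas-cli generates; the template name, gateway URL, and image are placeholders and assume the matching function template has been pulled.

```yaml
# stack.yml - deployed with `faas-cli up -f stack.yml`
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080          # assumes a port-forwarded gateway
functions:
  hello:
    lang: python3-http                     # template name (placeholder; pull templates first)
    handler: ./hello                       # folder containing the function handler
    image: ghcr.io/example/hello:latest    # placeholder registry/image
```

faas-cli build, push, and deploy (or faas-cli up, which chains them) take this file from source to a running function behind the gateway.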

OpenFaaS Pro Enhancements

  • Fine-tuned autoscaling for specific execution patterns
  • Advanced retry mechanisms for failed invocations
  • Enhanced support for long-running batch jobs and ML models
  • Commercial support and SLAs

Pros

  • Simple to install and get started—can be running in minutes
  • Excellent developer experience with intuitive tooling
  • Strong community and extensive documentation
  • Production deployments at companies like Deel, Waylay, and Patchworks
  • Lighter resource footprint compared to Knative

Cons

  • Fewer advanced features compared to Knative's full offering
  • Pro version required for some enterprise features
  • Smaller ecosystem of integrations

Best For

Teams seeking a straightforward, developer-friendly serverless platform that can be deployed quickly. Excellent choice for organisations requiring multi-tenancy, customer-facing extensions, or rapid time-to-production. Particularly well-suited for industrial IoT, e-commerce customisation, and data science workloads.

3. Fission

Status: Active, production-ready with focus on performance

Overview

Fission is a serverless framework built by Platform9 and maintained by an active contributor community. It's specifically designed for Kubernetes and emphasises developer productivity and high performance, particularly around cold-start times.

Fission's unique approach of pre-warming function environments delivers some of the fastest cold-start times in the serverless space.

Architecture

Fission defines several core concepts:

  • Environment: Pre-built container images providing runtime components such as the language runtime, web server, and dynamic loader (see the sketch below)
  • Function: Your application code following Fission's structure
  • Trigger: Events that cause function execution (HTTP, time-based, or message queues including NATS, Kafka, and Azure Storage Queue)
  • Executor: Manages function execution and resource allocation
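
As a sketch of how these concepts map onto Kubernetes objects, the Environment below pre-warms a small pool of runtime pods, which is what underpins Fission's fast cold starts. Field names follow the fission.io/v1 CRDs as we understand them (most users create these via the fission CLI rather than by hand), and the runtime image is indicative only.

```yaml
apiVersion: fission.io/v1
kind: Environment
metadata:
  name: python
  namespace: default
spec:
  version: 2
  runtime:
    image: fission/python-env   # language runtime image (indicative)
  poolsize: 3                   # pods kept pre-warmed for fast cold starts
```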

Key Features

  • Extremely fast cold-start times via pre-warmed pods
  • Native Kubernetes integration using CRDs
  • Support for canary deployments
  • Live-reload for rapid development iteration
  • Multiple trigger types including HTTP, timers, and message queues
  • Efficient resource optimisation through pod pooling

Pros

  • Industry-leading cold-start performance
  • Simple, Kubernetes-native architecture
  • Good documentation and active community
  • Live-reload feature accelerates development
  • Lower resource overhead than some alternatives

Cons

  • Smaller community compared to Knative and OpenFaaS
  • Fewer managed service offerings
  • More limited event source integrations

Best For

Applications where cold-start latency is critical. Ideal for development teams who want Kubernetes-native serverless with minimal overhead and fast iteration cycles. Well-suited for API backends, webhooks, and scheduled jobs where response time matters.

4. Apache OpenWhisk

Status: Mature Apache project with IBM and Adobe backing

Overview

OpenWhisk is an Apache Software Foundation project supported by IBM and Adobe. It powers IBM Cloud Functions and introduces a comprehensive programming model for serverless computing. OpenWhisk's design emphasises composability and event-driven patterns.

Key Concepts

  • Actions: The function containing your application code in any supported language
  • Triggers: Groups of events (e.g., messages published to a topic or HTTP requests)
  • Feeds: Streams of events implemented using hooks, polling, or connections
  • Alarms: Time-based periodic triggers
  • Rules: Associate triggers with actions, injecting events as inputs (the manifest sketch below shows how these pieces fit together)
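
To show how actions, triggers, feeds, and rules relate, here is a hedged sketch in the style of a wskdeploy manifest; the package, action, and path names are hypothetical and the exact schema should be checked against the OpenWhisk documentation.

```yaml
# manifest.yaml - deployed with the wskdeploy tool (schema is indicative)
packages:
  demo:
    actions:
      hello:
        function: src/hello.js             # action code (placeholder path)
        runtime: nodejs:default
    triggers:
      everyMinute:
        feed: /whisk.system/alarms/alarm   # built-in alarm feed
        inputs:
          cron: "* * * * *"
    rules:
      helloEveryMinute:
        trigger: everyMinute               # rule wires the trigger...
        action: hello                      # ...to the action it should fire
```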

Key Features

  • Event-driven architecture with strong compositional model
  • Container reuse for improved performance
  • Built-in monitoring and metrics
  • Support for complex event processing workflows
  • REST API and CLI for management
  • Multi-language support

Pros

  • Apache governance ensures long-term stability
  • Production-proven at enterprise scale (IBM, Adobe)
  • Sophisticated event processing capabilities
  • Comprehensive tooling and APIs
  • Deployment flexibility (Kubernetes, Mesos, OpenShift)

Cons

  • More complex architecture and concepts to learn
  • Setup can be involved compared to simpler alternatives
  • Less momentum than Knative in recent years
  • Smaller open-source community outside IBM ecosystem

Best For

Organisations with complex event-driven architectures requiring sophisticated trigger mechanisms and workflow composition. Particularly suitable for enterprises already using IBM Cloud or those needing battle-tested, Apache-governed open source.

5. Fn Project

Status: Maintained, Oracle-backed

Overview

Fn Project is an open-source, container-native serverless platform backed by Oracle. Originally derived from IronFunctions, Fn emphasises being cloud-agnostic and not tied to any specific container orchestrator.

Architecture

Fn consists of four main components:

  • Fn Server: Core component managing build, deployment, and scaling. Described as multi-cloud and container-native
  • Load Balancer: Routes requests to functions and maintains "hot functions" with pre-pulled images
  • Fn FDKs (Function Development Kits): Language-specific tooling for bootstrapping functions (see the func.yaml sketch below)
  • Fn Flow: Enables workflow orchestration (parallel, sequential, fan-out execution)
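
For context, each function directory carries a small func.yaml descriptor that fn init generates and fn deploy consumes. The values below are illustrative, and the exact fields depend on your Fn version and chosen runtime.

```yaml
# func.yaml - generated by `fn init --runtime go hello` (values illustrative)
schema_version: 20180708
name: hello
version: 0.0.1
runtime: go          # an FDK-backed runtime; arbitrary container images are also supported
entrypoint: ./func   # command invoked inside the function container
```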

Key Features

  • Container-native approach—any container is a function
  • Hot function optimisation for frequently-invoked functions
  • Adaptive scaling based on load
  • Workflow orchestration via Fn Flow
  • Multi-language support through FDKs

Pros

  • True container-native—deploy any Docker image as a function
  • Oracle backing provides enterprise credibility
  • Cloud-agnostic design
  • Good workflow orchestration capabilities

Cons

  • Smaller community adoption compared to alternatives
  • Less comprehensive than Knative's feature set
  • Requires additional dependencies (cert-manager, MySQL, Redis)
  • HTTP-only trigger support (more limited than competitors)

Best For

Organisations requiring a truly container-native approach or those already invested in the Oracle ecosystem. Suitable for teams who want portability across orchestrators and don't need extensive event source integrations.

6. What Happened to Kubeless?

Status: Archived (December 2021)

Previous versions of this blog featured Kubeless as a leading Kubernetes-native serverless framework. However, VMware archived the project in December 2021, meaning it's no longer maintained or receiving updates.

Kubeless was one of the early FaaS solutions on Kubernetes and demonstrated significant organic adoption, validating enterprise demand for on-premises serverless. Its creator, Sebastien Goasguen, has since focused efforts on Knative, which has become the natural successor.

For Existing Kubeless Users

If you're currently running Kubeless in production, migration to Knative is strongly recommended. Knative provides similar functionality with:

  • Active development and strong community support
  • CNCF governance ensuring long-term sustainability
  • More comprehensive features and better scalability
  • Commercial support options from multiple vendors

Migration services and tooling are available to help teams transition from Kubeless to Knative.

Emerging Trends for 2025 and Beyond

The serverless-on-Kubernetes landscape continues to evolve rapidly. Here are key trends shaping the future:

AI/ML Workload Integration

Kubernetes is becoming the preferred platform for deploying machine learning models, with serverless frameworks providing the perfect abstraction for inference endpoints. Tools like Kubeflow work alongside serverless platforms to simplify ML operations, while frameworks increasingly support GPU scheduling and long-running model training jobs.

WebAssembly (Wasm) Revolution

WebAssembly is emerging as a game-changer for serverless workloads on Kubernetes. Wasm provides near-native speed, significantly smaller binary sizes, faster startup times (microseconds vs milliseconds), and enhanced security through sandboxing. Projects like SpinKube are pioneering WebAssembly workloads in Kubernetes, opening possibilities for ultra-high-performance serverless applications.

Multi-Cloud and Hybrid Strategies

Organisations are increasingly distributing applications across multiple cloud providers to avoid vendor lock-in. Kubernetes provides a consistent framework for managing workloads across AWS, Azure, Google Cloud, and on-premises infrastructure. Serverless frameworks running on Kubernetes enable truly portable serverless applications.

GitOps and Infrastructure as Code

Serverless function deployment is increasingly managed through GitOps workflows using tools like Flux and ArgoCD. This provides version control, automated deployments, and easy rollbacks for function code and configurations.
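
As a hedged example, an Argo CD Application can keep a repository of function manifests (Knative Services, OpenFaaS functions, and so on) continuously synced to a cluster; the repository URL and paths below are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: functions
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/functions.git  # placeholder repo of function manifests
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: functions
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert out-of-band changes
```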

Enhanced Observability

Modern serverless frameworks offer deep integration with observability platforms including Prometheus, Grafana, Jaeger for distributed tracing, and OpenTelemetry for standardised instrumentation. This provides unprecedented visibility into function performance, cold-start times, and resource utilisation.

Choosing the Right Framework

Selecting the optimal serverless framework depends on your specific requirements:

| Choose This | If You Need |
| --- | --- |
| Knative | A comprehensive, production-proven platform with strong community and vendor support. Best all-round choice for enterprises. |
| OpenFaaS | A simple, developer-friendly platform with quick setup. Excellent for multi-tenancy and customer-facing extensions. |
| Fission | Minimum cold-start latency and rapid development cycles. Ideal for latency-sensitive applications. |
| OpenWhisk | Complex event-driven architectures with sophisticated workflow requirements. A good fit for the IBM ecosystem. |
| Fn Project | A truly container-native approach and Oracle ecosystem alignment. |

Getting Started

Implementing serverless on Kubernetes requires careful planning and expertise. Key considerations include:

  • Cluster sizing: Ensure adequate resources for your chosen framework's overhead
  • Networking: Select appropriate ingress controllers and service mesh options
  • Security: Implement network policies, RBAC, and pod security standards
  • Monitoring: Deploy comprehensive observability tooling from day one
  • Cost management: Configure autoscaling appropriately to balance performance and cost (see the excerpt below)
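
As one example of the cost-versus-performance trade-off, Knative lets you bound autoscaling per revision with annotations; the values below are illustrative and shown as an excerpt from a Service template.

```yaml
# excerpt from a Knative Service template
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # allow scale-to-zero when idle
        autoscaling.knative.dev/max-scale: "20"   # cap replicas (and spend) during spikes
```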

Building serverless applications on Kubernetes is about setting them up for success: with the proper architecture and the right framework choice, Kubernetes can manage your serverless workloads efficiently, without workarounds or compromises.

Conclusion

Serverless computing on Kubernetes has matured significantly, offering production-ready frameworks suitable for enterprise workloads. Whether you choose Knative's comprehensive platform, OpenFaaS's simplicity, Fission's performance, or another option, you're building on battle-tested foundations with strong community support.

As we progress through 2025, the integration of AI/ML workloads, WebAssembly's emergence, and multi-cloud strategies are reshaping serverless computing. Kubernetes remains the ideal orchestration layer, providing the flexibility, scalability, and control that modern applications demand.

If you need help with Kubernetes and implementing serverless functions, book a quick demo with our team of cloud experts.
