One of the biggest crowd reactions at November’s AWS re:Invent conference came when AWS CEO Andy Jassy announced Elastic Container Service for Kubernetes (EKS). For those not already familiar with it, this seems like a good opportunity to talk about what Kubernetes is, some of its use cases, and where it fits in the cloud and DevOps landscape.
Kubernetes grew out of some 15 years of cluster-management work at Google, built to support its ever-growing global collection of user-facing applications and batch workloads. It began with Borg, a resource-allocation system that packed containerized jobs onto shared infrastructure; over time, Google teams extended this toolset with dynamic configuration, service discovery, load balancing, and lifecycle management. Borg’s successor, Omega, was a ground-up rewrite that focused on cleaner engineering, decentralizing cluster control and homogenizing the various control systems. As Google made inroads into the public-cloud infrastructure market, it designed a third iteration of cluster management around developer experience, aiming to make deployment and management easy while keeping the utilization advantages of containers. That open-source system, Kubernetes, has since been developed and extended by the wider DevOps and container communities into a mature platform. CoreOS, a Linux distribution purpose-built for containers, integrates tightly with Kubernetes out of the box, and Red Hat has built OpenShift, a commercial enterprise management interface on top of Kubernetes that makes deployment a snap.
Kubernetes as a platform offers several advantages to its users, including:
- Automatic bin packing
- Horizontal scaling
- Service discovery and load balancing
- Automated rollouts and rollbacks
- Secret and configuration management
- Storage orchestration
- Batch execution
This functionality is all exposed through a consistent REST API that gives developers and engineers visibility into, and control of, the managed cluster. Kubernetes clusters can span the globe to offer high availability, and they can be hosted multi-cloud or hybrid, spanning on-premises datacenters and various public and private cloud providers. Bin packing is the key selling point for stakeholders: Kubernetes dynamically scales always-on workloads to match demand and fills unused capacity with best-effort jobs, so provisioned infrastructure theoretically never goes to waste.
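To give a feel for the bin-packing idea, here is a minimal first-fit sketch in Python. This is an illustration of the concept only, not Kubernetes’ actual scheduler (which weighs many more factors); the node names, capacities, and pod requests are made up.

```python
def first_fit(nodes, pods):
    """Place each pod on the first node with enough free CPU (millicores).

    nodes: dict of node name -> CPU capacity
    pods:  list of (pod name, CPU request) tuples
    Returns a dict of pod name -> node name (or None if nothing fits).
    """
    free = dict(nodes)          # node name -> remaining capacity
    placement = {}
    for pod, request in pods:
        for node, capacity in free.items():
            if request <= capacity:
                free[node] -= request
                placement[pod] = node
                break
        else:
            placement[pod] = None  # unschedulable: no node has room
    return placement

# Hypothetical cluster: two nodes, three workloads.
nodes = {"node-a": 2000, "node-b": 1000}
pods = [("web", 1500), ("cache", 800), ("batch", 600)]
print(first_fit(nodes, pods))
# → {'web': 'node-a', 'cache': 'node-b', 'batch': None}
```

In a real cluster the “batch” pod would simply wait in the pending queue until capacity frees up, which is exactly how best-effort jobs soak up otherwise idle infrastructure.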
The AWS EKS service adds a few touches to stock Kubernetes. For starters, basic cluster configuration is simplified, allowing a cluster to be launched with ease. By default, the cluster’s masters are spread across a minimum of three availability zones for resiliency, and, as with any AWS managed service, the underlying cluster instances are self-healing. The API endpoints are authenticated using the AWS Identity and Access Management (IAM) system. And there is a custom plugin that allows Kubernetes pods (the groups of containers that make up an application) to use the VPC infrastructure for network control.
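To sketch what IAM-backed authentication looks like from the client side, one common setup wires kubectl to an IAM credential helper through a kubeconfig exec plugin. The fragment below is a hypothetical example: the cluster name, endpoint, and certificate placeholder are not real, and the exact helper command depends on your tooling.

```yaml
# Hypothetical kubeconfig fragment: kubectl fetches a short-lived token
# from an IAM credential helper instead of using a static certificate.
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster                        # placeholder cluster name
  cluster:
    server: https://EXAMPLE.eks.amazonaws.com # your cluster's API endpoint
    certificate-authority-data: <base64-ca-cert>
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator          # IAM credential helper
      args: ["token", "-i", "my-eks-cluster"]
contexts:
- name: my-eks
  context:
    cluster: my-eks-cluster
    user: my-eks-user
current-context: my-eks
```

The upshot is that cluster access is governed by the same IAM users, roles, and policies as the rest of your AWS account, with no separate credential store to manage.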
According to the Cloud Native Computing Foundation, 63% of Kubernetes workloads are already being run on AWS. The new service lets customers launch clusters more easily and worry less about managing their Kubernetes installations, retains full compatibility with existing clusters, and adds integration with the AWS ecosystem.
Have you explored AWS EKS? Questions about the Cloud, Kubernetes, or AWS EKS? Comment below and join the conversation!