What is Kubernetes?

Kubernetes is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

Kubernetes automates the configuration of your applications and manages and tracks resource allocation. A project of the Cloud Native Computing Foundation (CNCF), Kubernetes was first introduced in 2014 and has become a widely adopted platform for organizations to run distributed applications and services at scale.

Kubernetes is a platform for managing containers, which bundle an application’s code, configuration, and dependencies so it can run as an isolated process with its own resources. An application may run in a single container or across multiple containers, which Kubernetes groups into pods.
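
As a minimal illustration, here is a sketch of a pod manifest that runs a single container; the name, labels, and image tag are hypothetical:

  apiVersion: v1
  kind: Pod
  metadata:
    name: my-app          # hypothetical pod name
    labels:
      app: my-app
  spec:
    containers:
    - name: web
      image: nginx:1.25   # the image bundling code, configuration, and dependencies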

Kubernetes can run on bare metal servers, virtual machines, public cloud providers, private clouds, and hybrid cloud environments. One of Kubernetes’s key advantages is that it works on many different kinds of infrastructure.

Kubernetes is built to help users follow 3 core design principles, as explained in the Kubernetes implementation details. A Kubernetes deployment should be:

  • Secure. It should follow the latest security best practices.
  • User-friendly. It should be operable using a few simple commands.
  • Extendable. It shouldn’t favor one provider and should be customizable from a configuration file.

Kubernetes automates much of the tedious work of deploying and scaling applications, and opens the way to cloud-native development methods that can save time and bring new software to market faster. Some of the main benefits include:

Support for large, complex environments: A production environment running multiple applications will require many containers deployed across many hosts, all working together. Kubernetes provides the orchestration and management capabilities required to deploy containers at the scale required for large workloads.

Scalability: Kubernetes automatically scales based on your needs, providing the capacity your applications need while saving resources and costs. A manifest sketch illustrating this follows the list of benefits below.

Portability: Kubernetes can run on-site in your own datacenter, in a public cloud, and in hybrid configurations of both public and private instances. With Kubernetes, the same commands can be used anywhere.

Consistent deployments: Kubernetes deployments are consistent across infrastructure. Containers embody the concept of immutable infrastructure, and all the dependencies and setup instructions required to run an application are bundled with the container.

Separated and automated operations and development: Containers save developers time, empowering rapid iteration cycles. At the same time, Kubernetes helps operations teams to feel confident in the stability of the system.

Hybrid cloud strategy support: Many organizations combine on-site datacenters with public or private cloud solutions, and balance workloads between multiple cloud providers to take advantage of changes in pricing and service levels. The consistency and portability of Kubernetes can support these hybrid strategies.

Continued support for traditional applications: Kubernetes can help you deliver and manage containerized, legacy, and cloud-native apps, as well as those being refactored into microservices.

Management of complex environments: Microservices in containers make it easier to orchestrate services, including storage, networking, and security, but they also significantly multiply the number of containers in your environment, increasing complexity. Kubernetes groups containers into pods, helping you schedule workloads and provide necessary services—like networking and storage—to those containers.

Improved security: Kubernetes security practices can help organizations take effective steps toward better IT security. Administrators can apply policies for security and governance, and segment policies by pods or groups of pods. Development teams can identify security issues in containers at runtime and fix them at the build stage, rather than updating or patching them in production. Role-based access control (RBAC) can assign specific permissions to users and service accounts. Kubernetes secrets can safeguard sensitive data like encryption keys.

Enabling DevOps: By providing a consistent infrastructure foundation for containers, Kubernetes can support a DevOps approach, which promotes an efficient working relationship between development and operations teams. Employing CI/CD, or continuous integration and continuous delivery/deployment, helps streamline and accelerate the software development lifecycle. And an evolution of DevOps, DevSecOps, shifts security controls and vulnerability management earlier in the software development lifecycle.
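
Returning to the scalability benefit above, here is a sketch of a HorizontalPodAutoscaler; the resource names are hypothetical. It asks Kubernetes to keep between 2 and 10 replicas of a workload, adding or removing pods to hold average CPU use near 70%:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app-hpa        # hypothetical name
  spec:
    scaleTargetRef:         # the workload to scale
      apiVersion: apps/v1
      kind: Deployment
      name: my-app
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU use across pods

Once applied, the control plane adjusts the replica count continuously; no manual intervention is needed as load rises and falls.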

A working Kubernetes deployment is called a cluster, which is a group of hosts running containers.

Administrators set the desired state of a Kubernetes cluster, describing which applications or other workloads should be running, along with which images they use, which resources should be made available to them, and other such configuration details.
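
For example, a Deployment manifest, sketched here with hypothetical names, declares that desired state: which image to run and how many replicas should exist. Kubernetes then works continuously to make the cluster match it:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 3                # desired state: three pods at all times
    selector:
      matchLabels:
        app: my-app
    template:                  # the pod template each replica is created from
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: web
          image: nginx:1.25    # which image the pods should use

If a pod crashes or a node fails, Kubernetes notices the divergence from the declared three replicas and starts a replacement.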

You can visualize a Kubernetes cluster as two parts: a control plane, and a series of compute nodes (typically servers or virtual servers).

The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use.

The nodes, meanwhile, follow instructions from the control plane and do the actual work of running the applications and workloads. Each node is its own Linux® environment, and could be either a physical or virtual machine. A Kubernetes cluster needs at least one compute node, but will normally have many.

A Kubernetes node runs pods, with each pod representing a single instance of an application. A pod is made up of a container, or a series of tightly coupled containers, along with options that govern how the containers are run.
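
For instance, this sketch shows a pod with two tightly coupled containers, a web server and a hypothetical log-shipping sidecar, sharing a scratch volume:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-with-logger
  spec:
    volumes:
    - name: logs
      emptyDir: {}             # scratch space shared by both containers
    containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
      - name: logs
        mountPath: /var/log/nginx
    - name: log-shipper
      image: ghcr.io/example/log-shipper:latest   # hypothetical sidecar image
      volumeMounts:
      - name: logs
        mountPath: /logs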

A multitude of Kubernetes services work together to automatically identify which node is best suited for each task, allocate resources, and assign pods on that node to fulfill the requested work. Kubernetes automatically routes requests to the right pod, no matter where the pod moves in the cluster, or even if it has been replaced.
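
A Kubernetes Service is one way this routing is expressed. The sketch below, with hypothetical names, gives a stable address to whichever healthy pods currently carry the matching label, wherever they are scheduled:

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app
  spec:
    selector:
      app: my-app        # route traffic to any pod with this label
    ports:
    - port: 80           # the port the service exposes
      targetPort: 80     # the port the container listens on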

Keeping it all under control, Kubernetes provides a unified application programming interface (API) to manage, create, and configure the cluster. 

Let’s take a deeper look at what’s happening in a Kubernetes cluster.

The control plane is the nerve center, home to the components that control the cluster, along with data about the cluster’s state and configuration. These core Kubernetes components handle the work of making sure containers are running in sufficient numbers and with the necessary resources.

The Kubernetes API server, or kube-apiserver, is the front end of the Kubernetes control plane, handling internal and external requests. The API server determines whether a request is valid and, if it is, processes it. You can access the API through REST calls, through the kubectl command-line interface, or through other command-line tools such as kubeadm.

The Kubernetes scheduler, or kube-scheduler, considers the resource needs of a pod, such as CPU or memory, along with the health of the cluster. Then it schedules the pod to an appropriate compute node.
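
Those resource needs are declared in the pod spec itself. This fragment of a container definition, with illustrative values, shows the requests the scheduler reserves on a node and the limits enforced at runtime:

  spec:
    containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:            # what the scheduler reserves when placing the pod
          cpu: "250m"        # a quarter of a CPU core
          memory: "128Mi"
        limits:              # upper bound enforced while the container runs
          cpu: "500m"
          memory: "256Mi"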

Controllers take care of actually running the cluster, and the kube-controller-manager bundles several controller functions into a single process. One controller consults the scheduler and makes sure the correct number of pods is running. If a pod goes down, another controller notices and responds.

Configuration data and information about the state of the cluster lives in etcd, a key-value store database. Fault-tolerant and distributed, etcd is designed to be the ultimate source of truth about your cluster.

To run the containers, each node has a container runtime engine. Docker is one example, but Kubernetes also supports other runtimes that implement its Container Runtime Interface (CRI), such as containerd and CRI-O.

Each node contains a kubelet, a tiny application that communicates with the control plane. The kubelet makes sure containers are running in a pod. When the control plane needs something to happen in a node, the kubelet executes the action.

Each node also contains kube-proxy, a network proxy for facilitating Kubernetes networking services. The kube-proxy handles network communications inside or outside of your cluster—relying either on your operating system’s packet filtering layer, or forwarding the traffic itself.

Beyond just managing the containers that run an application, Kubernetes can also manage the application data attached to a cluster. Kubernetes allows users to request storage resources without having to know the details of the underlying storage infrastructure. Persistent volumes are specific to a cluster, rather than a pod, and thus can outlive the life of a pod.
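
For example, a PersistentVolumeClaim, sketched here with a hypothetical name and storage class, requests 10GiB of storage without saying anything about the disks behind it; Kubernetes binds the claim to a matching persistent volume:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: my-app-data
  spec:
    accessModes:
    - ReadWriteOnce               # mountable read-write by a single node
    resources:
      requests:
        storage: 10Gi             # how much storage, not where it comes from
    storageClassName: standard    # hypothetical class mapped to real infrastructure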

The container images that Kubernetes relies on are stored in a container registry. This can be a registry you configure, or a third-party registry.

Read more about Kubernetes architecture

Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers. There are still servers in serverless, but they are abstracted away from app development. Developers can simply package their code in containers for deployment.

Once deployed, serverless apps respond to demand and automatically scale up and down as needed. Serverless offerings from public cloud providers are usually metered on-demand through an event-driven execution model. As a result, when a serverless function is sitting idle, it doesn’t cost anything.

Kubernetes is a popular choice for running serverless environments. But Kubernetes by itself doesn’t come ready to natively run serverless apps. Knative is an open source community project which adds components for deploying, running, and managing serverless apps on Kubernetes.

With Knative, you create a service by packaging your code as a container image and handing it to the system. Your code only runs when it needs to, with Knative starting and stopping instances automatically.
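
A minimal Knative Service looks much like any other Kubernetes resource. In this sketch the name and image are hypothetical; Knative routes traffic to the container and scales it with demand, down to zero when idle:

  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello
  spec:
    template:
      spec:
        containers:
        - image: ghcr.io/example/hello:latest   # hypothetical container image
          env:
          - name: TARGET
            value: "world"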

Read more about serverless

A Kubernetes operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of a Kubernetes user, and to add other application functionality. It builds on the basic Kubernetes resource and controller concepts, but includes domain- or application-specific knowledge to automate the entire life cycle of the software it manages.

A Kubernetes operator can be built to perform almost any Kubernetes action: scaling a complex app, upgrading application versions, or even managing kernel modules for nodes in a computational cluster with specialized hardware. Examples of software and tools that are deployed as Kubernetes operators include the Prometheus Operator for monitoring and the Elastic Kubernetes Operator for automating search.
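
Operators typically extend the Kubernetes API with custom resources. The sketch below is purely illustrative, not the schema of any real operator: a user declares a database at a high level, and the operator translates that declaration into the pods, upgrades, and backups it implies:

  apiVersion: example.com/v1alpha1   # hypothetical API group
  kind: Database                     # hypothetical custom resource
  metadata:
    name: orders-db
  spec:
    version: "14"             # the operator knows how to install and upgrade this
    replicas: 3               # and how to scale and fail over
    backup:
      schedule: "0 2 * * *"   # domain knowledge captured as configuration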

Learn about Kubernetes operators

Kubernetes by itself is open source software for deploying, managing, and scaling containers. Putting it to work in any practical sense takes significant effort. Most organizations will want to add capabilities such as automation, networking, ingress, load balancing, storage, monitoring, logging and log analytics, service mesh, serverless functionality, multicluster management, continuous integration and continuous delivery (CI/CD), and developer productivity tools. Put simply, for most use cases, Kubernetes by itself is not enough.

Many software vendors provide their own versions of Kubernetes, including self-managed distributions, hosted services, installers, and Platform-as-a-Service (PaaS) offerings. The CNCF maintains a list of dozens of certified Kubernetes offerings.

Red Hat® OpenShift® is a CNCF-certified Kubernetes offering, but it also includes much more. Red Hat OpenShift uses Kubernetes as a foundation for a complete platform to deliver cloud-native applications in a consistent way across hybrid cloud environments.

With Kubernetes as its container orchestration engine, Red Hat OpenShift incorporates many more features from the CNCF open source ecosystem, all tested, packaged, and supported by Red Hat. Red Hat OpenShift is available as a public cloud service from major cloud providers such as AWS, Microsoft Azure, Google, and IBM, or as self-managed software on a broad spectrum of bare metal and virtual infrastructure across data centers, public clouds, and the edge.

Read about OpenShift vs. Kubernetes

OKD is a community project of packaged software components needed to run Kubernetes. In addition to Kubernetes, OKD offers developer- and operations-focused tools that help teams speed up application development, efficiently deploy and scale, and maintain a long-term lifecycle. OKD lets developers create, test, and deploy applications on the cloud, while also supporting several programming languages, including Go, Node.js, Ruby, Python, PHP, Perl, and Java.

OKD is the upstream project of Red Hat OpenShift, optimized for continuous application development and deployment. OKD is generally a few releases ahead of OpenShift on features: it is where community updates happen first and where they are trialed for enterprise use.

The primary difference between OKD and OpenShift is that Red Hat OpenShift is validated and tested by Red Hat, and comes with subscription benefits to meet the requirements for enterprise operations. A Red Hat OpenShift subscription includes technical support, security response teams, long-term support options, validated third party operators, certified databases and middleware, and more.

Read about OpenShift vs. OKD

Red Hat is a leader and active builder of open source container technology, including Kubernetes, and creates essential tools for securing, simplifying, and automatically updating your container infrastructure.

With Red Hat OpenShift, your developers can make new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration that can turn a good idea into new business quickly and easily. If you’re looking to deploy or move your Kubernetes workloads to a managed cloud service, OpenShift is also available as a cloud-native service on Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud, and other providers.

Building on a foundation of OpenShift, you can use Red Hat Advanced Cluster Management and Red Hat Ansible® Automation Platform together to help you efficiently deploy and manage multiple Kubernetes clusters across regions, including public cloud, on-premises, and edge environments.
