This page describes how to deploy a Flink job and session cluster on Kubernetes.
Info: This page describes deploying a standalone Flink session on top of Kubernetes. For information on native Kubernetes deployments, read here.
Please follow Kubernetes’ setup guide in order to deploy a Kubernetes cluster. If you want to run Kubernetes locally, we recommend using MiniKube.
If you are using MiniKube, make sure to execute
minikube ssh 'sudo ip link set docker0 promisc on'
before deploying a Flink cluster. Otherwise Flink components are not able to reference themselves through a Kubernetes service.
A Flink session cluster is executed as a long-running Kubernetes Deployment. Note that you can run multiple Flink jobs on a session cluster. Each job needs to be submitted to the cluster after the cluster has been deployed.
A basic Flink session cluster deployment in Kubernetes has three components: a Deployment which runs the JobManager, a Deployment for a pool of TaskManagers, and a Service exposing the JobManager's REST and UI ports.
Using the resource definitions for a session cluster, launch the cluster with the kubectl command:
kubectl create -f flink-configuration-configmap.yaml
kubectl create -f jobmanager-service.yaml
kubectl create -f jobmanager-deployment.yaml
kubectl create -f taskmanager-deployment.yaml
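To verify that the session cluster has come up, a quick check could look like the following sketch; it assumes the default namespace, and the exact resource names depend on the definitions above:

# list the JobManager and TaskManager deployments and their pods
kubectl get deployments
kubectl get pods
# the JobManager service created by jobmanager-service.yaml (service name assumed to be flink-jobmanager)
kubectl get svc flink-jobmanager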
Note that you can define your own customized flink-conf.yaml options within flink-configuration-configmap.yaml.
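If you later want to inspect or tweak these options on a running cluster, a hedged example (assuming the ConfigMap defined in flink-configuration-configmap.yaml is named flink-config) is:

# show the configuration currently stored in the ConfigMap
kubectl get configmap flink-config -o yaml
# edit it in place; restart the jobmanager/taskmanager pods so they pick up the new flink-conf.yaml
kubectl edit configmap flink-config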
You can then access the Flink UI via different ways:

- kubectl proxy: run kubectl proxy in a terminal and access the web UI through the Kubernetes API server proxy.
- kubectl port-forward: run kubectl port-forward ${flink-jobmanager-pod} 8081:8081 to forward your jobmanager's web UI port to local 8081, then navigate to http://localhost:8081 in your browser.
- NodePort service on the rest service of the jobmanager: run kubectl create -f jobmanager-rest-service.yaml to create the NodePort service on the jobmanager (the example jobmanager-rest-service.yaml can be found in the appendix), then run kubectl get svc flink-jobmanager-rest to find the node-port of this service and navigate to http://<public-node-ip>:<node-port> in your browser. Similarly to the port-forward solution, you can also submit jobs to the cluster through this address; see the example command after this list.
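As a sketch of such a submission, assuming the Flink distribution is available locally and using the bundled WordCount example jar (the addresses and jar path are placeholders):

# submit through the port-forwarded address
./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar
# or through the NodePort address of the jobmanager rest service
./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar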
In order to terminate the Flink session cluster, use kubectl:
kubectl delete -f jobmanager-deployment.yaml
kubectl delete -f taskmanager-deployment.yaml
kubectl delete -f jobmanager-service.yaml
kubectl delete -f flink-configuration-configmap.yaml
A Flink job cluster is a dedicated cluster which runs a single job. The job is part of the image and, thus, there is no extra job submission needed.
The Flink job cluster image needs to contain the user code jars of the job for which the cluster is started. Therefore, one needs to build a dedicated container image for every job. Please follow these instructions to build the Docker image.
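As a rough sketch, assuming you have written a Dockerfile that adds the job's user code jars to the Flink image and have a registry that your Kubernetes cluster can pull from (the image name and registry are placeholders):

# build the job-specific image from such a Dockerfile
docker build -t <registry>/flink-job-cluster:latest .
# push it to a registry accessible from the Kubernetes cluster
docker push <registry>/flink-job-cluster:latest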
As described on the plugins documentation page, plugins must be copied to the correct location in the Flink installation in order to work.
The simplest way to enable plugins for use on Kubernetes is to modify the provided Flink Docker image by adding an additional layer. This does, however, assume you have a Docker registry available to which you can push images and which is accessible from your Kubernetes cluster.
How this can be done is described on the Docker Setup page.
With such an image created, you can now start your Kubernetes-based Flink cluster, which can use the enabled plugins.
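For illustration, a minimal sketch of such an additional layer, assuming the upstream flink image and the flink-s3-fs-hadoop plugin that ships in the distribution's opt/ directory (the plugin choice and image tag are placeholders):

FROM flink:latest
# copy the plugin jar from the distribution's opt/ directory into its own folder under plugins/
RUN mkdir -p ./plugins/flink-s3-fs-hadoop && \
    cp ./opt/flink-s3-fs-hadoop-*.jar ./plugins/flink-s3-fs-hadoop/

The resulting image can then be pushed to your registry and referenced from the Deployment definitions.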
In order to deploy a job cluster on Kubernetes, please follow these instructions.
An early version of a Flink Helm chart is available on GitHub.
The Deployment definitions use the pre-built image flink:latest, which can be found on Docker Hub. The image is built from this GitHub repository.
flink-configuration-configmap.yaml
jobmanager-deployment.yaml
taskmanager-deployment.yaml
jobmanager-service.yaml
jobmanager-rest-service.yaml. Optional service that exposes the jobmanager rest port as a public Kubernetes node's port.