Configuration in this directory creates an AWS EKS cluster with Karpenter provisioned for managing compute resource scaling. In the example provided, Karpenter is provisioned on top of an EKS Managed Node Group.
To provision the provided configurations you need to execute:

```bash
terraform init
terraform plan
terraform apply --auto-approve
```
Once the cluster is up and running, you can check that Karpenter is functioning as intended with the following commands:

```bash
# First, make sure you have updated your local kubeconfig
aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter

# Second, deploy the Karpenter NodeClass/NodePool
kubectl apply -f karpenter.yaml

# Third, deploy the example deployment
kubectl apply -f inflate.yaml

# You can watch Karpenter's controller logs with
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
```
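The contents of `karpenter.yaml` are not reproduced here. As a rough sketch only, a Karpenter v1 `NodePool` paired with an `EC2NodeClass` typically looks like the following; the names, AMI alias, IAM role, discovery tags, and capacity types below are illustrative assumptions, not the example's actual values:

```yaml
# Illustrative sketch only -- the actual karpenter.yaml in this example may differ
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default                    # hypothetical name
spec:
  amiSelectorTerms:
    - alias: al2023@latest         # assumes Amazon Linux 2023 AMIs
  role: karpenter-node-role        # hypothetical IAM role name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
  limits:
    cpu: 100                       # cap on total provisioned vCPUs
```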
Verify that the Amazon EKS addon Pods are running in the Managed Node Group and that the `inflate` application Pods are running on Karpenter-provisioned nodes:
```bash
kubectl get nodes -L karpenter.sh/registered
```

```text
NAME                                        STATUS   ROLES    AGE   VERSION               REGISTERED
ip-10-0-13-51.eu-west-1.compute.internal    Ready    <none>   29s   v1.31.1-eks-1b3e656   true
ip-10-0-41-242.eu-west-1.compute.internal   Ready    <none>   35m   v1.31.1-eks-1b3e656
ip-10-0-8-151.eu-west-1.compute.internal    Ready    <none>   35m   v1.31.1-eks-1b3e656
```
```bash
kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
```

```text
NAME                           NODE
inflate-67cd5bb766-hvqfn       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-jnsdp       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-k4gwf       ip-10-0-41-242.eu-west-1.compute.internal
inflate-67cd5bb766-m49f6       ip-10-0-13-51.eu-west-1.compute.internal
inflate-67cd5bb766-pgzx9       ip-10-0-8-151.eu-west-1.compute.internal
aws-node-58m4v                 ip-10-0-3-57.eu-west-1.compute.internal
aws-node-pj2gc                 ip-10-0-8-151.eu-west-1.compute.internal
aws-node-thffj                 ip-10-0-41-242.eu-west-1.compute.internal
aws-node-vh66d                 ip-10-0-13-51.eu-west-1.compute.internal
coredns-844dbb9f6f-9g9lg       ip-10-0-41-242.eu-west-1.compute.internal
coredns-844dbb9f6f-fmzfq       ip-10-0-41-242.eu-west-1.compute.internal
eks-pod-identity-agent-jr2ns   ip-10-0-8-151.eu-west-1.compute.internal
eks-pod-identity-agent-mpjkq   ip-10-0-13-51.eu-west-1.compute.internal
eks-pod-identity-agent-q4tjc   ip-10-0-3-57.eu-west-1.compute.internal
eks-pod-identity-agent-zzfdj   ip-10-0-41-242.eu-west-1.compute.internal
karpenter-5b8965dc9b-rx9bx     ip-10-0-8-151.eu-west-1.compute.internal
karpenter-5b8965dc9b-xrfnx     ip-10-0-41-242.eu-west-1.compute.internal
kube-proxy-2xf42               ip-10-0-41-242.eu-west-1.compute.internal
kube-proxy-kbfc8               ip-10-0-8-151.eu-west-1.compute.internal
kube-proxy-kt8zn               ip-10-0-13-51.eu-west-1.compute.internal
kube-proxy-sl6bz               ip-10-0-3-57.eu-west-1.compute.internal
```
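To see Karpenter react to additional load, you can scale the example deployment up and watch nodes being provisioned; the replica count here is an arbitrary choice for illustration, not a value from this example:

```shell
# Scale the example deployment to create unschedulable Pods (replica count is illustrative)
kubectl scale deployment inflate --replicas 10

# Watch Karpenter register new nodes to satisfy the pending Pods
kubectl get nodes -L karpenter.sh/registered --watch
```

Scaling the deployment back down should eventually cause Karpenter to consolidate and remove the nodes it created.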
Because Karpenter manages the state of node resources outside of Terraform, Karpenter-created resources need to be de-provisioned first, before removing the remaining resources with Terraform.
- Remove the example deployment created above and any nodes created by Karpenter:

```bash
kubectl delete deployment inflate
```
- Remove the resources created by Terraform:

```bash
terraform destroy --auto-approve
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3.2 |
| aws | >= 5.83 |
| helm | >= 2.7 |
## Providers

| Name | Version |
|------|---------|
| aws | >= 5.83 |
| aws.virginia | >= 5.83 |
| helm | >= 2.7 |
## Modules

| Name | Source | Version |
|------|--------|---------|
| eks | ../.. | n/a |
| karpenter | ../../modules/karpenter | n/a |
| karpenter_disabled | ../../modules/karpenter | n/a |
| vpc | terraform-aws-modules/vpc/aws | ~> 5.0 |
## Resources

| Name | Type |
|------|------|
| helm_release.karpenter | resource |
| aws_availability_zones.available | data source |
| aws_ecrpublic_authorization_token.token | data source |

## Inputs

No inputs.

## Outputs

No outputs.