Ingress Stuck in Redeployment Loop After Being Removed #3871
Comments
Thanks for your report. We will look at this as soon as possible.
/kind bug
If it's of any value, I've also tried with Helm chart 1.8.0 and app version 2.8.0 and experience the same issue. Version 2.8.0 works on EKS/Kubernetes 1.30, so I suspect this is an issue with k8s 1.31. I was in the middle of preparing an out-of-place upgrade of our cluster to 1.31 when I came across this issue. Please let me know if there is anything else I can do or provide to help, as this bug is blocking our upgrade. Thank you
Same here. It got stuck deleting with this error: `"error":"ingress: default/pax-custom-domain-certificate-k8ntw-wjpwd: no certificate found for host: xxx.dummy.nl"`
Reproduction scenario:
Delete one ingress; it remains in a deleting state and will not finalize.
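An object stuck "deleting" usually means a finalizer is still set on it. A minimal sketch for inspecting this and, as a last resort, forcing the delete through (the ingress name and namespace are placeholders; the finalizer name `ingress.k8s.aws/resources` is the one this controller typically sets, but verify against your object):

```shell
# Show whether the Ingress has a deletion timestamp and which
# finalizers are still blocking removal.
kubectl get ingress <name> -n <namespace> \
  -o jsonpath='{.metadata.deletionTimestamp}{" "}{.metadata.finalizers}{"\n"}'

# Last-resort workaround: clear the finalizers so the delete completes.
# WARNING: this skips the controller's AWS cleanup, so any remaining
# ALB / security group resources must be deleted manually afterwards.
kubectl patch ingress <name> -n <namespace> --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```

This only unblocks the Kubernetes object; it does not fix the underlying reconcile loop.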
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Describe the bug
On a fresh Kubernetes cluster, I deploy an ingress. It deploys successfully, I wait for all resources to be up, and I'm able to access the service.
I then delete the ingress. The AWS load balancer begins to delete, and shortly after, the controller initiates a redeployment on its own. It then becomes stuck in a loop of redeploying and attempting to delete a security group that no longer exists.
Steps to reproduce
kubectl apply -f echoserver.yml
Wait until all resources are created and up.
Delete
kubectl delete -f echoserver.yml
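The steps above can be sketched as a single shell session. The ingress name `echoserver` and the `default` namespace are assumptions for illustration; the label selector is the one from the report:

```shell
# Apply the manifest and wait until the ALB address appears on the Ingress.
kubectl apply -f echoserver.yml
kubectl get ingress echoserver -n default -w   # watch until ADDRESS is populated

# Delete, then follow the controller logs. With the bug, the controller
# recreates the load balancer and loops trying to delete a security
# group that no longer exists.
kubectl delete -f echoserver.yml
kubectl logs -f -n kube-system \
  --selector=app.kubernetes.io/instance=aws-load-balancer-controller
```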
Expected outcome
I expect the ingress and its AWS resources to be deleted, not redeployed.
Environment
Additional Context:
echoserver.yaml
kubectl logs -f --selector=app.kubernetes.io/instance=aws-load-balancer-controller -n kube-system
log file
logs-from-aws-load-balancer-controller-in-aws-load-balancer-controller-6b6457c65b-vs2tj.log
aws-load-balancer-controller configuration settings: installed via Helm chart 1.8.4 (app version 2.8.3)