
Support using the existing NLB in service resources #3247

Open
shiyuhang0 opened this issue Jun 16, 2023 · 13 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@shiyuhang0

Is your feature request related to a problem?

A Kubernetes Service of type LoadBalancer will create a new NLB with an instance or IP target type.

Now I have another service on a different port. I don't want to create a new NLB for it; what I want is to reuse the existing NLB by adding a new listener and a new target group binding for this new service.

I have looked through the docs but could not find an answer. If I missed something and the controller can already do this, please tell me how.

Describe the solution you'd like
Add an annotation like this:
service.beta.kubernetes.io/aws-load-balancer-nlb-arn: ${nlb_arn}

Then the controller can create a new listener and target group, plus a TargetGroupBinding, on this existing NLB.
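
To make the ask concrete, here is a rough Terraform sketch of how this could look, assuming the NLB already exists outside the controller. The aws-load-balancer-nlb-arn annotation is the one proposed in this issue and does not exist in the controller today; all names are placeholders:

```hcl
# Existing, externally managed NLB (created outside the controller).
resource "aws_lb" "shared" {
  name               = "shared-nlb"
  load_balancer_type = "network"
  internal           = true
  subnets            = var.private_subnet_ids # hypothetical variable
}

# A second Service that should reuse that NLB instead of getting its own.
resource "kubernetes_service_v1" "second_app" {
  metadata {
    name      = "second-app"
    namespace = "default"
    annotations = {
      # Proposed annotation from this issue -- not implemented today.
      "service.beta.kubernetes.io/aws-load-balancer-nlb-arn" = aws_lb.shared.arn
    }
  }
  spec {
    type     = "LoadBalancer"
    selector = { app = "second-app" }
    port {
      port        = 8443
      target_port = 8443
    }
  }
}
```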

Describe alternatives you've considered
A description of any alternative solutions or features you've considered.

@oliviassss
Collaborator

@shiyuhang0 Hi, supporting an existing ALB/NLB for the LBC is on our roadmap. We're tracking it in
#228
#2638

@Wilderone

Hi @oliviassss! I see #2638 has been closed as "not planned" and #228 is only about ALB, with no word about NLB. Also, I can't see anything related to NLB in the 2.7 plans. Could you please clarify the situation around this issue? There are a few discussions/issues about the ability to reuse an NLB, but no answers :)

@raffraffraff

raffraffraff commented Nov 30, 2023

Oof, I just ran into this. I have exactly the same use case: I'm using API Gateway, and one of my integrations needs to target a private service in my EKS cluster. I don't want to post a pointless "me too", so I'll elaborate on why this bugs me:

This feature is important for better IaC, because an NLB that gets auto-created by the AWS Load Balancer Controller has to be considered ephemeral. It is also unknown to Terraform, so generally I have to copy-paste its details into my Terraform vars. Even if I fully deploy my Kubernetes services / ingresses first, it's painful to perform the NLB ARN lookup in Terraform so that I can use it in the API Gateway - this requires two chained Terraform data resources (roughly the chain sketched after this list):

  • The first queries Kubernetes for the service and parses the external IP
  • The second queries AWS for a load balancer that matches the external IP and returns its ARN
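
A rough sketch of that chain, with hypothetical names, an assumed attribute path for the service status, and an assumed format for the generated NLB's DNS name (here matching on the published hostname rather than the IP, but the shape is the same) - which is exactly where it gets brittle:

```hcl
# Step 1: read the Service that the controller reconciled and grab the NLB
# hostname it published.
data "kubernetes_service_v1" "app" {
  metadata {
    name      = "my-app"   # hypothetical Service
    namespace = "default"
  }
}

locals {
  # e.g. "k8s-default-myapp-0123456789-aaaabbbbcccc.elb.eu-west-1.amazonaws.com";
  # assumes the LB name is everything before the final "-<suffix>".
  nlb_hostname = data.kubernetes_service_v1.app.status[0].load_balancer[0].ingress[0].hostname
  nlb_name     = regex("^(.+)-[^-.]+\\.elb\\.", local.nlb_hostname)[0]
}

# Step 2: resolve the ARN from that name so API Gateway can reference it.
data "aws_lb" "app" {
  name = local.nlb_name
}

output "nlb_arn" {
  value = data.aws_lb.app.arn
}
```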

But it's brittle anyway, because the NLB can disappear and get replaced by a new one with a different ARN, breaking my API Gateway integrations. It also forces me to fully deploy my Kubernetes services first and then not touch them. Generally, we try to deploy all infrastructure first and then let CI/CD deploy services to Kubernetes. This forces us to break up our operations so that teams depend on each other, which causes delays: "you go first, then I'll do this bit while you twiddle your thumbs, and when I'm done I'll let you know you can continue..." etc.

If it were possible to get the LBC to manage targets on an existing NLB without destroying it, I could simply deploy the NLB along with API Gateway in a single terraform apply, and the service team could reference the NLB in their FluxCD deployments.

@raffraffraff

Possible workaround? I need to experiment with this, but from other issues related to this one, I think the main problem is that the AWS Load Balancer Controller deletes the NLB if all target groups are removed. As a workaround, could we modify the IAM role for the LBC service account to deny it the right to delete the NLB? This would of course cause errors, but it should prevent the load balancer from getting deleted.
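
For anyone who wants to try it, here is a minimal Terraform sketch of that deny, assuming the controller runs with an IRSA role named aws-load-balancer-controller:

```hcl
# Explicit Deny so the controller cannot delete load balancers, even when the
# Service is removed. The controller will then log AccessDenied errors instead.
resource "aws_iam_role_policy" "lbc_deny_delete_lb" {
  name = "deny-delete-load-balancer"
  role = "aws-load-balancer-controller" # assumed IRSA role name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = ["elasticloadbalancing:DeleteLoadBalancer"]
      Resource = "*"
    }]
  })
}
```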

@vrioux

vrioux commented Dec 15, 2023

I see this was planned for 1.6.0. Is it now planned for 1.7.0?

@ellazhao-testo

ellazhao-testo commented Feb 13, 2024

Any plan for this one? Currently we are facing the same issue: the existing NLB is created by Terraform. If we want to reuse it, we need to add tags to this NLB, but when we try to destroy the VerneMQ service, the NLB is also force-deleted by the aws-load-balancer-controller, even though we enabled NLB deletion protection.

@grosser

grosser commented Feb 13, 2024

FYI, we had some success attaching existing NLBs to services by making their AWS tags match what the controller expects, plus setting the name override when necessary.
If you set the same tags on the target group, it also stays around.
For zero downtime, prefill the target group with the new IPs before cutting over.
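
A rough Terraform sketch of that workaround. The tag keys/values below are assumptions copied from what the controller appears to put on NLBs it creates, so verify them against a controller-created NLB in your own account first:

```hcl
# Pre-created NLB tagged so the controller treats it as one of its own.
resource "aws_lb" "existing" {
  name               = "my-app"               # must match the Service's name override
  load_balancer_type = "network"
  internal           = true
  subnets            = var.private_subnet_ids # hypothetical variable

  tags = {
    "elbv2.k8s.aws/cluster"    = var.cluster_name # assumed tag keys, copied from
    "service.k8s.aws/stack"    = "default/my-app" # an NLB the controller created
    "service.k8s.aws/resource" = "LoadBalancer"
  }
}
```

The name override mentioned above is the existing service.beta.kubernetes.io/aws-load-balancer-name annotation on the Service (set to my-app to match the NLB name in this sketch).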

@shraddhabang shraddhabang added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 20, 2024
@ntwkninja

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 3, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 1, 2024
@mausch

mausch commented Oct 7, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 5, 2025