AWS cloud controller manager is unable to manage the nodes in the cluster #916
This issue is currently awaiting triage. If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This:

isn't an error, it's expected behavior when a …
@cartermckinnon We have followed the steps below on the existing 1.26 cluster to make it ready for the 1.27 upgrade.
Now, when upgrading the cluster to 1.27, these are the issues we are facing:
Are you passing a provider ID to the kubelet? CCM should fill in the provider ID if it's missing, but it's generally preferable to just pass it to the kubelet.
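For what it's worth, a minimal sketch of passing the provider ID to the kubelet (the IMDS endpoints are the standard EC2 ones; where the resulting flags go depends on how the kubelet is launched, e.g. `KUBELET_EXTRA_ARGS` or kubeadm's `kubeletExtraArgs`):

```sh
# Sketch: derive the AWS provider ID (aws:///<az>/<instance-id>) from
# instance metadata (IMDSv2) and hand it to the kubelet at registration time.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/placement/availability-zone")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id")

# Flags the kubelet would register with:
echo "--cloud-provider=external --provider-id=aws:///${AZ}/${INSTANCE_ID}"
```

With the provider ID set at registration time, the CCM does not have to reconstruct it from the node name later.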
@cartermckinnon Let me share the 10-kubeadm.conf and kubeadm-config we currently have in 1.26, where in-tree support is enabled:
Now we are planning to move to the out-of-tree AWS cloud controller manager. Could you please guide us on what changes we need to make to migrate from in-tree to out-of-tree? Currently we have deployed the aws-cloud-controller-manager DaemonSet and its pods are running, but kube-controller-manager is also still running with the above configuration.
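Not an authoritative migration guide, but as a rough sketch of the target state (kubeadm v1beta3 field names; the version and file locations are placeholders for your setup): the kubelets register with `--cloud-provider=external`, and the in-tree `cloud-provider=aws` / `cloud-config` settings are removed from kube-apiserver and kube-controller-manager so that only the aws-cloud-controller-manager DaemonSet talks to AWS.

```yaml
# Hypothetical kubeadm configuration fragment for the external cloud provider.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0
apiServer:
  extraArgs: {}        # no cloud-provider / cloud-config here any more
controllerManager:
  extraArgs: {}        # no cloud-provider / cloud-config here any more
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # kubelet defers node initialization to the CCM
```

Nodes that were originally registered by the in-tree provider generally need their kubelet flag changed (and the kubelet restarted) as well; otherwise kube-controller-manager and the CCM can end up disagreeing about who owns node initialization.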
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
What happened: We are running a Kubernetes 1.26 cluster built with kubeadm on AWS resources. We wanted to upgrade our clusters to 1.28 (1.26 -> 1.27 -> 1.28), and as per the upgrade notes we tried to move from the in-tree AWS cloud provider to the external AWS cloud provider.
As part of the upgrade process we deployed the new 1.27 nodes along with the AWS cloud controller manager in the cluster, after which we scaled down the 1.26 nodes.
What you expected to happen: The issue we face is that the scaled-down 1.26 etcd and worker nodes get removed from the cluster, but the 1.26 control plane nodes still show up in the cluster even after their EC2 instances have been removed, e.g.:
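One generic way to narrow this down (not from the original report) is to check whether the lingering control plane Node objects carry a `providerID` that the CCM can map back to an EC2 instance, since the node lifecycle controller only deletes nodes whose backing instance it can confirm is gone:

```sh
# List each node with the provider ID the CCM uses to find its EC2 instance.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID
```

An empty providerID on the stale control plane nodes would point back at the registration/provider-ID discussion above.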
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
We are seeing this error in the cloud controller manager pod logs:
We have set the hostname according to the prerequisites, but we still get this error.
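A quick way to double-check that prerequisite (a generic sketch, assuming IMDSv2 is reachable from the node) is to compare the registered node names with the private DNS name EC2 reports:

```sh
# The node name the kubelet registers should match the EC2 private DNS name.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/local-hostname"; echo
hostname
kubectl get nodes -o wide   # node names here should match the private DNS names
```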
Environment: kubeadm
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration: aws
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):

/kind bug