k8s-dashboard exec is giving black screen intermittently & sometime with error - Sending Error: Error: http status 404 #8838
Comments
Hi @floreks, can I offer any specific details/logs/tcp traces for your analysis?
On our side, it's quite simple. The API exposes the WebSocket endpoint and the frontend connects to it. If you are using additional proxies, you need to make sure they can reliably forward such connections and keep them alive. I can imagine that if the connection is dropped and/or retried, the next exec can be forwarded to a different API pod than the previous one, which could result in an error. On our side, the only thing I can think of that we could improve is to add a few retry attempts on the frontend side so that you don't have to reload the page. That is only an improvement, though. Everything else has to be done on your side as part of your proxy configuration.
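The frontend-side retry the maintainer mentions is not implemented in the dashboard; as a rough illustration of the idea, here is a language-neutral sketch in Python. It assumes a hypothetical zero-argument `connect` callable that raises `OSError` when the WebSocket handshake fails (names and error type are illustrative, not the dashboard's actual code):

```python
import time

def connect_with_retry(connect, attempts=3, base_delay=0.5):
    """Call `connect` up to `attempts` times with exponential backoff.

    `connect` is any zero-argument callable that raises OSError on a
    failed connection attempt and returns a session object on success.
    """
    last_err = None
    for i in range(attempts):
        try:
            return connect()
        except OSError as err:
            last_err = err
            # back off 0.5s, 1s, 2s, ... before the next attempt
            time.sleep(base_delay * (2 ** i))
    raise last_err
```

With something like this in the frontend, a transient drop by an intermediate proxy would be retried transparently instead of surfacing as a black terminal screen that requires a page reload.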
Observation: when I scale the api deployment to a single replica (the default used to be 3, and has now been changed in the Helm chart for 7.1.3), it works consistently.
That's expected. If the connection is dropped at any point by, e.g., your proxy, then with a single replica it will always reconnect to the same pod, which keeps the connection open internally for some time. With more than 1 replica there is no guarantee that the request will be forwarded to the same pod on reconnect, so it would always have to create a new connection.
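If the proxy in front of the API supports session affinity, pinning each browser to one API pod avoids this reconnect problem while keeping multiple replicas. A minimal sketch for ingress-nginx follows (the Ingress name is hypothetical; the annotation names are ingress-nginx's, so other proxies such as OpenUnison need their own equivalent configuration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard  # hypothetical name, for illustration
  annotations:
    # Pin each client to one backend pod via a cookie (ingress-nginx)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "DASHBOARD_AFFINITY"
    # Keep long-lived exec WebSocket connections from timing out
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  # ... rules/backend for the dashboard service as in your existing Ingress
```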
Bump
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
What happened?
When trying to exec into a pod, we intermittently get a black screen, and sometimes the error: Sending Error: Error: http status 404.
Error in the dashboard pod logs:
2024/03/26 09:31:23 handleTerminalSession: can't Recv: sockjs: session not in open state
What did you expect to happen?
Exec should work every time, giving a pod session on every attempt.
How can we reproduce it (as minimally and precisely as possible)?
EKS versions tried, with the same behaviour observed on both: 1.26.7 and 1.28.5.
Dashboard version - 6.0.8
chart: kubernetes-dashboard
repoURL: https://kubernetes.github.io/dashboard/
Observed this issue when running 2 replicas of the dashboard pod. With a single replica the issue does not occur. Opened a case with OpenUnison as well: OpenUnison/openunison-k8s#105
Anything else we need to know?
Detailed info about OpenUnison and the ingress setup is provided in OpenUnison/openunison-k8s#105
What browsers are you seeing the problem on?
Chrome, Safari
Kubernetes Dashboard version
6.0.8
Kubernetes version
1.26.7 & 1.28.5
Dev environment
NA