🌱 controller/machine: use unstructured caching client #8896
Conversation
Q: does this apply to essentially all of our live client reads? Or are there cases where the client-go cache doesn't follow this behaviour?
I'm near certain there have been bugs in the past that were solved by moving to the live client, but I don't understand what the difference with these calls is.
I think the difference is whether we can tolerate a stale read or not. There are cases where we can't, e.g. if we create duplicate MachineDeployments because we didn't see the one we created before; basically, we don't want to implement logic that tries to roll back what went wrong earlier. In the Machine case I think it's fine to just continue reconciling until we eventually get the "final" BootstrapConfig / InfraMachine. (Btw, I ran tests, and even with hundreds of clusters and thousands of machines I barely ran into the case where the read from the cache was stale, just cases where the BootstrapConfig / InfraMachine were updated concurrently during Machine reconcile.)

It's also crucial that the Machine controller watches BootstrapConfig / InfraMachine. This guarantees that we get another reconcile with the up-to-date BootstrapConfig / InfraMachine. That is not always the case, so I think it comes down to a case-by-case decision. (Happy to look at specific previous bugs. I had the same feeling you have, but based on debugging through client-go, running experiments with logs, and the documentation, I'm pretty sure this is just how informers behave.)

To make another example: I think we had cases where e.g. the KCP controller was reading a stale KCP object. But the question is how stale that object really was, or whether a new update on the KCP object came in after the reconcile started (which of course then also led to a subsequent reconcile).

Some more context about the Machine controller specifically: even after the BootstrapConfig / InfraMachine stabilize, we still get a few additional reconciles on the Machine object (probably because of updates on the Node). Additionally, we also have the 10m resyncPeriod.
I'll take a look at the e2e tests. EDIT: Oops, forgot to also add the changes to main.go :)
Force-pushed from 08e2f36 to 1946e23
This is great @sbueringer. In our scale testing, we are seeing the read performance of
/test pull-cluster-api-e2e-full-main
/lgtm

I think it is OK to use cached reads for external objects in the Machine controller, given that, as explained above, we are watching those objects and the Machine controller will reconcile again as soon as the BootstrapConfig or InfraMachine changes (also, the Machine controller is just waiting for provisioning to happen, so no harm is done if we wait for the next reconcile).

Looking at this from another perspective, I would say the current implementation is more due to the limitations of the cache (or of the cache options) at the time we first implemented this code. Over time, not only has controller-runtime improved a lot (kudos to the team), but we also have a better and deeper understanding of how all this works, and we can now make changes like this in a very surgical way.

Note: eventually, in a follow-up, we can optimize the memory footprint of the cache for BootstrapConfig and InfraMachine by keeping in memory only the subset of object fields defined by the contract and dropping everything else.
LGTM label has been added. Git tree hash: ec426561a3c74aa88db7ea7e9a5bc8eac73365af
Signed-off-by: Stefan Büringer [email protected]
Force-pushed from 1946e23 to e7f1621
/lgtm
LGTM label has been added. Git tree hash: 8270820dbfd9feb96ee32340f354df4174d45875
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: fabriziopandini
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/area machine
Signed-off-by: Stefan Büringer [email protected]
What this PR does / why we need it:
tl;dr
I think we can cache all our Get calls for unstructured objects in the Machine controller. This leads to huge performance improvements at scale (an average Machine reconcile goes from ~1 second to double-digit milliseconds).
Most of this PR is just updating the tests.
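For context, one way to opt unstructured objects into the cache is via the manager's client options; this is a sketch against the controller-runtime v0.15 API, and the exact wiring in this PR may differ:

```go
// Sketch only: assumes controller-runtime v0.15+.
import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func newManager() (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Client: client.Options{
			Cache: &client.CacheOptions{
				// Serve reads for unstructured objects from the cache
				// instead of doing live calls against the API server.
				Unstructured: true,
			},
		},
	})
}
```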
Why do I think it's safe to cache the unstructured gets?
While a Machine reconcile could see stale BootstrapConfigs / InfraMachines, based on experiments and the upstream documentation, every update on a BootstrapConfig / InfraMachine triggers another Machine reconcile, and by the time that reconcile runs, the BootstrapConfig / InfraMachine has already been updated in the cache.
This is the case because Kubernetes informers always first update the cache before they notify event handlers. (Event handlers eventually enqueue a reconcile request for our Machines)
(Source: https://github.com/kubernetes/sample-controller/blob/master/docs/controller-client-go.md)
The diagram there shows that once an event is received in 1), the informer always updates the cache in 5) before triggering the event handler in 6). A controller-runtime controller reconciles roughly after 8).
Please note that there is no way to guarantee that a Machine reconcile always uses a 100% up-to-date BootstrapConfig / InfraMachine: even with a live client, the reconciler cannot see writes that happen during the Machine reconcile.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Related #8814