
resolvers: run in separate processes #538

Open
oliver-sanders opened this issue Dec 8, 2023 · 8 comments

Comments

@oliver-sanders
Member

oliver-sanders commented Dec 8, 2023

In multi-user setups we may have multiple users subscribing to multiple workflows simultaneously.

Large workflows can cause heavy server load in some cases (e.g. #547). Because we handle each request synchronously on the same process, this means larger updates can hold up other updates. In the extreme, this can cause UIs to show as disconnected as a result of websocket heartbeat timeout.

Ideally we would find a way to run the resolvers for each workflow in separate processes to isolate them from each other. Though we would probably want to limit the number of processes and distribute subscriptions across a pool.
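One way to "distribute subscriptions across a pool" would be to pin each workflow to a stable worker index, so all of its subscriptions resolve in the same process. A minimal sketch (all names here are hypothetical, not the UI Server's actual API; `crc32` is used because Python's built-in `hash()` is not stable across processes):

```python
# Sketch: pin each workflow's resolvers to one worker from a fixed pool,
# so one busy workflow cannot stall the others.
from zlib import crc32

N_WORKERS = 4  # cap on resolver processes (assumed configurable)

def worker_for(workflow_id: str) -> int:
    """Map a workflow to a stable worker index in [0, N_WORKERS)."""
    return crc32(workflow_id.encode()) % N_WORKERS

# All subscriptions for one workflow land on the same worker, so that
# workflow's data-store view only needs to exist in one process.
assert worker_for("user/one/run1") == worker_for("user/one/run1")
```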

Original Post

We run subscriptions in a ThreadPoolExecutor.

In Python only one thread can actively run at a time (because of the GIL) so there is no compute parallelism advantage to this (but there may be IO concurrency advantages depending on the implementation details of the code being run).

This means that one large workflow can hog 100% of the CPU of the server, causing issues with other workflows.

We should be able to swap the ThreadPoolExecutor for a ProcessPoolExecutor. I had a quick try, but it didn't work first time: the first update came through, but subsequent ones got stuck, so a little more work is required.

This does raise the question of how many processes the UI Server should be allowed to spawn. I think we should be able to run more subscribers than processes in the pool, but I would need to read the docs to confirm.
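For what it's worth, the `concurrent.futures` Executor API is the same for both pool types, and both queue excess work internally, which is why running more subscribers than pool processes should be fine. A quick demonstration (using ThreadPoolExecutor here for simplicity; ProcessPoolExecutor behaves the same way with respect to queueing, though the work function must then be picklable):

```python
# Swapping ThreadPoolExecutor for ProcessPoolExecutor is a one-line
# change; both pools queue pending tasks beyond max_workers.
from concurrent.futures import ThreadPoolExecutor  # or ProcessPoolExecutor

def resolve(update_id):
    # stand-in for resolving one subscription update
    return update_id * 2

# 2 workers, 6 tasks: the extra 4 wait in the pool's internal queue
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(resolve, range(6)))

assert results == [0, 2, 4, 6, 8, 10]  # map preserves submission order
```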

@oliver-sanders oliver-sanders added this to the pending milestone Dec 8, 2023
@oliver-sanders oliver-sanders changed the title data store: swap threads for processes resolvers: run in separate processes Jan 10, 2024
@oliver-sanders
Member Author

#548 may reduce the urgency on this by optimizing some resolver stuff.

@dwsutherland
Member

Yeah, the reason I didn't go with separate processes is so that the resolvers have access to the central data store (if it works that way)

@oliver-sanders
Member Author

We could potentially put the data store into shared memory but I'm guessing that managing parallel access would be challenging.
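The raw mechanics exist in the standard library (`multiprocessing.shared_memory`, Python ≥ 3.8): one process creates a named block and others attach to it by name. The hard part, as noted, is synchronising parallel access (locks or versioned snapshots would be needed). A minimal sketch with both ends simulated in a single process:

```python
# Sketch of the shared-memory idea: a "writer" creates a named block,
# a "reader" (in real use, a resolver process) attaches by name.
# Real use would also need locking around writes.
from multiprocessing import shared_memory

payload = b'{"tasks": 42}'  # stand-in for a serialised data-store snapshot

# writer side: create the block and publish a snapshot
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# reader side: attach by name and read the snapshot
reader = shared_memory.SharedMemory(name=shm.name)
snapshot = bytes(reader.buf[:len(payload)])
assert snapshot == payload

reader.close()
shm.close()
shm.unlink()  # free the segment once all processes have detached
```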

@dwsutherland
Member

dwsutherland commented Feb 7, 2024

Silly idea, and wider scope..
But if cylc-flow DataStoreMgr is refactored into DeltaMgr and a DataStoreMgr (which is imported by the UIS)..
This would mean UIS window size(s) are isolated from the Scheduler window size(s), so:

  • Users cannot impact the scheduler machines with absurd window sizes.
  • Different users/UISs viewing the same workflow can't affect each other's window size (including Tui)

The DeltaMgr would receive/be called by events from the scheduler internals and package them up for local use (the Scheduler's data store) and abroad (UI Servers), effectively breaking up DataStoreMgr and reforming it into something agnostic to where it is and how it gets its deltas.
It would be a bit of work, and might require the scheduler config being made available somehow.. (if DataStoreMgr is to be imported for independent window sizes)
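To make the proposed split concrete, here is a hypothetical shape for it (names and structure are illustrative only, not cylc-flow's actual classes): the DeltaMgr only turns scheduler events into deltas, while a location-agnostic DataStoreMgr applies whatever deltas it receives, whether local or sent over the network:

```python
# Illustrative sketch of the DeltaMgr / DataStoreMgr split.

class DeltaMgr:
    """Lives in the scheduler: packages events as deltas for any consumer."""

    def event_to_delta(self, task_id, state):
        return {"updated": {task_id: {"state": state}}}


class DataStoreMgr:
    """Location-agnostic: applies deltas whether it runs in the scheduler
    or in a UI Server, each instance with its own window size."""

    def __init__(self):
        self.store = {}

    def apply_delta(self, delta):
        for task_id, fields in delta.get("updated", {}).items():
            self.store.setdefault(task_id, {}).update(fields)


dm, ds = DeltaMgr(), DataStoreMgr()
ds.apply_delta(dm.event_to_delta("1/foo", "running"))
assert ds.store["1/foo"]["state"] == "running"
```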

Would we even consider building a data-store for each subscription? advantages being:

  • Different subscriptions can build different size views (including history).
  • Each subscription can be its own process (?), though at the moment I'm not sure how this yields data back to the UIS websocket subscription.
  • No subscriptions, no/less UIS load.

Disadvantages would include machine load, amongst others..

First part, yes (and I think we've discussed it).. Second part, maybe?

@oliver-sanders
Member Author

oliver-sanders commented Feb 8, 2024

This would mean UIS window size(s) are isolated from the Scheduler window size(s)

That sounds like a good idea. I think this aligns with what I was thinking of in #464.

I think resolving subscriptions off of the same data store still makes sense so long as filtering by the n-window doesn't become prohibitively expensive.

@hjoliver
Member

hjoliver commented Feb 9, 2024

Yes we originally wanted n=0 (or perhaps 1) at the scheduler, since that's all it needs to schedule.

@oliver-sanders
Member Author

(the only reason we would want n=1 at the scheduler is Tui; we can drop back to n=0 if there isn't a client connected)

@oliver-sanders
Member Author

See also #194
