
feature(op-node): pre-fetch receipts concurrently #100

Merged

Conversation

welkin22
Contributor

Description

I added pre-fetch receipt logic in #57. However, if the L1 endpoint is not performing well and its response time increases, the efficiency of pre-fetching drops significantly. To address this, this PR parallelizes pre-fetching across multiple future block heights, so we keep good performance even when the L1 endpoint is in poor condition.

Rationale

To mitigate poor L1 endpoint performance, we add concurrency to improve the efficiency of pre-fetching receipts.

Example

none

Changes

Notable changes:

  • Pre-fetching now runs concurrently across multiple future block heights.
  • The context passed into GoOrUpdatePreFetchReceipts now uses the background context, because the original ctx carries a 3-second timeout and no timeout is needed here.

@welkin22 welkin22 requested a review from bnoieh December 19, 2023 08:22
@welkin22 welkin22 changed the title feature(op-node): concurrent pre-fetch receipts feature(op-node): pre-fetch receipts concurrently Dec 19, 2023
continue
}
s.log.Debug("pre-fetching receipts", "block", currentL1Block)

go func(ctx context.Context, blockInfo eth.L1BlockRef) {


Why not put the line 152 "L1BlockRefByNumber" call here as well?

Contributor Author

Because when we reach the latest block height, L1BlockRefByNumber also serves to make the loop wait for a while. If we parallelized L1BlockRefByNumber as well, the loop would not pause when it reaches the tip; it would keep launching new goroutines trying to process blocks that have not been produced yet.
On the other hand, the performance of the L1BlockRefByNumber interface is not that bad, and it has its own cache, so there is no need to parallelize it.

if err != nil {
s.log.Warn("failed to pre-fetch receipts", "err", err)
time.Sleep(200 * time.Millisecond)
waitErr := s.preFetchReceiptsRateLimiter.Wait(ctx)
Contributor


Would it be better to put this rate-limiter wait before the L1BlockRefByNumber call?
Once the rate-limiting threshold is triggered, that would avoid some unnecessary L1BlockRefByNumber calls.

Contributor Author

All of the L1BlockRefByNumber requests should be necessary:
While we have not yet reached the latest block height, each L1BlockRefByNumber call uses different parameters, and every result is useful.
Once we reach the latest block height, continuously calling L1BlockRefByNumber lets us start the follow-up processing as soon as a new block appears. If we added a limiter before the call, we would still have to wait out the limiter before processing a newly produced block.

krish-nr
krish-nr previously approved these changes Dec 20, 2023
Contributor

@krish-nr krish-nr left a comment

/LGTM

@welkin22
Contributor Author

I changed the value of MaxConcurrentRequests from 10 to 20 in ab8dcdf and set the limiter in GoOrUpdatePreFetchReceipts to half of MaxConcurrentRequests. Otherwise, if GoOrUpdatePreFetchReceipts used up all of MaxConcurrentRequests, other callers that need to make requests would be throttled. See the code here:

func LimitRPC(c client.RPC, concurrentRequests int) client.RPC {

@krish-nr PTAL

@welkin22 welkin22 merged commit 7947c25 into bnb-chain:develop Dec 21, 2023
9 checks passed
welkin22 added a commit to welkin22/opbnb that referenced this pull request Dec 21, 2023
* feature(op-node): concurrent pre-fetch receipts

* use background ctx in GoOrUpdatePreFetchReceipts

* change MaxConcurrentRequests from 10 to 20

---------

Co-authored-by: Welkin <[email protected]>
welkin22 added a commit to welkin22/opbnb that referenced this pull request Dec 22, 2023