Trigger batch correctly at each size #4

Merged

merged 1 commit into nhse-develop-3.0 from nhse-d30-kv1872 on Sep 4, 2023

Conversation

@martinsumner (Author) commented:

As the fold queues batches using riak_kv_queue_manager:bulk_request/2, which is a gen_server:call, there is no need to replicate the riak_kv_clusteraae_fsm:handle_in_batches/4 behaviour of sending a sync message every batch (to prevent the mailbox from overflowing before the queue can overflow to disk).
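
A minimal sketch of why this is safe, assuming nothing about the real riak_kv implementation beyond what is stated above (module and function names here are illustrative): because each batch is submitted with a synchronous gen_server:call, the folding process blocks until the queue process has accepted the batch, so the queue's mailbox can never hold more than one in-flight batch per caller and no separate per-batch sync message is needed.

```erlang
%% Illustrative only - not the riak_kv code.  A fold that submits each
%% batch via gen_server:call/2 is naturally throttled by the reply.
-module(batch_backpressure_sketch).
-behaviour(gen_server).

-export([start_link/0, bulk_request/2, fold_in_batches/3]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

%% Synchronous submission: the caller waits for the reply, providing
%% backpressure without any additional sync messages.
bulk_request(QueuePid, Batch) ->
    gen_server:call(QueuePid, {bulk_request, Batch}).

%% Fold Items into batches of exactly BatchSize, triggering a
%% bulk_request each time a batch reaches that size.
fold_in_batches(QueuePid, BatchSize, Items) ->
    Rem =
        lists:foldl(
            fun(Item, Acc) when length(Acc) == BatchSize - 1 ->
                    ok = bulk_request(QueuePid, lists:reverse([Item | Acc])),
                    [];
               (Item, Acc) ->
                    [Item | Acc]
            end,
            [],
            Items),
    case Rem of
        [] -> ok;
        _ -> bulk_request(QueuePid, lists:reverse(Rem))
    end.

init([]) ->
    {ok, queue:new()}.

handle_call({bulk_request, Batch}, _From, Q) ->
    {reply, ok, lists:foldl(fun queue:in/2, Q, Batch)}.

handle_cast(_Msg, Q) ->
    {noreply, Q}.
```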

martinsumner merged commit c744121 into nhse-develop-3.0 on Sep 4, 2023
1 check passed
martinsumner deleted the nhse-d30-kv1872 branch on September 4, 2023 at 17:55
martinsumner added a commit that referenced this pull request Nov 14, 2023
martinsumner added a commit that referenced this pull request Nov 14, 2023
martinsumner mentioned this pull request Nov 14, 2023
martinsumner added a commit that referenced this pull request Feb 13, 2024
* Merge pull request #1 from nhs-riak/nhse-contrib-kv1871

KV i1871 - Handle timeout on remote connection

* Trigger batch correctly at each size (#4)

* Force timeout to trigger (#3)

Previously, the inactivity timeout on handle_continue could be cancelled by a call to riak_kv_replrtq_snk (e.g. from riak_kv_replrtq_peer). This might lead to the log_stats loop never being triggered.
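
A sketch of the underlying gen_server behaviour (this is not the #3 fix itself; the module, message and interval names are invented): a timeout given in a callback return tuple such as {noreply, State, Timeout} only fires if no message at all arrives within the interval, so frequent calls can starve it indefinitely, whereas an explicit erlang:send_after/3 timer delivers the log_stats tick regardless of other traffic.

```erlang
%% Illustrative only.  The periodic log_stats tick is scheduled with an
%% explicit timer, so calls such as ping/1 cannot cancel it (unlike an
%% inactivity timeout returned from init/handle_continue).
-module(periodic_tick_sketch).
-behaviour(gen_server).

-export([start_link/0, ping/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

-define(LOG_INTERVAL, 60 * 1000).  %% hypothetical stats interval (ms)

start_link() ->
    gen_server:start_link(?MODULE, [], []).

ping(Pid) ->
    gen_server:call(Pid, ping).  %% would reset a return-tuple timeout

init([]) ->
    erlang:send_after(?LOG_INTERVAL, self(), log_stats),
    {ok, #{}}.

handle_call(ping, _From, State) ->
    {reply, pong, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info(log_stats, State) ->
    %% emit stats here, then schedule the next tick
    erlang:send_after(?LOG_INTERVAL, self(), log_stats),
    {noreply, State}.
```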

* Configurable %key query on leveled (#8)

Can be configured to ignore tombstone keys by default.

* Allow nextgenrepl to real-time replicate reaps (#6)

* Allow nextgenrepl to real-time replicate reaps

This is to address the issue of reaping across sync'd clusters. Without this feature it is necessary to disable full-sync whilst reaping independently on each cluster.

Now if reaping via riak_kv_reaper the reap will be replicated assuming the `riak_kv.repl_reap` flag has been enabled.  At the receiving cluster the reap will not be replicated any further.

There are some API changes to support this. The `find_tombs` aae_fold will now return Keys/Clocks and not Keys/DeleteHash. The ReapReference for riak_kv_reaper will now expect a clock (version vector), not a DeleteHash, and will also now expect an additional boolean to indicate whether this reap is a replication candidate (it will be false for all pushed reaps).

The object encoding for nextgenrepl now has a flag to indicate a reap, with a special encoding for reap references.
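
A hypothetical sketch of the "replicate a reap at most once" rule described above; the tuple layout, types and function names are assumptions, not the riak_kv API. A locally requested reap carries the replication-candidate flag set to true and is pushed onward with the flag cleared, so a sink cluster applies it but never forwards it again.

```erlang
%% Illustrative only - not the riak_kv_reaper implementation.
-module(reap_repl_sketch).
-export([local_reap/3, handle_reap/2]).

%% {BucketAndKey, Clock, IsReplCandidate} - layout is an assumption.
-type reap_ref() :: {{binary(), binary()}, term(), boolean()}.

-spec local_reap(binary(), binary(), term()) -> reap_ref().
local_reap(Bucket, Key, Clock) ->
    {{Bucket, Key}, Clock, true}.

-spec handle_reap(reap_ref(), fun((reap_ref()) -> ok)) -> ok.
handle_reap({BK, Clock, true}, PushToReplQueueFun) ->
    %% Locally initiated reap: apply it, then queue it for replication
    %% with the flag cleared so the sink will not forward it again.
    ok = do_reap(BK, Clock),
    PushToReplQueueFun({BK, Clock, false});
handle_reap({BK, Clock, false}, _PushToReplQueueFun) ->
    %% Pushed (replicated) reap: apply it, but replicate no further.
    do_reap(BK, Clock).

do_reap({_Bucket, _Key}, _Clock) ->
    ok.  %% placeholder for the actual reap work
```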

* Update riak_object.erl

Clarify specs

* Take timestamp at correct point (after push)

* Updates following review

* Update rebar.config

* Make current_peers empty when disabled (#10)

* Make current_peers empty when disabled

* Peer discovery to recognise suspend and disable of sink

* Update src/riak_kv_replrtq_peer.erl

Co-authored-by: Thomas Arts <[email protected]>

* Update src/riak_kv_replrtq_peer.erl

Co-authored-by: Thomas Arts <[email protected]>

---------

Co-authored-by: Thomas Arts <[email protected]>

* De-lager

* Add support for v0 object in parallel-mode AAE (#11)

* Add support for v0 object in parallel-mode AAE

Cannot assume that v0 objects will not occur, as capability negotiation can fall back to v0 on Riak 3.0 during failure scenarios.

* Update following review

As ?MAGIC is a distinctive constant, it should be the one used in the pattern match, with everything else assumed to be convertible by term_to_binary.
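
A minimal sketch of the pattern-match ordering described above (not the riak_object code; the magic value and return shapes are assumptions): match the distinctive ?MAGIC byte first for the versioned binary format, and treat anything else as a v0 object produced by term_to_binary/1.

```erlang
-module(obj_decode_sketch).
-export([encode_v0/1, decode/1]).

-define(MAGIC, 53).  %% assumed value of the format marker byte

%% A v0 object is simply the external term format of the object term.
encode_v0(ObjTerm) ->
    term_to_binary(ObjTerm).

%% Match the distinctive ?MAGIC constant first; everything else is
%% assumed to be convertible back with binary_to_term/1.
decode(<<?MAGIC:8/integer, Vsn:8/integer, Rest/binary>>) ->
    {versioned, Vsn, Rest};
decode(ObjBin) when is_binary(ObjBin) ->
    {v0, binary_to_term(ObjBin)}.
```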

* Update src/riak_object.erl

Co-authored-by: Thomas Arts <[email protected]>

---------

Co-authored-by: Thomas Arts <[email protected]>

* Update riak_kv_ttaaefs_manager.erl (#13)

For bucket-based full-sync `{tree_compare, 0}` is the return on success.
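
Illustrative only, and everything beyond the stated `{tree_compare, 0}` success return is an assumption: a caller checking the result of a bucket-based full-sync exchange might treat a zero delta count as in-sync and any non-zero count as outstanding differences.

```erlang
-module(fullsync_result_sketch).
-export([sync_outcome/1]).

%% {tree_compare, 0} is the stated success return for bucket-based
%% full-sync; a non-zero count is assumed to mean deltas were found.
sync_outcome({tree_compare, 0}) ->
    in_sync;
sync_outcome({tree_compare, DeltaCount}) when DeltaCount > 0 ->
    {deltas_found, DeltaCount}.
```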

* Correct log macro typo

---------

Co-authored-by: Thomas Arts <[email protected]>