
WIP: Bulk V2 #2568

Open · wants to merge 2 commits into base: integration

Conversation

keith-ratcliffe (Collaborator):
No description provided.

@@ -41,7 +41,7 @@ LIVE_CHILD_MAP_MAX_MEMORY_MB=1024
 BULK_CHILD_REDUCE_MAX_MEMORY_MB=2048
 LIVE_CHILD_REDUCE_MAX_MEMORY_MB=1024

-BULK_INGEST_DATA_TYPES=shardStats
+BULK_INGEST_DATA_TYPES=shardStats,wikipedia,mycsv,myjson
 LIVE_INGEST_DATA_TYPES=wikipedia,mycsv,myjson
Collaborator:
If you are moving those types to bulk, then remove them from live.

Collaborator (Author):

IIRC, my goal here was just to signal to the user that they can now use either live or bulk for the 3 test data types... although I guess it really doesn't matter in the end, since both the live and bulk types here get dumped into the ingest.data.types config list, and that list gets deduped in TypeRegistry

Makes me wonder, do we really need to maintain separate variables for these? So far, I haven't found a case where the distinction matters to our code

Anyway, the live flag maker still polls the datatypeName dirs in hdfs, same as always. And the bulk flag maker is now configured to run, polling the new datatypeName-bulk dirs (created in quickstart's install-ingest.sh above)
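The dedup behavior described above can be sketched as follows. This is a minimal, hypothetical illustration (the `mergeDataTypes` helper and class name are invented for this example, not DataWave code), assuming the combined list is treated as an order-preserving set the way the comment says TypeRegistry dedupes it:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class TypeListMerge {
    // Hypothetical helper: combines the BULK_INGEST_DATA_TYPES and
    // LIVE_INGEST_DATA_TYPES values into one deduplicated list, the way
    // the comment above says the merged ingest.data.types list is deduped.
    static Set<String> mergeDataTypes(String bulkTypes, String liveTypes) {
        Set<String> merged = new LinkedHashSet<>(); // keeps first-seen order, drops repeats
        merged.addAll(Arrays.asList(bulkTypes.split(",")));
        merged.addAll(Arrays.asList(liveTypes.split(",")));
        return merged;
    }

    public static void main(String[] args) {
        Set<String> types = mergeDataTypes(
                "shardStats,wikipedia,mycsv,myjson", // BULK_INGEST_DATA_TYPES
                "wikipedia,mycsv,myjson");           // LIVE_INGEST_DATA_TYPES
        System.out.println(types); // each type appears exactly once
    }
}
```

This is why listing the 3 test types under both variables is harmless in practice: the duplicates collapse after the merge.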

keith-turner added a commit to keith-turner/datawave that referenced this pull request Sep 30, 2024
These draft changes build on NationalSecurityAgency#2568 with the following differences.

 * Compute bulkv2 load plans using new unreleased APIs in accumulo PR
   4898
 * The table splits are loaded at the beginning of writing to rfiles
   instead of at the end. Not sure about the overall implications on
   memory use in reducers of this change. The load plan could be
   computed after the rfile is closed using a new API in 4898 if
   deferring the loading of tablet splits is desired.
 * Switches to using accumulo public APIs for writing rfiles instead of
   internal accumulo methods. Well, public once they are actually
   released.
 * The algorithm to compute the load plan does less work per key/value.
   Should be roughly constant time vs log(N).
 * Adds a simple SortedList class. The reason this was added is that
   this code does binary searches on lists, but it was not certain
   those lists were actually sorted. An unsorted list would not cause
   exceptions in binary search, but could lead to incorrect load plans
   and lost data. The new SortedList class ensures lists are sorted and
   allows this assurance to travel around in the code. Maybe this
   change should be its own PR.
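The SortedList idea in the last bullet can be sketched as below. This is an illustrative sketch only, not the actual class from the commit: it wraps a defensive sorted copy so that any later `binarySearch` is guaranteed to operate on sorted data (since `Collections.binarySearch` has undefined results on unsorted input, which is exactly the silent-corruption risk the bullet describes):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of a SortedList: sortedness is established once at
// construction, so the guarantee "this list is safe to binary search"
// travels with the object instead of being an unchecked assumption.
public class SortedList<T extends Comparable<T>> {
    private final List<T> items;

    public SortedList(List<T> source) {
        List<T> copy = new ArrayList<>(source);
        Collections.sort(copy); // alternatively: verify order and fail fast
        this.items = Collections.unmodifiableList(copy);
    }

    // Safe: the constructor guarantees sorted order, so binarySearch's
    // precondition always holds.
    public int search(T key) {
        return Collections.binarySearch(items, key);
    }

    public static void main(String[] args) {
        SortedList<String> rows = new SortedList<>(List.of("m", "a", "z", "f"));
        System.out.println(rows.search("f") >= 0); // true: present
        System.out.println(rows.search("q") >= 0); // false: absent
    }
}
```

A variant that rejects unsorted input instead of sorting it would surface the bug at construction time rather than masking it, which may be closer to the "assurance" the commit message has in mind.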
Labels: none yet
Projects: none yet
2 participants