
[not FINAL] feat: new System API call ic0.call_cycles_add128_up_to #316

Closed
@mraszyk wants to merge 7 commits

Conversation

@mraszyk (Contributor, author) commented Jun 14, 2024

This PR adds a new System API call ic0.call_cycles_add128_up_to moving the maximum amount of cycles from the canister cycles balance onto an inter-canister call up to a specified upper bound.
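For illustration, here is a minimal Rust sketch of how a canister could import and invoke the proposed call. The exact import signature is an assumption: it mirrors the existing `ic0.call_cycles_add128 : (amount_high : i64, amount_low : i64) -> ()` and adds the destination pointer discussed later in this thread, into which the actually attached amount would be written as a 128-bit little-endian value.

```rust
// Sketch only: this import signature is an assumption modeled on the existing
// ic0.call_cycles_add128 plus a destination pointer for the actually attached
// amount (written as 16 little-endian bytes). Pointers are 32-bit on wasm32.
#[link(wasm_import_module = "ic0")]
extern "C" {
    fn call_cycles_add128_up_to(max_amount_high: u64, max_amount_low: u64, dst: usize);
}

/// Attach up to `max` cycles to the call currently being assembled (i.e. between
/// `ic0.call_new` and `ic0.call_perform`) and return the amount that was actually
/// moved out of the canister's cycles balance.
unsafe fn attach_up_to(max: u128) -> u128 {
    let mut actual = [0u8; 16];
    call_cycles_add128_up_to(
        (max >> 64) as u64,           // max_amount_high
        max as u64,                   // max_amount_low (low 64 bits)
        actual.as_mut_ptr() as usize, // where the actual amount is written
    );
    u128::from_le_bytes(actual)
}
```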

@mraszyk requested a review from a team as a code owner on Jun 14, 2024, 06:55

github-actions bot commented Jun 14, 2024

🤖 Here's your preview: https://nlbfx-nyaaa-aaaak-qcnzq-cai.icp0.io/docs

A comment by @Dfinity-Bjoern was marked as outdated.

spec/index.md (review thread, resolved)
A comment by @dsarlis was marked as outdated.

@Dfinity-Bjoern changed the title from "feat: new System API call ic0.call_cycles_add128_up_to" to "[FINAL] feat: new System API call ic0.call_cycles_add128_up_to" on Jul 9, 2024
spec/index.md (review thread, outdated, resolved)
es.params.sysenv.compute_allocation,
es.params.sysenv.memory_allocation,
es.params.sysenv.freezing_threshold,
memory_usage_wasm_state(es.wasm_state) + es.params.sysenv.memory_usage_raw_module + es.params.sysenv.memory_usage_canister_history,
@mraszyk (Contributor, author) commented Aug 28, 2024

Suggested change:
- memory_usage_wasm_state(es.wasm_state) + es.params.sysenv.memory_usage_raw_module + es.params.sysenv.memory_usage_canister_history,
+ memory_usage_wasm_state(es.wasm_state) + es.params.sysenv.memory_usage_raw_module + es.params.sysenv.memory_usage_canister_history + max(|es.pending_call.method_name| + |es.pending_call.arg|, MAX_RESPONSE_SIZE),

@mraszyk (Contributor, author) added:

Maybe this is not needed, though, as we don't model message memory anyway.

github-merge-queue bot pushed a commit to dfinity/ic that referenced this pull request on Aug 29, 2024

- It moves no more cycles than represented by a 128-bit value which can be obtained by combining the `max_amount_high` and `max_amount_low` parameters.

- A subsequent `ic0.call_perform` does not fail because of insufficient cycles balance
(if no extra bytes were added to the message via `ic0.call_data_append` between `ic0.call_cycles_add128_up_to` and `ic0.call_perform`)
@mraszyk (Contributor, author) commented Sep 2, 2024

Suggested change:
- (if no extra bytes were added to the message via `ic0.call_data_append` between `ic0.call_cycles_add128_up_to` and `ic0.call_perform`)
+ (if the canister makes no heap or stable memory allocations and no extra bytes were added to the message via `ic0.call_data_append` between `ic0.call_cycles_add128_up_to` and `ic0.call_perform`)

A Member commented:

@dsarlis this seems a bit unpredictable to use. In higher-level languages, it seems difficult to guarantee that no extra memory is allocated. Should we maybe extend the implementation so that we can drop the additional conditions (by doing the computation of actual cycles to be sent to call_perform)?

@Dfinity-Bjoern (Member) commented Sep 3, 2024

@mraszyk just pointed out that this would violate:

> This system call also copies the actual amount of cycles that were moved onto the call represented [...]

which seems to make it necessary to eagerly compute the amount of cycles attached. Not sure which is the best path forward here.

@mraszyk (Contributor, author) replied:

It is possible to compute the actual amount of cycles attached by taking the difference of the canister's cycles balance before and after calling the system API `ic0.call_cycles_add128_up_to` (confirmed empirically). So dropping the destination pointer should not make the API substantially harder to use. The "trick" of computing the cycles balance difference does not work across `ic0.call_perform`, though (as xnet fees are charged, too). Moreover, it is not clear what the semantics of multiple `ic0.call_cycles_add128_up_to` calls should be if we only attach the cycles during `ic0.call_perform` (the sum vs. the maximum?). Hence, delaying the actual attachment to `ic0.call_perform` might not be the best path forward either.

I'd suggest simply dropping the destination pointer if the CDK teams can confirm that allocations are unlikely between `ic0.call_cycles_add128_up_to` and `ic0.call_perform`. (Allocations happening after `ic0.call_perform` are fine as long as they are covered by the unused execution cycles, which are refunded first.)
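A minimal sketch of the balance-difference trick described above, assuming the destination pointer were dropped. The `canister_cycle_balance128` import exists in the current System API; the two-argument `call_cycles_add128_up_to` signature is hypothetical.

```rust
#[link(wasm_import_module = "ic0")]
extern "C" {
    // Existing System API: writes the current cycles balance as 16
    // little-endian bytes to `dst` (wasm32, so pointers fit in usize).
    fn canister_cycle_balance128(dst: usize);
    // Hypothetical variant of the proposed call without a destination pointer,
    // as suggested in the comment above.
    fn call_cycles_add128_up_to(max_amount_high: u64, max_amount_low: u64);
}

unsafe fn cycle_balance() -> u128 {
    let mut buf = [0u8; 16];
    canister_cycle_balance128(buf.as_mut_ptr() as usize);
    u128::from_le_bytes(buf)
}

/// Attach up to `max` cycles and recover the actually attached amount from the
/// balance difference. This only works if nothing else changes the balance
/// between the two reads (e.g. no memory allocations charged to the canister).
unsafe fn attach_up_to(max: u128) -> u128 {
    let before = cycle_balance();
    call_cycles_add128_up_to((max >> 64) as u64, max as u64);
    let after = cycle_balance();
    before - after
}
```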

@Dfinity-Bjoern changed the title from "[FINAL] feat: new System API call ic0.call_cycles_add128_up_to" to "[not FINAL] feat: new System API call ic0.call_cycles_add128_up_to" on Sep 3, 2024
@Dfinity-Bjoern (Member) commented:

Removing the FINAL tag since we need to validate whether the API is actually practically usable.

@mraszyk closed this on Sep 17, 2024
levifeldman pushed a commit to levifeldman/ic that referenced this pull request on Oct 1, 2024