
experiment: perf and allocation counter based idl decoder limiting #4624

Closed (wants to merge 36 commits)

Conversation

@crusso (Contributor) commented Jul 22, 2024

Periodically check the instruction counter during IDL decoding to prevent abuse:

  • Set the limit relative to the initial perf_counter plus an instruction-based allowance.
  • Disable for destabilization (iff Registers.get_rel_buf_opt is zero).
  • Port the new candid spacebomb test suite to drun-tests, to test against the real perf counter provided by drun.
  • Add a pseudo IDL instruction counter for running under wasmtime.
  • Bump the candid dependency to the most recent release.
  • Pass the new spacebomb tests in the candid test suite on wasmtime, using the pseudo counter.
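The periodic-check pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual RTS code: `Limiter`, `tick`, and the simulated counter closure are illustrative stand-ins for the real perf counter plumbing.

```rust
// Sketch: set a limit from the initial counter reading plus an allowance,
// then re-read the counter only every CHECK_INTERVAL decoded values so the
// check itself stays cheap.
const CHECK_INTERVAL: u64 = 128;

struct Limiter<F: FnMut() -> u64> {
    counter: F, // stand-in for the (simulated or real) perf counter
    limit: u64,
    decoded: u64,
}

impl<F: FnMut() -> u64> Limiter<F> {
    fn new(mut counter: F, allowance: u64) -> Self {
        let start = counter();
        Limiter { counter, limit: start + allowance, decoded: 0 }
    }

    // Call once per decoded value; errs once the allowance is spent.
    fn tick(&mut self) -> Result<(), &'static str> {
        self.decoded += 1;
        if self.decoded % CHECK_INTERVAL == 0 && (self.counter)() > self.limit {
            return Err("IDL decoding exceeded instruction limit");
        }
        Ok(())
    }
}

fn main() {
    // Simulated counter: each read advances the clock by 1000 "instructions".
    let mut clock = 0u64;
    let mut lim = Limiter::new(|| { clock += 1_000; clock }, 5_000);
    let mut processed = 0u64;
    while lim.tick().is_ok() {
        processed += 1;
    }
    println!("decoding aborted after {} values", processed);
}
```

Because the counter is only sampled every 128 values, decoding can overshoot the limit by at most one interval's worth of work before aborting.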

@@ -5131,7 +5131,7 @@ module IC = struct
)

(* For debugging *)
-    let _compile_static_print env s =
+    let compile_static_print env s =
@crusso (Contributor, Author) commented:

Revert before merge

@crusso changed the title from "experiment: perf counter based idl decoder limiting" to "experiment: perf and allocation counter based idl decoder limiting" on Aug 8, 2024
@@ -314,6 +319,9 @@ unsafe fn skip_any_vec(buf: *mut Buf, typtbl: *mut *mut u8, t: i32, count: u32)
// makes no progress. No point in calling it over and over again.
// (This is easier to detect this way than by analyzing the type table,
// where we’d have to chase single-field-records.)
for _ in 1..count {
@crusso (Contributor, Author) commented:

A better approach might be to measure the cost of the previous skip, lower the limits by a multiple of that cost, and return (dropping this faux for-loop). Then we would exit even earlier!

mergify bot pushed a commit that referenced this pull request on Aug 16, 2024:

Simplifies #4624 to a simple linear limit on the number of decoded values as a function of decoded payload size, instead of using two linear functions of the perf counter (simulated or real) and the allocation counter.

The function is:

    value_quota(blob) : Nat64 = blob.size() * (numerator/denominator) + bias

where `blob` is the candid blob to be decoded, and `numerator` (default 1), `denominator` (default 1) and `bias` (default 1024) are `Nat32`s.
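The quota formula above might look like the following sketch, using the stated defaults. The type and function names here are illustrative, not the actual RTS identifiers, and multiplying before dividing is an assumption: evaluating the ratio `numerator/denominator` first in integer arithmetic would truncate values like 1/2 to zero.

```rust
// Illustrative version of value_quota with the documented defaults
// (numerator = 1, denominator = 1, bias = 1024).
struct CandidLimits {
    numerator: u32,
    denominator: u32,
    bias: u32,
}

impl Default for CandidLimits {
    fn default() -> Self {
        CandidLimits { numerator: 1, denominator: 1, bias: 1024 }
    }
}

fn value_quota(blob_size: u64, limits: &CandidLimits) -> u64 {
    // Multiply first, then divide, so fractional ratios are not lost.
    blob_size * limits.numerator as u64 / limits.denominator as u64
        + limits.bias as u64
}

fn main() {
    let limits = CandidLimits::default();
    // A 4 KiB payload: 4096 * 1 / 1 + 1024 = 5120 permitted values.
    println!("{}", value_quota(4096, &limits));
}
```

The `bias` term guarantees a non-zero quota even for an empty blob, so trivial messages always decode.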


Much simpler than #4624, and it doesn't depend on the vagaries of instruction metering and byte allocation, which vary with GC and compiler options. But is it good enough?

The constants can be (globally) modified and inspected using prims (Prim.getCandidLimits/Prim.setCandidLimits), which will eventually need to be exposed in base.

The quota is decremented on every call that deserializes or skips a value in vanilla candid mode (destabilization is not metered).
The quota is eagerly checked before deserializing or skipping arrays.
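The eager array check can be sketched as follows. This is a simplified illustration under assumed names (`Quota`, `charge`, `skip_vec` are not the actual RTS API): charging for a vector's claimed length up front means a space-bomb header claiming billions of elements is rejected before any per-element work happens.

```rust
// Sketch: one quota unit per deserialized or skipped value, with an eager
// up-front charge for arrays.
struct Quota(u64);

impl Quota {
    // Charge `n` units; fail once the quota is exhausted.
    fn charge(&mut self, n: u64) -> Result<(), &'static str> {
        if self.0 < n {
            return Err("Candid value quota exceeded");
        }
        self.0 -= n;
        Ok(())
    }
}

fn skip_vec(quota: &mut Quota, claimed_len: u64) -> Result<(), &'static str> {
    // Eager check: charge for every element before skipping any of them.
    quota.charge(claimed_len)?;
    for _ in 0..claimed_len {
        // ... skip one element (elided) ...
    }
    Ok(())
}

fn main() {
    let mut q = Quota(1024);
    // A small vector fits within the quota ...
    assert!(skip_vec(&mut q, 100).is_ok());
    // ... but a header claiming a million elements fails immediately.
    assert!(skip_vec(&mut q, 1_000_000).is_err());
}
```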

One possible refinement would be to combine the value quota with a memory quota (though the latter would still vary with GC flavour, and perhaps with word size unless we count logical words).


- [x] Disable for destabilization (iff Registers.get_rel_buf_opt is zero)
- [x] Port the new candid spacebomb test suite to drun-tests, to test against the real perf counter provided by drun
- [x] Bump the candid dependency to the most recent release
- [x] Pass the new spacebomb tests in the candid test suite on wasmtime, using the value counter
@crusso closed this on Sep 2, 2024