feat: Use EnumMap for AckTracker #2047

Conversation
Hopefully addresses @martinthomson's `TODO` about this.
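A hedged sketch of the idea behind the PR, not the actual neqo code: instead of storing acknowledgment state as a small vector of `(space, state)` pairs (the previous `SmallVec`-based design), the tracker indexes state directly by the packet-number-space enum, as an `EnumMap` (presumably via the `enum-map` crate) would. The type and field names below (`PacketNumberSpace`, `AckSpace`, `AckTracker`, `largest_acked`) are illustrative stand-ins; the minimal enum-indexed map is written out by hand here to stay dependency-free.

```rust
// Illustrative sketch of an enum-indexed map replacing a SmallVec of
// (PacketNumberSpace, AckSpace) pairs. One fixed slot per enum variant,
// O(1) lookup via the variant's discriminant, no heap allocation.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PacketNumberSpace {
    Initial,
    Handshake,
    ApplicationData,
}

#[derive(Default, Debug)]
struct AckSpace {
    largest_acked: Option<u64>, // stand-in for the real per-space state
}

#[derive(Default)]
struct AckTracker {
    // One slot per PacketNumberSpace variant; None once a space is dropped.
    spaces: [Option<AckSpace>; 3],
}

impl AckTracker {
    fn get_mut(&mut self, space: PacketNumberSpace) -> &mut Option<AckSpace> {
        &mut self.spaces[space as usize]
    }

    // Dropping a space (e.g. Initial after the handshake) just clears its
    // slot; a SmallVec-based design would remove the entry instead.
    fn drop_space(&mut self, space: PacketNumberSpace) {
        self.spaces[space as usize] = None;
    }
}

fn main() {
    let mut tracker = AckTracker::default();
    *tracker.get_mut(PacketNumberSpace::Initial) =
        Some(AckSpace { largest_acked: Some(7) });
    assert_eq!(
        tracker
            .get_mut(PacketNumberSpace::Initial)
            .as_ref()
            .and_then(|s| s.largest_acked),
        Some(7)
    );
    tracker.drop_space(PacketNumberSpace::Initial);
    assert!(tracker.get_mut(PacketNumberSpace::Initial).is_none());
}
```

The design choice this buys is that lookups never scan a vector and the storage layout is fixed at compile time, at the cost of reserving a slot for every variant.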
Interop results (QUIC Interop Runner, client vs. server; neqo-latest as client and neqo-latest as server):
- Failed Interop Tests
- Succeeded Interop Tests
- Unsupported Interop Tests
Codecov Report
Attention: Patch coverage is

Coverage Diff:
@@           Coverage Diff           @@
##             main    #2047   +/-   ##
=======================================
  Coverage   95.36%   95.36%
=======================================
  Files         112      112
  Lines       36475    36477       +2
=======================================
+ Hits        34784    34786       +2
  Misses       1691     1691
Benchmark results
Performance differences relative to 67fb75f.

- coalesce_acked_from_zero 1+1 entries: 💚 Performance has improved. time: [191.89 ns 192.36 ns 192.85 ns]; change: [-2.7613% -2.3159% -1.8723%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 3+1 entries: 💚 Performance has improved. time: [231.32 ns 231.77 ns 232.28 ns]; change: [-2.6497% -2.2408% -1.8308%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 10+1 entries: 💚 Performance has improved. time: [231.31 ns 231.89 ns 232.64 ns]; change: [-2.9122% -2.4503% -1.9837%] (p = 0.00 < 0.05)
- coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [214.87 ns 215.06 ns 215.27 ns]; change: [-1.3323% -0.5171% +0.3936%] (p = 0.24 > 0.05)
- RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [119.58 ms 119.66 ms 119.74 ms]; change: [+0.4193% +0.5228% +0.6248%] (p = 0.00 < 0.05)
- transfer/pacing-false/varying-seeds: No change in performance detected. time: [39.957 ms 41.547 ms 43.150 ms]; change: [-7.1273% -1.8040% +4.0437%] (p = 0.53 > 0.05)
- transfer/pacing-true/varying-seeds: No change in performance detected. time: [55.388 ms 58.130 ms 60.871 ms]; change: [-5.3835% +1.3347% +9.2072%] (p = 0.72 > 0.05)
- transfer/pacing-false/same-seed: No change in performance detected. time: [48.466 ms 49.873 ms 51.243 ms]; change: [-5.1386% -0.7707% +3.7100%] (p = 0.73 > 0.05)
- transfer/pacing-true/same-seed: No change in performance detected. time: [64.604 ms 70.883 ms 77.173 ms]; change: [-18.086% -7.4421% +4.3337%] (p = 0.21 > 0.05)
- 1-conn/1-100mb-resp (aka. Download)/client: No change in performance detected. time: [170.69 ms 172.44 ms 174.23 ms]; thrpt: [573.94 MiB/s 579.90 MiB/s 585.85 MiB/s]; change: time: [-0.3787% +1.0156% +2.3539%] (p = 0.14 > 0.05), thrpt: [-2.2998% -1.0054% +0.3801%]
- 1-conn/10_000-parallel-1b-resp (aka. RPS)/client: Change within noise threshold. time: [411.92 ms 415.01 ms 418.08 ms]; thrpt: [23.919 Kelem/s 24.096 Kelem/s 24.276 Kelem/s]; change: time: [+0.1147% +1.2316% +2.3798%] (p = 0.03 < 0.05), thrpt: [-2.3245% -1.2166% -0.1146%]
- 1-conn/1-1b-resp (aka. HPS)/client: No change in performance detected. time: [46.072 ms 46.771 ms 47.471 ms]; thrpt: [21.066 elem/s 21.381 elem/s 21.705 elem/s]; change: time: [-1.5051% +0.4752% +2.6834%] (p = 0.66 > 0.05), thrpt: [-2.6133% -0.4730% +1.5281%]

Client/server transfer results: transfer of 33554432 bytes over loopback.
My only reservation here is that the AckSpaces might end up taking more space than is ideal. The reason we chose to use SmallVec is that we'd spill onto the heap during the handshake, but the allocation would be inline once the handshake is done. This version looks like it will have a permanent memory overhead. That overhead is probably trivial, but it would be good to confirm the actual cost.