Add new 'perf' test property #4486
Conversation
This will be used to mark suites as performance test suites.
We have a performance test PFS - why are you using SMS?
@jedwards4b , thanks, I was not aware of PFS. I'm not sure if that's the approach E3SM wants to take for performance tests... To start, we absolutely need SAVE_TIMING for all our performance tests. We could maybe change the PFS xml definition to turn that on. Also, putting the test type (PFS) in it means we could not do performance testing of any other test types.
"time": "0:10:00",
"perf": True,
"tests": (
    "SMS_P2.T42_T42.S",
I mean here - just change SMS to PFS.
Oh, I see what you mean. I could easily change that, although the only purpose for that suite is the doctest below that is checking the perf property.
if is_perf_test(test):
    case.set_value("SAVE_TIMING", True)
else:
    case.set_value("SAVE_TIMING", self._save_timing)
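A minimal sketch of how a helper like `is_perf_test` could resolve the suite-level property, assuming suite definitions are stored as dictionaries keyed by suite name. Both the data layout and the helper body here are assumptions for illustration, not the PR's actual implementation.

```python
# Hypothetical suite table; real CIME keeps suite definitions in tests.py.
_SUITES = {
    "e3sm_perf": {"perf": True, "tests": ("SMS_P2.T42_T42.S",)},
    "e3sm_integration": {"perf": False, "tests": ("ERS.f19_g16.B1850",)},
}

def is_perf_test(test_name):
    # A test counts as a perf test if any suite containing it sets "perf": True.
    return any(
        suite.get("perf", False) and test_name in suite["tests"]
        for suite in _SUITES.values()
    )
```

Because the property lives on the suite, every test pulled in through that suite gets SAVE_TIMING turned on without any per-test flag.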
This is the key change I think. E3SM outputs a bunch of extra perf data and, if you use create_test, none of it is saved. If you use create_newcase, it's saved to a special directory on each machine. You can override that with a command-line argument, but that would do it for every test in a suite. This allows another way to save the timing info from single tests in a large suite (our integration suite has 100+ cases), instead of creating and running a separate suite just for performance.
@@ -23,6 +23,7 @@
# "inherit" : (suite1, suite2, ...), # Optional. Suites to inherit tests from. Default is None. Tuple, list, or str.
# "time" : "HH:MM:SS", # Optional. Recommended upper-limit on test time.
# "share" : True|False, # Optional. If True, all tests in this suite share a build. Default is False.
# "perf" : True|False, # Optional. If True, all tests in this suite will do performance tracking. Default is False.
Wait, every test in a suite? We have a command-line argument to create_test for that.
@rljacob , yes:
The idea would be to select a few tests in the integration suite, pull them into their own suite, mark them as perf=True, then have the original suite inherit the new suite.
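The workflow described above can be sketched as follows. The suite names, the second test name, and the `resolve_tests` helper are all hypothetical; only the `inherit`, `perf`, and `tests` keys come from the suite format documented in this PR.

```python
# Illustrative sketch: a few integration tests pulled into their own
# perf-marked suite, which the original suite then inherits.
_SUITE_DEFS = {
    "e3sm_perf": {
        "perf": True,                       # performance tracking on for these tests
        "tests": ("SMS_P2.T42_T42.S",),
    },
    "e3sm_integration": {
        "inherit": ("e3sm_perf",),          # integration suite still runs the perf tests
        "tests": ("ERS.f19_g16.B1850",),    # hypothetical non-perf test
    },
}

def resolve_tests(suite_name):
    """Expand a suite's test list, following 'inherit' recursively."""
    suite = _SUITE_DEFS[suite_name]
    tests = list(suite.get("tests", ()))
    for parent in suite.get("inherit", ()):
        tests.extend(resolve_tests(parent))
    return tests
```

With this layout, one create_test invocation of the integration suite runs everything, while only the tests that came in through the perf sub-suite get SAVE_TIMING enabled.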
So you would still need two separate blocks in tests.py, but they could all be launched with one create_test command?
@rljacob , yes. It has to be this way (for this perf=True approach) since we store properties on test suites, not individual tests. Another option would be to add a testmod that turns on performance tracking.
Take a look at how e3sm_integration is currently assembled. It's a combination of suites which are themselves combinations of suites, and each of those may have to be broken up into perf and no-perf suites.
Yeah, that's expected. I think only a few tests will become performance tests.
Do we really need another mechanism for something that has been available in the test system all along?
While the
Sure, no worries, I just wanted to point out the one that was already available.
LGTM
It's more accurate to say the "perf" label turns on performance data collection rather than turning the test into a performance test. It doesn't add any pass/fail conditions. Over time, the data collection will let us spot trends.
This will be used to mark suites as performance test suites.
I think this is the best way to approach the issue of performance testing. It will allow us to mark a handful of tests in our common test suites as performance tests so we can piggyback on the current testing jobs. The idea would be to select a few tests in the integration suite, pull them into their own suite, mark them as perf=True, then have the original suite inherit the new suite.
Test suite: doctests
Test baseline:
Test namelist changes:
Test status: bit for bit
Fixes [CIME Github issue #]
User interface changes?:
Update gh-pages html (Y/N)?: