add evaluation.py to ml branch #239
Conversation
mname = args.model_name
MODELS_ROOT = Path(args.models_root)
model = tf.keras.models.load_model(MODELS_ROOT / mname)
Oh, look at the new ModelsStore too. It can be reused here, I guess, to generalize the model loading logic. It'll have to be moved to components from mlbackend then.
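The reviewer's suggestion is to centralize model loading instead of calling `tf.keras.models.load_model` directly in each script. A minimal sketch of what such a store might look like, assuming a constructor taking the models root and a `load` method (the repo's actual ModelsStore API is not shown in this thread, so the interface below is hypothetical):

```python
from pathlib import Path


class ModelsStore:
    """Hypothetical sketch of a generalized model store; the real
    ModelsStore in this repo may expose a different interface."""

    def __init__(self, models_root: Path):
        self.models_root = Path(models_root)

    def load(self, model_name: str):
        # Deferred import so the store stays importable without TensorFlow.
        import tensorflow as tf

        # Centralizing the call here means every script resolves model
        # paths the same way instead of repeating MODELS_ROOT / mname.
        return tf.keras.models.load_model(self.models_root / model_name)
```

With something like this, the snippet above would shrink to a single line such as `model = ModelsStore(args.models_root).load(args.model_name)`.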
Force-pushed from e9b65a1 to 278e99d.
Force-pushed from eb555f9 to 465fc14.
Getting there! Relatively minor fixes are still required, but during the next pass I'll probably execute/test it myself and implement all the final needed refactorings.

Also try this (leave it up to me if it seems complex):
- make the example path a main argument
- refactor CLI argument parsing to do so via the argparse module (see other scripts for examples)
- implement an automatic test doing a simple main(...) call on a quick example to make sure this script doesn't break over time
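The points above can be sketched together: argparse with the example path as the main positional argument, and a `main(argv)` entry point that a test can call directly. This is a sketch under assumptions, not the script's actual interface; the option names (`--models-root`, `--model-name`) and defaults are illustrative only.

```python
import argparse
from pathlib import Path


def parse_args(argv=None):
    # Sketch only: option names and defaults are assumptions,
    # not taken from the actual evaluation.py.
    parser = argparse.ArgumentParser(
        description="Evaluate synthesis on an example.")
    parser.add_argument("example", type=Path,
                        help="path to the example to evaluate (main argument)")
    parser.add_argument("--models-root", type=Path, default=Path("models"),
                        help="directory containing trained models")
    parser.add_argument("--model-name", default="default",
                        help="name of the model to load")
    return parser.parse_args(argv)


def main(argv=None):
    args = parse_args(argv)
    # ... actual evaluation logic would go here ...
    return args
```

Because `main` takes `argv` explicitly, the requested automatic test becomes a one-liner under pytest, e.g. `def test_main_smoke(): main(["examples/pid.lua"])`, which catches the script breaking over time.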
ml/synthesis/src/components/data_processing/feature_engineering.py (outdated, resolved)
ml/synthesis/src/components/data_processing/feature_engineering.py (outdated, resolved)
We need:
- rebase on the master
- format code via black and isort, and check with flake8 (see the students chat for details)
- the script arguments should support:
  - a list of examples (not a path), e.g. evaluation.py example/*.lua, or evaluation.py example/pid.lua example/sum.lua
  - selecting evaluation methods (ML synthesis should work)
  - passing nitta arguments
- add a usage example (with output) to readme.md (for the master branch -- without ML, for the ml branch -- with)
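The requested argument interface can be sketched with argparse: a positional list of example files (the shell expands `example/*.lua` into separate arguments, so `nargs="+"` receives them as a list), plus flags for method selection and nitta pass-through. The flag names `--methods` and `--nitta-args` are assumptions for illustration, not the script's confirmed interface.

```python
import argparse


def parse_args(argv=None):
    # Hedged sketch of the requested CLI; flag names are assumptions.
    parser = argparse.ArgumentParser(prog="evaluation.py")
    # A list of example files rather than a directory path: the shell
    # expands globs, so `evaluation.py example/*.lua` already arrives
    # here as multiple file-name arguments.
    parser.add_argument("examples", nargs="+",
                        help="example files to evaluate")
    parser.add_argument("--methods", nargs="*", default=["ml"],
                        help="evaluation methods to run (ML synthesis should work)")
    parser.add_argument("--nitta-args", default="",
                        help="extra arguments passed through to nitta")
    return parser.parse_args(argv)
```

Both invocation styles from the list above then work unchanged: `evaluation.py example/*.lua` and `evaluation.py example/pid.lua example/sum.lua`.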
Raw version of the evaluation.py script (#233)