add evaluation.py to ml branch #239

Closed · wants to merge 18 commits

Conversation

@OlegMatveevS (Collaborator) commented Mar 29, 2023

A raw version of the evaluation.py script.

See #233.

ml/synthesis/src/scripts/evaluation.py — 8 resolved review comments (outdated)

from pathlib import Path
import tensorflow as tf

mname = args.model_name  # `args` comes from the script's CLI parsing
MODELS_ROOT = Path(args.models_root)
model = tf.keras.models.load_model(MODELS_ROOT / mname)
@iburakov (Collaborator) commented Mar 29, 2023

Oh, look at the new ModelsStore too. It can be reused here, I guess, to generalize the model loading logic. It would have to be moved from mlbackend to components then.
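A rough sketch of that reuse, purely for illustration: ModelsStore's real module path, constructor, and method names are not shown in this thread, so everything below is an assumption.

# Assumed API: ModelsStore's actual module path and methods may differ.
from pathlib import Path
from components.model_store import ModelsStore  # assumed post-move location

store = ModelsStore(Path(args.models_root))  # assumed constructor
model = store.load(args.model_name)          # assumed loading method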

@OlegMatveevS reopened this Apr 8, 2023
@iburakov (Collaborator) left a comment

Getting there! Relatively minor fixes are still required, but on the next pass I'll probably execute/test it myself and implement all the remaining refactorings.

Also try this (leave it up to me if it seems complex):

  • make the example path a main argument
  • refactor CLI argument parsing to use the argparse module (see the other scripts for examples); a sketch follows this list
  • implement an automatic test that does a simple main(...) call on a quick example, to make sure this script doesn't break over time
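A minimal sketch of both suggestions; the argument names, the main(...) signature, and the example path are assumptions for illustration, and the other scripts in ml/synthesis/src/scripts set the project's actual conventions.

# Hypothetical sketch: argument names and main()'s signature are assumptions.
from argparse import ArgumentParser
from pathlib import Path

def parse_args(argv=None):
    parser = ArgumentParser(description="Evaluate synthesis on an example.")
    parser.add_argument("example", type=Path,
                        help="path to the example to evaluate")
    parser.add_argument("--model-name",
                        help="name of the trained model to use")
    parser.add_argument("--models-root", type=Path, default=Path("models"),
                        help="directory containing trained models")
    return parser.parse_args(argv)

# The automatic test could then be a pytest-style smoke check:
from scripts.evaluation import main  # assumed module path

def test_evaluation_smoke():
    # Hypothetical quick example; the call should finish without raising.
    main(["examples/fibonacci.lua"])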

ml/synthesis/src/scripts/evalution.py — 3 resolved review comments (outdated)
@OlegMatveevS changed the title from "evaluation.py" to "add evaluation.py to ml branch" on May 25, 2023
@ryukzak (Owner) left a comment

We need:

  • a rebase on master
  • code formatted via black and isort, and checked with flake8 (see the students chat for details)
  • the following script arguments (a hedged sketch follows this list):
    • a list of examples (not a path), e.g. evaluation.py example/*.lua or evaluation.py example/pid.lua example/sum.lua
    • selection of evaluation methods (ML synthesis should work)
    • pass-through of nitta arguments
  • a usage example (with output) added to readme.md (for the master branch without ML, for the ml branch with it)
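A hypothetical sketch of that CLI shape; the flag names are assumptions, not the script's actual interface. Note that the shell expands example/*.lua into a file list before the script sees it, so a positional argument with nargs="+" covers both invocation styles.

# Illustrative only: flag names below are assumptions.
from argparse import ArgumentParser

parser = ArgumentParser(prog="evaluation.py")
parser.add_argument("examples", nargs="+",
                    help="example files, e.g. example/pid.lua example/sum.lua")
parser.add_argument("--method", action="append", choices=["default", "ml"],
                    help="evaluation method(s) to run; repeatable")
parser.add_argument("--nitta-args", default="",
                    help="extra arguments passed through to nitta")
args = parser.parse_args()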

@OlegMatveevS deleted the OlegMatveevS-patch-1 branch June 3, 2023 08:14
Development

Successfully merging this pull request may close these issues:

  • Implement external synthesis by ML (via Python script and NITTA REST API)