This project aims to accelerate small research projects by providing richer functionality on top of the well-known hyperparameter optimization library Optuna. The Optuna code itself is not modified; it is incorporated into rapidFlow to provide richer evaluation and easy parallel processing.
- Python >= 3.7
- PyTorch
rapidFlow is built upon PyTorch, so make sure you have PyTorch installed.
Install the package from PyPI with:

```
pip install rapidflow
```
To install from a cloned repository, run:

```
pip install -e /src
```
- move the experiment library to a separate repository
- run experiments in Docker containers with GPU support (or Singularity)
- test on multiple GPUs
- testing and proper documentation
- significance testing
Feel free to contribute. If you use this repository, please cite it with:
@misc{rapidFlow_geb,
  author = {Gebauer, Michael},
  title = {rapidFlow},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/gebauerm/model_storage}},
}