End-to-end collaborative development platform to build and run machine learning solutions.

Getting Started · Documentation · Support · Report a Bug · FAQ · Contributing · Changelog

ML Lab is a centralized hub for development teams to seamlessly build, deploy, and operate machine learning solutions at scale. It is designed to cover the end-to-end machine learning lifecycle from data processing and experimentation to model training and deployment. It combines the libraries, languages, and tools data scientists love, with the infrastructure, services and workflows they need to deliver machine learning solutions into production.

Highlights

  • 🔐 Secure multi-user development platform for machine learning solutions.
  • 🛠 Workspace with integrated tooling (Jupyter, VS Code, SSH, VNC, Hardware Monitoring, ...)
  • 🗃️ Upload, manage, version, and share datasets & models.
  • 🔬 Monitor and share experiments for reproducibility.
  • 🎛 Deploy and operate machine learning solutions for production use.
  • 🐳 Deployable on a single server via Docker or a server cluster via Kubernetes.

Getting Started

Deploying ML Lab in a single-host deployment (via Docker) is as simple as:

docker run --rm \
           --env LAB_ACTION=install \
           --env LAB_PORT=8080 \
           --env BACKEND_SERVICE_IMAGE=docker.pkg.github.com/sap/machine-learning-lab/lab-service:0.1.0 \
           --env MODEL_SERVICE_IMAGE=docker.pkg.github.com/sap/machine-learning-lab/lab-model-service:0.1.0 \
           --env WORKSPACE_IMAGE=docker.pkg.github.com/sap/machine-learning-lab/ml-workspace-lab:0.1.0 \
           -v /var/run/docker.sock:/var/run/docker.sock \
           docker.pkg.github.com/sap/machine-learning-lab/lab-service:0.1.0

# The following two commands are needed because the GitHub package registry
# requires login even for pulling. See the note below.
docker pull docker.pkg.github.com/sap/machine-learning-lab/lab-model-service:0.1.0
docker pull docker.pkg.github.com/sap/machine-learning-lab/ml-workspace-lab:0.1.0

Note: We deployed the current version to the GitHub package registry, which requires the image environment variables to be set; that might change in the future. Even to pull public images, you need to log in to the GitHub package registry as described here.
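As a hedged sketch of that login step (the registry host is taken from the image URLs above; the GITHUB_USER and GITHUB_TOKEN variables are placeholders you must set yourself, and the required token scope is an assumption):

```shell
# Log in to the GitHub package registry with a personal access token
# (the token typically needs the read:packages scope).
echo "$GITHUB_TOKEN" | docker login docker.pkg.github.com -u "$GITHUB_USER" --password-stdin
```

Using --password-stdin keeps the token out of your shell history and process list.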

Local Build: If you built ML Lab locally, you can omit the *_IMAGE env variables and don't have to pull the images! In that case, lab-service:latest is used. See the Build section for details on how to build the code.

Voilà, that was easy! Now, Docker will pull the required Docker images to your machine. After the installation is finished, visit http://localhost:8080 and login with admin:admin (please change the admin password from the user menu).

Please refer to our documentation for further information on deploying ML Lab for production use, including configuration options, Kubernetes deployment, administrative tasks, and general usage.

Known Issues

For known issues and other questions, check out the documentation.

Support & Feedback

The ML Lab project is maintained by Benjamin Räthlein and Lukas Masuch. Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly so that more people can benefit from it.

Type Channel
🚨 Bug Reports
🎁 Feature Requests
👩‍💻 Usage Questions tbd
🗯 General Discussion tbd

Documentation

Please refer to our documentation for information about production deployment, configuration options, administrative tasks, architecture, and general usage.

Development

Requirements:

  • To build locally: Java >= 8, Python >= 3.6, Npm >= 6.4, Maven, Docker
  • To build in a containerized environment: Docker and Act must be installed on your machine.

To simplify the process of building this project from scratch, we provide build scripts that run all necessary steps (build, check, test, and release). There is also an easy way to do this in a containerized environment (see the workflows for details).

Build

Execute this command in the project root folder to build this project and the respective docker container:

# Install dependency
pip install universal-build

python build.py --make

# Containerized via act
# The `-b` flag binds the current directory to the act container and the build artifacts appear on your host machine.
act -b -j build -s BUILD_ARGS="--make"

When the BUILD_ARGS secret is omitted for act, the default flags are used.

This script compiles the project, assembles the various JAR artifacts (executable service, client, sources), and builds a Docker image containing the assembled executable JAR. For additional script options:

python build.py --help

Test

Running the tests from the repository root executes the backend tests as well as the webapp tests (all *.test.js files).

python build.py --test

# Containerized via act
act -b -j build -s BUILD_ARGS="--test"

Before running the tests, the project has to be built. You can additionally add the --make flag to first build and then test.
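Combining both flags from the section above, a build-then-test run in one invocation looks like this:

```shell
# Build the project first, then run the backend and webapp tests
python build.py --make --test
```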

The project can also be built and tested on GitHub Actions via the build-pipeline: click on Run workflow and pass --make --test --force --version 0.0.0 --skip-path services/lab-workspace --skip-path services/lab-model-service --skip-path services/simple-workspace-service to the "Arguments passed to build script" input. With this input, the project is built and tested; the Workspace image is skipped because it is very large and not needed for the tests.

Deploy

Execute this command in the project root folder to push all Docker images to the configured Docker registry:

python build.py --release --version={MAJOR.MINOR.PATCH-TAG}

For deployment, the version has to be provided. The version format should follow the Semantic Versioning standard (MAJOR.MINOR.PATCH). For additional script options:

python build.py --help
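As an illustrative sketch (the regular expression below is an assumption for pre-flight checking, not part of the build script), a version string can be sanity-checked against the MAJOR.MINOR.PATCH format with an optional -TAG suffix before releasing:

```shell
# Check that a version string matches MAJOR.MINOR.PATCH with an optional -TAG suffix
version="0.2.1-beta"
if echo "$version" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-[A-Za-z0-9.]+)?$'; then
    echo "valid version: $version"
else
    echo "invalid version: $version"
fi
```

A guard like this can be wrapped around the --release call so a malformed version fails fast instead of producing mis-tagged images.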

Configuration

For more configuration details, check out the documentation.


Contributing