PrintNanny Vision SDK [Request for Comment] #305

leigh-johnson opened this issue May 8, 2023

Describe your Request
Right now, the implementation of PrintNanny Vision is embedded in the PrintNanny OS system image. PrintNanny OS bundles the whole WebRTC-based video streaming stack, camera drivers, and vision/detection applications (GStreamer pipelines).

We want to separate the vision components so they can exist as a stand-alone SDK for OEMs looking to integrate PrintNanny into their existing software stack.

Community Edition

tl;dr: Connect PrintNanny to any camera system using an open-source model.
demo: Included in PrintNanny OS
licensing: AGPL

Please take a look at step 4 (inference) below.

OEM Edition

tl;dr: Train a PrintNanny model customized for YOUR 3D printer hardware.
demo: TBD
licensing: Commercial

Plug PrintNanny into your existing camera system. These are the bare-bones interfaces needed to collect data, train, and deploy:

1. Data collection

Define an Arrow schema for raw Bayer sensor data (so we're agnostic to the encoding stack), temperature sensor data, and filament flow-rate sensor data.

Collect one sample frame and a histogram of temperature readings per Z-axis movement.
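
What such a schema might look like, sketched with pyarrow (the field names and types here are assumptions, not a final schema):

import pyarrow as pa

# Hypothetical per-sample schema: one record per Z-axis movement.
# Field names and types are placeholders, not the final schema.
SAMPLE_SCHEMA = pa.schema([
    pa.field("ts", pa.timestamp("ms")),                  # capture timestamp
    pa.field("z_height_mm", pa.float32()),               # Z height that triggered the sample
    pa.field("frame_raw", pa.binary()),                  # raw Bayer sensor dump (encoding-agnostic)
    pa.field("frame_width", pa.int32()),
    pa.field("frame_height", pa.int32()),
    pa.field("bayer_pattern", pa.string()),              # e.g. "RGGB"
    pa.field("temp_histogram", pa.list_(pa.float32())),  # binned temperature readings since last Z move
    pa.field("flow_rate_mm3_s", pa.float32()),           # filament flow rate
])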

import printnanny_vision

# Configure your API key
printnanny_vision.init(api_key="demo")

# Provide a name and schema for your dataset
SCHEMA = "/path/to/arrow/schema"
DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# Collect data samples until the user presses control+c to interrupt this script.
my_dataset = printnanny_vision.Dataset(schema=SCHEMA, name=DATASET_NAME)
try:
    print(f"PrintNanny is collecting samples for dataset {DATASET_NAME}. Press control+c to interrupt and upload the dataset.")
    my_dataset.run_collector()
except KeyboardInterrupt:
    print(f"PrintNanny is uploading {DATASET_NAME}. This could take a while, you might want to grab a coffee☕")
    # Upload dataset, and print upload progress to terminal
    my_dataset.upload(progress=True)

print(f"PrintNanny finished uploading {DATASET_NAME}! You can view it at: {my_dataset.url}")

2. Labeling

  • Bounding box defective areas
  • Paint (segment) defective areas

TBD. I use a fork of VoTT for my labeling infrastructure, with a guidance model to speed up manual labeling.

We have the option of partnering with a data labeling service here.
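
For concreteness, one label record could be shaped like this (a sketch; the structure, field names, and tag values are assumptions, and this is not VoTT's export format):

# Hypothetical label record: one entry per annotated frame,
# supporting both bounding boxes and segmentation polygons.
label = {
    "frame_id": "frame_0042",
    "defects": [
        {
            "kind": "bounding_box",
            "tag": "spaghetti",
            "box": {"left": 120, "top": 80, "width": 64, "height": 48},  # pixels
        },
        {
            "kind": "segment",
            "tag": "layer_shift",
            "polygon": [[10, 10], [40, 12], [38, 50], [9, 47]],  # (x, y) vertices
        },
    ],
}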

3. Training

  • EfficientDet backbone
  • BiFPN allows us to start with image data, then add additional feature extractor networks for temperature/flow
  • Recommend a sensitivity threshold (see the sketch after the training example below)

For a first pass (without temperature/flow rate data), we can use any commodity vision AutoML product. The blog post linked below shows an example of the results achieved with Google Cloud AutoML Vision.

import printnanny_vision

DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# Submit training job via Google Cloud Platform AutoML platform (get a quick working prototype for ~$200, minimum 4,000 samples)
# See this blog post for an example: https://medium.com/towards-data-science/soft-launching-an-ai-ml-product-as-a-solo-founder-87ee81bbe6f6
printnanny_vision.train(dataset_name=DATASET_NAME, timeout="6h", backend="gcp-automl", model_name="2023-05-08_AutoML")

# Run a local EfficientDet training job, incorporating flow rate and temperature data
printnanny_vision.train(dataset_name=DATASET_NAME, timeout="6h", backend="printnanny-efficientdet", model_name="2023-05-08-efficientdet")
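
On recommending a sensitivity threshold: a minimal sketch of one approach, assuming we have per-sample defect scores and binary ground-truth labels for a held-out validation set (recommend_threshold is a hypothetical helper, not part of the proposed API):

import numpy as np

def recommend_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """Sweep candidate thresholds and return the one maximizing F1 on validation data."""
    best_threshold, best_f1 = 0.5, 0.0
    for threshold in np.linspace(0.05, 0.95, 19):
        predicted = scores >= threshold
        tp = int(np.sum(predicted & (labels == 1)))
        fp = int(np.sum(predicted & (labels == 0)))
        fn = int(np.sum(~predicted & (labels == 1)))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        if f1 > best_f1:
            best_threshold, best_f1 = float(threshold), f1
    return best_threshold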

4. Inference

  • 1 inference pass per Z-axis layer (see the hook sketch after the example below)
  • Online (cloud) inference
  • Offline (air-gapped) inference remains available in PrintNanny OS as a reference implementation, and we'll work with vendors directly where air-gapped operation is a P0 requirement.

import printnanny_vision


JOB_NAME = "KaplanTurbineV2.stl"
MODEL_NAME = "2023-05-08__EfficidentDet"
CAMERA_ENDPOINT = "http://localhost:8080/snapshot`
NOTIFICATION_WEBHOOK = "http://localhost:8080/notifications"

# On a z-axis height change, call printnanny_vision.monitor()
printnanny_vision.monitor(camera=CAMERA_ENDPOINT, model_name=MODEL_NAME, save_results=True, job_name=JOB_NAME, webhook=NOTIFICATION_WEBHOOK)
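
How a Z-height change is detected is up to the host integration. As one hypothetical example, a host could watch outgoing G-code for Z moves and trigger a monitoring pass, reusing the constants above (the G-code parsing details are assumptions, not part of the SDK):

import re
import printnanny_vision

# Matches G0/G1 moves that include a Z parameter, e.g. "G1 X10 Y10 Z0.4"
Z_MOVE = re.compile(r"^G[01]\b.*\bZ(?P<z>\d+(\.\d+)?)", re.IGNORECASE)

def on_gcode_sent(line, last_z):
    """Trigger one monitoring pass per Z-height change (hypothetical host hook)."""
    match = Z_MOVE.match(line.strip())
    if match:
        z = float(match.group("z"))
        if z != last_z:
            printnanny_vision.monitor(
                camera=CAMERA_ENDPOINT,
                model_name=MODEL_NAME,
                save_results=True,
                job_name=JOB_NAME,
                webhook=NOTIFICATION_WEBHOOK,
            )
            return z
    return last_z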

5. Feedback

  • Build a data frame where 1 row is the original input data + the inference pass (see the sketch below)
  • Notification webhook for true positives, configured in the printnanny_vision.monitor() call above
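
A sketch of that feedback frame with pandas; the column names mirror the collection-schema sketch above and, like the placeholder values, are assumptions:

import pandas as pd

# Hypothetical feedback frame: one row per inference pass, joining the
# original input sample with the model output for that Z layer.
feedback = pd.DataFrame([
    {
        "ts": "2023-05-08T12:00:00Z",
        "z_height_mm": 12.4,
        "temp_histogram": [210.1, 210.4, 209.8],
        "flow_rate_mm3_s": 4.2,
        "model_name": "2023-05-08-efficientdet",
        "defect_score": 0.91,
        "defect_detected": True,
        "webhook_delivered": True,
    },
])

print(feedback.head())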

This gives us everything we need to train and deploy a pilot model.
