
ControlNet

ControlNet with Canny Edge

Input

  • Prompt: "bird"

(input image)

Output

(output image)
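The Canny conditioning input is an edge map: white edges on a black background. As a rough illustration of the idea only (not the detector control_net.py actually uses), the sketch below binarizes a Sobel gradient magnitude; `edge_control_image` and its threshold are hypothetical names.

```python
import numpy as np

def edge_control_image(img, threshold=50):
    """Simplified stand-in for Canny preprocessing: Sobel gradient
    magnitude followed by a threshold (hypothetical helper)."""
    img = img.astype(np.float32)
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    # Binarize: white edges on black, the format canny conditioning expects
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# A synthetic image with one sharp vertical boundary
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
edges = edge_control_image(img)
```

A real pipeline would run an actual Canny detector (with hysteresis and non-maximum suppression) on the input photo instead.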

ControlNet with Human Pose

Input

  • Prompt: "Chef in the kitchen"

(input image)

Output

(output image)
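The pose conditioning input is an OpenPose-style skeleton rendered on a black canvas. A minimal sketch of the rasterization step, assuming keypoints have already been extracted by a pose-estimation model; `pose_control_image` and the keypoint format are hypothetical, not part of control_net.py.

```python
import numpy as np

def pose_control_image(keypoints, limbs, size=(64, 64)):
    """Rasterize keypoints and limb connections onto a black canvas.

    keypoints: list of (x, y) pixel coordinates (hypothetical format).
    limbs: list of (i, j) index pairs connecting keypoints.
    """
    canvas = np.zeros(size, dtype=np.uint8)
    for i, j in limbs:
        (x0, y0), (x1, y1) = keypoints[i], keypoints[j]
        # Sample points densely along the segment and mark them white
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1
        for t in np.linspace(0.0, 1.0, n):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            canvas[y, x] = 255
    return canvas

# A tiny stick figure: a head point, a torso joint, two arm endpoints
kps = [(32, 8), (32, 32), (16, 24), (48, 24)]
limbs = [(0, 1), (1, 2), (1, 3)]
skeleton = pose_control_image(kps, limbs)
```

The real OpenPose format additionally color-codes each limb, but the black-canvas skeleton is the core of the conditioning image.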

ControlNet with Semantic Segmentation

Input

  • Prompt: "house"

(input image)

Output

(output image)
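The segmentation conditioning input is a per-pixel class map color-coded with a fixed palette. The sketch below shows the color-coding step with a hypothetical 3-class palette; the colors are illustrative only, not the actual ADE20K-style palette a segmentation ControlNet uses.

```python
import numpy as np

# Hypothetical 3-class palette (background, building, sky); real
# segmentation palettes cover many more classes.
PALETTE = np.array(
    [[0, 0, 0], [180, 120, 120], [135, 206, 235]], dtype=np.uint8
)

def seg_control_image(class_map):
    """Color-code a per-pixel class-id map into an RGB control image
    via numpy fancy indexing."""
    return PALETTE[class_map]

ids = np.zeros((4, 4), dtype=np.int64)
ids[:2, :] = 2     # sky in the top half
ids[2:, 1:3] = 1   # a building in the middle of the bottom half
rgb = seg_control_image(ids)
```

In a real pipeline the class map itself comes from a segmentation model run on the input photo.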

Requirements

This model requires an additional module.

pip3 install transformers

Usage

The onnx and prototxt files are downloaded automatically on the first run, so an Internet connection is required while they are being downloaded.

For the sample image,

$ python3 control_net.py

This saves each sample individually, as well as a grid whose size is set by the --n_samples option.

If you want to specify the input image, put the image path after the --input option. You can use the --savepath option to change the name of the output file.

$ python3 control_net.py --input IMAGE_PATH --savepath SAVE_IMAGE_PATH

If you want to specify the prompt text, put the text after the --prompt option.

$ python3 control_net.py --prompt TEXT

By adding the --model_type option, you can specify the model type, selected from "canny", "pose", or "seg" (default: canny).

$ python3 control_net.py --model_type canny
  • For the Canny Edge example:

    $ python3 control_net.py --model_type canny --input examples/bird.png --prompt bird
  • For the Human Pose example:

    $ python3 control_net.py --model_type pose --input examples/pose1.png --prompt "Chef in the kitchen"
  • For the Semantic Segmentation example:

    $ python3 control_net.py --model_type seg --input examples/house.png --prompt house

Reference

Framework

PyTorch

Model Format

ONNX opset=12

Netron