
depth_anything_v2_ros2


Overview

This package is a ROS2 wrapper for the depth_anything_v2 library. It provides a ROS2 node that subscribes to a camera topic and publishes the depth map of the scene.

[Example: RGB input (left) and predicted depth map (right)]

The depth_anything_v2_ros2 package has been tested under ROS2 Humble on Ubuntu 22.04. This is research code: expect it to change often, and any fitness for a particular purpose is disclaimed.

Installation

Building from Source

Dependencies

Building

To build from source, clone the latest version from the main repository into your colcon workspace and install the Python dependencies by running the following commands:

cd colcon_workspace/src
git clone https://github.com/grupo-avispa/depth_anything_v2_ros2.git -b main
cd depth_anything_v2_ros2
git submodule init
git submodule update
pip3 install -r requirements.txt

Before building, make sure to download the depth_anything_v2 model weights and place them in the models directory. The weights can be downloaded from the upstream Depth-Anything-V2 repository.

Then, install the ROS2 dependencies using rosdep and build the package using:

cd colcon_workspace
rosdep install -i --from-path src --rosdistro humble -y
colcon build --symlink-install

Usage

With an RGB image source running, launch the depth_anything_v2_ros2 node with:

ros2 launch depth_anything_v2_ros2 default.launch.py

Nodes

depth_anything

This node subscribes to a camera topic and publishes the depth map of the scene.
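
If you prefer not to depend on cv_bridge, a floating-point depth Image message can be decoded with plain NumPy. The sketch below assumes the node publishes 32FC1-encoded images (check the message's encoding field at runtime) and uses a small stand-in object in place of the real sensor_msgs/msg/Image so the helper is self-contained; it also ignores row padding (the step field) for simplicity.

```python
# Decode a 32FC1 depth image message into a NumPy array (sketch).
# FakeImage stands in for sensor_msgs/msg/Image; only the fields the helper
# reads are modeled. The 32FC1 encoding is an assumption -- verify it against
# the actual messages, and note that row padding (step) is not handled here.
from dataclasses import dataclass

import numpy as np


@dataclass
class FakeImage:
    height: int
    width: int
    encoding: str
    is_bigendian: int
    data: bytes


def depth_to_array(msg) -> np.ndarray:
    """Return the depth image as a (height, width) float32 array."""
    if msg.encoding != '32FC1':
        raise ValueError(f'unexpected encoding: {msg.encoding}')
    # Respect the message's byte order when interpreting the raw buffer.
    dtype = np.dtype(np.float32).newbyteorder('>' if msg.is_bigendian else '<')
    depth = np.frombuffer(msg.data, dtype=dtype)
    return depth.reshape(msg.height, msg.width)


# Round-trip a synthetic 2x3 depth map through the helper.
original = np.arange(6, dtype=np.float32).reshape(2, 3)
msg = FakeImage(height=2, width=3, encoding='32FC1',
                is_bigendian=0, data=original.tobytes())
decoded = depth_to_array(msg)
assert np.array_equal(decoded, original)
```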

Subscribed Topics

  • camera/color/image_raw (sensor_msgs/msg/Image)

    RGB input image used for inference. The topic name is set by the image_topic parameter.

Published Topics

  • depth (sensor_msgs/msg/Image)

    Predicted depth map of the scene. The topic name is set by the depth_image_topic parameter.

Parameters

  • image_topic (string, default: "camera/color/image_raw")

    Topic where the image will be subscribed.

  • depth_image_topic (string, default: "depth")

    Topic where the raw depth image will be published.

  • device (string, default: "cuda:0")

    Device to use for the inference (e.g. cpu or cuda:0).

  • model_file (string, default: "depth_anything_v2_vits.pth")

    Name of the model weights file, located in the models directory.

  • encoder (string, default: "vits")

    Encoder to use for the inference (vits, vitb or vitl).