Victim identification

Package architecture

Architecture Overview

Nodes

Interfaces

  • Input: the topic named by victim_topics.yaml::/subscribed_topics_names/enhanced_hole_alert. This topic is of type vision_communications::EnhancedHolesVectorMsg.msg.
  • Output: the topic named by victim_topics.yaml::/published_topics_names/victim_alert. This topic is of type pandora_common_msgs::GeneralAlertMsg.msg (a subscriber sketch follows this list).
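
As a quick way to consume the victim node's output, the following minimal rospy sketch subscribes to the alert topic and logs each incoming GeneralAlertMsg. The topic name /vision/victim_alert and the node name victim_alert_listener are assumptions used only for illustration; use whatever topic is actually configured under published_topics_names/victim_alert in victim_topics.yaml.

#!/usr/bin/env python
# Minimal sketch of a listener for the victim node's output alerts.
# Assumption: the published topic is /vision/victim_alert; the real name
# comes from victim_topics.yaml::/published_topics_names/victim_alert.
import rospy
from pandora_common_msgs.msg import GeneralAlertMsg


def victim_alert_callback(msg):
    # Log the whole alert message; its exact fields are defined in
    # pandora_common_msgs/GeneralAlertMsg.msg.
    rospy.loginfo("Received victim alert:\n%s", msg)


if __name__ == '__main__':
    rospy.init_node('victim_alert_listener')
    rospy.Subscriber('/vision/victim_alert', GeneralAlertMsg,
                     victim_alert_callback)
    rospy.spin()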

How to run

Both the victim and training nodes are state dependent. For this reason, two steps are required to run either of them: first, initialize the node with the appropriate roslaunch command, and second, switch to one of the appropriate states with a rosrun command. The victim node is responsible for identifying victims in a scene based on a trained dataset, while the training node is responsible for training the system on a preselected dataset. The camera images used can be captured from either a Kinect or an Xtion; the launcher that initiates each node lets you choose which camera to capture from.

Standalone

You initiate the execution by running:

Victim node:

roslaunch pandora_vision_victim victim_node_standalone.launch [option]

Victim Training node:

roslaunch pandora_vision_victim victim_train_node_standalone.launch [option]

Available values for [option]:

  • With a Kinect plugged into your computer: openni:=true
  • With an Xtion plugged into your computer: openni2:=true
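
For example, with an Xtion connected, the victim node is launched as:

roslaunch pandora_vision_victim victim_node_standalone.launch openni2:=true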

Then execute the following so that the nodes transition to an ON state:

rosrun state_manager state_changer 3
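
Once the node is in an ON state, you can verify that alerts are actually being published with rostopic. The topic name below is an assumption; substitute the one configured under published_topics_names/victim_alert in victim_topics.yaml.

rostopic echo /vision/victim_alert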

On the robot

Only for the victim node!

You initiate the execution by running:

roslaunch pandora_vision_victim pandora_vision_victim_node.launch

Visualization

In standalone mode

Run rosrun rqt_reconfigure rqt_reconfigure and choose pandora_vision -> victim_node.
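
If you prefer the command line over the GUI, the dynparam tool from dynamic_reconfigure can dump the node's current parameters. The node name below is an assumption based on the pandora_vision -> victim_node entry shown in rqt_reconfigure.

rosrun dynamic_reconfigure dynparam get /pandora_vision/victim_node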