
No slam Points for 3 cameras. #455

Open
BearrGod opened this issue Jul 10, 2024 · 2 comments
Labels
user-platform User has trouble running on their own platform.

Comments

@BearrGod

Hello everyone. I'm using OpenVINS with 3 cameras (150-degree HFOV) mounted 45 degrees apart, giving a decent overlapping field of view. But when I try to use all 3 cameras, the SLAM part does not work (no SLAM points are published). This is fine as long as I keep moving the platform around, but the moment I stay stationary the estimated pose drifts away. When I activate only 2 cameras (max_cameras:=2), SLAM works. Any help on the matter?
Here is my config file:

%YAML:1.0 # need to specify the file type at the top!

verbosity: "INFO" # ALL, DEBUG, INFO, WARNING, ERROR, SILENT

use_fej: true # if first-estimate Jacobians should be used (enable for good consistency)
integration: "rk4" # discrete, rk4, analytical (if rk4 or analytical used then analytical covariance propagation is used)
use_stereo: false # if we have more than 1 camera, if we should try to track stereo constraints between pairs
max_cameras: 2 # how many cameras we have 1 = mono, 2 = stereo, >2 = binocular (all mono tracking)

calib_cam_extrinsics: true # if the transform between camera and IMU should be optimized R_ItoC, p_CinI
calib_cam_intrinsics: false # if camera intrinsics should be optimized (focal, center, distortion)
calib_cam_timeoffset: false # if timeoffset between camera and IMU should be optimized
calib_imu_intrinsics: true # if imu intrinsics should be calibrated (rotation and skew-scale matrix)
calib_imu_g_sensitivity: true # if gyroscope gravity sensitivity (Tg) should be calibrated

max_clones: 11 # how many clones in the sliding window
max_slam: 80 # number of features in our state vector
max_slam_in_update: 25 # update can be split into sequential updates of batches, how many in a batch
max_msckf_in_update: 40 # how many MSCKF features to use in the update
dt_slam_delay: 0.5 # delay before initializing (helps with stability from bad initialization...)

gravity_mag: 9.81 # magnitude of gravity in this location

feat_rep_msckf: "GLOBAL_3D"
feat_rep_slam: "ANCHORED_MSCKF_INVERSE_DEPTH"
feat_rep_aruco: "ANCHORED_MSCKF_INVERSE_DEPTH"

# zero velocity update parameters we can use
# we support either IMU-based or disparity detection.
try_zupt: false
zupt_chi2_multipler: 0.4 # set to 0 for only disp-based
zupt_max_velocity: 0.05
zupt_noise_multiplier: 50
zupt_max_disparity: 0.0 # set to 0 for only imu-based
zupt_only_at_beginning: false

# ==================================================================
# ==================================================================

init_window_time: 1.0 # how many seconds to collect initialization information
init_imu_thresh: 1.2 # threshold for variance of the accelerometer to detect a "jerk" in motion
init_max_disparity: 4.0 # max disparity to consider the platform stationary (dependent on resolution)
init_max_features: 120 # how many features to track during initialization (saves on computation)

init_dyn_use: false # if dynamic initialization should be used
init_dyn_mle_opt_calib: true # if we should optimize calibration during initialization (not recommended)
init_dyn_mle_max_iter: 50 # how many iterations the MLE refinement should use (zero to skip the MLE)
init_dyn_mle_max_time: 0.05 # how many seconds the MLE should be completed in
init_dyn_mle_max_threads: 6 # how many threads the MLE should use
init_dyn_num_pose: 6 # number of poses to use within our window time (evenly spaced)
init_dyn_min_deg: 10.0 # orientation change needed to try to init

init_dyn_inflation_ori: 10 # what to inflate the recovered q_GtoI covariance by
init_dyn_inflation_vel: 100 # what to inflate the recovered v_IinG covariance by
init_dyn_inflation_bg: 10 # what to inflate the recovered bias_g covariance by
init_dyn_inflation_ba: 100 # what to inflate the recovered bias_a covariance by
init_dyn_min_rec_cond: 1e-12 # reciprocal condition number thresh for info inversion

init_dyn_bias_g: [ 0.0, 0.0, 0.0 ] # initial gyroscope bias guess
init_dyn_bias_a: [ 0.0, 0.0, 0.0 ] # initial accelerometer bias guess

# ==================================================================
# ==================================================================

record_timing_information: false # if we want to record timing information of the method
record_timing_filepath: "/tmp/traj_timing.txt" # https://docs.openvins.com/eval-timing.html#eval-ov-timing-flame

# if we want to save the simulation state and its diagional covariance
# use this with rosrun ov_eval error_simulation
save_total_state: false
filepath_est: "/tmp/ov_estimate.txt"
filepath_std: "/tmp/ov_estimate_std.txt"
filepath_gt: "/tmp/ov_groundtruth.txt"

# ==================================================================
# ==================================================================

# our front-end feature tracking parameters
# we have a KLT and descriptor based (KLT is better implemented...)
use_klt: true # if true we will use KLT, otherwise use a ORB descriptor + robust matching
num_pts: 200 # number of points (per camera) we will extract and try to track
fast_threshold: 25 # threshold for fast extraction (warning: lower threshs can be expensive)
grid_x: 5 # extraction sub-grid count for horizontal direction (uniform tracking)
grid_y: 5 # extraction sub-grid count for vertical direction (uniform tracking)
min_px_dist: 15 # distance between features (features near each other provide less information)
knn_ratio: 0.70 # descriptor knn threshold for the top two descriptor matches
track_frequency: 31.0 # frequency we will perform feature tracking at (in frames per second / hertz)
downsample_cameras: false # will downsample image in half if true
num_opencv_threads: -1 # -1: auto, 0-1: serial, >1: number of threads
histogram_method: "HISTOGRAM" # NONE, HISTOGRAM, CLAHE

# aruco tag tracker for the system
# DICT_6X6_1000 from https://chev.me/arucogen/
use_aruco: false
num_aruco: 1024
downsize_aruco: true

# ==================================================================
# ==================================================================

# camera noises and chi-squared threshold multipliers
up_msckf_sigma_px: 0.8
up_msckf_chi2_multipler: 1
up_slam_sigma_px: 0.8
up_slam_chi2_multipler: 1
up_aruco_sigma_px: 1
up_aruco_chi2_multipler: 1

# masks for our images
use_mask: false
mask0: "mask_t265_cam0.png" # relative to current file
mask1: "mask_t265_cam1.png" # relative to current file

# imu and camera spatial-temporal calibration
# imu config should also have the correct noise values
relative_config_imu: "kalibr_imu_chain.yaml"
relative_config_imucam: "kalibr_imucam_chain.yaml"

@goldbattle goldbattle added the user-platform User has trouble running on their own platform. label Aug 12, 2024
@goldbattle
Member

Do you have an example bag with your configuration files I could test with?

The logic to append new SLAM features should be fairly independent of the number of cameras. There might be a bias toward selecting new SLAM features from smaller camera ids, but it seems you don't have any at all after adding the third camera.

// Append a new SLAM feature if we have the room to do so
// Also check that we have waited our delay amount (normally prevents bad first set of slam points)
if (state->_options.max_slam_features > 0 && message.timestamp - startup_time >= params.dt_slam_delay &&
    (int)state->_features_SLAM.size() < state->_options.max_slam_features + curr_aruco_tags) {
  // Get the total amount to add, then the max amount that we can add given our marginalize feature array
  int amount_to_add = (state->_options.max_slam_features + curr_aruco_tags) - (int)state->_features_SLAM.size();
  int valid_amount = (amount_to_add > (int)feats_maxtracks.size()) ? (int)feats_maxtracks.size() : amount_to_add;
  // If we have at least 1 that we can add, lets add it!
  // Note: we remove them from the feat_marg array since we don't want to reuse information...
  if (valid_amount > 0) {
    feats_slam.insert(feats_slam.end(), feats_maxtracks.end() - valid_amount, feats_maxtracks.end());
    feats_maxtracks.erase(feats_maxtracks.end() - valid_amount, feats_maxtracks.end());
  }
}

@BearrGod
Author

Hello @goldbattle. I have a dataset here that you could use to test my 3-camera setup. I changed my extrinsics in the meantime (no overlap in field of view), so there are no stereo constraints. Here is the bag file. My config files are attached here:
config3camsbas.zip

Additionally, I've done a few tests that might point in the right direction. When reading the code, I realized that when there are two cameras, they are automatically synchronized (sync_pol). So I did the same for the 3 cameras (meaning 3 images and one timestamp in the camera measurement) and the SLAM starts working. But I wasn't satisfied with it, since all the SLAM points tend to be on my camera 0.
On the other hand, I tried to print out amount_to_add and valid_amount in the piece of code that adds new SLAM features, and surprisingly valid_amount is always equal to zero when using 3 cameras with my setup.
I hope this helps.
Thank you.
