
Scale Estimation doesn't converge #148

Open

bishwa9 opened this issue Jul 7, 2016 · 9 comments

bishwa9 commented Jul 7, 2016

Hello @simonlynen ,

I think I have the framework up and running (no frequent fuzzy-tracking or negative-scale errors). However, I can't get the scale estimate to converge. My initialization procedure:

  1. Start all nodes
  2. Get an initial map (very minimal) from ORB_SLAM
  3. Set it back at ORB_SLAM's origin (close by)
  4. Set the init_height to 0.1 (lowest it goes on the dynamic reconfigure window)
  5. Initialize the framework

These are my initialization parameters:
[screenshot: initialization parameters]

This is the scale estimate. As you can see, it keeps increasing. From experimentation, the true scale is approximately 0.1. No matter the initial value, the estimate always increases.

[screenshot: scale estimate plot]

Please help,
Thank You,
Roy.


bishwa9 commented Jul 27, 2016

Sadly, I haven't gotten it working with ORB_SLAM. Frustrated with the finickiness, I tried a very simple set-up, replacing ORB_SLAM with a mocap system: mocap for the pose measurement and an IMU for the state prediction. Even with this I get weird results. I suspect my frame set-ups are wrong, but I can't figure out where the problem is. Can you please post your frame set-ups?

I am also currently trying Engel's approach to scale estimation, and the scale still keeps diverging. So it may be the way ORB_SLAM keeps changing its scale, or the inherent pose noise, that is causing us so much grief.

@yewzijian

@bishwa9 I managed to get it to work with ORB-SLAM as an unscaled pose sensor. Just some pointers, you need to set/initialize the following carefully:

  1. Inter-sensor calibration parameters, especially q_ic. You can use Kalibr to find these.
  2. Initial values of q and q_wv: you should have some idea of the starting orientation of the platform.
  3. b_w: which can be estimated from a stationary IMU.
  4. Lambda (the scale): it seems to tolerate values even >2x the ground truth, but you should still initialize it with a reasonable value.

As you can see, you need to set reasonable values for the rotation parameters; otherwise the EKF won't converge. MSF does estimate inter-sensor calibration such as q_ic, but only if the initial values are reasonably correct.

Also, even if you do all these things right, do understand that the EKF converges only with sufficient excitation of the system, so you'll need to move and rotate the system around.


bishwa9 commented Aug 24, 2016

@yewzijian thank you for the update. Can you please elaborate on sufficient excitation? I realize there needs to be movement in 4 DOFs, but how much, in terms of magnitude of acceleration and angular velocity? And after convergence, if you move 1 meter, does the output say you have moved a meter?

Additionally, one of the things I saw on the dataset is that no matter what values you initialize the parameters with, they converge to the correct values. For instance, I initialized q_ic to a random value; once movement started in the dataset, the state variable converged to identity.

@yewzijian

@bishwa9 As long as there's some acceleration and rotation in multiple axes for some time (maybe around 20-60 s), that should be enough. You may want to try some of the EuRoC MAV datasets.

Anyway, I just tried initializing q_ic to an unreasonable value, and the scale couldn't converge, so I couldn't replicate what you observed.


kongan commented Dec 23, 2016

@yewzijian Hi, can you share the initialization parameters for the EuRoC MAV datasets?

@yewzijian

@kongan Here are my relevant values for the inter-sensor calibration, which are provided with the datasets themselves. The original IMU noise levels didn't work well for me, so I doubled them.

/msf_viconpos_sensor/core/core_noise_acc: 4.0000e-3
/msf_viconpos_sensor/core/core_noise_accbias: 6.0000e-3
/msf_viconpos_sensor/core/core_noise_gyr: 3.4e-04
/msf_viconpos_sensor/core/core_noise_gyrbias: 3.8e-05

/msf_viconpos_sensor/my_sensor/pose_initial_scale: 1.0
/msf_viconpos_sensor/my_sensor/pose_noise_scale: 0.0
/msf_viconpos_sensor/my_sensor/pose_noise_p_wv: 0.0
/msf_viconpos_sensor/my_sensor/pose_noise_q_wv: 0.0
/msf_viconpos_sensor/my_sensor/pose_noise_q_ic: 0.0
/msf_viconpos_sensor/my_sensor/pose_noise_p_ic: 0.0
/msf_viconpos_sensor/my_sensor/pose_delay: 0.0
/msf_viconpos_sensor/my_sensor/pose_noise_meas_p: 0.005 # Need to set according to VO algorithm covariance
/msf_viconpos_sensor/my_sensor/pose_noise_meas_q: 0.005

/msf_viconpos_sensor/my_sensor/pose_fixed_scale: false
/msf_viconpos_sensor/my_sensor/pose_fixed_p_ic: true
/msf_viconpos_sensor/my_sensor/pose_fixed_q_ic: true
/msf_viconpos_sensor/my_sensor/pose_fixed_p_wv: false
/msf_viconpos_sensor/my_sensor/pose_fixed_q_wv: false

/msf_viconpos_sensor/pose_sensor/init/q_ic/w: 0.7123014607
/msf_viconpos_sensor/pose_sensor/init/q_ic/x: -0.007707179756
/msf_viconpos_sensor/pose_sensor/init/q_ic/y: 0.01049932337
/msf_viconpos_sensor/pose_sensor/init/q_ic/z: 0.7017528003
/msf_viconpos_sensor/pose_sensor/init/p_ic/x: -0.0216401454975
/msf_viconpos_sensor/pose_sensor/init/p_ic/y: -0.064676986768
/msf_viconpos_sensor/pose_sensor/init/p_ic/z: 0.00981073058949


kongan commented Dec 24, 2016

@yewzijian Thank you for your response. I use ORB_SLAM2 with the parameter values that you gave, but I continuously get the warning message below, and the scale estimate doesn't converge.

ORB_SLAM2 pose:

// Invert Tcw (the world pose in the camera frame) to get the camera pose
// in the world frame, then publish it as the pose measurement.
cv::Mat Rwc = Tcw.rowRange(0,3).colRange(0,3).t();
cv::Mat twc = -Rwc*Tcw.rowRange(0,3).col(3);
vector<float> q = ORB_SLAM2::Converter::toQuaternion(Rwc);
poseMSG.pose.position.x = twc.at<float>(0);
poseMSG.pose.position.y = twc.at<float>(1);
poseMSG.pose.position.z = twc.at<float>(2);
poseMSG.pose.orientation.x = q[0];
poseMSG.pose.orientation.y = q[1];
poseMSG.pose.orientation.z = q[2];
poseMSG.pose.orientation.w = q[3];

WARN message:

[ WARN] [1482579671.208827019]: Fuzzy tracking triggered: 0.221361 limit: 0.1

Initialization parameters:

data_playback: false
core/core_fixed_bias: false

#########IMU PARAMETERS#######
####### ADIS16448
core/core_noise_acc: 0.004
core/core_noise_accbias: 0.006
core/core_noise_gyr: 3.4e-04
core/core_noise_gyrbias: 3.8e-05

pose_sensor/pose_noise_scale: 0.0
pose_sensor/pose_noise_p_wv: 0.0
pose_sensor/pose_noise_q_wv: 0.0
pose_sensor/pose_noise_q_ic: 0.0
pose_sensor/pose_noise_p_ic: 0.0
pose_sensor/pose_delay: 0.0
pose_sensor/pose_noise_meas_p: 0.005
pose_sensor/pose_noise_meas_q: 0.005
pose_sensor/pose_initial_scale: 1

pose_sensor/init/q_ic/w: 0.712301
pose_sensor/init/q_ic/x: -0.00770718
pose_sensor/init/q_ic/y: 0.0104993
pose_sensor/init/q_ic/z: 0.701753

pose_sensor/init/p_ic/x: -0.0216401454975
pose_sensor/init/p_ic/y: -0.064676986768
pose_sensor/init/p_ic/z: 0.00981073058949

pose_sensor/pose_absolute_measurements: true
pose_sensor/pose_use_fixed_covariance: true
pose_sensor/pose_measurement_world_sensor: false # we do not publish the world in camera frame as set in SVO parameters.

pose_sensor/pose_fixed_scale: false
pose_sensor/pose_fixed_p_ic: true
pose_sensor/pose_fixed_q_ic: true
pose_sensor/pose_fixed_p_wv: false
pose_sensor/pose_fixed_q_wv: false

@yewzijian

pose_sensor/pose_measurement_world_sensor should be true for ORB_SLAM.

If the EKF still doesn't converge and you're getting many fuzzy tracking warnings, check that you initialised q_wv to a reasonable value (you'll have to figure that out yourself according to your use case).

Also, even though the EKF can estimate the scale, it's only robust when the scale is near its true value. Try setting the scale correctly and see if it converges.


yalan2017 commented Oct 13, 2017

Hi,
I am running RGB-D SLAM. I just started using this framework and want to fuse my IMU data with SLAM. I am confused about how to use the output of the EKF for my map. Any help for me?

this is my launch file:

<launch>
    <node name="msf_viconpos_sensor" pkg="msf_updates" type="pose_sensor" clear_params="true" output="screen">

        <remap from="msf_core/hl_state_input" to="/fcu/ekf_state_out" />
        <remap from="msf_core/correction" to="/fcu/ekf_state_in" />
        <remap from="msf_updates/transform_input" to="/vicon/robot_name/robot_name" />

        <remap from="/msf_core/imu_state_input" to="/imu/data"/>
        <remap from="msf_updates/transform_input" to="/rtabmap/odom"/>

        <rosparam file="$(find msf_updates)/rgb-dslam_sensor_fix.yaml"/>

    </node>

    <node pkg="rosservice" type="rosservice" name="initialize" args="call --wait /msf_viconpos_sensor/pose_sensor/initialize_msf_scale 1"/>
</launch>

/pose_sensor/core/data_playback: false
/pose_sensor/core/fixed_bias: 0

#########IMU PARAMETERS#######
######## Asctec Firefly IMU
/pose_sensor/core/core_noise_acc: 0.1
/pose_sensor/core/core_noise_accbias: 0.1
/pose_sensor/core/core_noise_gyr: 0.1
/pose_sensor/core/core_noise_gyrbias: 0.00013

/pose_sensor/pose_sensor/fixed_scale: 1
/pose_sensor/pose_sensor/pose_noise_scale: 0.1
/pose_sensor/pose_sensor/pose_noise_p_wv: 0.0
/pose_sensor/pose_sensor/pose_noise_q_wv: 0.0
/pose_sensor/pose_sensor/pose_noise_q_ic: 0.0024454
/pose_sensor/pose_sensor/pose_noise_p_ic: -0.01823385
/pose_sensor/pose_sensor/pose_delay: 0.002
/pose_sensor/pose_sensor/pose_noise_meas_p: 0.005
/pose_sensor/pose_sensor/pose_noise_meas_q: 0.005

/pose_sensor/pose_sensor/init/q_ic/w: 1.0
/pose_sensor/pose_sensor/init/q_ic/x: 0.0
/pose_sensor/pose_sensor/init/q_ic/y: 0.0
/pose_sensor/pose_sensor/init/q_ic/z: 0.0
/pose_sensor/pose_sensor/init/p_ic/x: 0.0   
/pose_sensor/pose_sensor/init/p_ic/y: 0.0
/pose_sensor/pose_sensor/init/p_ic/z: 0.0

/pose_sensor/pose_sensor/pose_fixed_scale: true
/pose_sensor/pose_sensor/pose_fixed_p_ic: true
/pose_sensor/pose_sensor/pose_fixed_q_ic: true
/pose_sensor/pose_sensor/pose_fixed_p_wv: true
/pose_sensor/pose_sensor/pose_fixed_q_wv: true
 
/pose_sensor/pose_sensor/pose_absolute_measurements: true
/pose_sensor/pose_sensor/pose_use_fixed_covariance: true
/pose_sensor/pose_sensor/pose_measurement_world_sensor: true  # selects if sensor measures its position w.r.t. world (true, e.g. Vicon) or the position of the world coordinate system w.r.t. the sensor (false, e.g. ethzasl_ptam)

/pose_sensor/pose_sensor/pose_measurement_minimum_dt: 0.05  # Sets the minimum time in seconds between two pose measurements.

@hitzg
Regards
[screenshot from 2017-10-13 19:27:32]
