
@goldbattle
Created May 31, 2021 14:44

From Hans Kumar to Everyone: 10:13 AM Hi, great talk! I always have trouble picking the delta parameter for the RPE metric. Any intuition on how to pick this?

  • Normally this is tied to what you want “to look at”. For example, a short length of a meter might be enough if you just care about odometry accuracy, while longer lengths can evaluate the long-term performance. Generally speaking, the max length should be much smaller than the total trajectory length, since one would want to get multiple samples of the same length over the whole trajectory.
  • Generally, what one cares about is the “trend” in the RPE. Is the algorithm always better at all lengths? For loop-closure methods, is the RPE the same over all segment lengths? What is the expected accuracy if the robot travels a given distance?
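The sampling idea above — many segments of the same ground-truth length drawn from across the trajectory — can be sketched with a minimal, translation-only RPE helper. This is a hypothetical illustration assuming time-aligned position arrays; real tooling (e.g. the `ov_eval` package or the TUM/KITTI scripts) also handles orientation and full SE(3) relative poses:

```python
import numpy as np

def rpe_translation(gt, est, delta):
    """Translation-only relative pose error over segments of
    ground-truth path length ~delta.

    gt, est: (N, 3) arrays of positions at the same timestamps.
    delta:   segment length in meters, measured along the ground truth.
    Returns a list of per-segment translation errors (meters).
    """
    # cumulative distance traveled along the ground-truth path
    steps = np.linalg.norm(np.diff(gt, axis=0), axis=1)
    dist = np.concatenate(([0.0], np.cumsum(steps)))
    errors = []
    for i in range(len(gt)):
        # first index j at least delta meters past index i
        j = np.searchsorted(dist, dist[i] + delta)
        if j >= len(gt):
            break
        rel_gt = gt[j] - gt[i]    # relative ground-truth motion
        rel_est = est[j] - est[i] # relative estimated motion
        errors.append(np.linalg.norm(rel_gt - rel_est))
    return errors
```

Sweeping `delta` over several values and plotting the error statistics per length is what reveals the “trend” mentioned above.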

From Ran Cheng to Everyone: 10:14 AM Can we use deep nets on vision to do initialization and to complement the missing DoF of the IMU? For example, we can always estimate the opposite of the gravity direction from an RGB image.

  • I am not sure how to interpret this, but using networks in conjunction with VINS is a promising direction to provide more constraints to the system from these learned priors.

From Haoming Zhang (IRT | RWTH) to Me: (Direct Message) 10:16 AM Hi Patrick, I do have a question regarding OpenVINS. Will direct mono/stereo VIO (e.g. DSO) also be integrated into the framework in the future, for benchmarking against feature-based VIO?

  • Right now direct methods aren’t supported, but they might be down the line. In general one could replace the current update functions with a direct update, but the difficulty is typically the visual tracking frontend. The current focus of the project is more straightforward deployment to robots, along with a better secondary loop-closure / mapping thread.

Youtube 10:09 AM HC A question on MSCKF! You mentioned that MSCKF can use many features because it removes the features in the EKF update equation. This sounds like we can use many features for MSCKF, but actual users of MSCKF like ARCore or ARKit use a very small number of features (correct me if I am wrong please!).

  • As mentioned, this is an issue of both computational complexity and diminishing returns. The frontend is normally 30-40% of the total computational cost of VINS. Additionally, as more features are included, the accuracy improves by less and less. This makes sense since the FOV of the camera isn’t changing, thus we saturate the maximum information we can recover from a given viewpoint.
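For context on “removes the features in the EKF update equation”: the MSCKF eliminates the feature error by projecting the linearized measurement onto the left nullspace of the feature Jacobian, so only the state Jacobian enters the update. A minimal NumPy sketch of that projection (a hypothetical helper for illustration, not OpenVINS code):

```python
import numpy as np

def marginalize_feature(H_x, H_f, r):
    """Project the linearized measurement r = H_x dx + H_f df + n
    onto the left nullspace of H_f, removing the feature error df
    from the EKF update (the MSCKF nullspace trick).

    H_x: (m, n) Jacobian w.r.t. the state
    H_f: (m, 3) Jacobian w.r.t. the 3D feature position
    r:   (m,)   stacked measurement residual
    Returns (H_o, r_o) with m - rank(H_f) rows.
    """
    # Full QR of H_f: the trailing columns of Q span its left nullspace
    Q, _ = np.linalg.qr(H_f, mode="complete")
    rank = np.linalg.matrix_rank(H_f)
    N = Q[:, rank:]               # (m, m - rank) left-nullspace basis
    # N^T H_f = 0, so the feature term vanishes from the projected system
    return N.T @ H_x, N.T @ r
```

Because each feature is eliminated rather than kept in the state, the update cost grows with the number of clones, not the number of features, which is why many features are cheap in principle even if the frontend cost and diminishing returns cap them in practice.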

Youtube 10:15 AM songming chen I see OpenVINS system running on the KAIST Urban 39 dataset, with the stereo image feed along with the 200Hz IMU. Can we feed monocular images? What will be the challenge then?

  • You can, but it will not be as stable, due to both the lack of dynamic initialization and the constant-acceleration degeneracy. There is a GitHub issue which discusses trying to run the KAIST dataset on a monocular image input.

Youtube 10:15 AM Shashika Chathuranga Does it have self-initialization at the beginning? And does it support 360° datasets?

  • Right now OpenVINS only has static initialization, which requires the beginning of the trajectory to be stationary. 360° cameras are not supported since we only support pinhole models, and I would assume you would need to use a spherical one?
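For intuition on static initialization: while the platform is stationary, the averaged accelerometer reading gives the gravity direction in the body frame, which fixes the initial roll and pitch (yaw stays unobservable from gravity alone). A minimal sketch of that alignment, using a hypothetical helper and ignoring accelerometer bias:

```python
import numpy as np

def static_init_orientation(accel):
    """Estimate initial roll/pitch from accelerometer samples taken
    while stationary. Yaw cannot be recovered from gravity alone.

    accel: (N, 3) raw accelerometer samples [m/s^2].
    Returns (roll, pitch) in radians.
    """
    a = accel.mean(axis=0)  # average out noise; at rest this is gravity in body frame
    roll = np.arctan2(a[1], a[2])
    pitch = np.arctan2(-a[0], np.sqrt(a[1] ** 2 + a[2] ** 2))
    return roll, pitch
```

A real implementation would also check that the samples are actually stationary (e.g. via the accelerometer variance) before trusting this estimate, and would use the averaged gyroscope readings for the initial gyro bias.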
@rising-turtle

Hi @patrick, thank you for the explanation of the importance of observability analysis: avoiding updates to the state variables along the unobservable directions. How can this strategy be implemented in a smoothing-based method, like OKVIS? Do you have any ideas or suggestions? Thanks!
