Trending repositories for topic visual-odometry
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM
Unsupervised Scale-consistent Depth Learning from Video (IJCV2021 & NeurIPS 2019)
MACVO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry
A simple monocular visual odometry pipeline (part of vSLAM) using ORB keypoints, with initialization, tracking, a local map and bundle adjustment. (WARNING: Hi, I'm sorry that this project is tuned for course demo, ...
StereoVision-SLAM is a real-time visual stereo SLAM (Simultaneous Localization and Mapping)
EndoSLAM Dataset and an Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
Underwater Dataset for Visual-Inertial Methods and data with transitioning between multiple refractive media.
[CoRL 21'] TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
[ICRA'23] The official Implementation of "Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras"
RGBD-3DGS-SLAM is a monocular SLAM system leveraging 3D Gaussian Splatting (3DGS) for accurate point cloud and visual odometry estimation. By integrating neural networks, it estimates depth and camera...
Implementation of the paper "Transformer-based model for monocular visual odometry: a video understanding approach".
[ICCV 2021] Official implementation of "The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation"
Dense Prediction Transformer for scale estimation in monocular visual odometry
"Visual-Inertial Dataset" (RA-L'21 with ICRA'21): it contains harsh motions for VO/VIO, like pure rotation or fast rotation with various motion types.
[ECCV 2022]JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes
Efficient monocular visual odometry for ground vehicles on ARM processors