Trending repositories for topic visual-odometry
MACVO: Metrics-aware Covariance for Learning-based Stereo Visual Odometry
Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
A simple monocular visual odometry (part of vSLAM) using ORB keypoints, with initialization, tracking, a local map, and bundle adjustment; a minimal sketch of the tracking step appears after this list. (WARNING: Hi, I'm sorry that this project is tuned for course demo, ...)
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
StereoVision-SLAM is a real-time visual stereo SLAM (Simultaneous Localization and Mapping)
[ICRA'23] The official implementation of "Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras"
RGBD-3DGS-SLAM is a monocular SLAM system leveraging 3D Gaussian Splatting (3DGS) for accurate point cloud and visual odometry estimation. By integrating neural networks, it estimates depth and camera...
EndoSLAM Dataset and an Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
[CoRL '21] TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
Underwater Dataset for Visual-Inertial Methods, including data with transitions between multiple refractive media.
Sample code and supplementary materials of IEEE Robotics and Automation Letters (RA-L) with ICRA 2022 paper: "Quasi-globally Optimal and Real-time Visual Compass in Manhattan Structured Environments"
[ICCV 2021] Official implementation of "The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation"
"Visual-Inertial Dataset" (RA-L'21 with ICRA'21): it contains harsh motions for VO/VIO, like pure rotation or fast rotation with various motion types.
Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)
Unsupervised Scale-consistent Depth Learning from Video (IJCV2021 & NeurIPS 2019)
Implementation of the paper "Transformer-based model for monocular visual odometry: a video understanding approach".
Dense Prediction Transformer for scale estimation in monocular visual odometry
"Visual-Inertial Dataset" (RA-L'21 with ICRA'21): it contains harsh motions for VO/VIO, like pure rotation or fast rotation with various motion types.
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
Sample code and supplementary materials of IEEE Robotics and Automation Letters (RA-L) with ICRA 2022 paper: "Quasi-globally Optimal and Real-time Visual Compass in Manhattan Structured Environments"
[ECCV 2022] JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes
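For orientation on the ORB-keypoint entry above, here is a minimal, hypothetical sketch of the frame-to-frame tracking step in Python with OpenCV. It is not taken from any of the listed repositories: the intrinsics matrix K and the frame list are placeholders, local mapping and bundle adjustment are omitted, and with a single camera the recovered trajectory is only correct up to scale.

import cv2
import numpy as np

# Placeholder pinhole intrinsics (KITTI-like values); replace with your camera's calibration.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(img_prev, img_curr):
    # Detect and match ORB keypoints between two consecutive grayscale frames.
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then cheirality check to select the valid (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # maps points from the previous camera frame into the current one

def run(frames):
    # Chain relative motions into a world-frame trajectory (no map, no bundle adjustment).
    R_w, t_w = np.eye(3), np.zeros((3, 1))
    trajectory = [t_w.copy()]
    for prev, curr in zip(frames[:-1], frames[1:]):
        R, t = relative_pose(prev, curr)
        # Pose of the current camera in the previous frame, chained into the world frame.
        t_w = t_w + R_w @ (-R.T @ t)   # translation is unit-length: scale is arbitrary
        R_w = R_w @ R.T
        trajectory.append(t_w.copy())
    return trajectory

A full system along the lines of that repository's description would add an initialization stage, keyframe selection, triangulated landmarks for the local map, and a bundle-adjustment backend such as g2o or Ceres on top of this loop.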