Statistics for topic depth-estimation
RepositoryStats tracks 654,789 GitHub repositories; 188 of these are tagged with the depth-estimation topic. The most common primary language for repositories using this topic is Python (143). Other languages include Jupyter Notebook (17).
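Counts like these can be reproduced approximately against the public GitHub Search API; the sketch below queries repositories tagged with the topic and prints star counts. The endpoint and parameters are the standard Search API, but live totals will differ from the snapshot above.

```python
import requests

# Search public GitHub repositories tagged with the depth-estimation topic,
# sorted by stars. Unauthenticated requests are rate-limited; send an
# Authorization header with a personal access token for higher limits.
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "topic:depth-estimation", "sort": "stars", "order": "desc"},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(f"{data['total_count']} repositories tagged depth-estimation")
for repo in data["items"][:10]:
    print(f"{repo['full_name']}: {repo['stargazers_count']} stars ({repo['language']})")
```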
Stargazers over time for topic depth-estimation
Most starred repositories for topic depth-estimation
Effortless data labeling with AI support from Segment Anything and other awesome models.
[CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
GeometryCrafter: Consistent Geometry Estimation for Open-world Videos with Diffusion Priors
Depth estimation from stereo images using deep-learning-based architectures for disparity measurement. The architectures used for disparity estimation are BgNet, CreStereo, Raft-Stereo, HitNet, GwcNet et...
Inference and fine-tuning examples for vision models from 🤗 Transformers
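Several of the repositories above publish checkpoints that work with the 🤗 Transformers depth-estimation pipeline referenced in the last entry. A minimal inference sketch, assuming the LiheYoung/depth-anything-small-hf checkpoint on the Hugging Face Hub and a local example.jpg:

```python
from PIL import Image
from transformers import pipeline

# Monocular depth estimation via the Transformers pipeline API.
# The model checkpoint is an assumption; any Hub model exposing the
# "depth-estimation" task (DPT, Depth Anything, ...) works the same way.
depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
result = depth(Image.open("example.jpg"))

# result["depth"] is a PIL image of the normalized depth map;
# result["predicted_depth"] is the raw torch tensor.
result["depth"].save("example_depth.png")
```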
Trending repositories for topic depth-estimation
[CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
[CVPR 2025] A Unified Image-Dense Annotation Generation Model for Underwater Scenes
Official implementation of ECCVW 2024 SLAM paper "ES-PTAM: Event-based Stereo Parallel Tracking and Mapping"
6D Pose Annotation Tool and Real-time Visualization - Vision6D supports users in annotating the 6D pose of a given 3D object in any given 2D image with depth estimation. This 6D pose annotation ...
[CVPR 2025 Highlight] Align3R: Aligned Monocular Depth Estimation for Dynamic Videos
[CVPR 2025] DEFOM-Stereo: Depth foundation model based stereo matching
Monocular Depth Estimation Toolbox and Benchmark. [arXiv'24 ScaleDepth, TCSVT'24 Plane2Depth, TIP'24 Binsformer]
Minimal code and examples for running inference with Sapiens foundation human models in PyTorch