Trending repositories for topic video-generation
MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising
A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
[AAAI 2024] Follow-Your-Pose: This repo is the official implementation of "Follow-Your-Pose : Pose-Guided Text-to-Video Generation using Pose-Free Videos"
[CVPR2024] Official Repository of Paper "Panacea: Panoramic and Controllable Video Generation for Autonomous Driving"
[ICLR 2024] Official pytorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation"
Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
[ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024)
Sora AI Awesome List – Your go-to resource hub for all things Sora AI, OpenAI's groundbreaking model for crafting realistic scenes from text. Explore a curated collection of articles, videos, podcasts...
Fine-Grained Open Domain Image Animation with Motion Guidance
A Survey on Text-to-Video Generation/Synthesis.
The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising".
Official PyTorch implementation of Video Probabilistic Diffusion Models in Projected Latent Space (CVPR 2023).
A collection of awesome video generation studies.
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
MiniSora: A community that aims to explore the implementation path and future development direction of Sora.
The official repository of the paper EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis
You can easily calculate FVD, PSNR, SSIM, LPIPS for evaluating the quality of generated or predicted videos.
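Pairwise frame metrics such as PSNR are simple enough to compute by hand; the sketch below is a minimal NumPy illustration (not code from the repository above, and it omits FVD and LPIPS, which require pretrained feature networks):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames (higher is better)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

def video_psnr(gen: np.ndarray, ref: np.ndarray) -> float:
    """Average per-frame PSNR over a clip; inputs are (T, H, W, C) arrays."""
    return float(np.mean([psnr(g, r) for g, r in zip(gen, ref)]))
```

SSIM follows the same per-frame-then-average pattern but compares local luminance, contrast, and structure statistics rather than raw pixel error.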
Everyone can become a director. This project open-sources a Next.js-based front end, intended as a reference web platform for generative-AI text-to-video, in particular for taking a film from screenplay to generated video (from GPT script gen...
KandinskyVideo — multilingual end-to-end text2video latent diffusion model
A Large Short-video Recommendation Dataset with Raw Text/Audio/Image/Videos (Talk Invited by DeepMind).
[arXiv 2024] Follow-Your-Click: This repo is the official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts"
MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
[CVPR 2024] On the Content Bias in Fréchet Video Distance
A list of works on evaluation of visual generation models, including evaluation metrics, models, and systems
ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
Avatar Generation For Characters and Game Assets Using Deep Fakes
[ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper
[CVPR 2024] Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
:bulb: PseudoDiffusers: paper/code review and experimental findings related to computer vision generation and diffusion-based models
Clearer anytime frame interpolation & Manipulated interpolation of anything
Official implementations for paper: LivePhoto: Real Image Animation with Text-guided Motion Control
InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, ...
Official repo for VideoComposer: Compositional Video Synthesis with Motion Controllability
Official JAX implementation of MAGVIT: Masked Generative Video Transformer
FreeInit: Bridging Initialization Gap in Video Diffusion Models
Official Pytorch Implementation for "SceneScape: Text-Driven Consistent Scene Generation"
Python script for generating zoom-in/out videos from a set of images
InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions