Trending repositories for topic talking-head
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
Real-time voice-interactive digital human supporting both an end-to-end voice pipeline (GLM-4-Voice - THG) and a cascaded pipeline (ASR-LLM-TTS-THG). Appearance and voice are customizable without training, voice cloning is supported, and first-packet latency is as low as 3 s.
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis"
💬 An extensive collection of exceptional resources dedicated to the captivating world of talking face synthesis! ⭐ If you find this repo useful, please give it a star! 🤩
Official implementation of Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation
CVPR 2023 implementation of Identity-Preserving Talking Face Generation With Landmark and Appearance Priors
AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
Talking Head (3D): A JavaScript class for real-time lip-sync using Ready Player Me full-body 3D avatars.
Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
A Survey on Deepfake Generation and Detection
One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior, CVPRW 2024
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
[ECCV 2024 Oral] EDTalk - Official PyTorch Implementation
[CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
Official implementation of DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models
Avatar Generation For Characters and Game Assets Using Deep Fakes
Daily tracking of awesome avatar papers, including 2D talking head, 3D head avatar, and body avatar.
Code for the IJCAI 2021 paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion"
Canvas-based talking head model using viseme data (a minimal sketch of this viseme-driven approach appears after this list)
Long-Inference, High-Quality Synthetic Speaker (AI avatar / AI presenter)
This is the official repository for OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering [CVPR2023].
AI Talking Head: create video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
The PyTorch implementation of our WACV 2023 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis".
PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the facial emotions of actors in videos based on emotion labels or reference styles.
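The canvas-based viseme entry above relies on a common pattern: a speech engine emits a timed sequence of visemes, and the renderer maps each viseme to a mouth shape drawn on a 2D canvas in sync with the audio clock. The sketch below illustrates only that general pattern; every type and function name in it (VisemeFrame, MOUTH_OPENNESS, drawMouth, etc.) is hypothetical and not taken from any repository listed here.

```typescript
// Minimal sketch of viseme-driven 2D lip-sync on a canvas.
// All names are illustrative assumptions, not a real library API.

type Viseme = "sil" | "PP" | "FF" | "aa" | "E" | "O" | "U";

interface VisemeFrame {
  time: number;   // seconds from the start of the audio
  viseme: Viseme; // viseme active from this time onward
}

// Rough mapping from viseme id to mouth openness (0 = closed, 1 = fully open).
const MOUTH_OPENNESS: Record<Viseme, number> = {
  sil: 0.0, PP: 0.05, FF: 0.15, E: 0.45, U: 0.5, O: 0.7, aa: 1.0,
};

function visemeAt(track: VisemeFrame[], t: number): Viseme {
  // Track is assumed sorted by time; return the last viseme that has started.
  let current: Viseme = "sil";
  for (const frame of track) {
    if (frame.time <= t) current = frame.viseme;
    else break;
  }
  return current;
}

function drawMouth(ctx: CanvasRenderingContext2D, openness: number): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.beginPath();
  // Ellipse whose vertical radius scales with mouth openness.
  ctx.ellipse(150, 100, 60, 5 + 45 * openness, 0, 0, Math.PI * 2);
  ctx.fillStyle = "#802020";
  ctx.fill();
}

function animate(canvas: HTMLCanvasElement, audio: HTMLAudioElement, track: VisemeFrame[]): void {
  const ctx = canvas.getContext("2d")!;
  const tick = () => {
    // Sample the viseme track against the audio clock each frame.
    drawMouth(ctx, MOUTH_OPENNESS[visemeAt(track, audio.currentTime)]);
    if (!audio.ended) requestAnimationFrame(tick);
  };
  audio.play().then(() => requestAnimationFrame(tick));
}
```

In practice the viseme track would come from a TTS engine or a forced-alignment step; the same clock-sampling idea also underlies the 3D lip-sync approach used by the Ready Player Me avatar entry, with blend-shape weights in place of a drawn ellipse.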