david-yoon / multimodal-speech-emotion

TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18

Date created: 2019-01-13 (5 years ago)
Commits: 49 (last one 9 months ago)
Stargazers: 268 (0 this week)
Watchers: 10 (0 this week)
Forks: 70
License: MIT
Ranking

RepositoryStats indexes 595,856 repositories; among these, david-yoon/multimodal-speech-emotion is ranked #146,450 (75th percentile) by total stargazers and #207,845 by total watchers. GitHub reports the repository's primary language as Jupyter Notebook; among repositories using this language it is ranked #3,276 of 17,543.
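The percentile above follows from the rank and the index size. A minimal sketch, assuming the percentile is simply the share of indexed repositories ranked below this one (the site's exact method is not stated, but the figures agree):

```python
def stargazer_percentile(rank: int, total: int) -> float:
    """Percent of indexed repositories ranked below this one.

    Assumes rank 1 is the most-starred repository, so a smaller
    rank yields a higher percentile.
    """
    return (1 - rank / total) * 100

# Numbers from the ranking above: rank #146,450 of 595,856 indexed repos.
print(round(stargazer_percentile(146_450, 595_856)))  # → 75
```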

Other Information

david-yoon/multimodal-speech-emotion has 1 open pull request on GitHub; 0 pull requests have been merged over the lifetime of the repository.

Homepage URL: https://arxiv.org/abs/1810.04635

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

1 commit on the default branch (master) since January 2022

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The primary language is Jupyter Notebook, but other languages are also present.

Updated: 2024-12-16 at 12:55 PM · id: 165534624 / R_kgDOCd3boA