david-yoon / multimodal-speech-emotion

TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18

Date Created 2019-01-13 (5 years ago)
Commits 49 (last one 6 months ago)
Stargazers 256 (0 this week)
Watchers 10 (0 this week)
Forks 69
License MIT
Ranking

RepositoryStats indexes 565,279 repositories; of these, david-yoon/multimodal-speech-emotion is ranked #146,324 (74th percentile) for total stargazers and #203,735 for total watchers. GitHub reports the primary language for this repository as Jupyter Notebook; among repositories using this language, it is ranked #3,234 of 16,285.

Other Information

david-yoon/multimodal-speech-emotion has 1 open pull request on GitHub; 0 pull requests have been merged over the lifetime of the repository.

Homepage URL: https://arxiv.org/abs/1810.04635

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

1 commit on the default branch (master) since January 2022

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The primary language is Jupyter Notebook, but other languages are also present.

updated: 2024-09-28 @ 12:58pm, id: 165534624 / R_kgDOCd3boA