gionanide / Speech_Signal_Processing_and_Classification

Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step toward any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification, that is, in developing two-class classifiers which can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject.

The mathematical modeling of the speech production system in humans suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) constitute a first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients help to separate the system (e.g., vocal tract) contribution from that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, the perceptual linear prediction coefficients (PLPs) can also be derived. These traditional features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4].

The pattern recognition step will be based on Gaussian Mixture Model (GMM) classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as Deep Neural Networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources will be used toward achieving our goal, such as KALDI. Comparisons will be made against [6-8].
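
To make the intended pipeline concrete, the following is a minimal sketch, not the repository's actual code or API: it extracts frame-level MFCCs and builds a two-class GMM classifier (one mixture model per class, deciding by the higher average frame log-likelihood). The use of librosa and scikit-learn, the file names, the class labels, and the GMMClassifier helper are all assumptions made for illustration.

```python
# Hedged sketch of the described pipeline: MFCC features + per-class GMMs.
# Assumes librosa and scikit-learn are installed; paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture


def extract_mfcc(path, n_mfcc=13):
    """Load an utterance and return its frame-wise MFCC matrix (frames x coefficients)."""
    signal, sr = librosa.load(path, sr=None)           # keep the native sampling rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                      # one row per short-term frame


class GMMClassifier:
    """Two-class classifier: fit one GMM per class, pick the class with the higher score."""

    def __init__(self, n_components=8):
        self.n_components = n_components
        self.models = {}

    def fit(self, features_by_class):
        # features_by_class: {label: array of stacked MFCC frames for that class}
        for label, frames in features_by_class.items():
            gmm = GaussianMixture(n_components=self.n_components, covariance_type="diag")
            self.models[label] = gmm.fit(frames)
        return self

    def predict(self, frames):
        # Average frame log-likelihood under each class model; return the best label.
        scores = {label: gmm.score(frames) for label, gmm in self.models.items()}
        return max(scores, key=scores.get)


# Hypothetical usage with one training utterance per class and one test utterance:
# clf = GMMClassifier().fit({"healthy": extract_mfcc("healthy.wav"),
#                            "paralysis": extract_mfcc("paralysis.wav")})
# print(clf.predict(extract_mfcc("unknown.wav")))
```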

Date Created 2018-03-16 (6 years ago)
Commits 197 (last one about a year ago)
Stargazers 240 (0 this week)
Watchers 11 (0 this week)
Forks 63
License MIT
Ranking

RepositoryStats indexes 579,555 repositories; of these, gionanide/Speech_Signal_Processing_and_Classification is ranked #155,835 (73rd percentile) for total stargazers and #190,647 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #27,228/115,123.

gionanide/Speech_Signal_Processing_and_Classification is also tagged with popular topics; for these it is ranked: nlp (#884/2392), natural-language-processing (#546/1402).

Other Information

gionanide/Speech_Signal_Processing_and_Classification has GitHub issues enabled; there are 3 open issues and 1 closed issue.

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time (collection started in 2023)

Recent Commit History

11 commits on the default branch (master) since Jan '22

Yearly Commits

Commits to the default branch (master) per year

Issue History

Languages

The only known language in this repository is Python

updated: 2024-11-05 @ 12:35pm, id: 125501606 / R_kgDOB3sApg