JackHCC / Chinese-Tokenization

Chinese word segmentation implemented with traditional methods (n-gram, HMM, etc.), neural-network methods (CNN, LSTM, etc.), and pre-trained methods (BERT, etc.)
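As an illustration of the HMM approach the description mentions (this is a generic sketch, not the repository's own code): each character is tagged B/M/E/S (begin/middle/end of a word, or single-character word), the most likely tag sequence is decoded with the Viterbi algorithm, and the sentence is cut after every E or S tag. All probabilities below are hand-picked toy values for the demo sentence.

```python
# Toy HMM for Chinese word segmentation over B/M/E/S character tags.
STATES = ["B", "M", "E", "S"]  # Begin / Middle / End of word, Single-char word

# Hand-set toy parameters (real systems estimate these from a corpus
# and work in log space to avoid underflow).
start_p = {"B": 0.6, "M": 0.0, "E": 0.0, "S": 0.4}
trans_p = {
    "B": {"B": 0.0, "M": 0.3, "E": 0.7, "S": 0.0},
    "M": {"B": 0.0, "M": 0.3, "E": 0.7, "S": 0.0},
    "E": {"B": 0.6, "M": 0.0, "E": 0.0, "S": 0.4},
    "S": {"B": 0.6, "M": 0.0, "E": 0.0, "S": 0.4},
}

def emit_p(state, char, emissions):
    # Emission probability with a small floor for unseen (state, char) pairs.
    return emissions.get(state, {}).get(char, 1e-8)

def viterbi_segment(text, emissions):
    """Return the most likely segmentation of `text` under the toy HMM."""
    V = [{s: start_p[s] * emit_p(s, text[0], emissions) for s in STATES}]
    back = [{}]
    for t in range(1, len(text)):
        V.append({})
        back.append({})
        for s in STATES:
            best_prev, best_p = max(
                ((p, V[t - 1][p] * trans_p[p][s]) for p in STATES),
                key=lambda x: x[1],
            )
            V[t][s] = best_p * emit_p(s, text[t], emissions)
            back[t][s] = best_prev
    # Backtrack from the best final state (a word must end in E or S).
    state = max(("E", "S"), key=lambda s: V[-1][s])
    tags = [state]
    for t in range(len(text) - 1, 0, -1):
        state = back[t][state]
        tags.append(state)
    tags.reverse()
    # Cut the sentence after every E or S tag.
    words, word = [], ""
    for char, tag in zip(text, tags):
        word += char
        if tag in ("E", "S"):
            words.append(word)
            word = ""
    return words

# Demo: toy emissions that favor tagging 我/S, 爱/S, 北/B, 京/E.
emissions = {
    "S": {"我": 0.5, "爱": 0.5},
    "B": {"北": 1.0},
    "E": {"京": 1.0},
    "M": {},
}
print(viterbi_segment("我爱北京", emissions))  # ['我', '爱', '北京']
```

The neural (CNN/LSTM) and BERT variants in the repository keep the same B/M/E/S tagging formulation but replace the hand-set probabilities with learned per-character tag scores.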

Date Created 2022-04-05 (2 years ago)
Commits 4 (last one 2 years ago)
Stargazers 31 (0 this week)
Watchers 1 (0 this week)
Forks 4
License unknown
Ranking

RepositoryStats indexes 535,551 repositories; among these, JackHCC/Chinese-Tokenization is ranked #513,227 (4th percentile) by total stargazers and #497,874 by total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #98,615 of 103,734.

JackHCC/Chinese-Tokenization is also tagged with popular topics; among these it is ranked: nlp (#2,201/2,264)

Star History

Github stargazers over time

Watcher History

Github watchers over time, collection started in '23

Recent Commit History

4 commits on the default branch (master) since Jan '22

Yearly Commits

Commits to the default branch (master) per year

Issue History

No issues have been posted

Languages

The primary language is Python, but other languages are present as well.

updated: 2024-05-29 @ 11:29am, id: 478141800 / R_kgDOHH_daA