JackHCC / Chinese-Tokenization

Chinese word segmentation (tokenization) implemented with traditional methods (n-gram, HMM, etc.), neural network methods (CNN, LSTM, etc.), and pre-trained models (BERT, etc.)
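As a hypothetical illustration of the HMM approach named in the description (a minimal sketch, not taken from this repository; all function names and parameter layouts here are assumptions), Chinese word segmentation is often framed as B/M/E/S character tagging decoded with the Viterbi algorithm:

```python
# Hypothetical sketch: HMM-based Chinese word segmentation with B/M/E/S tags.
# Model parameters (start_p, trans_p, emit_p) are assumed to be log-probabilities
# estimated from a segmented corpus; they are not part of this repository's API.
STATES = ["B", "M", "E", "S"]  # Begin / Middle / End of a word, Single-char word

def viterbi(sentence, start_p, trans_p, emit_p):
    """Return the most likely B/M/E/S tag sequence for `sentence`."""
    floor = -1e9  # log-probability floor for unseen characters/transitions
    V = [{s: start_p.get(s, floor) + emit_p.get(s, {}).get(sentence[0], floor)
          for s in STATES}]
    path = {s: [s] for s in STATES}
    for ch in sentence[1:]:
        V.append({})
        new_path = {}
        for t in STATES:
            # Pick the best previous state for reaching tag t at this character.
            score, best_prev = max(
                (V[-2][s] + trans_p.get(s, {}).get(t, floor)
                 + emit_p.get(t, {}).get(ch, floor), s)
                for s in STATES)
            V[-1][t] = score
            new_path[t] = path[best_prev] + [t]
        path = new_path
    best_last = max(STATES, key=lambda s: V[-1][s])
    return path[best_last]

def tags_to_words(sentence, tags):
    """Cut the sentence after every E or S tag to recover words."""
    words, start = [], 0
    for i, tag in enumerate(tags):
        if tag in ("E", "S"):
            words.append(sentence[start:i + 1])
            start = i + 1
    return words
```

Given trained parameters, `tags_to_words(s, viterbi(s, start_p, trans_p, emit_p))` would return the segmented word list for a sentence `s`; the n-gram, CNN/LSTM, and BERT variants in the repository replace the tagging model but keep the same B/M/E/S decoding idea.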

Date Created 2022-04-05 (2 years ago)
Commits 4 (last one 2 years ago)
Stargazers 32 (0 this week)
Watchers 1 (0 this week)
Forks 4
License unknown
Ranking

RepositoryStats indexes 585,332 repositories; of these, JackHCC/Chinese-Tokenization is ranked #551,535 (6th percentile) for total stargazers and #536,654 for total watchers. GitHub reports the primary language for this repository as Python; among repositories using this language it is ranked #108,533/116,570.

JackHCC/Chinese-Tokenization is also tagged with popular topics; for these it is ranked: nlp (#2,317/2,401)

Star History

GitHub stargazers over time

Watcher History

GitHub watchers over time; collection started in 2023

Recent Commit History

4 commits on the default branch (master) since Jan 2022

Yearly Commits

Commits to the default branch (master) per year

Issue History

No issues have been posted

Languages

The primary language is Python, but other languages are also present.

Updated: 2024-07-16 @ 06:20 PM, id: 478141800 / R_kgDOHH_daA