NLP常用python工具包
1 - Grammar analysis

English:
- StanfordNLP: https://github.com/stanfordnlp/stanfordnlp [python]
- NLTK: https://github.com/nltk/nltk [python]
- spaCy: https://github.com/explosion/spaCy [python/cython]

Chinese:
- NLTK: https://github.com/nltk/nltk [python]
- ltp: https://github.com/HIT-SCIR/ltp
- lac: https://github.com/baidu/lac
- StanfordNLP: https://github.com/stanfordnlp/stanfordnlp [python]
- thulac: https://github.com/thunlp/THULAC-Python [python]
- jieba: https://github.com/fxsjy/jieba [python]
- SnowNLP (MIT)
- pynlpir
Python toolkits
To survey, excluding the following packages: {
scipy
numpy
sklearn
pandas
matplotlib
IPython
PyBrain
PyML - machine learning in Python
Milk - machine learning toolkit in Python
PyMVPA - MultiVariate Pattern Analysis (MVPA) in Python
Pyrallel - parallel data analytics in Python
Monte - gradient-based learning in Python
xgboost
}
1. gensim {
Corpora and vector spaces
Topics and transformations: LSA, LDA, TF-IDF
Experiments on the English Wikipedia
Distributed computing: word2vec
}
2. jieba {
Feature 1: word segmentation
Feature 2: adding custom user dictionaries
Feature 3: keyword extraction
Feature 4: part-of-speech tagging
Feature 5: parallel segmentation
Feature 6: tokenize - returns each word's start and end position in the original text
}
3. NLTK {
Tokenize and tag some text
Identify named entities
Display a parse tree
It provides a convenient interface to lexical resources such as WordNet, plus libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning.
}
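The tokenize/tag/NER steps above can be sketched as below. Tokenization with the Treebank tokenizer needs no extra data; tagging and entity chunking require model data fetched via `nltk.download`, so those calls are shown as comments:

```python
# NLTK sketch: tokenization works out of the box; tagging and NER need data.
import nltk
from nltk.tokenize import TreebankWordTokenizer

sentence = "Arthur didn't feel very good on Thursday morning."

# The Treebank tokenizer ships with NLTK and needs no downloaded corpora.
tokens = TreebankWordTokenizer().tokenize(sentence)
print(tokens)

# Tagging and named-entity chunking need downloaded models (assumes network):
#   nltk.download("averaged_perceptron_tagger")
#   tagged = nltk.pos_tag(tokens)
#   nltk.download("maxent_ne_chunker"); nltk.download("words")
#   tree = nltk.ne_chunk(tagged)  # a Tree; tree.draw() displays the parse tree
```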
4. TextBlob {
Part-of-speech tagging
Noun phrase extraction
Sentiment analysis
Text translation
}
5. PyNLPl {
Handles n-gram searches, computes frequency lists and distributions, and builds language models. It also provides more advanced data structures, such as priority queues, and more advanced algorithms, such as beam search.
Segmenting text
Getting key words
}
6. spaCy { combines Python and Cython; an industrial-strength Python NLP toolkit
English sentence segmentation
Lemmatization
Part-of-speech (POS) tagging
Named entity recognition (NER)
Noun phrase extraction
}
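A blank English pipeline tokenizes with no model download; the features listed above (sentences, lemmas, POS, entities, noun chunks) need a pretrained model such as `en_core_web_sm`, so those calls are shown as comments:

```python
# spaCy sketch: tokenizer-only pipeline, no pretrained model required.
import spacy

nlp = spacy.blank("en")
doc = nlp("Apple is looking at buying a U.K. startup.")
print([token.text for token in doc])

# With a pretrained model (assumes `python -m spacy download en_core_web_sm`):
#   nlp = spacy.load("en_core_web_sm")
#   doc = nlp("Apple is looking at buying a U.K. startup.")
#   list(doc.sents)                               # sentence segmentation
#   [(t.text, t.lemma_, t.pos_) for t in doc]     # lemmas and POS tags
#   [(ent.text, ent.label_) for ent in doc.ents]  # named entities
#   [chunk.text for chunk in doc.noun_chunks]     # noun phrases
```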
7.polyglot{
Tokenization
Language detection
Morphological analysis
Named Entity Recognition
Sentiment Analysis
Word Embeddings
}