import spacy
spacy.en.STOP_WORDS
{'a', 'about', 'above', 'across', 'after', 'afterwards', 'again', 'against', 'all', 'almost', 'alone', 'along', 'already', 'also', 'although', 'always', 'am', 'among', 'amongst', 'amount', 'an', 'and', 'another', 'any', 'anyhow', 'anyone', 'anything', 'anyway', 'anywhere', 'are', 'around', 'as', 'at', 'back', 'be', 'became', 'because', 'become', 'becomes', 'becoming', 'been', 'before', 'beforehand', 'behind', 'being', 'below', 'beside', 'besides', 'between', 'beyond', 'both', 'bottom', 'but', 'by', 'ca', 'call', 'can', 'cannot', 'could', 'did', 'do', 'does', 'doing', 'done', 'down', 'due', 'during', 'each', 'eight', 'either', 'eleven', 'else', 'elsewhere', 'empty', 'enough', 'etc', 'even', 'ever', 'every', 'everyone', 'everything', 'everywhere', 'except', 'few', 'fifteen', 'fifty', 'first', 'five', 'for', 'former', 'formerly', 'forty', 'four', 'from', 'front', 'full', 'further', 'get', 'give', 'go', 'had', 'has', 'have', 'he', 'hence', 'her', 'here', 'hereafter', 'hereby', 'herein', 'hereupon', 'hers', 'herself', 'him', 'himself', 'his', 'how', 'however', 'hundred', 'i', 'if', 'in', 'inc', 'indeed', 'into', 'is', 'it', 'its', 'itself', 'just', 'keep', 'last', 'latter', 'latterly', 'least', 'less', 'made', 'make', 'many', 'may', 'me', 'meanwhile', 'might', 'mine', 'more', 'moreover', 'most', 'mostly', 'move', 'much', 'must', 'my', 'myself', 'name', 'namely', 'neither', 'never', 'nevertheless', 'next', 'nine', 'no', 'nobody', 'none', 'noone', 'nor', 'not', 'nothing', 'now', 'nowhere', 'of', 'off', 'often', 'on', 'once', 'one', 'only', 'onto', 'or', 'other', 'others', 'otherwise', 'our', 'ours', 'ourselves', 'out', 'over', 'own', 'part', 'per', 'perhaps', 'please', 'put', 'quite', 'rather', 're', 'really', 'regarding', 'same', 'say', 'see', 'seem', 'seemed', 'seeming', 'seems', 'serious', 'several', 'she', 'should', 'show', 'side', 'since', 'six', 'sixty', 'so', 'some', 'somehow', 'someone', 'something', 'sometime', 'sometimes', 'somewhere', 'still', 'such', 'take', 'ten', 'than', 'that', 'the', 'their', 'them', 'themselves', 'then', 'thence', 'there', 'thereafter', 'thereby', 'therefore', 'therein', 'thereupon', 'these', 'they', 'third', 'this', 'those', 'though', 'three', 'through', 'throughout', 'thru', 'thus', 'to', 'together', 'too', 'top', 'toward', 'towards', 'twelve', 'twenty', 'two', 'under', 'unless', 'until', 'up', 'upon', 'us', 'used', 'using', 'various', 'very', 'via', 'was', 'we', 'well', 'were', 'what', 'whatever', 'when', 'whence', 'whenever', 'where', 'whereafter', 'whereas', 'whereby', 'wherein', 'whereupon', 'wherever', 'whether', 'which', 'while', 'whither', 'who', 'whoever', 'whole', 'whom', 'whose', 'why', 'will', 'with', 'within', 'without', 'would', 'yet', 'you', 'your', 'yours', 'yourself', 'yourselves'}
spacy.ja.STOP_WORDS
{'、', '。'}
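As a quick illustration, these sets can be used directly to strip function words from a token list. A minimal sketch, using only the English set shown above on a made-up whitespace-tokenized sentence:

words = "this is a sentence about natural language processing".split()
content_words = [w for w in words if w not in spacy.en.STOP_WORDS]
print(content_words)  # ['sentence', 'natural', 'language', 'processing']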
def print_token(token):
    print("==========================")
    print("value:", token.orth_)
    print("lemma:", token.lemma_)   # lemma is the root form of a word
    print("shape:", token.shape_)   # shape encodes capitalization and punctuation

def print_sents(sents):
    for sent in sents:
        print("Sentence:")
        print(sent)
        print()

def parse(text):
    tokens = parser(text)
    print_sents(tokens.sents)
    tokens_orth = [token.orth_ for token in tokens]
    print(tokens_orth)
    for token in tokens:
        print_token(token)
Download the English model.
$ python -m spacy download en
Downloading en_core_web_sm-1.2.0/en_core_web_sm-1.2.0.tar.gz
Collecting https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-1.2.0/en_core_web_sm-1.2.0.tar.gz
Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-1.2.0/en_core_web_sm-1.2.0.tar.gz (52.2MB)
100% |████████████████████████████████| 52.2MB 411kB/s
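The downloaded model can also be loaded by name with spacy.load('en'), the usual spaCy 1.x entry point; below, the parser is constructed directly instead.

parser = spacy.load('en')  # alternative to spacy.en.English()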
parser = spacy.en.English()
parse("I'm Mr. Cong. Dr. Duc is coming. Ph.D. Viet is the man overthere.")
Sentence:
I'm Mr. Cong.

Sentence:
Dr. Duc is coming.

Sentence:
Ph.D. Viet is the man overthere.

['I', "'m", 'Mr.', 'Cong', '.', 'Dr.', 'Duc', 'is', 'coming', '.', 'Ph.D.', 'Viet', 'is', 'the', 'man', 'overthere', '.']
==========================
value: I
lemma: -PRON-
shape: X
==========================
value: 'm
lemma: be
shape: 'x
==========================
value: Mr.
lemma: mr.
shape: Xx.
==========================
value: Cong
lemma: cong
shape: Xxxx
==========================
value: .
lemma: .
shape: .
==========================
value: Dr.
lemma: dr.
shape: Xx.
==========================
value: Duc
lemma: duc
shape: Xxx
==========================
value: is
lemma: be
shape: xx
==========================
value: coming
lemma: come
shape: xxxx
==========================
value: .
lemma: .
shape: .
==========================
value: Ph.D.
lemma: ph.d.
shape: Xx.X.
==========================
value: Viet
lemma: viet
shape: Xxxx
==========================
value: is
lemma: be
shape: xx
==========================
value: the
lemma: the
shape: xxx
==========================
value: man
lemma: man
shape: xxx
==========================
value: overthere
lemma: overthere
shape: xxxx
==========================
value: .
lemma: .
shape: .
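Note how the lemma column normalizes inflected forms ("'m" and "is" both lemmatize to "be", "coming" to "come"), and -PRON- is spaCy's placeholder lemma for pronouns, as seen above. As a small follow-up sketch, lemma frequencies can be counted while skipping punctuation (token.is_punct and collections.Counter are assumptions beyond the helpers above):

from collections import Counter
doc = parser("I'm Mr. Cong. Dr. Duc is coming.")
lemma_counts = Counter(token.lemma_ for token in doc if not token.is_punct)
print(lemma_counts.most_common(3))  # e.g. [('be', 2), ('-PRON-', 1), ('mr.', 1)]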
Is there a Japanese model?
$ python -m spacy download ja
Compatibility error
No compatible model found for 'ja' (spaCy v1.8.2).
Not yet, it seems.
parser = spacy.ja.Japanese()
parse("こんいちは。私はコンといいます。ベト博士はあちらにいます。")
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-47928386adc1> in <module>()
      1 parser = spacy.ja.Japanese()
----> 2 parse("こんいちは。私はコンといいます。ベト博士はあちらにいます。")

<ipython-input-4-04a8f5f4066a> in parse(text)
     13 def parse(text):
     14     tokens = parser(text)
---> 15     print_sents(tokens.sents)
     16     tokens_orth = [token.orth_ for token in tokens]
     17     print(tokens_orth)

<ipython-input-4-04a8f5f4066a> in print_sents(sents)
      6
      7 def print_sents(sents):
----> 8     for sent in sents:
      9         print("Sentence:")
     10         print(sent)

/home/ubuntu/workspace/nlp-python/.env/lib/python3.4/site-packages/spacy/tokens/doc.pyx in __get__ (spacy/tokens/doc.cpp:10140)()
    435
    436         if not self.is_parsed:
--> 437             raise ValueError(
    438                 "Sentence boundary detection requires the dependency parse, which "
    439                 "requires data to be installed. For more info, see the "

ValueError: Sentence boundary detection requires the dependency parse, which requires data to be installed. For more info, see the documentation: https://spacy.io/docs/usage
In other words, because there is no Japanese model, spaCy cannot split a document into sentences. How to add a language model: https://spacy.io/docs/usage/adding-languages
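Until a Japanese model ships, one workaround is to do the sentence splitting ourselves. A minimal sketch that cuts after each sentence-ending mark (the delimiter set is an assumption, and the splitter is naive: it will mis-split quoted or abbreviated text):

def split_ja_sentences(text, delimiters="。！？"):
    # Accumulate characters and emit a sentence after each delimiter.
    sentences, current = [], ""
    for ch in text:
        current += ch
        if ch in delimiters:
            sentences.append(current)
            current = ""
    if current:  # keep any trailing fragment without a final mark
        sentences.append(current)
    return sentences

print(split_ja_sentences("こんいちは。私はコンといいます。ベト博士はあちらにいます。"))
# ['こんいちは。', '私はコンといいます。', 'ベト博士はあちらにいます。']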