"""
NLTK Tokenizer Package

Tokenizers divide strings into lists of substrings.  For example,
tokenizers can be used to find the words and punctuation in a string:

    >>> from nltk.tokenize import word_tokenize
    >>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
    ... two of them.\n\nThanks.'''
    >>> word_tokenize(s)
    ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York', '.',
    'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

This particular tokenizer requires the Punkt sentence tokenization
models to be installed. NLTK also provides a simpler,
regular-expression based tokenizer, which splits text on whitespace
and punctuation:

    >>> from nltk.tokenize import wordpunct_tokenize
    >>> wordpunct_tokenize(s)
    ['Good', 'muffins', 'cost', '$', '3', '.', '88', 'in', 'New', 'York', '.',
    'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

We can also operate at the level of sentences, using the sentence
tokenizer directly as follows:

    >>> from nltk.tokenize import sent_tokenize, word_tokenize
    >>> sent_tokenize(s)
    ['Good muffins cost $3.88\nin New York.', 'Please buy me\ntwo of them.', 'Thanks.']
    >>> [word_tokenize(t) for t in sent_tokenize(s)]
    [['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York', '.'],
    ['Please', 'buy', 'me', 'two', 'of', 'them', '.'], ['Thanks', '.']]

Caution: when tokenizing a Unicode string, make sure you are not
using an encoded version of the string (it may be necessary to
decode it first, e.g. with ``s.decode("utf8")``).

NLTK tokenizers can produce token-spans, represented as tuples of integers
having the same semantics as string slices, to support efficient comparison
of tokenizers.  (These methods are implemented as generators.)

    >>> from nltk.tokenize import WhitespaceTokenizer
    >>> list(WhitespaceTokenizer().span_tokenize(s))
    [(0, 4), (5, 12), (13, 17), (18, 23), (24, 26), (27, 30), (31, 36),
    (38, 44), (45, 48), (49, 51), (52, 55), (56, 58), (59, 64), (66, 73)]

There are numerous ways to tokenize text.  If you need more control over
tokenization, see the other methods provided in this package.

For further information, please see Chapter 3 of the NLTK book.
"""

from nltk.data import load
from nltk.tokenize.simple import (SpaceTokenizer, TabTokenizer,
                                  LineTokenizer, line_tokenize)
from nltk.tokenize.regexp import (RegexpTokenizer, WhitespaceTokenizer,
                                  BlanklineTokenizer, WordPunctTokenizer,
                                  wordpunct_tokenize, regexp_tokenize,
                                  blankline_tokenize)
from nltk.tokenize.punkt import PunktSentenceTokenizer
from nltk.tokenize.sexpr import SExprTokenizer, sexpr_tokenize
from nltk.tokenize.treebank import TreebankWordTokenizer
from nltk.tokenize.stanford import StanfordTokenizer
from nltk.tokenize.texttiling import TextTilingTokenizer
from nltk.tokenize.casual import TweetTokenizer, casual_tokenize
from nltk.tokenize.mwe import MWETokenizer


def sent_tokenize(text, language='english'):
    """
    Return a sentence-tokenized copy of *text*,
    using NLTK's recommended sentence tokenizer
    (currently :class:`.PunktSentenceTokenizer`
    for the specified language).

    :param text: text to split into sentences
    :param language: the model name in the Punkt corpus
    """
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
    return tokenizer.tokenize(text)

# Standard word tokenizer.
_treebank_word_tokenize = TreebankWordTokenizer().tokenize

def word_tokenize(text, language='english'):
    """
    Return a tokenized copy of *text*,
    using NLTK's recommended word tokenizer
    (currently :class:`.TreebankWordTokenizer`
    along with :class:`.PunktSentenceTokenizer`
    for the specified language).

    :param text: text to split into sentences
    :param language: the model name in the Punkt corpus
    """
    return [token for sent in sent_tokenize(text, language)
            for token in _treebank_word_tokenize(sent)]
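The span semantics described in the module docstring (integer pairs with the same meaning as string slices) can be illustrated without NLTK or any trained models. The following is a minimal sketch of a whitespace span tokenizer built on the standard library's `re.finditer`; `whitespace_span_tokenize` is a hypothetical helper for illustration, not part of the NLTK API, though it mirrors `WhitespaceTokenizer.span_tokenize` on simple inputs:

```python
import re

def whitespace_span_tokenize(s):
    # Yield (start, end) pairs for each maximal run of non-whitespace
    # characters; each pair can be used directly as a string slice.
    for m in re.finditer(r'\S+', s):
        yield m.span()

s = "Good muffins cost $3.88\nin New York."
spans = list(whitespace_span_tokenize(s))
# spans[0] is (0, 4), and s[0:4] recovers the first token:
tokens = [s[a:b] for a, b in spans]
```

Because spans are plain integer tuples, two tokenizers can be compared cheaply by comparing their span sequences, without materializing the token strings.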
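The nested comprehension in `word_tokenize` flattens per-sentence token lists into a single list of tokens. The pattern can be sketched with stand-in tokenizers (the `toy_*` functions below are hypothetical `str.split`-based stubs for illustration, not NLTK's trained Punkt and Treebank models):

```python
def toy_sent_tokenize(text):
    # Stand-in for sent_tokenize: naively split on '. ' boundaries.
    return [s for s in text.split('. ') if s]

def toy_word_tokenize(text):
    # Same shape as word_tokenize: the outer loop walks sentences,
    # the inner loop walks tokens, and the result is one flat list.
    return [token
            for sent in toy_sent_tokenize(text)
            for token in sent.split()]
```

Tokenizing sentence-by-sentence first matters for the real Treebank tokenizer, which uses sentence boundaries to decide how to split sentence-final punctuation.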