"""Lexical analysis of formal languages (i.e. code) using Pygments."""

from docutils import ApplicationError

try:
    from pkg_resources import DistributionNotFound as ResourceError
except (ImportError, RuntimeError):
    # Stub used below only in an ``except`` clause, for environments
    # where pkg_resources is unavailable or broken.
    class ResourceError(ImportError):
        pass

try:
    import pygments
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters.html import _get_ttype_class
    with_pygments = True
except (ImportError, SyntaxError):
    # Pygments is missing or cannot be imported under the running Python.
    with_pygments = False

# Filter the following token types from the list of class arguments:
unstyled_tokens = ['token',  # Token (base token type)
                   'text',   # Token.Text
                   '']       # short name for Token and Text


class LexerError(ApplicationError):
    pass


class Lexer(object):
    """Parse `code` lines and yield "classified" tokens.

    Arguments

      code       -- string of source code to parse,
      language   -- formal language the code is written in,
      tokennames -- either 'long', 'short', or 'none' (see below).

    Merge subsequent tokens of the same token-type.

    Iterating over an instance yields the tokens as ``(tokentype, value)``
    tuples. The value of `tokennames` configures the naming of the tokentype:

      'long':  downcased full token type name,
      'short': short name defined by pygments.token.STANDARD_TYPES
               (= class argument used in pygments html output),
      'none':  skip lexical analysis.
    """

    def __init__(self, code, language, tokennames='short'):
        """
        Set up a lexical analyzer for `code` in `language`.
        """
        self.code = code
        self.language = language
        self.tokennames = tokennames
        self.lexer = None
        # get lexical analyzer for `language`:
        if language in ('', 'none') or tokennames == 'none':
            return
        if not with_pygments:
            raise LexerError('Cannot analyze code. '
                             'Pygments package not found.')
        try:
            self.lexer = get_lexer_by_name(self.language)
        except (pygments.util.ClassNotFound, ResourceError):
            raise LexerError('Cannot analyze code. '
                             'No Pygments lexer found for "%s".' % language)

    def merge(self, tokens):
        """Merge subsequent tokens of same token-type.

        Also strip the final newline (added by pygments).
        """
        tokens = iter(tokens)
        (lasttype, lastval) = next(tokens)
        for ttype, value in tokens:
            if ttype is lasttype:
                lastval += value
            else:
                yield (lasttype, lastval)
                (lasttype, lastval) = (ttype, value)
        if lastval.endswith('\n'):
            lastval = lastval[:-1]
        if lastval:
            yield (lasttype, lastval)

    def __iter__(self):
        """Parse self.code and yield "classified" tokens."""
        if self.lexer is None:
            yield ([], self.code)
            return
        tokens = pygments.lex(self.code, self.lexer)
        for tokentype, value in self.merge(tokens):
            if self.tokennames == 'long':  # long CSS class args
                classes = str(tokentype).lower().split('.')
            else:  # short CSS class args
                classes = [_get_ttype_class(tokentype)]
            classes = [cls for cls in classes if cls not in unstyled_tokens]
            yield (classes, value)


class NumberLines(object):
    """Insert linenumber-tokens at the start of every code line.

    Arguments

       tokens    -- iterable of ``(classes, value)`` tuples
       startline -- first line number
       endline   -- last line number

    Iterating over an instance yields the tokens with a
    ``(['ln'], '<line number>')`` token added for every code line.
    Multi-line tokens are split.
    """

    def __init__(self, tokens, startline, endline):
        self.tokens = tokens
        self.startline = startline
        # pad linenumbers, e.g. endline == 100 -> fmt_str = '%3d '
        self.fmt_str = '%%%dd ' % len(str(endline))

    def __iter__(self):
        lineno = self.startline
        yield (['ln'], self.fmt_str % lineno)
        for ttype, value in self.tokens:
            lines = value.split('\n')
            for line in lines[:-1]:
                yield (ttype, line + '\n')
                lineno += 1
                yield (['ln'], self.fmt_str % lineno)
            yield (ttype, lines[-1])
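

# Usage sketch (illustrative only, not part of the library API): feed a
# one-line Python snippet through `Lexer` and `NumberLines` and print the
# resulting ``(classes, value)`` tokens.  It assumes Pygments is installed;
# the 'python' language name, ``tokennames='short'``, and the sample string
# are arbitrary example choices.
if __name__ == '__main__':
    sample = 'print("hello world")  # greet\n'
    try:
        tokens = Lexer(sample, 'python', tokennames='short')
        for classes, value in NumberLines(tokens, startline=1, endline=1):
            print(classes, repr(value))
    except LexerError as error:
        print(error)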