"""Lexical analysis of formal languages (i.e. code) using Pygments."""

from docutils import ApplicationError

try:
    from pkg_resources import DistributionNotFound as ResourceError
except (ImportError, RuntimeError):
    class ResourceError(ApplicationError):
        pass

try:
    import pygments
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters.html import _get_ttype_class
    with_pygments = True
except (ImportError, SyntaxError):
    with_pygments = False

# Token types that get no CSS class argument in the output:
unstyled_tokens = ['token',  # Token (base token type)
                   'text',   # Token.Text
                   '']       # short name for Token and Text


class LexerError(ApplicationError):
    pass


class Lexer(object):
    """Parse `code` lines and yield "classified" tokens.

    Arguments

      code       -- string of source code to parse,
      language   -- formal language the code is written in,
      tokennames -- either 'long', 'short', or 'none' (see below).

    Merge subsequent tokens of the same token-type.

    Iterating over an instance yields the tokens as ``(tokentype, value)``
    tuples. The value of `tokennames` configures the naming of the tokentype:

      'long':  downcased full token type name,
      'short': short name defined by pygments.token.STANDARD_TYPES
               (= class argument used in pygments html output),
      'none':  skip lexical analysis.
    """

    def __init__(self, code, language, tokennames='short'):
        """
        Set up a lexical analyzer for `code` in `language`.
        """
        self.code = code
        self.language = language
        self.tokennames = tokennames
        self.lexer = None
        # skip lexical analysis if it is not requested:
        if language in ('', 'text') or tokennames == 'none':
            return
        if not with_pygments:
            raise LexerError('Cannot analyze code. '
                             'Pygments package not found.')
        try:
            self.lexer = get_lexer_by_name(self.language)
        except (pygments.util.ClassNotFound, ResourceError):
            raise LexerError('Cannot analyze code. '
                             'No Pygments lexer found for "%s".' % language)

    def merge(self, tokens):
        """Merge subsequent tokens of same token-type.

           Also strip the final newline (added by pygments).
        """
        tokens = iter(tokens)
        (lasttype, lastval) = next(tokens)
        for ttype, value in tokens:
            if ttype is lasttype:
                lastval += value
            else:
                yield (lasttype, lastval)
                (lasttype, lastval) = (ttype, value)
        if lastval.endswith('\n'):
            lastval = lastval[:-1]
        if lastval:
            yield (lasttype, lastval)

    def __iter__(self):
        """Parse self.code and yield "classified" tokens.
        """
        if self.lexer is None:
            yield ([], self.code)
            return
        tokens = pygments.lex(self.code, self.lexer)
        for tokentype, value in self.merge(tokens):
            if self.tokennames == 'long':  # long CSS class arguments
                classes = str(tokentype).lower().split('.')
            else:                          # short CSS class arguments
                classes = [_get_ttype_class(tokentype)]
            classes = [cls for cls in classes if cls not in unstyled_tokens]
            yield (classes, value)


class NumberLines(object):
    """Insert linenumber-tokens at the start of every code line.

    Arguments

       tokens    -- iterable of ``(classes, value)`` tuples
       startline -- first line number
       endline   -- last line number

    Iterating over an instance yields the tokens with a
    ``(['ln'], '<the line number>')`` token added for every code line.
    Multi-line tokens are split.
    """

    def __init__(self, tokens, startline, endline):
        self.tokens = tokens
        self.startline = startline
        # pad line numbers, e.g. endline == 100 -> fmt_str = '%3d '
        self.fmt_str = '%%%dd ' % len(str(endline))

    def __iter__(self):
        lineno = self.startline
        yield (['ln'], self.fmt_str % lineno)
        for ttype, value in self.tokens:
            lines = value.split('\n')
            for line in lines[:-1]:
                yield (ttype, line + '\n')
                lineno += 1
                yield (['ln'], self.fmt_str % lineno)
            yield (ttype, lines[-1])
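

# Illustrative usage sketch (not part of the original module): shows how
# `Lexer` and `NumberLines` can be combined to produce classified,
# line-numbered tokens.  It assumes Pygments is installed; the language
# name 'python' and the sample snippet are arbitrary examples.
if __name__ == '__main__':
    sample = 'print("hello")\nx = 1\n'
    try:
        tokens = Lexer(sample, 'python', tokennames='short')
    except LexerError as error:
        # Raised if Pygments is missing or knows no lexer for the language.
        print(error)
    else:
        # Prefix line numbers 1..2 and show each token's CSS classes.
        for classes, value in NumberLines(tokens, 1, 2):
            print(classes, repr(value))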