"""Lexical analysis of formal languages (i.e. code) using Pygments."""

from docutils import ApplicationError

try:
    # Pygments' plugin lookup may raise DistributionNotFound if the
    # setuptools metadata is broken; treat it like a missing lexer.
    from pkg_resources import DistributionNotFound as ResourceError
except (ImportError, RuntimeError):
    class ResourceError(ApplicationError):
        pass

try:
    import pygments
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters.html import _get_ttype_class
    with_pygments = True
except (ImportError, SyntaxError):  # pygments 2.0.1 fails with Py 3.1 and 3.2
    with_pygments = False

# Filter the following token types from the list of CSS class arguments:
unstyled_tokens = ['token',  # Token (base token type)
                   'text',   # Token.Text
                   '']       # short name for Token and Text


class LexerError(ApplicationError):
    pass


class Lexer(object):
    """Parse `code` lines and yield "classified" tokens.

    Arguments

      code       -- string of source code to parse,
      language   -- formal language the code is written in,
      tokennames -- either 'long', 'short', or 'none' (see below).

    Merge subsequent tokens of the same token-type.

    Iterating over an instance yields the tokens as ``(tokentype, value)``
    tuples. The value of `tokennames` configures the naming of the tokentype:

      'long':  downcased full token type name,
      'short': short name defined by pygments.token.STANDARD_TYPES
               (= class argument used in pygments html output),
      'none':  skip lexical analysis.
    """

    def __init__(self, code, language, tokennames='short'):
        """Set up a lexical analyzer for `code` in `language`."""
        self.code = code
        self.language = language
        self.tokennames = tokennames
        self.lexer = None
        # get lexical analyzer for `language`:
        if language in ('', 'text') or tokennames == 'none':
            return
        if not with_pygments:
            raise LexerError('Cannot analyze code. '
                             'Pygments package not found.')
        try:
            self.lexer = get_lexer_by_name(self.language)
        except (pygments.util.ClassNotFound, ResourceError):
            raise LexerError('Cannot analyze code. '
                             'No Pygments lexer found for "%s".' % language)

    def merge(self, tokens):
        """Merge subsequent tokens of same token-type.

        Also strip the final newline (added by pygments).
        """
        tokens = iter(tokens)
        (lasttype, lastval) = next(tokens)
        for ttype, value in tokens:
            if ttype is lasttype:
                lastval += value
            else:
                yield (lasttype, lastval)
                (lasttype, lastval) = (ttype, value)
        if lastval.endswith('\n'):
            lastval = lastval[:-1]
        if lastval:
            yield (lasttype, lastval)

    def __iter__(self):
        """Parse self.code and yield "classified" tokens."""
        if self.lexer is None:
            yield ([], self.code)
            return
        tokens = pygments.lex(self.code, self.lexer)
        for tokentype, value in self.merge(tokens):
            if self.tokennames == 'long':  # long CSS class args
                classes = str(tokentype).lower().split('.')
            else:  # short CSS class args
                classes = [_get_ttype_class(tokentype)]
            classes = [cls for cls in classes if cls not in unstyled_tokens]
            yield (classes, value)


class NumberLines(object):
    """Insert linenumber-tokens at the start of every code line.

    Arguments

       tokens    -- iterable of ``(classes, value)`` tuples
       startline -- first line number
       endline   -- last line number

    Iterating over an instance yields the tokens with a
    ``(['ln'], '<the line number>')`` token added for every code line.
    Multi-line tokens are split.
    """

    def __init__(self, tokens, startline, endline):
        self.tokens = tokens
        self.startline = startline
        # pad line numbers to the width of `endline`, e.g. endline 100 -> '%3d '
        self.fmt_str = '%%%dd ' % len(str(endline))

    def __iter__(self):
        lineno = self.startline
        yield (['ln'], self.fmt_str % lineno)
        for ttype, value in self.tokens:
            lines = value.split('\n')
            for line in lines[:-1]:
                yield (ttype, line + '\n')
                lineno += 1
                yield (['ln'], self.fmt_str % lineno)
            yield (ttype, lines[-1])
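

# Illustrative usage sketch (not part of the docutils module itself): feed a
# small, made-up Python snippet through Lexer and NumberLines to show the
# (classes, value) tuples they produce.  If Pygments is unavailable, the
# sketch falls back to the unclassified 'none' mode instead of failing.
if __name__ == '__main__':
    sample = 'def add(a, b):\n    return a + b\n'
    try:
        tokens = Lexer(sample, 'python', tokennames='short')
    except LexerError as error:
        print(error)
        tokens = Lexer(sample, '', tokennames='none')
    # Prepend line-number tokens; `endline` only sets the padding width.
    for classes, value in NumberLines(tokens, 1, sample.count('\n')):
        print('%-12s %r' % ('.'.join(classes), value))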