# coding: utf8
from __future__ import unicode_literals

import os
import warnings
import inspect


def add_codes(err_cls):
    """Add error codes to string messages via class attribute names."""

    class ErrorsWithCodes(object):
        def __getattribute__(self, code):
            msg = getattr(err_cls, code)
            return "[{code}] {msg}".format(code=code, msg=msg)

    return ErrorsWithCodes()


@add_codes
class Warnings(object):
    W001 = ("As of spaCy v2.0, the keyword argument `path=` is deprecated. "
            "You can now call spacy.load with the path as its first argument, "
            "and the model's meta.json will be used to determine the language "
            "to load. For example:\nnlp = spacy.load('{path}')")
    W002 = ("Tokenizer.from_list is now deprecated. Create a new Doc object "
            "instead and pass in the strings as the `words` keyword argument, "
            "for example:\nfrom spacy.tokens import Doc\n"
            "doc = Doc(nlp.vocab, words=[...])")
    W003 = ("Positional arguments to Doc.merge are deprecated. Instead, use "
            "the keyword arguments, for example tag=, lemma= or ent_type=.")
    W004 = ("No text fixing enabled. Run `pip install ftfy` to enable fixing "
            "using ftfy.fix_text if necessary.")
    W005 = ("Doc object not parsed. This means displaCy won't be able to "
            "generate a dependency visualization for it. Make sure the Doc "
            "was processed with a model that supports dependency parsing, and "
            "not just a language class like `English()`. For more info, see "
            "the docs:\nhttps://spacy.io/usage/models")
    W006 = ("No entities to visualize found in Doc object. If this is "
            "surprising to you, make sure the Doc was processed using a model "
            "that supports named entity recognition, and check the `doc.ents` "
            "property manually if necessary.")
    W007 = ("The model you're using has no word vectors loaded, so the result "
            "of the {obj}.similarity method will be based on the tagger, "
            "parser and NER, which may not give useful similarity judgements. "
            "This may happen if you're using one of the small models, e.g. "
            "`en_core_web_sm`, which don't ship with word vectors and only "
            "use context-sensitive tensors. You can always add your own word "
            "vectors, or use one of the larger models instead if available.")
    W008 = "Evaluating {obj}.similarity based on empty vectors."
    W009 = ("Custom factory '{name}' provided by entry points of another "
            "package overwrites built-in factory.")
    W010 = ("As of v2.1.0, the PhraseMatcher doesn't have a phrase length "
            "limit anymore, so the max_length argument is now deprecated.")
    W011 = ("It looks like you're calling displacy.serve from within a "
            "Jupyter notebook or a similar environment. This likely means "
            "you're already running a local web server, so there's no need to "
            "make displaCy start another one. Instead, you should be able to "
            "replace displacy.serve with displacy.render to show the "
            "visualization.")
    W012 = ("A Doc object you're adding to the PhraseMatcher for pattern "
            "'{key}' is parsed and/or tagged, but to match on '{attr}', you "
            "don't actually need this information. This means that creating "
            "the patterns is potentially much slower, because all pipeline "
            "components are applied. To only create tokenized Doc objects, "
            "try using `nlp.make_doc(text)` or process all texts as a stream "
            "using `list(nlp.tokenizer.pipe(all_texts))`.")
    W013 = ("As of v2.1.0, {obj}.merge is deprecated. Please use the more "
            "efficient and less error-prone Doc.retokenize context manager "
            "instead.")
    W014 = ("As of v2.1.0, the `disable` keyword argument on the "
            "serialization methods is deprecated and should be replaced with "
            "`exclude`. This makes it consistent with the other serializable "
            "objects.")
    W015 = ("As of v2.1.0, the use of keyword arguments to exclude fields "
            "from being serialized or deserialized is deprecated. Please use "
            "the `exclude` argument instead. For example: exclude=['{arg}'].")
    W016 = ("The keyword argument `n_threads` is now deprecated, as the v2.x "
            "models cannot release the global interpreter lock. Future "
            "versions may introduce a `n_process` argument for parallel "
            "inference via multiprocessing.")
    W017 = "Alias '{alias}' already exists in the Knowledge Base."
    W018 = "Entity '{entity}' already exists in the Knowledge Base."
    W019 = ("Changing vectors name from {old} to {new}, to avoid clash with "
            "previously loaded vectors. See Issue #3853.")
    W020 = ("Unnamed vectors. This won't allow multiple vectors models to be "
            "loaded. (Shape: {shape})")
    W021 = ("Unexpected hash collision in PhraseMatcher. Matches may be "
            "incorrect. Modify PhraseMatcher._terminal_hash to fix.")
    W022 = ("Training a new part-of-speech tagger using a model with no "
            "lemmatization rules or data. This means that the trained model "
            "may not be able to lemmatize correctly. If this is intentional "
            "or the language you're using doesn't have lemmatization data, "
            "you can ignore this warning by setting SPACY_WARNING_IGNORE=W022. "
            "If this is surprising, make sure you have the spacy-lookups-data "
            "package installed.")
    W023 = ("Multiprocessing of Language.pipe is not supported in Python 2. "
            "'n_process' will be set to 1.")
    W024 = ("Entity '{entity}' - Alias '{alias}' combination already exists "
            "in the Knowledge Base.")
    W025 = ("'{name}' requires '{attr}' to be assigned, but none of the "
            "previous components in the pipeline declare that they assign it.")


@add_codes
class Errors(object):
    E001 = "No component '{name}' found in pipeline. Available names: {opts}"
    E002 = ("Can't find factory for '{name}'. This usually happens when spaCy "
            "calls `nlp.create_pipe` with a component name that's not built "
            "in - for example, when constructing the pipeline from a model's "
            "meta.json. If you're using a custom component, you can write to "
            "`Language.factories['{name}']` or remove it from the model meta "
            "and add it via `nlp.add_pipe` instead.")
    E003 = ("Not a valid pipeline component. Expected callable, but got "
            "{component} (name: '{name}').")
    E004 = ("If you meant to add a built-in component, use `create_pipe`: "
            "`nlp.add_pipe(nlp.create_pipe('{component}'))`")
    E005 = ("Pipeline component '{name}' returned None. If you're using a "
            "custom component, maybe you forgot to return the processed Doc?")
    E006 = ("Invalid constraints. You can only set one of the following: "
            "before, after, first, last.")
    E007 = "'{name}' already exists in pipeline. Existing names: {opts}"
    E008 = ("Some current components would be lost when restoring previous "
            "pipeline state. If you added components after calling "
            "`nlp.disable_pipes()`, you should remove them explicitly with "
            "`nlp.remove_pipe()` before the pipeline is restored. Names of "
            "the new components: {names}")
    E009 = ("The `update` method expects same number of docs and golds, but "
            "got: {n_docs} docs, {n_golds} golds.")
    E010 = ("Word vectors set to length 0. This may be because you don't "
            "have a model installed or loaded, or because your model doesn't "
            "include word vectors. For more info, see the docs:\n"
            "https://spacy.io/usage/models")
    E011 = "Unknown operator: '{op}'. Options: {opts}"
    E012 = "Cannot add pattern for zero tokens to matcher. Key: {key}"
    E013 = "Error selecting action in matcher"
    E014 = "Unknown tag ID: {tag}"
    E015 = ("Conflicting morphology exception for ({tag}, {orth}). Use "
            "`force=True` to overwrite.")
    E016 = ("MultitaskObjective target should be function or one of: dep, "
            "tag, ent, dep_tag_offset, ent_tag.")
    E017 = "Can only add unicode or bytes. Got type: {value_type}"
    E018 = ("Can't retrieve string for hash '{hash_value}'. This usually "
            "refers to an issue with the `Vocab` or `StringStore`.")
    E019 = ("Can't create transition with unknown action ID: {action}. Action "
            "IDs are enumerated in spacy/syntax/{src}.pyx.")
    E020 = ("Could not find a gold-standard action to supervise the "
            "dependency parser. The tree is non-projective (i.e. it has "
            "crossing arcs - see spacy/syntax/nonproj.pyx for definitions). "
            "The ArcEager transition system only supports projective trees. "
            "To learn non-projective representations, transform the data "
            "before training and after parsing. Either pass "
            "`make_projective=True` to the GoldParse class, or use "
            "spacy.syntax.nonproj.preprocess_training_data.")
    E021 = ("Could not find a gold-standard action to supervise the "
            "dependency parser. The GoldParse was projective. The transition "
            "system has {n_actions} actions. State at failure: {state}")
    E022 = "Could not find a transition with the name '{name}' in the NER model."
    E023 = ("Error cleaning up beam: The same state occurred twice at memory "
            "address {addr} and position {i}.")
    E024 = ("Could not find an optimal move to supervise the parser. Usually, "
            "this means that the model can't be updated in a way that's valid "
            "and satisfies the correct annotations specified in the GoldParse. "
            "For example, are all labels added to the model? If you're "
            "training a named entity recognizer, also make sure that none of "
            "your annotated entity spans have leading or trailing whitespace. "
            "You can also use the experimental `debug-data` command to "
            "validate your JSON-formatted training data. For details, run:\n"
            "python -m spacy debug-data --help")
    E025 = "String is too long: {length} characters. Max is 2**30."
    E026 = ("Error accessing token at position {i}: out of bounds in Doc of "
            "length {length}.")
    E027 = ("Arguments 'words' and 'spaces' should be sequences of the same "
            "length, or 'spaces' should be left default at None. spaces "
            "should be a sequence of booleans, with True meaning that the "
            "word owns a ' ' character following it.")
    E028 = ("orths_and_spaces expects either a list of unicode strings or a "
            "list of (unicode, bool) tuples. Got bytes instance: {value}")
    E029 = ("noun_chunks requires the dependency parse, which requires a "
            "statistical model to be installed and loaded. For more info, see "
            "the documentation:\nhttps://spacy.io/usage/models")
    E030 = ("Sentence boundaries unset. You can add the 'sentencizer' "
            "component to the pipeline with: "
            "nlp.add_pipe(nlp.create_pipe('sentencizer')) "
            "Alternatively, add the dependency parser, or set sentence "
            "boundaries by setting doc[i].is_sent_start.")
    E031 = "Invalid token: empty string ('') at position {i}."
    E032 = ("Conflicting attributes specified in doc.from_array(): "
            "(HEAD, SENT_START). The HEAD attribute currently sets sentence "
            "boundaries implicitly, based on the tree structure. This means "
            "the HEAD attribute would potentially override the sentence "
            "boundaries set by SENT_START.")
    E033 = "Cannot load into non-empty Doc of length {length}."
    E034 = ("Doc.merge received {n_args} non-keyword arguments. Expected "
            "either 3 arguments (deprecated), or 0 (use keyword arguments). "
            "Arguments supplied:\n{args}\nKeyword arguments: {kwargs}")
    E035 = ("Error creating span with start {start} and end {end} for Doc of "
            "length {length}.")
    E036 = ("Error calculating span: Can't find a token starting at character "
            "offset {start}.")
    E037 = ("Error calculating span: Can't find a token ending at character "
            "offset {end}.")
    E038 = "Error finding sentence for span. Infinite loop detected."
    E039 = ("Array bounds exceeded while searching for root word. This likely "
            "means the parse tree is in an invalid state. Please report this "
            "issue here: http://github.com/explosion/spaCy/issues")
    E040 = "Attempt to access token at {i}, max length {max_length}."
    E041 = "Invalid comparison operator: {op}. Likely a Cython bug?"
    E042 = "Error accessing doc[{i}].nbor({j}), for doc of length {length}."
    E043 = ("Refusing to write to token.sent_start if its document is parsed, "
            "because this may cause inconsistent state.")
    E044 = ("Invalid value for token.sent_start: {value}. Must be one of: "
            "None, True, False")
    # E045-E075: message strings not recoverable from this dump (only the
    # fragment "... maximum length (57,082)." survives from this range).
    E076 = ("Error setting tensor on Doc: tensor has {rows} rows, while Doc "
            "has {words} words.")
    E077 = ("Error computing {value}: number of Docs ({n_docs}) does not "
            "equal number of GoldParse objects ({n_golds}) in batch.")
    E078 = ("Error computing score: number of words in Doc ({words_doc}) does "
            "not equal number of words in GoldParse ({words_gold}).")
    E079 = ("Error computing states in beam: number of predicted beams "
            "({pbeams}) does not equal number of gold beams ({gbeams}).")
    E080 = "Duplicate state found in beam: {key}."
    E081 = ("Error getting gradient in beam: number of histories ({n_hist}) "
            "does not equal number of losses ({losses}).")
    E082 = ("Error deprojectivizing parse: number of heads ({n_heads}), "
            "projective heads ({n_proj_heads}) and labels ({n_labels}) do not "
            "match.")
    E083 = ("Error setting extension: only one of `default`, `method`, or "
            "`getter` (plus optional `setter`) is allowed. Got: {nr_defined}")
    E084 = "Error assigning label ID {label} to span: not in StringStore."
    E085 = "Can't create lexeme for string '{string}'."
    E086 = ("Error deserializing lexeme '{string}': orth ID {orth_id} does "
            "not match hash {hash_id} in StringStore.")
    E087 = "Unknown displaCy style: {style}."
    E088 = ("Text of length {length} exceeds maximum of {max_length}. The "
            "v2.x parser and NER models require roughly 1GB of temporary "
            "memory per 100,000 characters in the input. This means long "
            "texts may cause memory allocation errors. If you're not using "
            "the parser or NER, it's probably safe to increase the "
            "`nlp.max_length` limit. The limit is in number of characters, so "
            "you can check whether your inputs are too long by checking "
            "`len(text)`.")
    E089 = ("Extensions can't have a setter argument without a getter "
            "argument. Check the keyword arguments on `set_extension`.")
    E090 = ("Extension '{name}' already exists on {obj}. To overwrite the "
            "existing extension, set `force=True` on `{obj}.set_extension`.")
    E091 = ("Invalid extension attribute {name}: expected callable or None, "
            "but got: {value}")
    E092 = ("Could not find or assign name for word vectors. Usually, the "
            "name is read from the model's meta.json in vector.name. "
            "Alternatively, it is built from the 'lang' and 'name' keys in "
            "the meta.json. Vector names are required to avoid issue #1660.")
    E093 = "token.ent_iob values make invalid sequence: I without B\n{seq}"
    E094 = "Error reading line {line_num} in vectors file {loc}."
    E095 = ("Can't write to frozen dictionary. This is likely an internal "
            "error. Are you writing to a default function argument?")
    E096 = ("Invalid object passed to displaCy: Can only visualize Doc or "
            "Span objects, or dicts if set to manual=True.")
    E097 = ("Invalid pattern: expected token pattern (list of dicts) or "
            "phrase pattern (string) but got: {pattern}")
    E098 = "Invalid pattern specified: expected both SPEC and PATTERN."
    E099 = ("First node of pattern should be a root node. The root should "
            "only contain NODE_NAME.")
    E100 = ("Nodes apart from the root should contain NODE_NAME, NBOR_NAME "
            "and NBOR_RELOP.")
    E101 = ("NODE_NAME should be a new node and NBOR_NAME should already have "
            "been declared in previous edges.")
    E102 = ("Can't merge non-disjoint spans. '{token}' is already part of "
            "tokens to merge. If you want to find the longest non-overlapping "
            "spans, you can use the util.filter_spans helper:\n"
            "https://spacy.io/api/top-level#util.filter_spans")
    E103 = ("Trying to set conflicting doc.ents: '{span1}' and '{span2}'. A "
            "token can only be part of one entity, so make sure the entities "
            "you're setting don't overlap.")
    E104 = "Can't find JSON schema for '{name}'."
    E105 = ("The Doc.print_tree() method is now deprecated. Please use "
            "Doc.to_json() instead or write your own function.")
    E106 = ("Can't find doc._.{attr} attribute specified in the underscore "
            "settings: {opts}")
    E107 = "Value of doc._.{attr} is not JSON-serializable: {value}"
    E108 = ("As of spaCy v2.1, the pipe name `sbd` has been deprecated in "
            "favor of the pipe name `sentencizer`, which does the same thing. "
            "For example, use `nlp.create_pipeline('sentencizer')`")
    E109 = ("Model for component '{name}' not initialized. Did you forget to "
            "load a model, or forget to call begin_training()?")
    E110 = "Invalid displaCy render wrapper. Expected callable, got: {obj}"
    E111 = ("Pickling a token is not supported, because tokens are only views "
            "of the parent Doc and can't exist on their own. A pickled token "
            "would always have to include its Doc and Vocab, which has "
            "practically no advantage over pickling the parent Doc directly. "
            "So instead of pickling the token, pickle the Doc it belongs to.")
    E112 = ("Pickling a span is not supported, because spans are only views "
            "of the parent Doc and can't exist on their own. A pickled span "
            "would always have to include its Doc and Vocab, which has "
            "practically no advantage over pickling the parent Doc directly. "
            "So instead of pickling the span, pickle the Doc it belongs to or "
            "use Span.as_doc to convert the span to a standalone Doc object.")
    E113 = "The newly split token can only have one root (head = 0)."
    E114 = "The newly split token needs to have a root (head = 0)."
    E115 = "All subtokens must have associated heads."
    E116 = ("Cannot currently add labels to pretrained text classifier. Add "
            "labels before training begins. This functionality was available "
            "in previous versions, but had significant bugs that led to poor "
            "performance.")
    E117 = ("The newly split tokens must match the text of the original "
            "token. New orths: {new}. Old text: {old}.")
    E118 = ("The custom extension attribute '{attr}' is not registered on the "
            "Token object so it can't be set during retokenization. To "
            "register an attribute, use the Token.set_extension classmethod.")
    E119 = ("Can't set custom extension attribute '{attr}' during "
            "retokenization because it's not writable. This usually means it "
            "was registered with a getter function (and no setter) or as a "
            "method extension, so the value is computed dynamically. To "
            "overwrite a custom attribute manually, it should be registered "
            "with a default value or with a getter AND setter.")
    E120 = ("Can't set custom extension attributes during retokenization. "
            "Expected dict mapping attribute names to values, but got: {value}")
    E121 = ("Can't bulk merge spans. Attribute length {attr_len} should be "
            "equal to span length ({span_len}).")
    E122 = "Cannot find token to be split. Did it get merged?"
    E123 = "Cannot find head of token to be split. Did it get merged?"
    E124 = "Cannot read from file: {path}. Supported formats: {formats}"
    E125 = "Unexpected value: {value}"
    E126 = ("Unexpected matcher predicate: '{bad}'. Expected one of: {good}. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E127 = ("Cannot create phrase pattern representation for length 0. This "
            "is likely a bug in spaCy.")
    E128 = ("Unsupported serialization argument: '{arg}'. The use of keyword "
            "arguments to exclude fields from being serialized or "
            "deserialized is now deprecated. Please use the `exclude` "
            "argument instead. For example: exclude=['{arg}'].")
    E129 = ("Cannot write the label of an existing Span object because a Span "
            "is a read-only view of the underlying Token objects stored in "
            "the Doc. Instead, create a new Span object and specify the "
            "`label` keyword argument, for example:\n"
            "from spacy.tokens import Span\n"
            "span = Span(doc, start={start}, end={end}, label='{label}')")
    E130 = ("You are running a narrow unicode build, which is incompatible "
            "with spacy >= 2.1.0. To fix this, reinstall Python and use a "
            "wide unicode build instead. You can also rebuild Python and set "
            "the --enable-unicode=ucs4 flag.")
    E131 = ("Cannot write the kb_id of an existing Span object because a Span "
            "is a read-only view of the underlying Token objects stored in "
            "the Doc. Instead, create a new Span object and specify the "
            "`kb_id` keyword argument, for example:\n"
            "from spacy.tokens import Span\n"
            "span = Span(doc, start={start}, end={end}, label='{label}', "
            "kb_id='{kb_id}')")
    E132 = ("The vectors for entities and probabilities for alias '{alias}' "
            "should have equal length, but found {entities_length} and "
            "{probabilities_length} respectively.")
    E133 = ("The sum of prior probabilities for alias '{alias}' should not "
            "exceed 1, but found {sum}.")
    E134 = "Entity '{entity}' is not defined in the Knowledge Base."
    E135 = ("If you meant to replace a built-in component, use `create_pipe`: "
            "`nlp.replace_pipe('{name}', nlp.create_pipe('{name}'))`")
    E136 = ("This additional feature requires the jsonschema library to be "
            "installed: pip install jsonschema")
    E137 = ("Expected 'dict' type, but got '{type}' from '{line}'. Make sure "
            "to provide a valid JSON object as input with either the `text` "
            "or `tokens` key. For more info, see the docs:\n"
            "https://spacy.io/api/cli#pretrain-jsonl")
    E138 = ("Invalid JSONL format for raw text '{text}'. Make sure the input "
            "includes either the `text` or `tokens` key. For more info, see "
            "the docs:\nhttps://spacy.io/api/cli#pretrain-jsonl")
    E139 = ("Knowledge Base for component '{name}' not initialized. Did you "
            "forget to call set_kb()?")
    E140 = ("The list of entities, prior probabilities and entity vectors "
            "should be of equal length.")
    E141 = ("Entity vectors should be of length {required} instead of the "
            "provided {found}.")
    E142 = "Unsupported loss_function '{loss_func}'. Use either 'L2' or 'cosine'."
    E143 = ("Labels for component '{name}' not initialized. Did you forget to "
            "call add_label()?")
    E144 = ("Could not find parameter `{param}` when building the entity "
            "linker model.")
    E145 = "Error reading `{param}` from input file."
    E146 = "Could not access `{path}`."
    E147 = ("Unexpected error in the {method} functionality of the "
            "EntityLinker: {msg}. This is likely a bug in spaCy, so feel free "
            "to open an issue.")
    E148 = ("Expected {ents} KB identifiers but got {ids}. Make sure that "
            "each entity in `doc.ents` is assigned to a KB identifier.")
    E149 = ("Error deserializing model. Check that the config used to create "
            "the component matches the model being loaded.")
    E150 = ("The language of the `nlp` object and the `vocab` should be the "
            "same, but found '{nlp}' and '{vocab}' respectively.")
    E151 = ("Trying to call nlp.update without required annotation types. "
            "Expected top-level keys: {exp}. Got: {unexp}.")
    E152 = ("The attribute {attr} is not supported for token patterns. Please "
            "use the option validate=True with Matcher, PhraseMatcher, or "
            "EntityRuler for more details.")
    E153 = ("The value type {vtype} is not supported for token patterns. "
            "Please use the option validate=True with Matcher, PhraseMatcher, "
            "or EntityRuler for more details.")
    E154 = ("One of the attributes or values is not supported for token "
            "patterns. Please use the option validate=True with Matcher, "
            "PhraseMatcher, or EntityRuler for more details.")
    E155 = ("The pipeline needs to include a tagger in order to use Matcher "
            "or PhraseMatcher with the attributes POS, TAG, or LEMMA. Try "
            "using nlp() instead of nlp.make_doc() or list(nlp.pipe()) "
            "instead of list(nlp.tokenizer.pipe()).")
    E156 = ("The pipeline needs to include a parser in order to use Matcher "
            "or PhraseMatcher with the attribute DEP. Try using nlp() instead "
            "of nlp.make_doc() or list(nlp.pipe()) instead of "
            "list(nlp.tokenizer.pipe()).")
    E157 = ("Can't render negative values for dependency arc start or end. "
            "Make sure that you're passing in absolute token indices, not "
            "relative token offsets. start: {start}, end: {end}, label: "
            "{label}, direction: {dir}")
    E158 = "Can't add table '{name}' to lookups because it already exists."
    E159 = "Can't find table '{name}' in lookups. Available tables: {tables}"
    E160 = "Can't find language data file: {path}"
    E161 = ("Found an internal inconsistency when predicting entity links. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E162 = ("Cannot evaluate textcat model on data with different labels. "
            "Labels in model: {model_labels}\nLabels in evaluation data: "
            "{eval_labels}")
    E163 = ("cumsum was found to be unstable: its last element does not "
            "correspond to sum")
    E164 = "x is neither increasing nor decreasing: {}."
    E165 = ("Only one class present in y_true. ROC AUC score is not defined "
            "in that case.")
    E166 = ("Can only merge DocBins with the same pre-defined attributes.\n"
            "Current DocBin: {current}\nOther DocBin: {other}")
    E167 = ("Unknown morphological feature: '{feat}' ({feat_id}). This can "
            "happen if the tagger was trained with a different set of "
            "morphological features. If you're using a pretrained model, make "
            "sure that your models are up to date:\npython -m spacy validate")
    E168 = "Unknown field: {field}"
    E169 = "Can't find module: {module}"
    E170 = "Cannot apply transition {name}: invalid for the current state."
    E171 = ("Matcher.add received invalid on_match callback argument: "
            "expected callable or None, but got: {arg_type}")
    E172 = ("The Lemmatizer.load classmethod is deprecated. To create a "
            "Lemmatizer, initialize the class directly. See the docs for "
            "details: https://spacy.io/api/lemmatizer")
    E173 = ("As of v2.2, the Lemmatizer is initialized with an instance of "
            "Lookups containing the lemmatization tables. See the docs for "
            "details: https://spacy.io/api/lemmatizer#init")
    E174 = ("Architecture '{name}' not found in registry. Available names: "
            "{names}")
    E175 = "Can't remove rule for unknown match pattern ID: {key}"
    E176 = "Alias '{alias}' is not defined in the Knowledge Base."
    E177 = "Ill-formed IOB input detected: {tag}"
    E178 = ("Invalid pattern. Expected list of dicts but got: {pat}. Maybe "
            "you accidentally passed a single pattern to Matcher.add instead "
            "of a list of patterns? If you only want to add one pattern, make "
            "sure to wrap it in a list. For example: "
            "matcher.add('{key}', [pattern])")
    E179 = ("Invalid pattern. Expected a list of Doc objects but got a single "
            "Doc. If you only want to add one pattern, make sure to wrap it "
            "in a list. For example: matcher.add('{key}', [doc])")
    E180 = ("Span attributes can't be declared as required or assigned by "
            "components, since spans are only views of the Doc. Use Doc and "
            "Token attributes (or custom extension attributes) only and "
            "remove the following: {attrs}")
    E181 = ("Received invalid attributes for unknown object {obj}: {attrs}. "
            "Only Doc and Token attributes are supported.")
    E182 = ("Received invalid attribute declaration: {attr}\nDid you forget "
            "to define the attribute? For example: {attr}.???")
    E183 = ("Received invalid attribute declaration: {attr}\nOnly top-level "
            "attributes are supported, for example: {solution}")
    E184 = ("Only attributes without underscores are supported in component "
            "attribute declarations (because underscore and non-underscore "
            "attributes are connected anyways): {attr} -> {solution}")
    E185 = ("Received invalid attribute in component attribute declaration: "
            "{obj}.{attr}\nAttribute '{attr}' does not exist on {obj}.")
    E186 = "'{tok_a}' and '{tok_b}' are different texts."
    E187 = "Only unicode strings are supported as labels."


@add_codes
class TempErrors(object):
    T003 = "Resizing pretrained Tagger models is not currently supported."
    T004 = "Currently parser depth is hard-coded to 1. Received: {value}."
    T007 = ("Can't yet set {attr} from Span. Vote for this feature on the "
            "issue tracker: http://github.com/explosion/spaCy/issues")
    T008 = ("Bad configuration of Tagger. This is probably a bug within "
            "spaCy. We changed the name of an internal attribute for loading "
            "pretrained vectors, and the class has been passed the old name "
            "(pretrained_dims) but not the new name (pretrained_vectors).")


class MatchPatternError(ValueError):
    def __init__(self, key, errors):
        """Custom error for validating match patterns.

        key (unicode): The name of the matcher rule.
        errors (dict): Validation errors (sequence of strings) mapped to
            pattern ID, i.e. the index of the added pattern.
        """
        msg = "Invalid token patterns for matcher rule '{}'\n".format(key)
        for pattern_idx, error_msgs in errors.items():
            pattern_errors = "\n".join(["- {}".format(e) for e in error_msgs])
            msg += "\nPattern {}:\n{}\n".format(pattern_idx, pattern_errors)
        ValueError.__init__(self, msg)


class AlignmentError(ValueError):
    pass


class ModelsWarning(UserWarning):
    pass


WARNINGS = {
    "user": UserWarning,
    "deprecation": DeprecationWarning,
    "models": ModelsWarning,
}


def _get_warn_types(arg):
    if arg == "":  # don't show any warnings
        return []
    if not arg or arg == "all":  # show all warnings
        return WARNINGS.keys()
    return [w_type.strip() for w_type in arg.split(",")
            if w_type.strip() in WARNINGS]


def _get_warn_excl(arg):
    if not arg:
        return []
    return [w_id.strip() for w_id in arg.split(",")]


SPACY_WARNING_FILTER = os.environ.get("SPACY_WARNING_FILTER")
SPACY_WARNING_TYPES = _get_warn_types(os.environ.get("SPACY_WARNING_TYPES"))
SPACY_WARNING_IGNORE = _get_warn_excl(os.environ.get("SPACY_WARNING_IGNORE"))


def user_warning(message):
    _warn(message, "user")


def deprecation_warning(message):
    _warn(message, "deprecation")


def models_warning(message):
    _warn(message, "models")


def _warn(message, warn_type="user"):
    """
    message (unicode): The message to display.
    category (Warning): The Warning to show.
    """
    if message.startswith("["):
        w_id = message.split("[", 1)[1].split("]", 1)[0]  # get ID from string
    else:
        w_id = None
    ignore_warning = w_id and w_id in SPACY_WARNING_IGNORE
    if warn_type in SPACY_WARNING_TYPES and not ignore_warning:
        category = WARNINGS[warn_type]
        stack = inspect.stack()[-1]
        with warnings.catch_warnings():
            if SPACY_WARNING_FILTER:
                warnings.simplefilter(SPACY_WARNING_FILTER, category)
            warnings.warn_explicit(message, category, stack[1], stack[2])
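The central mechanism of this module is the `add_codes` decorator: attribute access on the wrapped class prefixes each message string with its own attribute name, so call sites can write `Errors.E001.format(...)` and get a coded, formatted message. A minimal standalone sketch of that pattern (the `DemoErrors` class and its message are illustrative, not part of the module):

```python
def add_codes(err_cls):
    """Add error codes to string messages via class attribute names."""

    class ErrorsWithCodes(object):
        def __getattribute__(self, code):
            # Look the message up on the original class and prepend its code.
            msg = getattr(err_cls, code)
            return "[{code}] {msg}".format(code=code, msg=msg)

    return ErrorsWithCodes()


@add_codes
class DemoErrors(object):
    # Hypothetical message for illustration only.
    E001 = "No component '{name}' found in pipeline. Available names: {opts}"


# The wrapper returns the coded template; formatting happens at the call site.
result = DemoErrors.E001.format(name="ner", opts="tagger, parser")
print(result)
# → [E001] No component 'ner' found in pipeline. Available names: tagger, parser
```

Because the decorator returns an *instance* rather than the class, `__getattribute__` intercepts every attribute lookup, which is why the code prefix never has to be written into the message strings themselves.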