"""Provide some handy classes for users to implement a simple computation module
in Python easily.
"""
import logging

from .base_module import BaseModule
from ..initializer import Uniform
from .. import ndarray as nd


class PythonModule(BaseModule):
    """A convenient module class that implements many of the module APIs as
    empty functions.

    Parameters
    ----------
    data_names : list of str
        Names of the data expected by the module.
    label_names : list of str
        Names of the labels expected by the module. Could be ``None`` if the
        module does not need labels.
    output_names : list of str
        Names of the outputs.
    """
    def __init__(self, data_names, label_names, output_names, logger=logging):
        super(PythonModule, self).__init__(logger=logger)

        if isinstance(data_names, tuple):
            data_names = list(data_names)
        if isinstance(label_names, tuple):
            label_names = list(label_names)

        self._data_names = data_names
        self._label_names = label_names
        self._output_names = output_names

        self._data_shapes = None
        self._label_shapes = None
        self._output_shapes = None

    ################################################################################
    # Symbol information
    ################################################################################
    @property
    def data_names(self):
        """A list of names for data required by this module."""
        return self._data_names

    @property
    def output_names(self):
        """A list of names for the outputs of this module."""
        return self._output_names

    ################################################################################
    # Input/Output information
    ################################################################################
    @property
    def data_shapes(self):
        """A list of (name, shape) pairs specifying the data inputs to this module."""
        return self._data_shapes

    @property
    def label_shapes(self):
        """A list of (name, shape) pairs specifying the label inputs to this module.

        If this module does not accept labels -- either it is a module without a
        loss function, or it is not bound for training -- then this should return
        an empty list ``[]``.
        """
        return self._label_shapes

    @property
    def output_shapes(self):
        """A list of (name, shape) pairs specifying the outputs of this module."""
        return self._output_shapes

    ################################################################################
    # Parameters of a module
    ################################################################################
    def get_params(self):
        """Gets parameters. These are potentially copies of the actual parameters
        used to do computation on the device. A subclass should override this
        method if it contains parameters.

        Returns
        -------
        ``({}, {})``, a pair of empty dicts.
        """
        return (dict(), dict())

    def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None,
                    allow_missing=False, force_init=False, allow_extra=False):
        """Initializes the parameters and auxiliary states. By default this function
        does nothing. A subclass should override this method if it contains
        parameters.

        Parameters
        ----------
        initializer : Initializer
            Called to initialize parameters if needed.
        arg_params : dict
            If not ``None``, should be a dictionary of existing `arg_params`.
            Initialization will be copied from that.
        aux_params : dict
            If not ``None``, should be a dictionary of existing `aux_params`.
            Initialization will be copied from that.
        allow_missing : bool
            If ``True``, params could contain missing values, and the initializer
            will be called to fill those missing params.
        force_init : bool
            If ``True``, will force re-initialize even if already initialized.
        allow_extra : boolean, optional
            Whether to allow extra parameters that are not needed by the symbol.
            If this is ``True``, no error will be thrown when `arg_params` or
            `aux_params` contain extra parameters that are not needed by the
            executor.
        """
        pass

    def update(self):
        """Updates parameters according to the installed optimizer and the
        gradients computed in the previous forward-backward batch. Currently we
        do nothing here. A subclass should override this method if it contains
        parameters.
        """
        pass

    def update_metric(self, eval_metric, labels, pre_sliced=False):
        """Evaluates and accumulates evaluation metric on outputs of the last
        forward computation. A subclass should override this method if needed.

        Parameters
        ----------
        eval_metric : EvalMetric
        labels : list of NDArray
            Typically ``data_batch.label``.
        """
        if self._label_shapes is None:
            # since we do not need labels, we are probably not a module with a loss
            # function or predictions, so just ignore this call
            return

        if pre_sliced:
            raise RuntimeError("PythonModule does not support presliced labels")

        # by default we expect our outputs are some scores that could be evaluated
        eval_metric.update(labels, self.get_outputs())

    ################################################################################
    # module setup
    ################################################################################
    def bind(self, data_shapes, label_shapes=None, for_training=True,
             inputs_need_grad=False, force_rebind=False, shared_module=None,
             grad_req='write'):
        """Binds the symbols to construct executors. This is necessary before one
        can perform computation with the module.

        Parameters
        ----------
        data_shapes : list of (str, tuple)
            Typically is ``data_iter.provide_data``.
        label_shapes : list of (str, tuple)
            Typically is ``data_iter.provide_label``.
        for_training : bool
            Default is ``True``. Whether the executors should be bound for training.
        inputs_need_grad : bool
            Default is ``False``. Whether the gradients to the input data need to
            be computed. Typically this is not needed, but it might be when
            implementing composition of modules.
        force_rebind : bool
            Default is ``False``. This function does nothing if the executors are
            already bound, but with this as ``True``, the executors will be forced
            to rebind.
        shared_module : Module
            Default is ``None``. This is used in bucketing. When not ``None``, the
            shared module essentially corresponds to a different bucket -- a
            module with a different symbol but with the same sets of parameters
            (e.g. unrolled RNNs with different lengths).
        grad_req : str, list of str, dict of str to str
            Requirement for gradient accumulation. Can be 'write', 'add', or
            'null' (default to 'write'). Can be specified globally (str) or for
            each argument (list, dict).
        """
        if self.binded and not force_rebind:
            self.logger.warning('Already bound, ignoring bind()')
            return

        assert grad_req == 'write', "Python module only supports write gradient"
        self.for_training = for_training
        self.inputs_need_grad = inputs_need_grad

        assert len(data_shapes) == len(self._data_names)
        assert [x[0] for x in data_shapes] == self._data_names
        self._data_shapes = data_shapes

        self._label_shapes = label_shapes
        if label_shapes is not None:
            assert self._label_names is not None
            assert len(self._label_names) == len(label_shapes)
            assert [x[0] for x in label_shapes] == self._label_names

        self._output_shapes = self._compute_output_shapes()

    def _compute_output_shapes(self):
        """The subclass should implement this method to compute the shape of
        outputs. This method can assume that the ``data_shapes`` and
        ``label_shapes`` are already initialized.
        """
        raise NotImplementedError()

    def init_optimizer(self, kvstore='local', optimizer='sgd',
                       optimizer_params=(('learning_rate', 0.01),), force_init=False):
        """Installs and initializes optimizers. By default we do nothing. A
        subclass should override this method if needed.

        Parameters
        ----------
        kvstore : str or KVStore
            Default `'local'`.
        optimizer : str or Optimizer
            Default `'sgd'`.
        optimizer_params : dict
            Default ``(('learning_rate', 0.01),)``. The default value is not a
            dictionary, just to avoid a pylint warning about dangerous default
            values.
        force_init : bool
            Default ``False``, indicating whether we should force re-initializing
            the optimizer in the case an optimizer is already installed.
        """
        pass


class PythonLossModule(PythonModule):
    """A convenient module class that implements many of the module APIs as
    empty functions.

    Parameters
    ----------
    name : str
        Name of the module. The outputs will be named `[name + '_output']`.
    data_names : list of str
        Defaults to ``['data']``. Names of the data expected by this module.
        Should be a list of only one name.
    label_names : list of str
        Default ``['softmax_label']``. Names of the labels expected by the
        module. Should be a list of only one name.
    grad_func : function
        Optional. If not ``None``, should be a function that takes `scores`
        and `labels`, both of type `NDArray`, and returns the gradients with
        respect to the scores according to this loss function. The return
        value could be a numpy array or an `NDArray`.
    """
    def __init__(self, name='pyloss', data_names=('data',),
                 label_names=('softmax_label',), logger=logging, grad_func=None):
        super(PythonLossModule, self).__init__(data_names, label_names,
                                               [name + '_output'], logger=logger)
        self._name = name
        assert len(data_names) == 1
        assert len(label_names) == 1

        self._scores = None
        self._labels = None
        self._scores_grad = None

        if grad_func is not None:
            assert callable(grad_func)
        self._grad_func = grad_func

    def _compute_output_shapes(self):
        """Computes the shapes of outputs. As a loss module with outputs, we
        simply output whatever we receive as inputs (i.e. the scores).
        """
        return [(self._name + '_output', self._data_shapes[0][1])]

    def forward(self, data_batch, is_train=None):
        """Forward computation. Here we do nothing but keep a reference to the
        scores and the labels so that we can do backward computation.

        Parameters
        ----------
        data_batch : DataBatch
            Could be anything with a similar API implemented.
        is_train : bool
            Default is ``None``, which means `is_train` takes the value of
            ``self.for_training``.
        """
        self._scores = data_batch.data[0]

        if is_train is None:
            is_train = self.for_training

        if is_train:
            self._labels = data_batch.label[0]

    def get_outputs(self, merge_multi_context=True):
        """Gets outputs of the previous forward computation. As an output loss
        module, we treat the inputs to this module as scores, and simply return
        them.

        Parameters
        ----------
        merge_multi_context : bool
            Should always be ``True``, because we do not use multiple contexts
            for computing.
        """
        assert merge_multi_context is True
        return [self._scores]

    def backward(self, out_grads=None):
        """Backward computation.

        Parameters
        ----------
        out_grads : NDArray or list of NDArray, optional
            Gradient on the outputs to be propagated back. This parameter is
            only needed when bind is called on outputs that are not a loss
            function.
        """
        assert out_grads is None, 'For a loss module, out_grads should be None'
        assert self.for_training
        self._backward_impl()

    def _backward_impl(self):
        """Actual implementation of the backward computation. The computation
        should take ``self._scores`` and ``self._labels`` and then compute the
        gradients with respect to the scores, storing them as an `NDArray` in
        ``self._scores_grad``.

        Instead of defining a subclass and overriding this function, a more
        convenient way is to pass in a `grad_func` when constructing the module
        object. Then it will be called to compute the gradients.
        """
        if self._grad_func is not None:
            grad = self._grad_func(self._scores, self._labels)
            if not isinstance(grad, nd.NDArray):
                grad = nd.array(grad)
            self._scores_grad = grad
        else:
            raise NotImplementedError()

    def get_input_grads(self, merge_multi_context=True):
        """Gets the gradients to the inputs, computed in the previous backward
        computation.

        Parameters
        ----------
        merge_multi_context : bool
            Should always be ``True`` because we do not use multiple contexts
            for computation.
        """
        assert merge_multi_context is True
        return [self._scores_grad]

    def install_monitor(self, mon):
        """Installs monitor on all executors."""
        raise NotImplementedError()
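# A sketch of the ``grad_func`` contract described above: a hypothetical
# mean-squared-error gradient function (``mse_grad`` is not part of MXNet).
# Numpy arrays stand in for the `NDArray` scores and labels here, since
# ``_backward_impl`` converts a numpy return value back with ``nd.array``.

```python
import numpy as np

def mse_grad(scores, labels):
    """Hypothetical grad_func: gradient of the mean-squared-error loss
    (1/n) * sum((scores - labels)**2) with respect to the scores,
    i.e. 2 * (scores - labels) / n."""
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.float64)
    return 2.0 * (scores - labels) / scores.shape[0]

# The module would then be constructed as (sketch, not executed here):
#   loss_mod = PythonLossModule(name='mse', grad_func=mse_grad)
# and backward() would store nd.array(mse_grad(scores, labels))
# in loss_mod._scores_grad.
```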