Yc@sdZddlZddlZddlmZddlmZddlmZ ddl m Z dd l m Z mZmZmZdd l mZdd lmZmZdd lmZdd lmZmZmZdefdYZdS(svA `Module` implement the `BaseModule` API by wrapping a `Symbol` and one or more `Executor` for data parallelization. iNi(tcontext(tndarray(t optimizeri(tDataParallelExecutorGroup(t_create_kvstoret_initialize_kvstoret_update_paramst_update_params_on_kvstore(tload_checkpoint(tUniformtInitDesc(tDataDesc(t BaseModulet_check_input_namest_parse_data_desctModulecBseZdZd%d&eejd'd'd'dZee dZ e dZ dZ e dZe dZe d Ze d Ze d Ze d Zd Zedd'd'e e e dZe ee dZd'ee e d'ddZd'dZddd)e dZdZd'dZd'dZdZedZ edZ!edZ"d'd'dZ#d Z$d!Z%d"Z&d#Z'd$Z(RS(*sModule is a basic module that wrap a `Symbol`. It is functionally the same as the `FeedForward` model, except under the module API. Parameters ---------- symbol : Symbol data_names : list of str Defaults to `('data')` for a typical model used in image classification. label_names : list of str Defaults to `('softmax_label')` for a typical model used in image classification. logger : Logger Defaults to `logging`. context : Context or list of Context Defaults to ``mx.cpu()``. work_load_list : list of number Default ``None``, indicating uniform workload. fixed_param_names: list of str Default ``None``, indicating no network parameters are fixed. state_names : list of str states are similar to data and label, but not provided by data iterator. Instead they are initialized to 0 and can be set by `set_states()`. 
tdatat softmax_labelc CsPtt|jd|t|tjr7|g}n||_|dkredgt|j}nt|t|jkst ||_ ||_ |dk rt |ng}|dk rt |ng}|dk rt |ng}|dk r t |ng}t ||dtt ||dtt ||dtt ||dt|j} |||} g| D]} | | kr}| ^q}|_||_|j|_||_||_||_|j|_d|_d|_t|_d|_d|_d|_d|_ d|_!d|_"d|_#d|_$d|_%dS(NtloggeriRtlabeltstatet fixed_param(&tsuperRt__init__t isinstancetctxtContextt_contexttNonetlentAssertionErrort_work_load_listt_symboltlistR tTruetFalsetlist_argumentst _param_namest_fixed_param_namestlist_auxiliary_statest _aux_namest _data_namest _label_namest _state_namest list_outputst _output_namest _arg_paramst _aux_paramst _params_dirtyt _optimizert_kvstoret_update_on_kvstoret_updatert_preload_opt_statest _grad_reqt _exec_groupt _data_shapest _label_shapes( tselftsymbolt data_namest label_namesRRtwork_load_listtfixed_param_namest state_namest arg_namest input_namestx((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR?sL   !   (               cKset||\}}}td||}||_||_t|_|rad||f|_n|S(sxCreates a model from previously saved checkpoint. Parameters ---------- prefix : str path prefix of saved model files. You should have "prefix-symbol.json", "prefix-xxxx.params", and optionally "prefix-xxxx.states", where xxxx is the epoch number. epoch : int epoch to load. load_optimizer_states : bool whether to load optimizer states. Checkpoint needs to have been made with save_optimizer_states=True. data_names : list of str Default is `('data')` for a typical model used in image classification. label_names : list of str Default is `('softmax_label')` for a typical model used in image classification. logger : Logger Default is `logging`. context : Context or list of Context Default is ``cpu()``. work_load_list : list of number Default ``None``, indicating uniform workload. fixed_param_names: list of str Default ``None``, indicating no network parameters are fixed. 
R;s%s-%04d.states(RRR.R/R"tparams_initializedR5(tprefixtepochtload_optimizer_statestkwargstsymtargstauxstmod((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pytloadqs   cCs{|jjd|d||f}|j|tjd||rwd||f}|j|tjd|ndS(sSaves current progress to checkpoint. Use `mx.callback.module_checkpoint` as `epoch_end_callback` to save during training. Parameters ---------- prefix : str The file prefix to checkpoint to. epoch : int The current epoch number. save_optimizer_states : bool Whether to save optimizer states to continue training. s%s-symbol.jsons%s-%04d.paramssSaved checkpoint to "%s"s%s-%04d.statessSaved optimizer state to "%s"N(R tsavet save_paramstloggingtinfotsave_optimizer_states(R:RERFRRt param_namet state_name((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pytsave_checkpoints   cCs(t|_d|_d|_d|_dS(s(Internal function to reset binded state.N(R#tbindedRR7R8R9(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt _reset_binds   cCs|jS(s1A list of names for data required by this module.(R)(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR<scCs|jS(s3A list of names for labels required by this module.(R*(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR=scCs|jS(s/A list of names for the outputs of this module.(R-(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt output_namesscCs|jst|jS(sdGets data shapes. Returns ------- A list of `(name, shape)` pairs. (RVRR8(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt data_shapesscCs|jst|jS(s/Gets label shapes. Returns ------- A list of `(name, shape)` pairs. The return value could be ``None`` if the module does not need labels, or if the module is not bound for training (in this case, label information is not available). (RVRR9(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt label_shapess cCs|jst|jjS(sfGets output shapes. Returns ------- A list of `(name, shape)` pairs. 
(RVRR7tget_output_shapes(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt output_shapesscCs>|jr|jst|jr.|jn|j|jfS(sGets current parameters. Returns ------- `(arg_params, aux_params)` A pair of dictionaries each mapping parameter names to NDArray values. (RVRDRR0t_sync_params_from_devicesR.R/(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt get_paramss  g{Gz?c s(|jr'| r'tjddddS|js<tdfd}|jj}xH|jjD]7\} } t | |j | d} || | |qmWxH|j jD]7\} } t | |j | d} || | |qWt |_t|_|jj|j|j d|dS(sInitializes the parameters and auxiliary states. Parameters ---------- initializer : Initializer Called to initialize parameters if needed. arg_params : dict If not ``None``, should be a dictionary of existing arg_params. Initialization will be copied from that. aux_params : dict If not ``None``, should be a dictionary of existing aux_params. Initialization will be copied from that. allow_missing : bool If ``True``, params could contain missing values, and the initializer will be called to fill those missing params. force_init : bool If ``True``, will force re-initialize even if already initialized. allow_extra : boolean, optional Whether allow extra parameters that are not needed by symbol. If this is True, no error will be thrown when arg_params or aux_params contain extra parameters that is not needed by the executor. sNParameters already initialized and force_init=False. 
init_params call ignored.t stackleveliNs,call bind before initializing the parameterscs|dk ry||krA||}||k rv|j|qvqsZtd|ndk r||qn ||dS(s,Internal helper for parameter initializations%s is not presentedN(Rtcopytot RuntimeError(tnametarrtcachet cache_arr(t allow_missingt initializer(s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt_impls     t allow_extra(RDtwarningstwarnRVRR t attr_dictR.titemsR tgetRR/R"R#R0R7t set_params( R:Rgt arg_paramst aux_paramsRft force_initRiRhtattrsRbRctdesc((RfRgs3build/bdist.linux-armv7l/egg/mxnet/module/module.pyt init_paramss"   c Cs|s8|jddd|d|d|d|d|dS|jr_| r_tjdd d dS|jj||d|t|_t|_dS( sAssigns parameter and aux state values. Parameters ---------- arg_params : dict Dictionary of name to `NDArray`. aux_params : dict Dictionary of name to `NDArray`. allow_missing : bool If ``True``, params could contain missing values, and the initializer will be called to fill those missing params. force_init : bool If ``True``, will force re-initialize even if already initialized. allow_extra : boolean, optional Whether allow extra parameters that are not needed by symbol. If this is True, no error will be thrown when arg_params or aux_params contain extra parameters that is not needed by the executor. Examples -------- >>> # An example of setting module parameters. >>> sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, n_epoch_load) >>> mod.set_params(arg_params=arg_params, aux_params=aux_params) RgRpRqRfRrRiNsMParameters already initialized and force_init=False. set_params call ignored.R_i( RuRRDRjRkR7RoR"R0(R:RpRqRfRrRi((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRo5s   twritec Cs|r|jn|jr0|jjddS||_||_t|_||_|sj| sjtnt |j |j ||\|_ |_ |dk rt|tr|jr|jst|j}t|jt|jkstnd}t|j|j|j|j |j |j|||d|jd|jd|d|j |_|jj|_|dk rt|_|j|_|j|_n|jr|jj|j|jn|jdkr|jdkstg|jj D])} t!j"| dj#d| dj$^q} d t%|j| D|_g|jj&D])} t!j"| dj#d| dj$^qD} d t%|j'| D|_|dk r|j(r|j)|ndS( sBinds the symbols to construct executors. 
This is necessary before one can perform computation with the module. Parameters ---------- data_shapes : list of (str, tuple) Typically is ``data_iter.provide_data``. label_shapes : list of (str, tuple) Typically is ``data_iter.provide_label``. for_training : bool Default is ``True``. Whether the executors should be bound for training. inputs_need_grad : bool Default is ``False``. Whether the gradients to the input data need to be computed. Typically this is not needed. But this might be needed when implementing composition of modules. force_rebind : bool Default is ``False``. This function does nothing if the executors are already bound. But with this ``True``, the executors will be forced to rebind. shared_module : Module Default is ``None``. This is used in bucketing. When not ``None``, the shared module essentially corresponds to a different bucket -- a module with different symbol but with the same sets of parameters (e.g. unrolled RNNs with different lengths). sAlready bound, ignoring bind()NRR?tgrad_reqR@itdtypecSsi|]\}}||qS(((t.0RbRc((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pys s cSsi|]\}}||qS(((RyRbRc((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pys s (*RWRVRtwarningt for_trainingtinputs_need_gradR"R6RRR<R=R8R9RRRRDR7RtexecsRRR RR%R&R+t_total_exec_bytesR.R/Rot param_arraystndtzerostshapeRxtzipt aux_arraysR(toptimizer_initializedtborrow_optimizer( R:RYRZR{R|t force_rebindt shared_moduleRwt shared_groupRCRR((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pytbind_sV      $  '       $99cCsS|jstt|j|j||\|_|_|jj|j|jdS(sReshapes the module for new input shapes. Parameters ---------- data_shapes : list of (str, tuple) Typically is ``data_iter.provide_data``. label_shapes : list of (str, tuple) Typically is ``data_iter.provide_label``. 
N( RVRRR<R=R8R9R7treshape(R:RYRZ((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs $tlocaltsgdt learning_ratec sjrjstjr<| r<jjddSjrRjnt|t j j \}}j j }|rd|jkrd|jkr||j9}nd|}t|tri}|r|jtj jnLxItt j D]2|jfdtj jDq Wt|}d|krh||ds t rescale_gradRItparam_idx2names;Optimizer created manually outside Module but rescale_grad s=is not normalized to 1.0/batch_size/num_workers (%s vs. %s). sIs this intended?R_itkvstoreRRpt param_namestupdate_on_kvstore(+RVRDRRRRzR0R]RRRR.R7t batch_sizettypet num_workersRtstrtupdatet enumerateRtrangetdicttopttcreateR;t OptimizerRRjRkR1R2R3RR4RRR%t set_optimizert get_updaterR"R5RG( R:RRtoptimizer_paramsRrRRRtidx2name((RR:s3build/bdist.linux-armv7l/egg/mxnet/module/module.pytinit_optimizers\  $ $                 cCsL|jst|j|_|j|_|j|_|j|_t|_dS(sBorrows optimizer from a shared module. Used in bucketing, where exactly the same optimizer (esp. kvstore) is used. Parameters ---------- shared_module : Module N(RRR1R2R3R4R"(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs     c Csu|jr|jsttd|jD}td|jD}||kr^t|drz|jrz|j}nCgt|j|D]*\}}t |j ||j |j ^q}t|dr|j r|j }njt|drE|jrEgt|j|jD]-\}} t |j | j|j |j ^q}nd}|j||n|jj||dS(sForward computation. It supports data batches with different shapes, such as different batch sizes or different image sizes. If reshaping of data batch relates to modification of symbol or module, such as changing image layout ordering or switching from training to predicting, module rebinding is required. See Also ---------- :meth:`BaseModule.forward`. Parameters ---------- data_batch : DataBatch Could be anything with similar API implemented. is_train : bool Default is ``None``, which means ``is_train`` takes the value of ``self.for_training``. 
css|]}|jVqdS(N(R(RyR((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pys @scss|]}|jVqdS(N(R(RyR((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pys Ast provide_datat provide_labelRN(RVRDRttupleR8RthasattrRRR RbRxtlayoutRRR9RRRR7tforward( R:t data_batchtis_traintcurr_data_shapestnew_data_shapest new_dshapeRRt new_lshapetj((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR,s   @ IcCs/|jr|jst|jjd|dS(svBackward computation. See Also ---------- :meth:`BaseModule.backward`. Parameters ---------- out_grads : NDArray or list of NDArray, optional Gradient on the outputs to be propagated back. This parameter is only needed when bind is called on outputs that are not a loss function. t out_gradsN(RVRDRR7tbackward(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRVsc Cs|jr|jr|js!tt|_|jr^t|jj |jj |j |jj nFt |jj |jj d|jdt|jd|j d|jj dS(sUpdates parameters according to the installed optimizer and the gradients computed in the previous forward-backward batch. See Also ---------- :meth:`BaseModule.update`. tupdatert num_deviceRRN(RVRDRRR"R0R3RR7Rt grad_arraysR2RRR4RR(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRgs!        cCs+|jr|jst|jjd|S(s6Gets outputs of the previous forward computation. If ``merge_multi_context`` is ``True``, it is like ``[out1, out2]``. Otherwise, it is like ``[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]``. All the output elements are `NDArray`. When `merge_multi_context` is `False`, those `NDArray` might live on different devices. Parameters ---------- merge_multi_context : bool Default is ``True``. In the case when data-parallelism is used, the outputs will be collected from multiple devices. A ``True`` value indicate that we should merge the collected results so that they look like from a single executor. Returns ------- list of NDArray or list of list of NDArray Output. 
tmerge_multi_context(RVRDRR7t get_outputs(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR~scCs4|jr|jr|js!t|jjd|S(sGets the gradients with respect to the inputs of the module. If ``merge_multi_context`` is ``True``, it is like ``[grad1, grad2]``. Otherwise, it is like ``[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]``. All the output elements are `NDArray`. Parameters ---------- merge_multi_context : bool Default is ``True``. In the case when data-parallelism is used, the outputs will be collected from multiple devices. A ``True`` value indicate that we should merge the collected results so that they look like from a single executor. Returns ------- list of NDArray or list of list of NDArray Input gradients R(RVRDR|RR7tget_input_grads(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs!cCs+|jr|jst|jjd|S(sGets states from all devices. If `merge_multi_context` is ``True``, it is like ``[out1, out2]``. Otherwise, it is like ``[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]``. All the output elements are `NDArray`. Parameters ---------- merge_multi_context : bool Default is ``True``. In the case when data-parallelism is used, the states will be collected from multiple devices. A ``True`` value indicate that we should merge the collected results so that they look like from a single executor. Returns ------- list of NDArray or list of list of NDArray States R(RVRDRR7t get_states(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRscCs/|jr|jst|jj||dS(sjSets value for states. Only one of the states & value can be specified. Parameters ---------- states : list of list of NDArrays source states arrays formatted like ``[[state1_dev1, state1_dev2], [state2_dev1, state2_dev2]]``. value : number a single scalar value for all state arrays. N(RVRDRR7t set_states(R:tstatestvalue((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs cCs|jj||dS(sDEvaluates and accumulates evaluation metric on outputs of the last forward computation. 
See Also ---------- :meth:`BaseModule.update_metric`. Parameters ---------- eval_metric : EvalMetric labels : list of NDArray Typically ``data_batch.label``. N(R7t update_metric(R:t eval_metrictlabels((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs cCs&|jj|j|jt|_dS(sSynchronizes parameters from devices to CPU. This function should be called after calling `update` that updates the parameters on the devices, before one can read the latest parameters from ``self._arg_params`` and ``self._aux_params``. N(R7R^R.R/R#R0(R:((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR]scCs]|jst|jr+|jj|n.t|d}|j|jjWdQXdS(sSaves optimizer (updater) state to a file. Parameters ---------- fname : str Path to output states file. twbN( RRR3R2RRtopenRvR4R(R:tfnametfout((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRRs  cCsN|jst|jr+|jj|n|jjt|djdS(sLoads optimizer (updater) state from a file. Parameters ---------- fname : str Path to input states file. trbN( RRR3R2RGR4RRtread(R:R((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRGs cCs#|jst|jj|dS(s#Installs monitor on all executors. N(RVRR7tinstall_monitor(R:tmon((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyRs(sdata(RN(Rg{Gz?((Rg{Gz?()t__name__t __module__t__doc__RPRtcpuRRt staticmethodR#RMRURWtpropertyR<R=RXRYRZR\R^R RuR"RoRRRRRRRRRRRRR]RRRGR(((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyR'sL0&    > )  \ O  *         (RRPRjtRRRRRRtexecutor_groupRtmodelRRRRRRgR R tioR t base_moduleR R RR(((s3build/bdist.linux-armv7l/egg/mxnet/module/module.pyts  "
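The parameter-initialization rule documented in `init_params` (copy a value from the supplied `arg_params`/`aux_params` cache when the name is present, otherwise fall back to the initializer, and raise on a missing name unless `allow_missing` is set) can be sketched in plain Python. This is a minimal, MXNet-free illustration of the internal `_impl` helper's control flow; plain lists stand in for NDArrays and `arr[:] = cache_arr` stands in for `copyto()`:

```python
def init_param(name, arr, cache, initializer, allow_missing=False):
    # Mirrors the `_impl` helper inside Module.init_params: prefer a cached
    # value, fall back to the initializer, error on a missing name unless
    # allow_missing is set. `arr` is a mutable list standing in for an
    # NDArray; `cache` maps parameter names to previously trained values.
    if cache is not None:
        if name in cache:
            cache_arr = cache[name]
            if cache_arr is not arr:
                arr[:] = cache_arr  # copyto() in the real module
        else:
            if not allow_missing:
                raise RuntimeError("%s is not presented" % name)
            if initializer is not None:
                initializer(name, arr)
    else:
        initializer(name, arr)


weight = [1.0, 2.0]
init_param('fc_weight', weight, {'fc_weight': [3.0, 4.0]}, None)
print(weight)  # [3.0, 4.0] -- copied from the cache
```

This is why `set_params(..., allow_missing=False)` can simply delegate to `init_params(initializer=None, ...)`: with a full cache, the initializer branch is never reached.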