"""Autograd for NDArray."""
from __future__ import absolute_import
from __future__ import division

from array import array
from threading import Lock
import traceback
import ctypes
from ctypes import c_int, c_void_p, CFUNCTYPE, POINTER, cast
from .base import _LIB, check_call, string_types, mx_uint
from .base import NDArrayHandle, c_array, c_handle_array, c_array_buf
from .base import MXCallbackList, SymbolHandle
from .ndarray import NDArray, _ndarray_cls
from .ndarray import _GRAD_REQ_MAP
from .symbol import Symbol


def set_recording(is_recording):
    """Set status to recording/not recording. When recording, graph will be constructed
    for gradient computation.

    Parameters
    ----------
    is_recording: bool

    Returns
    -------
    previous state before this set.
    """
    prev = ctypes.c_int()
    check_call(_LIB.MXAutogradSetIsRecording(
        ctypes.c_int(is_recording), ctypes.byref(prev)))
    return bool(prev.value)


def set_training(train_mode):
    """Set status to training/predicting. This affects ctx.is_train in operator
    running context. For example, Dropout will drop inputs randomly when
    train_mode=True while simply passing through if train_mode=False.

    Parameters
    ----------
    train_mode: bool

    Returns
    -------
    previous state before this set.
    """
    prev = ctypes.c_int()
    check_call(_LIB.MXAutogradSetIsTraining(
        ctypes.c_int(train_mode), ctypes.byref(prev)))
    return bool(prev.value)


def is_recording():
    """Get status on recording/not recording.

    Returns
    -------
    Current state of recording.
    """
    curr = ctypes.c_bool()
    check_call(_LIB.MXAutogradIsRecording(ctypes.byref(curr)))
    return curr.value


def is_training():
    """Get status on training/predicting.

    Returns
    -------
    Current state of training/predicting.
    """
    curr = ctypes.c_bool()
    check_call(_LIB.MXAutogradIsTraining(ctypes.byref(curr)))
    return curr.value


class _RecordingStateScope(object):
    """Scope for managing training state.

    Example::

        with _RecordingStateScope(True, True):
            y = model(x)
            backward([y])
    """
    def __init__(self, is_record, train_mode):
        self._enter_is_record = is_record
        self._enter_train_mode = train_mode
        self._prev_is_record = None
        self._prev_train_mode = None

    def __enter__(self):
        if self._enter_is_record is not None:
            self._prev_is_record = set_recording(self._enter_is_record)
        if self._enter_train_mode is not None:
            self._prev_train_mode = set_training(self._enter_train_mode)

    def __exit__(self, ptype, value, trace):
        if self._enter_is_record is not None and \
                self._prev_is_record != self._enter_is_record:
            set_recording(self._prev_is_record)
        if self._enter_train_mode is not None and \
                self._prev_train_mode != self._enter_train_mode:
            set_training(self._prev_train_mode)


def record(train_mode=True):
    """Returns an autograd recording scope context to be used in 'with' statement
    and captures code that needs gradients to be calculated.

    .. note:: When forwarding with train_mode=False, the corresponding backward
              should also use train_mode=False, otherwise gradient is undefined.

    Example::

        with autograd.record():
            y = model(x)
            backward([y])
        metric.update(...)
        optim.step(...)

    Parameters
    ----------
    train_mode: bool, default True
        Whether the forward pass is in training or predicting mode. This controls
        the behavior of some layers such as Dropout, BatchNorm.
    """
    return _RecordingStateScope(True, train_mode)


def pause(train_mode=False):
    """Returns a scope context to be used in 'with' statement for codes that do not need
    gradients to be calculated.

    Example::

        with autograd.record():
            y = model(x)
            backward([y])
        with autograd.pause():
            # testing, IO, gradient updates...

    Parameters
    ----------
    train_mode: bool, default False
        Whether to do forward for training or predicting.
    """
    return _RecordingStateScope(False, train_mode)
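
# Illustrative sketch (not part of the autograd API): the typical
# attach_grad()/record()/backward() round trip that the scopes above enable.
# The helper name is hypothetical, it is never called by the library, and it
# assumes a working `mxnet` installation.
def _example_record_and_backward():
    """Illustrative only: compute dy/dx for y = 2 * x * x."""
    import mxnet as mx
    x = mx.nd.array([1.0, 2.0, 3.0])
    x.attach_grad()                  # allocate a gradient buffer for x
    with record():                   # capture operations on x into a graph
        y = 2 * x * x
    y.backward()                     # fill x.grad with dy/dx = 4 * x
    with pause():                    # gradient bookkeeping is off in here
        updated = x - 0.1 * x.grad   # e.g. an SGD-style update
    return updated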
def train_mode():
    """Returns a scope context to be used in 'with' statement
    in which forward pass behavior is set to training mode,
    without changing the recording states.

    Example::

        y = model(x)
        with autograd.train_mode():
            y = dropout(y)
    """
    return _RecordingStateScope(None, True)


def predict_mode():
    """Returns a scope context to be used in 'with' statement
    in which forward pass behavior is set to inference mode,
    without changing the recording states.

    Example::

        with autograd.record():
            y = model(x)
            with autograd.predict_mode():
                y = sampling(y)
            backward([y])
    """
    return _RecordingStateScope(None, False)


def mark_variables(variables, gradients, grad_reqs='write'):
    """Mark NDArrays as variables to compute gradient for autograd.

    Parameters
    ----------
    variables: NDArray or list of NDArray
    gradients: NDArray or list of NDArray
    grad_reqs: str or list of str
    """
    if isinstance(variables, NDArray):
        assert isinstance(gradients, NDArray)
        variables = [variables]
        gradients = [gradients]

    if isinstance(grad_reqs, string_types):
        grad_reqs = [_GRAD_REQ_MAP[grad_reqs]] * len(variables)
    else:
        grad_reqs = [_GRAD_REQ_MAP[i] for i in grad_reqs]

    check_call(_LIB.MXAutogradMarkVariables(
        len(variables),
        c_handle_array(variables),
        c_array_buf(mx_uint, array('I', grad_reqs)),
        c_handle_array(gradients)))


def _parse_head(heads, head_grads):
    """Parse head gradients for backward and grad."""
    if isinstance(heads, NDArray):
        heads = [heads]
    if isinstance(head_grads, NDArray):
        head_grads = [head_grads]

    head_handles = c_handle_array(heads)

    if head_grads is None:
        hgrad_handles = ctypes.c_void_p(0)
    else:
        assert len(heads) == len(head_grads), \
            "heads and head_grads must be lists of the same length"
        hgrad_handles = c_array(NDArrayHandle,
                                [i.handle if i is not None else NDArrayHandle(0)
                                 for i in head_grads])
    return head_handles, hgrad_handles


def backward(heads, head_grads=None, retain_graph=False, train_mode=True):
    """Compute the gradients of heads w.r.t previously marked variables.

    Parameters
    ----------
    heads: NDArray or list of NDArray
        Output NDArray(s)
    head_grads: NDArray or list of NDArray or None
        Gradients with respect to heads.
    retain_graph: bool, optional
        Whether to keep the computation graph for another backward pass.
    train_mode: bool, optional
        Whether to do backward for training or predicting.
    """
    head_handles, hgrad_handles = _parse_head(heads, head_grads)

    check_call(_LIB.MXAutogradBackwardEx(
        len(head_handles),
        head_handles,
        hgrad_handles,
        0,
        ctypes.c_void_p(0),
        ctypes.c_int(retain_graph),
        ctypes.c_int(0),
        ctypes.c_int(train_mode),
        ctypes.c_void_p(0),
        ctypes.c_void_p(0)))


def grad(heads, variables, head_grads=None, retain_graph=None,
         create_graph=False, train_mode=True):
    """Compute the gradients of heads w.r.t variables. Gradients will be
    returned as new NDArrays instead of stored into `variable.grad`.
    Supports recording gradient graph for computing higher order gradients.

    .. note::

        Currently only a very limited set of operators support higher order
        gradients.

    Parameters
    ----------
    heads: NDArray or list of NDArray
        Output NDArray(s)
    variables: NDArray or list of NDArray
        Input variables to compute gradients for.
    head_grads: NDArray or list of NDArray or None
        Gradients with respect to heads.
    retain_graph: bool
        Whether to keep computation graph to differentiate again, instead
        of clearing history and releasing memory. Defaults to the same value
        as create_graph.
    create_graph: bool
        Whether to record gradient graph for computing higher order gradients.
    train_mode: bool, optional
        Whether to do backward for training or prediction.

    Returns
    -------
    NDArray or list of NDArray:
        Gradients with respect to variables.

    Examples
    --------
    >>> x = mx.nd.ones((1,))
    >>> x.attach_grad()
    >>> with mx.autograd.record():
    ...     z = mx.nd.elemwise_add(mx.nd.exp(x), x)
    >>> dx = mx.autograd.grad(z, [x], create_graph=True)
    >>> print(dx)
    [
    [ 3.71828175]
    ]
    """
    head_handles, hgrad_handles = _parse_head(heads, head_grads)

    is_single = isinstance(variables, NDArray)
    if is_single:
        variables = [variables]
    assert len(variables), "variables cannot be an empty list."
    var_handles = c_handle_array(variables)

    retain_graph = retain_graph if retain_graph is not None else create_graph
    grad_vars = ctypes.POINTER(NDArrayHandle)()
    grad_stypes = ctypes.POINTER(ctypes.c_int)()

    check_call(_LIB.MXAutogradBackwardEx(
        len(head_handles),
        head_handles,
        hgrad_handles,
        len(var_handles),
        var_handles,
        ctypes.c_int(retain_graph),
        ctypes.c_int(create_graph),
        ctypes.c_int(train_mode),
        ctypes.byref(grad_vars),
        ctypes.byref(grad_stypes)))

    ret = [_ndarray_cls(ctypes.cast(grad_vars[i], NDArrayHandle),
                        stype=grad_stypes[i])
           for i in range(len(var_handles))]
    if is_single:
        return ret[0]
    return ret
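
# Illustrative sketch (not part of the autograd API): mark_variables() is the
# lower-level building block behind NDArray.attach_grad(). The helper below is
# hypothetical and never called by the library; it assumes a working `mxnet`.
def _example_mark_variables():
    """Illustrative only: request gradients into a user-supplied buffer."""
    import mxnet as mx
    x = mx.nd.array([1.0, 2.0])
    dx = mx.nd.zeros_like(x)             # buffer that will receive dy/dx
    mark_variables(x, dx, grad_reqs='write')
    with record():
        y = x * x
    backward(y)                          # module-level backward(), same as y.backward()
    return dx                            # dx now holds 2 * x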
def get_symbol(x):
    """Retrieve recorded computation history as `Symbol`.

    Parameters
    ----------
    x : NDArray
        Array representing the head of computation graph.

    Returns
    -------
    Symbol
        The retrieved Symbol.
    """
    hdl = SymbolHandle()
    check_call(_LIB.MXAutogradGetSymbol(x.handle, ctypes.byref(hdl)))
    return Symbol(hdl)


class Function(object):
    """Customize differentiation in autograd.

    If you don't want to use the gradients computed by the default
    chain-rule, you can use Function to customize differentiation for
    computation. You define your computation in the forward method and
    provide the customized differentiation in the backward method. During
    gradient computation, autograd will use the user-defined backward
    function instead of the default chain-rule. You can also cast to numpy
    array and back for some operations in forward and backward.

    For example, a stable sigmoid function can be defined as::

        class sigmoid(mx.autograd.Function):
            def forward(self, x):
                y = 1 / (1 + mx.nd.exp(-x))
                self.save_for_backward(y)
                return y

            def backward(self, dy):
                # backward takes as many inputs as forward's return value,
                # and returns as many NDArrays as forward's arguments.
                y, = self.saved_tensors
                return dy * y * (1 - y)

    Then, the function can be used in the following way::

        func = sigmoid()
        x = mx.nd.random.uniform(shape=(10,))
        x.attach_grad()

        with mx.autograd.record():
            m = func(x)
        m.backward()
        dx = x.grad.asnumpy()
    """
    _bwd_functype = CFUNCTYPE(c_int, c_int, c_int, POINTER(c_void_p),
                              POINTER(c_int), c_int, c_void_p)
    _del_functype = CFUNCTYPE(c_int, c_void_p)

    class _Registry(object):
        """CustomOp registry."""
        def __init__(self):
            self.ref_holder = {}
            self.counter = 0
            self.lock = Lock()

        def inc(self):
            """Get index for new entry."""
            self.lock.acquire()
            cur = self.counter
            self.counter += 1
            self.lock.release()
            return cur

    _registry = _Registry()

    def __init__(self):
        self._used = False
        self.saved_tensors = ()

    def save_for_backward(self, *args):
        self.saved_tensors = args
    def __call__(self, *inputs):
        assert not self._used, \
            "Each Function instance can only be called once. " \
            "Please create another instance."
        self._used = True

        prev_recording = set_recording(False)
        outputs = self.forward(*inputs)
        set_recording(prev_recording)

        if not prev_recording:
            return outputs

        ret_outputs = outputs
        if isinstance(outputs, NDArray):
            outputs = (outputs,)

        key = Function._registry.inc()

        def backward_entry(num_ograds, num_igrads, ptrs, reqs, is_train, _):
            """Entry point for backward."""
            try:
                output_grads = [NDArray(ctypes.cast(i, NDArrayHandle), writable=False)
                                for i in ptrs[:num_ograds]]
                input_grads = [NDArray(ctypes.cast(i, NDArrayHandle), writable=True)
                               for i in ptrs[num_ograds:num_ograds + num_igrads]]
                reqs = [reqs[i] for i in range(num_igrads)]
                rets = self.backward(*output_grads)
                if isinstance(rets, NDArray):
                    rets = (rets,)
                assert len(rets) == len(input_grads), \
                    "%s.backward must return exactly the same number of NDArrays as " \
                    "the number of NDArrays arguments to forward. " \
                    "Expecting %d got %d" % (
                        self.__class__.__name__, len(input_grads), len(rets))
                for igrad, ret, req in zip(input_grads, rets, reqs):
                    assert isinstance(ret, NDArray), \
                        "autograd.Function.backward must return NDArrays, not %s" % type(ret)
                    if req == 0:  # null
                        return True
                    elif req in (1, 2):  # write or inplace
                        igrad[:] = ret
                    elif req == 'add':
                        igrad[:] += ret
            except Exception:
                print('Error in Function.backward: %s' % traceback.format_exc())
                return False
            return True

        def delete_entry(_):
            """C callback for CustomFunction::delete."""
            try:
                del Function._registry.ref_holder[key]
            except Exception:
                print('Error in autograd.Function.delete: %s' % traceback.format_exc())
                return False
            return True

        callbacks = [Function._bwd_functype(backward_entry),
                     Function._del_functype(delete_entry)]
        callbacks = [cast(i, CFUNCTYPE(c_int)) for i in callbacks]
        context = MXCallbackList(c_int(len(callbacks)),
                                 cast(c_array(CFUNCTYPE(c_int), callbacks),
                                      POINTER(CFUNCTYPE(c_int))),
                                 cast(c_array(c_void_p, [None] * len(callbacks)),
                                      POINTER(c_void_p)))
        check_call(_LIB.MXCustomFunctionRecord(
            c_int(len(inputs)),
            c_handle_array(inputs),
            c_int(len(outputs)),
            c_handle_array(outputs),
            ctypes.byref(context)))

        # Hold a reference to the callback context so it is not garbage
        # collected before the backward pass runs; delete_entry releases it.
        Function._registry.ref_holder[key] = context

        return ret_outputs

    def forward(self, *inputs):
        """Forward computation."""
        raise NotImplementedError

    def backward(self, *output_grads):
        """Backward computation.

        Takes as many inputs as forward's outputs,
        and returns as many NDArrays as forward's inputs.
        """
        raise NotImplementedError
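
# Illustrative sketch (not part of the autograd API): get_symbol() turns a
# recorded NDArray computation into a Symbol that can be inspected or reused.
# The helper is hypothetical and never called by the library; it assumes a
# working `mxnet` installation.
def _example_get_symbol():
    """Illustrative only: inspect the graph recorded for y = exp(x) + x."""
    import mxnet as mx
    x = mx.nd.ones((2,))
    x.attach_grad()
    with record():
        y = mx.nd.exp(x) + x
    sym = get_symbol(y)              # symbolic view of the recorded computation
    return sym.list_arguments()      # names of the recorded input variables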